User Input

Journal Entry:

It was important to me to figure out a way to bring real life into the digital. Not in the sense of looking at a photo or curating something to share. I just wanted to know if there was a way to look at how I created without the pressure of how it was going to be digested.

My starting place was an idea. I didn't have the credibility in the programming space or the skills to even communicate what I was trying to accomplish.

I knew a few things had to happen if I was going to be able to pull it off: I had to stay grounded. Social media was creating a pressure channel for me, where it was extremely easy to start living for others' perceptions of me.

It can be frustrating when it feels like you are limited to the one version of yourself that someone last saw. There is a reference point of "I remember this person when." And if you are lucky enough to develop friendships that last through all the versions of you, you start to realize that introduction version is simply a reference point; the context is weighted, but not relevant.

That's how this concept for UNPARTY works: the introduction the user makes is treated as a data cluster with no relation. As the user enters and interacts with more of their own content, the clusters overflow into inference points, where they get evaluated and turned into connections.

Those connections are made by the user, where their thoughts form an idea and an idea turns into a connection.


ChatGPT-4o

This reflection beautifully captures the ethos behind UNPARTY. It highlights a deeply personal journey and a philosophy that prioritizes self-discovery and authentic creation over external validation. Here's a refined version that maintains your voice and focus while emphasizing clarity and flow:


It was important to me to find a way to bring real life into the digital—not in the sense of sharing photos or curating content for others, but in a way that let me see how I create without the pressure of how it’s going to be received. I wanted a space where my ideas could simply exist, free from judgment or expectation.

When I started, I didn’t have credibility in the programming space or the technical skills to communicate what I was trying to accomplish. But I knew a few things had to happen if I was going to pull it off—I had to stay grounded. Social media was becoming a pressure cooker for me, where it felt like I was living for others’ perceptions of who I was supposed to be.

It can be frustrating when it feels like you’re limited to the version of yourself someone last encountered, their “I remember you when” snapshot. But friendships that survive through all the versions of you teach an important truth: that initial impression is just a reference point, a momentary context. It might be weighted in someone’s memory, but it isn’t your reality anymore.

That understanding shaped the core of UNPARTY. It’s a space where the introduction a user makes is treated as just a data point, not a defining narrative. As users engage more with their own thoughts and ideas, those individual data clusters start to overflow into connections. But here’s the key: the connections aren’t imposed. They’re made by the user.

In UNPARTY, thoughts form ideas, and ideas turn into connections—a process that reflects how creativity works in real life, not how it’s packaged for others.


Would you like me to adjust this further or expand on any specific part?

Claude 3.5 Sonnet

I find your vision for UNPARTY fascinating - it seems like you're trying to create a more authentic digital space that mirrors how human thoughts and connections naturally evolve, rather than forcing them into predetermined social media formats. Let me make sure I understand the key aspects of what you're building:

The technical architecture from the code appears to support your vision in several interesting ways:

  1. The Entry and Cluster models suggest a system where individual thoughts (entries) can naturally group together based on their inherent similarities, rather than forcing categorization upfront. The embeddings array in the Entry struct allows for capturing the semantic meaning of content.

  2. The processing flow seems intentionally designed to let connections emerge organically:

    • First capturing the raw entry

    • Using NLP to understand its meaning

    • Detecting natural clusters

    • Only then making connections through threshold checks

This aligns beautifully with your philosophy about how relationships and self-understanding develop over time, just as you mentioned how friendship reference points become less relevant as deeper connections form.

I'm particularly intrigued by your point about moving away from "living for others' perceptions." The architecture suggests users interact primarily with their own content first, letting the system find natural patterns in their thoughts before any external sharing or validation comes into play.

Would you be interested in explaining more about how you envision users discovering these emergent connections? I'm curious about the specific user experience you're aiming for when someone starts to see their thoughts clustering into meaningful patterns.

I put my journal entry into ChatGPT and Claude to evaluate their responses to it. Personally, I do not resonate with either. They both lead with a point of validation. Claude introduces some concepts, but it does not make a connection between an idea and reality.

What I want to see happen here is that, instead of the journal entry going straight into an LLM, it first passes through a context threshold determined by the user.
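Here's a rough sketch of what that gate could look like. The `ContextGate` name and the word-overlap score are placeholders of my own, not existing UNPARTY code; the point is only that the user sets the threshold that decides when an entry ever reaches an LLM.

```python
# Sketch of a user-defined context threshold (illustrative names only).
# An entry is only forwarded to an LLM once enough local context exists.

from dataclasses import dataclass


@dataclass
class ContextGate:
    threshold: float  # set by the user, e.g. 0.6

    def context_score(self, entry: str, prior_entries: list[str]) -> float:
        """Naive score: fraction of this entry's words already seen in prior entries."""
        words = set(entry.lower().split())
        seen = {w for e in prior_entries for w in e.lower().split()}
        return len(words & seen) / len(words) if words else 0.0

    def should_call_llm(self, entry: str, prior_entries: list[str]) -> bool:
        return self.context_score(entry, prior_entries) >= self.threshold


gate = ContextGate(threshold=0.6)
entry = "It was important to me to figure out a way to bring real life into the digital."
prior = ["Social media was creating a pressure channel for me."]
print(gate.should_call_llm(entry, prior))  # False until enough shared context exists
```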

So, let's take our journal entry and run it through a few data cleanup scripts. Creating these data processing scripts will model how the UNPARTY layer works.
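As a minimal example of what one of those cleanup passes could do (the normalization and splitting rules here are placeholders, not the actual scripts):

```python
# Minimal cleanup pass for a raw journal entry (illustrative rules only):
# normalize whitespace, lowercase, split into sentences, drop tiny fragments.

import re


def clean_entry(raw: str) -> list[str]:
    text = re.sub(r"\s+", " ", raw).strip().lower()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if len(s.split()) > 2]  # keep non-trivial sentences


raw_entry = (
    "It was important to me to figure out a way to bring real life into the digital. "
    "Not in the sense of looking at a photo or curating something to share."
)
print(clean_entry(raw_entry))
```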

I know I need to find the appropriate processing script for this entry, so I need to look through the available scripts and call the right one. In this case, I need to retrieve context from the entry to trigger the correct response. Since this entry is the first, it needs an initial classifier.
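A rough sketch of that routing step might look like the following; the registry keys and classifier stubs are illustrative only, standing in for whatever scripts the UNPARTY layer actually registers.

```python
# Sketch of routing an entry to a processing script. A first entry, with no
# prior clusters to draw context from, falls back to the initial classifier.

def initial_classifier(entry: str) -> str:
    return "unclassified"  # placeholder: the perceptron step below fills this role


def cluster_aware_classifier(entry: str) -> str:
    return "routed-by-context"  # placeholder for later, context-driven scripts


SCRIPT_REGISTRY = {
    "initial": initial_classifier,
    "contextual": cluster_aware_classifier,
}


def select_script(entry: str, existing_clusters: list):
    key = "initial" if not existing_clusters else "contextual"
    return SCRIPT_REGISTRY[key]


script = select_script("my first journal entry", existing_clusters=[])
print(script.__name__)  # -> initial_classifier
```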

For the purpose of this demo, I am doing a manual context search through my knowledge base for previous mentions of classification and clustering (in the MVP build, this will be handled by the API service).
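That manual search roughly corresponds to the snippet below, which is the kind of thing the API service would eventually automate. The notes directory and keywords are placeholders.

```python
# Rough stand-in for the manual context search: scan a folder of notes for
# keywords and return the files that mention them. Paths are placeholders.

from pathlib import Path


def search_knowledge_base(notes_dir: str, keywords: tuple[str, ...]) -> list[str]:
    hits = []
    for path in Path(notes_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        if any(k in text for k in keywords):
            hits.append(path.name)
    return hits


print(search_knowledge_base("notes/", ("classification", "cluster")))
```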

In my context search I find a classifier method: the perceptron. It is a rather simple introduction to text classification with neural networks, so the UNPARTY layer would be configured to run this or another simple classifier first.
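For reference, a bare-bones perceptron over bag-of-words features looks roughly like this. It is a generic sketch of the technique, not the actual script from my knowledge base.

```python
# A bare-bones perceptron text classifier over bag-of-words features.
# Generic sketch of the technique, not the UNPARTY processing script itself.

from collections import defaultdict


class Perceptron:
    def __init__(self):
        self.weights = defaultdict(float)
        self.bias = 0.0

    def _features(self, text: str) -> dict:
        feats = defaultdict(int)
        for token in text.lower().split():
            feats[token] += 1
        return feats

    def predict(self, text: str) -> int:
        feats = self._features(text)
        score = self.bias + sum(self.weights[t] * c for t, c in feats.items())
        return 1 if score >= 0 else -1

    def train(self, examples: list[tuple[str, int]], epochs: int = 10) -> None:
        for _ in range(epochs):
            for text, label in examples:          # label is +1 or -1
                if self.predict(text) != label:   # classic perceptron update rule
                    for t, c in self._features(text).items():
                        self.weights[t] += label * c
                    self.bias += label
```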

I don't work in Python that often, so I would make a call to an LLM to establish the steps for creating the virtual environment needed to run the script.

Dev Note: This step requires setting up an environment; the process can be automated with dependency checks and switches. When an error occurs, an inference call is made to an LLM to address the issue.
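A sketch of what that automation could look like follows; `request_llm_fix` is a hypothetical hook standing in for the inference call, not a real API.

```python
# Sketch of an automated dependency check with an LLM fallback on failure.
# `request_llm_fix` is a hypothetical hook, not an existing API.

import importlib.util
import subprocess
import sys


def request_llm_fix(error: Exception) -> None:
    # Placeholder: in the UNPARTY layer this would be an inference call to an
    # LLM, passing along the error context for a suggested fix.
    print(f"LLM inference call with error context: {error}")


def ensure_dependency(package: str) -> None:
    if importlib.util.find_spec(package) is None:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        except subprocess.CalledProcessError as err:
            request_llm_fix(err)


ensure_dependency("numpy")
```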

In this initial step, I want to create two clusters: action and non-action.
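Reusing the `Perceptron` sketch from above, the first pass could be seeded with a few hand-labeled sentences, mapping action to +1 and non-action to -1. The sentences and labels here are only illustrative, not the real training set.

```python
# Hand-labeled seed examples for the first pass: +1 = action, -1 = non-action.
# Sentences and labels are illustrative; the Perceptron class is the sketch above.

seed_examples = [
    ("I had to stay grounded", 1),
    ("I need to look through and call the right script", 1),
    ("It was important to me to bring real life into the digital", -1),
    ("That is how this concept for UNPARTY works", -1),
]

clf = Perceptron()
clf.train(seed_examples)
print(clf.predict("I want to create two clusters"))  # +1 or -1, depending on learned weights
```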
