Feature/userRapport More tailored twitter interaction and rapport building #18
To get this to work, you can pull these changes if your project already has
Relates to
Original PR:
elizaOS/eliza#3647
Twitter-client PR:
#18
Core-changes PR:
elizaOS/eliza#3962
Risks
Moderate. The PR interacts with the agent memory, the Twitter client, and the runtime.
Background
What does this PR do?
This PR introduces a new feature to the Eliza framework that tailors agent interactions based on each user's interaction history. This makes the agent feel more human and genuine to the people it interacts with. The feature is currently implemented only inside the Twitter client, but it could be applied to any client. It works in two parts.
The first step is to store every conversation and, in a background process, review the ones that have become inactive.

The second step is to fetch the user's score whenever the agent responds to someone, to set the tone of the interaction.

Conversation storage: Conversations are saved inside the agent memory and timestamped.
Conversation Analysis: The agent analyzes recent conversations after they have been inactive for more than 45 minutes. Based on the analysis, the agent assigns scores to the users involved in the conversations. Positive scores indicate favorable interactions, while negative scores represent unfavorable ones.
Score Storage: These user scores are stored in the "Account" table inside the DB.
Interaction Adjustment: In subsequent interactions, the agent uses these stored scores to modulate its tone and overall interaction style with users.
Adaptive Behavior: Users with high positive scores will experience friendlier and more supportive interactions from the agent. Conversely, users with negative scores will encounter a more hostile and unfriendly agent demeanor.
Limitations: Conversation storage and conversation analysis are only implemented inside the Twitter client. Also, conversation/UserScores storage is implemented only in the SQLite client, but it could easily be implemented in other DB clients.
Other use case: Conversation storage can be used by any client.
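To make the two-part mechanism concrete, here is a minimal TypeScript sketch of the review loop described above. All names (`Conversation`, `scoreConversation`, `userRapport`) and the placeholder scoring heuristic are illustrative assumptions, not the actual Eliza or twitter-client APIs; in the real feature, an LLM judges the conversation and the scores live in the Account table.

```typescript
// Hypothetical sketch of the conversation review loop. Types, names, and the
// scoring heuristic are assumptions for illustration; they are NOT Eliza APIs.
interface Message { userId: string; text: string; timestamp: number }
interface Conversation { id: string; messages: Message[]; lastActivity: number }

const INACTIVITY_MS = 45 * 60 * 1000; // conversations idle longer than this get analyzed

// Stand-in for the LLM-based analysis: returns a score delta per user.
function scoreConversation(conv: Conversation): Map<string, number> {
  const deltas = new Map<string, number>();
  for (const msg of conv.messages) {
    // Placeholder heuristic; the real feature asks the model to judge tone.
    const delta = msg.text.includes("love") ? 1 : msg.text.includes("hate") ? -1 : 0;
    deltas.set(msg.userId, (deltas.get(msg.userId) ?? 0) + delta);
  }
  return deltas;
}

// Stand-in for the Account table: accumulated rapport score per user.
const userRapport = new Map<string, number>();

function reviewInactiveConversations(convs: Conversation[], now: number): void {
  for (const conv of convs) {
    if (now - conv.lastActivity < INACTIVITY_MS) continue; // still active, skip
    for (const [userId, delta] of scoreConversation(conv)) {
      userRapport.set(userId, (userRapport.get(userId) ?? 0) + delta);
    }
  }
}
```

Running the review loop against an hour-old conversation updates that user's score, while a still-active conversation is left untouched until it cools down.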
What kind of change is this?
I started experimenting with the Eliza framework and built an in-character Twitter agent that interacts with a fanbase community. My main goal was to make the agent feel genuine and human.
But I was bothered by the fact that people who interacted with her for the first time would get much the same interaction as loyal fans who had sent hundreds of messages to the agent. So I added a scoring system to the Twitter client. It makes the responses more tailored and lets them evolve over time, just like your first interaction with someone wouldn't be the same as your 100th or 1000th one.
Why analyze conversations as a whole?
I wanted to limit the number of API calls and the tokens used for the analysis. Analyzing a whole conversation also provides context you wouldn't get from individual messages, which is essential for a good analysis.
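The cost argument above can be sketched with some rough arithmetic: scoring per conversation pays the analysis prompt's overhead once, while scoring per message pays it for every message. The token estimator and overhead figure below are assumptions for illustration, not measurements from the PR.

```typescript
// Illustrative cost comparison: one analysis call per conversation vs. one per
// message. `estimateTokens` is a crude ~4-chars-per-token stand-in.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

// Per-message analysis: the prompt overhead is paid for every message.
function perMessageCost(messages: string[], promptOverhead: number): number {
  return messages.reduce((sum, m) => sum + promptOverhead + estimateTokens(m), 0);
}

// Per-conversation analysis: the prompt overhead is paid once.
function perConversationCost(messages: string[], promptOverhead: number): number {
  return promptOverhead + messages.map(estimateTokens).reduce((a, b) => a + b, 0);
}
```

For any conversation with more than one message, the per-conversation cost is strictly lower, and the gap grows linearly with conversation length.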
Why user scores?
It seemed like the most logical and simplest way to give the agent a description of how it should feel towards the user it is responding to. Since the conversation analysis happens during cooldown time, there is no added latency.
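At response time, the stored score only needs to be translated into a tone hint for the prompt. A minimal sketch, assuming hypothetical thresholds and wording (the real prompt text and cutoffs are not specified in this PR description):

```typescript
// Illustrative only: turning a stored rapport score into a tone hint for the
// response prompt. Thresholds and phrasing are assumptions, not the PR's code.
function toneHint(score: number): string {
  if (score >= 5) return "This is a loyal, friendly user; respond warmly and supportively.";
  if (score <= -5) return "Past interactions with this user were hostile; respond curtly and guardedly.";
  return "No strong history with this user; respond neutrally.";
}
```

Because this is a plain lookup on an already-stored number, it adds no measurable latency to the response path.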
Why only the Twitter client? This is the client I am most familiar with and use the most. The feature can easily be added to other clients by adding a mechanism to store the conversations, a review loop, and...
Documentation changes needed?
Testing
Where should a reviewer start?
These are the changes to the elizaOS twitter-client core required to implement user scores. I made the PR a month ago but, since 0.25.9, the twitter-client plugin lives in a separate repo, so I had to split the changes into 2 PRs.
Make sure you have a working ElizaOS setup with the twitter-client already implemented. Starting with a fresh repo, install the Twitter client and get it working. Then pull the core changes and the twitter-client changes, and start with a fresh DB and a character that uses the Twitter client. Make it interact with a conversation thread (by forcing it, or simply waiting until it happens), then let it analyze the conversation. Check that the conversations were built correctly in the DB and that the UserRapport of the corresponding users was updated correctly.
Detailed testing steps
Tested the code for 3 weeks with my agent on Twitter on Eliza release 1.7.
Then I started with a fresh repo on the latest release (0.25.6-alpha.1) and implemented only my changes, which I tested for 3 weeks straight. I fixed a couple of issues along the way, but nothing critical happened; in the worst case, the agent keeps running without the user scores.
Can do more testing if needed.
This will increase LLM token usage and the number of API calls, but the increase shouldn't be significant.