
Conversation


@Flare576 commented on Aug 17, 2025

Primary Feature

Add a new option to the chat sub-command, allowing users to view the prior prompt/responses of a resumed conversation.

How to Use

When calling the chat sub-command, simply add -n/--count (as you would for the logs sub-command). Like logs, it defaults to the last 3 messages of the conversation.

llm chat -c -n
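
For anyone curious how an optional-value flag like this can be declared, here is a minimal Click sketch. The -n/--count names and the default of 3 come from this PR; the wiring below is my own illustration, not the merged llm code.

```python
import click

# Sketch only: Click's "optional value" pattern for a -n/--count flag.
# Everything beyond the flag names and the default of 3 is illustrative.
@click.command()
@click.option(
    "-n",
    "--count",
    type=int,
    is_flag=False,
    flag_value=3,   # bare "-n" replays the last 3 exchanges
    default=None,   # omitting the flag keeps the current behavior (no replay)
    help="Replay the last N prompt/response pairs of a resumed conversation; 0 replays everything.",
)
def chat(count):
    click.echo(f"count={count!r}")

if __name__ == "__main__":
    chat()
```

Running the sketch directly: no flag prints `count=None`, a bare `-n` prints `count=3`, and `-n 10` prints `count=10`.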

Additional Features

Count Control

You can retrieve any number of messages, or ALL of them by passing 0 (like -n/--count for logs).

llm chat -c -n 1 # See most recent prompt/response
llm chat -c -n 0 # See full chat history

Safety

Calling chat with -n/--count but without -c/--continue or --cid/--conversation will simply start a new chat (since, logically, there is no history to show).

llm chat -n 99 # 99 problems but this command ain't one
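
To make the count semantics above concrete, here is a tiny self-contained sketch; `select_replay` and the list-of-strings history are assumed names for illustration, not the PR's actual code.

```python
from typing import List, Optional

def select_replay(history: Optional[List[str]], count: Optional[int]) -> List[str]:
    """Pick which prior exchanges to print before the chat starts (sketch only)."""
    if not history or count is None:
        # No resumed conversation, or -n/--count not supplied: nothing to replay.
        return []
    if count == 0:
        # 0 means the full history, mirroring logs.
        return list(history)
    return history[-count:]

exchanges = ["exchange 1", "exchange 2", "exchange 3", "exchange 4"]
assert select_replay(exchanges, 1) == ["exchange 4"]   # llm chat -c -n 1
assert select_replay(exchanges, 0) == exchanges        # llm chat -c -n 0
assert select_replay(None, 99) == []                   # llm chat -n 99, no conversation
```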

Bug Fix

While formatting the output of this option, I realized there was a bug in how _BaseResponse entities were created from DB rows: the datetime_utc wasn't being loaded into _start_utcnow, and the format didn't match the rest of the llm output. I fixed it and included a test in my suite to validate the formatting.
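
For illustration, here is a self-contained sketch of the kind of fix described; the real _BaseResponse/from_row code in llm is richer, and the row layout below (a dict with an ISO 8601 datetime_utc string) is an assumption rather than the actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SketchResponse:
    """Toy stand-in for llm's _BaseResponse, for illustration only."""
    prompt: str
    text: str
    _start_utcnow: Optional[datetime] = None

    @classmethod
    def from_row(cls, row: dict) -> "SketchResponse":
        response = cls(prompt=row["prompt"], text=row["response"])
        # The bug: the row's datetime_utc was never copied onto the object,
        # so a resumed response had no timestamp to format.
        if row.get("datetime_utc"):
            response._start_utcnow = datetime.fromisoformat(row["datetime_utc"])
        return response

    def datetime_utc(self) -> str:
        # Format the stored timestamp; matching the logs format is what
        # the PR's test validates.
        return self._start_utcnow.isoformat() if self._start_utcnow else ""

row = {"prompt": "hello", "response": "hi there", "datetime_utc": "2025-08-17T12:34:56"}
print(SketchResponse.from_row(row).datetime_utc())  # 2025-08-17T12:34:56
```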

- Default to no history (maintain current behavior)
- Naked flag (-l,--last-messages) defaults to 5 (based on other defaults)
- Optional NUMBER to request different number
- TODO: Setting to 0 gets full history (same as logs)

This code written on a Steam Deck with llm
- Defaults to 3 now, matching logs list subcommand
- Passing 0 now shows full history
- Each Response now shows date + id, matching logs format
- BaseResponse now sets `_start_utcnow` field in `from_row`
- Response's `datetime_utc` function now formats like logs
- Finalize test suite

This commit made on a Steam Deck w/ llm + gemini-2.5-flash
Also, -l might be used for --log to enable logging in chat

@solomonjoeykao left a comment


@CodiumAI-Agent /review

@Flare576 (Author) commented

This falls under "Don't trust the dev to test their own code," but I've been using this feature locally since I put up the PR and have been absolutely loving it.

My use case is that I made a little persona (template) to be my workout "trainer" that I check in with after each of my workouts. The system prompt tells it to give a little encouragement or a tip.

Each day I open it with llm chat --cid 1234 -n 2 so I can see the last couple of check-ins and any reminders the agent gave me, and address them in my update.

(technically, I have `alias workout="llm chat --cid 1234 -n 2"` so I don't have to remember the conversation ID, but you get it)

Could I have done this via a Gemini Chat instead? Well, yeah, but why? I do everything else on the command line ;)
