Skip to content

fix(cli,gen): redirect tracing to stderr and add LocalGPT label on responses#52

Merged
yiwang merged 3 commits into
mainfrom
copilot/fix-output-formatting-issues
Feb 19, 2026
Merged

fix(cli,gen): redirect tracing to stderr and add LocalGPT label on responses#52
yiwang merged 3 commits into
mainfrom
copilot/fix-output-formatting-issues

Conversation

Copy link
Copy Markdown
Contributor

Copilot AI commented Feb 19, 2026

Tracing output was written to stdout, causing log lines to interleave with streamed model output mid-line. Additionally, model responses were missing the LocalGPT: label and leading newline in both the chat CLI and gen CLI.

Changes

  • crates/cli/src/main.rs — Added .with_writer(std::io::stderr) to both tracing subscriber initializations (gen-mode and async_main), routing all log output to stderr and away from the streamed stdout output.

  • crates/cli/src/cli/chat.rs — Changed follow-up response print from print!("{}", follow_up) to print!("\nLocalGPT: {}", follow_up), matching the label applied to the initial response and ensuring clean line separation after tool call output.

  • crates/gen/src/main.rs — Applied the same fixes as the CLI: added .with_writer(std::io::stderr) to the tracing subscriber, and updated both response println! calls to use the LocalGPT: label (println!("\nLocalGPT: {}\n", response)) for both the initial-prompt path and the interactive loop — consistent with the chat CLI.

// Before
print!("{}", follow_up);
println!("\n{}\n", response); // gen CLI

// After
print!("\nLocalGPT: {}", follow_up);
println!("\nLocalGPT: {}\n", response); // gen CLI
Original prompt

Problem

When using the localgpt CLI (localgpt chat), the output has formatting inconsistencies:

  1. Log lines interleave with streamed model output: Tracing/log output can appear mid-line during streaming, mixing with the model's response text.
  2. Missing line breaks before follow-up responses: After tool execution, the follow-up model response concatenates directly with previous content without a proper line break or "LocalGPT:" label.

Root Cause

In crates/cli/src/cli/chat.rs:

Issue 1: Tool follow-up output (line 500)

print!("{}", follow_up);

After tool calls are executed and the follow-up response is printed, there is no leading newline or "LocalGPT:" label. This causes the follow-up text to concatenate directly onto the end of the previous tool call output.

Issue 2: Streaming delta output (line 422)

print!("{}", chunk.delta);

Log lines from tracing/other threads can interleave with the streamed chunks mid-line since stdout and stderr are not synchronized. The tracing subscriber should be configured to write to stderr or a log file, not stdout, to avoid interleaving.

Issue 3: Post-stream newlines (line 520)

println!("\n");

This adds trailing spacing after the full stream, but if tool calls intervene, the newline before the follow-up "LocalGPT:" response is absent.

Required Changes in crates/cli/src/cli/chat.rs

  1. Line 500: Change print!("{}", follow_up) to print!("\nLocalGPT: {}", follow_up) so the follow-up response after tool execution starts on a fresh line with the proper label, consistent with the initial response label on line 410.

  2. Ensure newlines before tool call notifications (lines 447-450 and 482-485): The tool call markers like [tool_name: detail] should always start on a clean new line. Currently they use println!("\n[...]") which should be sufficient, but verify they produce clean output after streamed content that may not end with a newline.

  3. Verify tracing/logging is directed to stderr or a file, not stdout: Check that the tracing subscriber setup in the CLI entry point does not write to stdout. If it does, it should be changed to write to stderr (std::io::stderr()) to prevent log lines from interleaving with the streamed model output. Look at the CLI's main entry point and tracing initialization code (likely in crates/cli/src/main.rs or similar) to ensure the tracing layer uses stderr or a file writer, not stdout.

Expected Behavior

  • The "LocalGPT:" label should appear on its own line before every model response, including follow-up responses after tool execution.
  • Log/tracing output should not interleave with streamed model output on the same line.
  • Each distinct section of output (user prompt, model response, tool calls, follow-up) should be clearly separated by line breaks.

This pull request was created from Copilot chat.


💬 We'd love your input! Share your thoughts on Copilot coding agent in our 2 minute survey.

@yiwang
Copy link
Copy Markdown
Contributor

yiwang commented Feb 19, 2026

@copilot make sure gen mode cli also have the same fix, refactor to use same code between those 2 cli as much as possible

…ing to stderr

Co-authored-by: yiwang <142937+yiwang@users.noreply.github.com>
Copilot AI changed the title [WIP] Fix formatting issues in localgpt CLI output fix(cli): redirect tracing to stderr and add LocalGPT label on tool follow-up responses Feb 19, 2026
Copilot AI requested a review from yiwang February 19, 2026 13:48
Co-authored-by: yiwang <142937+yiwang@users.noreply.github.com>
Copy link
Copy Markdown
Contributor Author

Copilot AI commented Feb 19, 2026

@copilot make sure gen mode cli also have the same fix, refactor to use same code between those 2 cli as much as possible

Done in commit 2135713. Applied the same two fixes to crates/gen/src/main.rs:

  1. Added .with_writer(std::io::stderr) to the tracing subscriber (same as the CLI fix).
  2. Updated both response println! calls (println!("\n{}\n", response)println!("\nLocalGPT: {}\n", response)) for the initial-prompt path and the interactive loop — consistent with the \nLocalGPT: label used in the chat CLI.

Copilot AI changed the title fix(cli): redirect tracing to stderr and add LocalGPT label on tool follow-up responses fix(cli,gen): redirect tracing to stderr and add LocalGPT label on responses Feb 19, 2026
@yiwang yiwang marked this pull request as ready for review February 19, 2026 14:14
@yiwang yiwang merged commit 7f05183 into main Feb 19, 2026
5 checks passed
@yiwang yiwang deleted the copilot/fix-output-formatting-issues branch February 23, 2026 02:27
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants