
[Bug]: Auto-scroll too sticky when LLM streams code #179

@ServeurpersoCom

Browser

All

Description

When the LLM streams long code blocks, the chat view sticks so hard to the bottom that it's nearly impossible to scroll up and review the beginning of the conversation.

For normal text it feels fine, but for code outputs it becomes frustrating: as soon as I try to scroll up, the view snaps back down.

Steps to Reproduce

Ask the LLM to spit out a big fat chunk of C++ code (no need for long lines; it's actually worse with short ones).
Try to scroll up; the problem looks the same on PC (mouse wheel or scrollbar) and on a smartphone.
Use fast (>40 t/s) generation on a small model or a MoE to make it obvious.

Expected Behavior

Auto-scroll should follow generation only if the user stays near the bottom.

If the user scrolls up past a threshold, auto-scroll should pause.

If the user scrolls back down to the bottom, auto-scroll can resume (see the sketch after this list).
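
For illustration, here is a minimal TypeScript sketch of that behavior against a generic DOM container. The `chat` element id and the 64 px threshold are placeholders, not the webui's actual code:

```ts
// Sticky auto-scroll: follow the stream only while the user stays near the bottom.
// The container id 'chat' and the 64 px threshold are illustrative placeholders.
const STICK_THRESHOLD_PX = 64;
const chatContainer = document.getElementById('chat') as HTMLElement;

let autoScroll = true; // are we currently glued to the bottom?

function isNearBottom(el: HTMLElement): boolean {
  // Distance between the current scroll position and the very bottom.
  return el.scrollHeight - el.scrollTop - el.clientHeight <= STICK_THRESHOLD_PX;
}

// Re-evaluate stickiness on every scroll: moving up past the threshold
// pauses auto-scroll; coming back near the bottom resumes it.
chatContainer.addEventListener('scroll', () => {
  autoScroll = isNearBottom(chatContainer);
});

// Call this after appending each streamed token/chunk to the DOM.
function onTokenAppended(): void {
  if (autoScroll) {
    chatContainer.scrollTop = chatContainer.scrollHeight;
  }
}
```

A small threshold (rather than an exact bottom check) matters because rendering a streamed chunk can momentarily leave the view a few pixels above the bottom, which would otherwise unstick the scroll on its own.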

Version

No response

Additional Context

This is not trivial; I'm specifically looking for a good algorithm here, as it's tricky to implement.
The goal is to find a smooth balance between:
staying in sync with generated tokens,
and letting the user freely scroll without being pulled down (see the follow-up sketch below).
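
The tricky part is that the programmatic scroll-to-bottom itself fires scroll events, which can race with the user's input during fast streaming. A hedged sketch (same placeholder names as above, one possible approach among several) of keeping the two apart:

```ts
// Variant that separates user-initiated scrolls from our own programmatic ones,
// so a fast token stream can't "win" against the user's wheel/touch input.
// Names ('chat', STICK_THRESHOLD_PX) are placeholders, not the webui's actual code.
const STICK_THRESHOLD_PX = 64;
const chatContainer = document.getElementById('chat') as HTMLElement;

let autoScroll = true;
let programmaticScroll = false; // true while a scroll we triggered is in flight

function isNearBottom(el: HTMLElement): boolean {
  return el.scrollHeight - el.scrollTop - el.clientHeight <= STICK_THRESHOLD_PX;
}

// An upward wheel movement is unambiguous user intent: pause immediately,
// without waiting for the (possibly racing) scroll event.
chatContainer.addEventListener('wheel', (e: WheelEvent) => {
  if (e.deltaY < 0) autoScroll = false;
}, { passive: true });

// On touch screens the direction isn't available here, so just re-check position.
chatContainer.addEventListener('touchmove', () => {
  autoScroll = isNearBottom(chatContainer);
}, { passive: true });

// Ignore scroll events we caused ourselves; only a user scroll that lands
// near the bottom re-enables auto-scroll.
chatContainer.addEventListener('scroll', () => {
  if (programmaticScroll) {
    programmaticScroll = false;
    return;
  }
  autoScroll = isNearBottom(chatContainer);
});

// Call after each streamed chunk is appended to the DOM.
function onTokenAppended(): void {
  if (!autoScroll) return;
  const target = chatContainer.scrollHeight - chatContainer.clientHeight;
  if (chatContainer.scrollTop !== target) {
    programmaticScroll = true; // so the scroll handler skips this event
    chatContainer.scrollTop = target;
  }
}
```

Listening on `wheel`/`touchmove` is what makes the pause feel instant: a plain `scroll` handler can't tell who initiated the movement once the content is growing every few milliseconds.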
