
fix(core): fix trim_messages misclassification of per-message token_counter#35630

Open
GAUTAM V DATLA (gautamvarmadatla) wants to merge 5 commits into langchain-ai:master from gautamvarmadatla:fix/core-trim-messages-token-counter-annotation

Conversation

@gautamvarmadatla

Fixes: #35629

I replaced the brittle `annotation is BaseMessage` check in `trim_messages` with `get_type_hints()`, which resolves annotations to live types where possible, plus `issubclass()`, which correctly matches `BaseMessage` and its subclasses. This fixes misclassification for subclass annotations like `HumanMessage`, for string/forward-reference annotations, and for common cases involving postponed annotation evaluation. I also added a `token_counter_is_per_message` flag as an explicit escape hatch for lambdas and unannotated callables, where auto-detection cannot work. Finally, I added regression tests covering:

  • exact BaseMessage annotation
  • subclass annotation
  • string annotation
  • lambda with explicit override
  • unannotated function with explicit override
  • list-based counter backwards compatibility
  • precedence of get_num_tokens_from_messages

@github-actions github-actions bot added core `langchain-core` package issues & PRs external fix For PRs that implement a fix labels Mar 7, 2026

codspeed-hq bot commented Mar 7, 2026

Merging this PR will not alter performance

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

✅ 13 untouched benchmarks
⏩ 27 skipped benchmarks [1]


Comparing gautamvarmadatla:fix/core-trim-messages-token-counter-annotation (f8a8823) with master (0f4f3f7)


Footnotes

  1. 27 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.


GAUTAM V DATLA (gautamvarmadatla) commented Mar 14, 2026

Hi, all CIs seem to be failing with a linting error. It appears to be related to yesterday's PR #35851, which introduced RUF013 violations. Could you please take a look?

I tried merging my PRs #35420, #35239, and #35630, but all three failed.

cc ccurme (@ccurme), Mason Daugherty (@mdrxy)

Error: langchain_text_splitters/base.py:230:26: RUF013 PEP 484 prohibits implicit `Optional`
  help: Convert to `T | None`
Error: langchain_text_splitters/base.py:307:26: RUF013 PEP 484 prohibits implicit `Optional`
  help: Convert to `T | None`
make: *** [Makefile:46: lint_package] Error 1
Error: Process completed with exit code 2.


Labels

core `langchain-core` package issues & PRs external fix For PRs that implement a fix size: S 50-199 LOC

Projects

None yet

Development

Successfully merging this pull request may close these issues.

trim_messages breaks when token_counter is a per-message callable (lambda, subclass annotation, or postponed annotations)

1 participant