Conversation


hank9999 commented Jan 7, 2026

Background

Currently, the Anthropic SDK only reads input_tokens from the message_start block, but many compatible providers only return the real token counts in the message_delta block.

Summary

Compare the existing input_tokens with the value in the message_delta block (when message_delta provides one). If they do not match, use the value from message_delta.
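
For illustration, a minimal sketch of that merge rule in plain TypeScript. The type and function names here are illustrative, not the SDK's actual internals:

    // Illustrative only: shows the intended precedence between
    // message_start and message_delta usage, not the SDK's real code.
    type Usage = {
      input_tokens?: number | null;
      output_tokens?: number | null;
      cache_read_input_tokens?: number | null;
    };

    function mergeUsage(fromMessageStart: Usage, fromMessageDelta: Usage): Usage {
      const merged: Usage = { ...fromMessageStart };
      // Some compatible providers only report real counts in message_delta,
      // so a present-and-different value there wins over message_start.
      if (
        fromMessageDelta.input_tokens != null &&
        fromMessageDelta.input_tokens !== merged.input_tokens
      ) {
        merged.input_tokens = fromMessageDelta.input_tokens;
      }
      if (fromMessageDelta.output_tokens != null) {
        merged.output_tokens = fromMessageDelta.output_tokens;
      }
      if (fromMessageDelta.cache_read_input_tokens != null) {
        merged.cache_read_input_tokens = fromMessageDelta.cache_read_input_tokens;
      }
      return merged;
    }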

Manual Verification

This changes nothing for the official Anthropic provider.

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

hank9999 changed the title to fix(anthropic): read real output_tokens in message_delta block on Jan 7, 2026
hank9999 force-pushed the fix/use-real-input-tokens-in-delta branch from 919f882 to 1458c7a on January 8, 2026 03:31
Contributor

vercel bot left a comment


Additional Suggestion:

The message_delta.usage schema is missing the input_tokens and cache_read_input_tokens fields, causing TypeScript to reject access to these properties on the API response.


Author

hank9999 commented Jan 8, 2026

> Additional Suggestion:
>
> The message_delta.usage schema is missing the input_tokens and cache_read_input_tokens fields, causing TypeScript to reject access to these properties on the API response.

These fields do exist. Here is an actual message_delta event from the API:

    {
      "type": "message_delta",
      "delta": {
        "stop_reason": "end_turn",
        "stop_sequence": null
      },
      "usage": {
        "input_tokens": 10,
        "cache_creation_input_tokens": 0,
        "cache_read_input_tokens": 22063,
        "output_tokens": 74
      },
      "context_management": {
        "applied_edits": []
      }
    }
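
For illustration, a hedged sketch of how the message_delta.usage schema could be extended to accept these fields, using zod (which the AI SDK uses for provider schemas). The identifier names here are assumptions, not the SDK's actual ones:

    import { z } from 'zod';

    // Illustrative schema only; the real schema in the anthropic provider
    // may use different names and stricter shapes.
    const messageDeltaUsageSchema = z.object({
      output_tokens: z.number(),
      // Optional: the official API may omit these in message_delta,
      // while compatible providers report the real counts here.
      input_tokens: z.number().nullish(),
      cache_creation_input_tokens: z.number().nullish(),
      cache_read_input_tokens: z.number().nullish(),
    });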

aayush-kapoor self-assigned this Jan 8, 2026
Contributor

aayush-kapoor left a comment


Can you add test cases with actual model response fixtures to verify this?

Look at https://github.com/vercel/ai/blob/main/contributing/testing.md#manual-testing for instructions on adding streaming fixtures.

Author

hank9999 commented Jan 9, 2026

> Can you add test cases with actual model response fixtures to verify this?
>
> Look at https://github.com/vercel/ai/blob/main/contributing/testing.md#manual-testing for instructions on adding streaming fixtures.

Of course, I will update the commit.

Author

hank9999 commented Jan 9, 2026

> Can you add test cases with actual model response fixtures to verify this?
>
> Look at https://github.com/vercel/ai/blob/main/contributing/testing.md#manual-testing for instructions on adding streaming fixtures.

@aayush-kapoor Hi, I added a streaming format test using a ping-pong prompt. Could you please check it? Thank you!
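
For context, a self-contained sketch of the kind of assertion such a test makes (vitest; the mergeUsage helper repeats the illustrative logic from the earlier sketch and is not the SDK's API):

    import { expect, it } from 'vitest';

    type Usage = { input_tokens?: number | null; output_tokens?: number | null };

    // Same illustrative merge rule as the earlier sketch.
    function mergeUsage(start: Usage, delta: Usage): Usage {
      const merged: Usage = { ...start };
      if (delta.input_tokens != null && delta.input_tokens !== merged.input_tokens) {
        merged.input_tokens = delta.input_tokens;
      }
      if (delta.output_tokens != null) {
        merged.output_tokens = delta.output_tokens;
      }
      return merged;
    }

    it('prefers real input_tokens reported in message_delta', () => {
      const fromMessageStart: Usage = { input_tokens: 1, output_tokens: 0 };
      const fromMessageDelta: Usage = { input_tokens: 10, output_tokens: 74 };
      expect(mergeUsage(fromMessageStart, fromMessageDelta)).toEqual({
        input_tokens: 10,
        output_tokens: 74,
      });
    });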

