Update the correct naming for llama 3 70b instruct models#5074
Greptile Overview
Summary
This PR correctly differentiates between two distinct Llama 3.3 70B model variants that were previously conflated:
- llama-3.3-70b-versatile - Groq's optimized variant with 131K context and 32K max output tokens
- llama-3.3-70b-instruct - Meta's standard instruct model with 128K context and 16K max output tokens
The changes ensure accurate model identification across providers:
- Groq now correctly maps to the "versatile" variant
- Novita and OpenRouter correctly map to the standard "instruct" model
Test snapshots have been regenerated to reflect the model registry now tracking both variants separately.
Confidence Score: 5/5
- This PR is safe to merge with no issues found
- The changes are straightforward and correct: they properly distinguish between two model variants that have different specifications. All provider mappings are accurate, test snapshots are appropriately updated, and the implementation follows the existing codebase patterns.
- No files require special attention
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| packages/cost/models/authors/meta/llama/models.ts | 4/5 | Correctly splits llama-3.3-70b into two distinct models: versatile (131K context, 32K max output) and instruct (128K context, 16K max output) variants with appropriate descriptions |
| packages/cost/models/authors/meta/llama/endpoints.ts | 5/5 | Updates Groq endpoint key from llama-3.3-70b-instruct to llama-3.3-70b-versatile to match Groq's actual model identifier |
| packages/tests/cost/snapshots/registrySnapshots.test.ts.snap | 5/5 | Test snapshot correctly updated to reflect the model split - Groq now offers versatile variant, while Novita and OpenRouter offer the standard instruct model |
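The split described in the table above can be sketched in TypeScript. This is an illustrative model, not the actual Helicone registry schema; the `ModelConfig` interface, field names, and the `providersFor` helper are hypothetical, and the token counts are the approximate figures quoted in the review (131K/32K for versatile, 128K/16K for instruct).

```typescript
// Hypothetical sketch of the registry split; field names and the
// helper are illustrative, not the real packages/cost schema.
interface ModelConfig {
  contextLength: number;
  maxOutputTokens: number;
  providers: string[];
}

const llamaModels: Record<string, ModelConfig> = {
  // Groq's optimized variant (~131K context, ~32K max output)
  "llama-3.3-70b-versatile": {
    contextLength: 131_072,
    maxOutputTokens: 32_768,
    providers: ["groq"],
  },
  // Meta's standard instruct model (~128K context, ~16K max output)
  "llama-3.3-70b-instruct": {
    contextLength: 128_000,
    maxOutputTokens: 16_384,
    providers: ["novita", "openrouter"],
  },
};

// Look up which providers serve a given model ID.
function providersFor(model: string): string[] {
  return llamaModels[model]?.providers ?? [];
}
```

With two distinct entries, a lookup for the "versatile" ID resolves only to Groq, while the "instruct" ID resolves to Novita and OpenRouter, which is exactly the mapping the snapshot update verifies.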
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant Registry
    participant Groq
    participant Novita
    participant OpenRouter
    Note over Registry: Before: Single model entry<br/>"llama-3.3-70b-instruct"<br/>mapped to all providers
    User->>Registry: Request llama-3.3-70b model info
    Registry->>Groq: Returns "llama-3.3-70b-instruct"
    Note over Groq: ❌ Incorrect!<br/>Groq uses "versatile" variant
    Note over Registry: After: Two distinct model entries
    User->>Registry: Request llama-3.3-70b-versatile
    Registry->>Groq: Returns "llama-3.3-70b-versatile"
    Note over Groq: ✅ Correct!<br/>131K context, 32K output
    User->>Registry: Request llama-3.3-70b-instruct
    Registry->>Novita: Returns "llama-3.3-70b-instruct"
    Registry->>OpenRouter: Returns "llama-3.3-70b-instruct"
    Note over Novita,OpenRouter: ✅ Correct!<br/>128K context, 16K output
```
2 files reviewed, no comments

We already had the meta-llama/llama-3.3-70b-instruct model integrated, but the naming was wrong because the Groq provider actually offers the versatile version.
This PR fixes that so we display both variants correctly.