Conversation
Pull request overview
This PR adds support for local model testing via LM Studio, enabling developers to test fine-tuned models that aren't available through the AI SDK. The changes refactor the existing gateway-based model selection into a provider abstraction and add LM Studio as a second provider option.
Key changes:
- Added provider abstraction layer with support for Vercel AI Gateway and LM Studio
- Refactored model selection and pricing logic into separate provider modules
- Enhanced result metadata to track provider type and configuration
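For readers unfamiliar with the pattern, a provider abstraction along these lines might look roughly as follows. This is a hypothetical sketch; the interface name, method shapes, and model ID are illustrative and not taken from the PR:

```typescript
// Hypothetical sketch of a provider abstraction (names are illustrative).
interface ModelProvider {
  name: string;
  // List the model IDs this provider can serve.
  listModels(): Promise<string[]>;
  // Per-million-token pricing; null for local models, which cost nothing.
  pricing(modelId: string): { input: number; output: number } | null;
}

const lmstudio: ModelProvider = {
  name: "lmstudio",
  // A real implementation would query the local LM Studio server.
  listModels: async () => ["qwen2.5-7b-instruct"],
  // Local inference has no per-token cost.
  pricing: () => null,
};
```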
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| lib/providers/lmstudio.ts | New provider module for LM Studio integration with local model discovery and selection |
| lib/providers/ai-gateway.ts | Extracted and refactored Vercel AI Gateway logic from index.ts into a dedicated provider module |
| index.ts | Refactored to use provider abstraction, added provider selection UI, and updated result metadata to include provider information |
paoloricciuti left a comment
Overall this looks good; I have one minor comment, but I'm approving already. I agree we need something like this.
```ts
const customUrl = await confirm({
  message: "Use custom LM Studio URL? (default: http://localhost:1234/v1)",
  initialValue: false,
});

if (isCancel(customUrl)) {
  cancel("Operation cancelled.");
  process.exit(0);
}

let baseURL = "http://localhost:1234/v1";

if (customUrl) {
  const urlInput = await text({
    message: "Enter LM Studio server URL",
    placeholder: "http://localhost:1234/v1",
  });

  if (isCancel(urlInput)) {
    cancel("Operation cancelled.");
    process.exit(0);
  }

  baseURL = urlInput || "http://localhost:1234/v1";
}
```
Instead of asking two questions, we could ask for the LM Studio URL directly and prefill it with the default, no?
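The single-question approach could be sketched roughly like this (assuming @clack/prompts, which the diff appears to use; its `text` prompt accepts an `initialValue` option). The prompt call is shown in a comment, and the fallback logic is factored into a small helper:

```typescript
// Sketch only: replaces the confirm + text pair with one prefilled prompt.
// Assumes @clack/prompts, whose `text` accepts an `initialValue` option:
//
//   const urlInput = await text({
//     message: "LM Studio server URL",
//     initialValue: DEFAULT_LMSTUDIO_URL,
//   });
//
// The answer then only needs an empty-string fallback:

const DEFAULT_LMSTUDIO_URL = "http://localhost:1234/v1";

function resolveBaseURL(input: string | undefined): string {
  // Empty or missing input falls back to the default LM Studio URL.
  const trimmed = input?.trim();
  return trimmed ? trimmed : DEFAULT_LMSTUDIO_URL;
}
```

This way the user just presses Enter to accept the default, or edits the URL in place.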
Prompt cache cost estimation
Let's pause this for now; there were lots of other changes made that make this difficult to merge.
I know we said we could just use the AI SDK, but if we want to test fine-tunes we can't use that, so I still think it's good to have a way to do this, since LM Studio can run anything from HF.