@@ -23,7 +23,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Function calling, JSON mode, vision, DALL-E, embeddings
 - **Pricing**: Pay per token
 - **Best for**: General purpose, production applications
-- **Setup**: [OpenAI Setup Guide](setup.md#openai)
+- **Setup**: [OpenAI Setup Guide]({{ site.baseurl }}/setup.md#openai)

 #### Anthropic
 - **Models**:
@@ -34,7 +34,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: 200K+ context, vision support
 - **Pricing**: Pay per token
 - **Best for**: Long context, careful reasoning
-- **Setup**: [Anthropic Setup Guide](setup.md#anthropic)
+- **Setup**: [Anthropic Setup Guide]({{ site.baseurl }}/setup.md#anthropic)

 #### Google AI Studio
 - **Models**:
@@ -45,7 +45,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Multimodal, 1M+ context, JSON mode
 - **Pricing**: Free tier available
 - **Best for**: Multimodal tasks, long context
-- **Setup**: [Google Setup Guide](setup.md#google)
+- **Setup**: [Google Setup Guide]({{ site.baseurl }}/setup.md#google)

 #### Mistral
 - **Models**:
@@ -56,7 +56,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: European hosting, function calling
 - **Pricing**: Pay per token
 - **Best for**: EU compliance, multilingual
-- **Setup**: [Mistral Setup Guide](setup.md#mistral)
+- **Setup**: [Mistral Setup Guide]({{ site.baseurl }}/setup.md#mistral)

 ### ⚡ Fast Inference Providers

@@ -69,7 +69,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Ultra-fast LPU inference, 10x faster
 - **Pricing**: Pay per token
 - **Best for**: Real-time applications, low latency
-- **Setup**: [Groq Setup Guide](setup.md#groq)
+- **Setup**: [Groq Setup Guide]({{ site.baseurl }}/setup.md#groq)

 #### Together AI
 - **Models**:
@@ -82,7 +82,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Open source models, custom fine-tunes
 - **Pricing**: Simple per-token pricing
 - **Best for**: Open source models, research
-- **Setup**: [Together Setup Guide](setup.md#together)
+- **Setup**: [Together Setup Guide]({{ site.baseurl }}/setup.md#together)

 #### Fireworks
 - **Models**:
@@ -94,7 +94,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Optimized inference, function calling
 - **Pricing**: Competitive per-token
 - **Best for**: Fast open model serving
-- **Setup**: [Fireworks Setup Guide](setup.md#fireworks)
+- **Setup**: [Fireworks Setup Guide]({{ site.baseurl }}/setup.md#fireworks)

 #### Anyscale
 - **Models**:
@@ -105,7 +105,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Ray integration, schema-based JSON
 - **Pricing**: $1/million tokens flat rate
 - **Best for**: Scale-out workloads
-- **Setup**: [Anyscale Setup Guide](setup.md#anyscale)
+- **Setup**: [Anyscale Setup Guide]({{ site.baseurl }}/setup.md#anyscale)

 ### 🌐 Specialized Providers

@@ -117,7 +117,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: 128K context window
 - **Pricing**: Premium
 - **Best for**: Large context, reasoning
-- **Setup**: [X.AI Setup Guide](setup.md#xai)
+- **Setup**: [X.AI Setup Guide]({{ site.baseurl }}/setup.md#xai)

 #### Perplexity
 - **Models**:
@@ -127,7 +127,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Real-time web access, citations
 - **Pricing**: Pay per request
 - **Best for**: Current information, research
-- **Setup**: [Perplexity Setup Guide](setup.md#perplexity)
+- **Setup**: [Perplexity Setup Guide]({{ site.baseurl }}/setup.md#perplexity)

 #### DeepSeek
 - **Models**:
@@ -137,7 +137,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Chinese/English bilingual
 - **Pricing**: Competitive
 - **Best for**: Chinese language, coding
-- **Setup**: [DeepSeek Setup Guide](setup.md#deepseek)
+- **Setup**: [DeepSeek Setup Guide]({{ site.baseurl }}/setup.md#deepseek)

 #### Moonshot
 - **Models**:
@@ -148,7 +148,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Long-context (200K+ tokens), Chinese/English bilingual, vision support
 - **Pricing**: Cost-effective (~5x cheaper than Claude/Gemini)
 - **Best for**: Long-context processing, Chinese language, document analysis
-- **Setup**: [Moonshot Setup Guide](setup.md#moonshot)
+- **Setup**: [Moonshot Setup Guide]({{ site.baseurl }}/setup.md#moonshot)

 #### GLM (Zhipu AI)
 - **Models**:
@@ -158,7 +158,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Chinese/English bilingual, streaming, function calling, vision
 - **Pricing**: Competitive
 - **Best for**: Chinese language tasks, cost-effective inference
-- **Setup**: [GLM Setup Guide](setup.md#glm)
+- **Setup**: [GLM Setup Guide]({{ site.baseurl }}/setup.md#glm)

 #### Cohere
 - **Models**:
@@ -167,7 +167,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: RAG optimization, embeddings
 - **Pricing**: Enterprise/startup plans
 - **Best for**: Enterprise NLP, search
-- **Setup**: [Cohere Setup Guide](setup.md#cohere)
+- **Setup**: [Cohere Setup Guide]({{ site.baseurl }}/setup.md#cohere)

 ### 🌍 Multi-Provider Gateways

@@ -179,7 +179,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Unified billing, free models
 - **Pricing**: Small markup on provider prices
 - **Best for**: Model exploration, fallbacks
-- **Setup**: [OpenRouter Setup Guide](setup.md#openrouter)
+- **Setup**: [OpenRouter Setup Guide]({{ site.baseurl }}/setup.md#openrouter)

 #### Vercel AI Gateway
 - **Models**:
@@ -192,7 +192,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Unified billing, streaming, function calling, vision
 - **Pricing**: Provider passthrough with optional markup
 - **Best for**: Production deployments, unified billing
-- **Setup**: [Vercel Setup Guide](setup.md#vercel)
+- **Setup**: [Vercel Setup Guide]({{ site.baseurl }}/setup.md#vercel)

 ### ☁️ Enterprise Cloud

@@ -205,7 +205,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Enterprise SLA, VNet integration
 - **Pricing**: Same as OpenAI
 - **Best for**: Enterprise, compliance
-- **Setup**: [Azure Setup Guide](setup.md#azure)
+- **Setup**: [Azure Setup Guide]({{ site.baseurl }}/setup.md#azure)

 #### AWS Bedrock
 - **Models**:
@@ -217,7 +217,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: AWS integration, multiple providers
 - **Pricing**: Pay per use
 - **Best for**: AWS ecosystem
-- **Setup**: [Bedrock Setup Guide](setup.md#bedrock)
+- **Setup**: [Bedrock Setup Guide]({{ site.baseurl }}/setup.md#bedrock)

 #### Google Vertex AI
 - **Models**:
@@ -227,7 +227,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: MLOps platform, enterprise
 - **Pricing**: Enterprise pricing
 - **Best for**: GCP ecosystem
-- **Setup**: [Vertex AI Setup Guide](setup.md#vertex)
+- **Setup**: [Vertex AI Setup Guide]({{ site.baseurl }}/setup.md#vertex)

 ### 💻 Local Providers

@@ -241,7 +241,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Local hosting, model management
 - **Pricing**: Free (self-hosted)
 - **Best for**: Privacy, offline use
-- **Setup**: [Ollama Setup Guide](setup.md#ollama)
+- **Setup**: [Ollama Setup Guide]({{ site.baseurl }}/setup.md#ollama)

 #### llama.cpp
 - **Models**:
@@ -253,7 +253,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
 - **Features**: Direct inference, GPU support
 - **Pricing**: Free (self-hosted)
 - **Best for**: Maximum control, embedded
-- **Setup**: [llama.cpp Setup Guide](setup.md#llama-cpp)
+- **Setup**: [llama.cpp Setup Guide]({{ site.baseurl }}/setup.md#llama-cpp)

 ## Provider Comparison

@@ -412,7 +412,7 @@ response = client.chat.completions.create(

 ## Next Steps

-- [Provider Setup](setup.md) - Detailed setup instructions
-- [Provider Capabilities](capabilities.md) - Feature comparison matrix
-- [Examples](../examples/providers.md) - Provider-specific examples
-- [Best Practices](../guides/best-practices.md) - Choosing providers
+- [Provider Setup]({% link providers/setup.md %}) - Detailed setup instructions
+- [Provider Capabilities]({% link providers/capabilities.md %}) - Feature comparison matrix
+- [Examples]({% link examples/providers.md %}) - Provider-specific examples
+- [Best Practices]({% link guides/best-practices.md %}) - Choosing providers
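For reference, the two Liquid link forms used in the updated links resolve differently at build time. A minimal sketch, assuming the docs are published with Jekyll (so `site.baseurl` is whatever `baseurl` is set to in `_config.yml`) and that the referenced files exist at the paths shown:

```liquid
<!-- Plain variable substitution: prepends the configured baseurl, performs no existence check -->
[OpenAI Setup Guide]({{ site.baseurl }}/setup.md#openai)

<!-- link tag: resolves a source file to its generated URL and fails the build if the file is missing -->
[Provider Setup]({% link providers/setup.md %})
```

The `{% link %}` form catches renamed or moved files at build time, whereas the `{{ site.baseurl }}` form is plain string concatenation and will emit a dead link without complaint; in Jekyll 4 and later, the `link` tag also prepends `baseurl` on its own.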