
Commit ae2d0cd

ranaroussi and claude committed
Fix Jekyll baseurl handling for all documentation links
Updated all relative markdown links to use {{ site.baseurl }} to properly handle the /onellm baseurl in Jekyll.

Changes:
- Converted all relative .md links to use the {{ site.baseurl }}/ prefix
- Ensures links work correctly with the baseurl: "/onellm" configuration
- Affects 14 documentation files across the docs directory

This fixes broken navigation links in the deployed GitHub Pages documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
1 parent e1d2177 commit ae2d0cd
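For reference, the pattern this commit applies is sketched below. The `_config.yml` setting is an assumption inferred from the commit message (the config file itself is not part of this diff); the link lines mirror the changes in the hunks that follow.

```markdown
<!-- Assumed Jekyll configuration (per the commit message, not shown in this diff):
     baseurl: "/onellm" -->

<!-- Before: a plain relative link, resolved against whatever page it appears on -->
[Installation](installation.md)

<!-- After: prefixed with the site baseurl, so Liquid renders the href as /onellm/installation.md -->
[Installation]({{ site.baseurl }}/installation.md)
```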

File tree

14 files changed: +91 -91 lines


docs/README.md (29 additions, 29 deletions)

@@ -25,44 +25,44 @@ Welcome to the OneLLM documentation! OneLLM is a unified interface for 300+ LLMs

### Getting Started

28-  - [Installation](installation.md) - How to install OneLLM
29-  - [Quick Start](quickstart.md) - Get up and running in 5 minutes
30-  - [Configuration](configuration.md) - Setting up API keys and options
28+  - [Installation]({{ site.baseurl }}/installation.md) - How to install OneLLM
29+  - [Quick Start]({{ site.baseurl }}/quickstart.md) - Get up and running in 5 minutes
30+  - [Configuration]({{ site.baseurl }}/configuration.md) - Setting up API keys and options

### Core Concepts

34-  - [Architecture](architecture.md) - How OneLLM works under the hood
35-  - [Provider System](providers/README.md) - Understanding providers and models
36-  - [Error Handling](error-handling.md) - Handling errors gracefully
34+  - [Architecture]({{ site.baseurl }}/architecture.md) - How OneLLM works under the hood
35+  - [Provider System]({{ site.baseurl }}/providers/README.md) - Understanding providers and models
36+  - [Error Handling]({{ site.baseurl }}/error-handling.md) - Handling errors gracefully

### API Reference

40-  - [Client API](api/client.md) - OpenAI-compatible client interface
41-  - [Chat Completions](api/chat-completions.md) - Chat completion methods
42-  - [Completions](api/completions.md) - Text completion methods
43-  - [Embeddings](api/embeddings.md) - Embedding generation
44-  - [Files](api/files.md) - File operations
45-  - [Audio](api/audio.md) - Speech-to-text and text-to-speech
46-  - [Images](api/images.md) - Image generation
40+  - [Client API]({{ site.baseurl }}/api/client.md) - OpenAI-compatible client interface
41+  - [Chat Completions]({{ site.baseurl }}/api/chat-completions.md) - Chat completion methods
42+  - [Completions]({{ site.baseurl }}/api/completions.md) - Text completion methods
43+  - [Embeddings]({{ site.baseurl }}/api/embeddings.md) - Embedding generation
44+  - [Files]({{ site.baseurl }}/api/files.md) - File operations
45+  - [Audio]({{ site.baseurl }}/api/audio.md) - Speech-to-text and text-to-speech
46+  - [Images]({{ site.baseurl }}/api/images.md) - Image generation

### Providers

50-  - [Provider List](providers/README.md) - All 18 supported providers
51-  - [Provider Capabilities](providers/capabilities.md) - Feature comparison
52-  - [Provider Setup](providers/setup.md) - Setting up each provider
50+  - [Provider List]({{ site.baseurl }}/providers/README.md) - All 18 supported providers
51+  - [Provider Capabilities]({{ site.baseurl }}/providers/capabilities.md) - Feature comparison
52+  - [Provider Setup]({{ site.baseurl }}/providers/setup.md) - Setting up each provider

### Guides

56-  - [Migration Guide](guides/migration.md) - Migrating from OpenAI
57-  - [Best Practices](guides/best-practices.md) - Tips and recommendations
58-  - [Advanced Usage](guides/advanced.md) - Advanced features
59-  - [Troubleshooting](guides/troubleshooting.md) - Common issues
56+  - [Migration Guide]({{ site.baseurl }}/guides/migration.md) - Migrating from OpenAI
57+  - [Best Practices]({{ site.baseurl }}/guides/best-practices.md) - Tips and recommendations
58+  - [Advanced Usage]({{ site.baseurl }}/guides/advanced.md) - Advanced features
59+  - [Troubleshooting]({{ site.baseurl }}/guides/troubleshooting.md) - Common issues

### Examples

63-  - [Basic Examples](examples/basic.md) - Simple usage examples
64-  - [Provider Examples](examples/providers.md) - Provider-specific examples
65-  - [Advanced Examples](examples/advanced.md) - Complex use cases
63+  - [Basic Examples]({{ site.baseurl }}/examples/basic.md) - Simple usage examples
64+  - [Provider Examples]({{ site.baseurl }}/examples/providers.md) - Provider-specific examples
65+  - [Advanced Examples]({{ site.baseurl }}/examples/advanced.md) - Complex use cases

## 🚀 Quick Links

@@ -82,15 +82,15 @@ Welcome to the OneLLM documentation! OneLLM is a unified interface for 300+ LLMs

## 📖 How to Use This Documentation

85-  1. **New Users**: Start with [Installation](installation.md) and [Quick Start](quickstart.md)
86-  2. **Migrating**: Check the [Migration Guide](guides/migration.md)
87-  3. **API Reference**: Use the [API docs](api/client.md) for detailed method information
88-  4. **Provider Setup**: See [Provider Setup](providers/setup.md) for configuration
89-  5. **Examples**: Browse [Examples](examples/basic.md) for practical usage
85+  1. **New Users**: Start with [Installation]({{ site.baseurl }}/installation.md) and [Quick Start]({{ site.baseurl }}/quickstart.md)
86+  2. **Migrating**: Check the [Migration Guide]({{ site.baseurl }}/guides/migration.md)
87+  3. **API Reference**: Use the [API docs]({{ site.baseurl }}/api/client.md) for detailed method information
88+  4. **Provider Setup**: See [Provider Setup]({{ site.baseurl }}/providers/setup.md) for configuration
89+  5. **Examples**: Browse [Examples]({{ site.baseurl }}/examples/basic.md) for practical usage

## 🤝 Contributing

93-  We welcome contributions! Please see our [Contributing Guide](../CONTRIBUTING.md) for details.
93+  We welcome contributions! Please see our [Contributing Guide]({{ site.baseurl }}/CONTRIBUTING.md) for details.

## 📝 License
docs/advanced-features.md (3 additions, 3 deletions)

@@ -447,6 +447,6 @@ except OneLLMError as e:

## Next Steps

450-  - [Provider Capabilities](providers/capabilities.md) - Compare provider features
451-  - [Error Handling](error-handling.md) - Handle errors gracefully
452-  - [Best Practices](guides/best-practices.md) - Production recommendations
450+  - [Provider Capabilities]({{ site.baseurl }}/providers/capabilities.md) - Compare provider features
451+  - [Error Handling]({{ site.baseurl }}/error-handling.md) - Handle errors gracefully
452+  - [Best Practices]({{ site.baseurl }}/guides/best-practices.md) - Production recommendations

docs/api/chat-completions.md (4 additions, 4 deletions)

@@ -447,7 +447,7 @@ response = client.chat.completions.create(

## Next Steps

450-  - [Streaming](../guides/streaming.md) - Detailed streaming guide
451-  - [Function Calling](../guides/function-calling.md) - Advanced function calling
452-  - [Error Handling](../error-handling.md) - Handle errors properly
453-  - [Provider Capabilities](../providers/capabilities.md) - Provider-specific features
450+  - [Streaming]({{ site.baseurl }}/guides/streaming.md) - Detailed streaming guide
451+  - [Function Calling]({{ site.baseurl }}/guides/function-calling.md) - Advanced function calling
452+  - [Error Handling]({{ site.baseurl }}/error-handling.md) - Handle errors properly
453+  - [Provider Capabilities]({{ site.baseurl }}/providers/capabilities.md) - Provider-specific features

docs/api/client.md (4 additions, 4 deletions)

@@ -285,7 +285,7 @@ response = client.chat.completions.create(...)

## Next Steps

288-  - [Chat Completions](chat-completions.md) - Detailed chat API
289-  - [Embeddings](embeddings.md) - Generate embeddings
290-  - [Error Handling](../error-handling.md) - Handle errors properly
291-  - [Examples](../examples/basic.md) - See more examples
288+  - [Chat Completions]({{ site.baseurl }}/chat-completions.md) - Detailed chat API
289+  - [Embeddings]({{ site.baseurl }}/embeddings.md) - Generate embeddings
290+  - [Error Handling]({{ site.baseurl }}/error-handling.md) - Handle errors properly
291+  - [Examples]({{ site.baseurl }}/examples/basic.md) - See more examples

docs/architecture.md (3 additions, 3 deletions)

@@ -512,6 +512,6 @@ The architecture is designed to be extensible:

## Next Steps

515-  - [Provider System](providers/README.md) - Detailed provider documentation
516-  - [API Reference](api/client.md) - API documentation
517-  - [Contributing](../CONTRIBUTING.md) - How to extend OneLLM
515+  - [Provider System]({{ site.baseurl }}/providers/README.md) - Detailed provider documentation
516+  - [API Reference]({{ site.baseurl }}/api/client.md) - API documentation
517+  - [Contributing]({{ site.baseurl }}/CONTRIBUTING.md) - How to extend OneLLM

docs/configuration.md (3 additions, 3 deletions)

@@ -388,6 +388,6 @@ for provider in providers:

## Next Steps

391-  - [Provider Setup](providers/setup.md) - Detailed provider configuration
392-  - [Best Practices](guides/best-practices.md) - Configuration best practices
393-  - [Troubleshooting](guides/troubleshooting.md) - Common configuration issues
391+  - [Provider Setup]({{ site.baseurl }}/providers/setup.md) - Detailed provider configuration
392+  - [Best Practices]({{ site.baseurl }}/guides/best-practices.md) - Configuration best practices
393+  - [Troubleshooting]({{ site.baseurl }}/guides/troubleshooting.md) - Common configuration issues

docs/error-handling.md (3 additions, 3 deletions)

@@ -415,6 +415,6 @@ def test_rate_limit_handling():

## Next Steps

418-  - [Best Practices](guides/best-practices.md) - Error handling best practices
419-  - [Troubleshooting](guides/troubleshooting.md) - Common error solutions
420-  - [API Reference](api/client.md) - Complete API documentation
418+  - [Best Practices]({{ site.baseurl }}/guides/best-practices.md) - Error handling best practices
419+  - [Troubleshooting]({{ site.baseurl }}/guides/troubleshooting.md) - Common error solutions
420+  - [API Reference]({{ site.baseurl }}/api/client.md) - Complete API documentation

docs/guides/migration.md (3 additions, 3 deletions)

@@ -377,6 +377,6 @@ The response includes the full model name (e.g., "openai/gpt-4") showing which p

## Next Steps

380-  - [Provider Setup](../providers/setup.md) - Configure additional providers
381-  - [Advanced Features](../advanced-features.md) - Learn about fallbacks and retries
382-  - [Best Practices](best-practices.md) - Optimize your usage
380+  - [Provider Setup]({{ site.baseurl }}/providers/setup.md) - Configure additional providers
381+  - [Advanced Features]({{ site.baseurl }}/advanced-features.md) - Learn about fallbacks and retries
382+  - [Best Practices]({{ site.baseurl }}/best-practices.md) - Optimize your usage

docs/installation.md (3 additions, 3 deletions)

@@ -246,6 +246,6 @@ pip uninstall onellm

## Next Steps

249-  - [Quick Start Guide](quickstart.md) - Get started with OneLLM
250-  - [Configuration](configuration.md) - Configure providers and settings
251-  - [Provider Setup](providers/setup.md) - Set up specific providers
249+  - [Quick Start Guide]({{ site.baseurl }}/quickstart.md) - Get started with OneLLM
250+  - [Configuration]({{ site.baseurl }}/configuration.md) - Configure providers and settings
251+  - [Provider Setup]({{ site.baseurl }}/providers/setup.md) - Set up specific providers

docs/providers/README.md (25 additions, 25 deletions)

@@ -23,7 +23,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Function calling, JSON mode, vision, DALL-E, embeddings
- **Pricing**: Pay per token
- **Best for**: General purpose, production applications
26-  - **Setup**: [OpenAI Setup Guide](setup.md#openai)
26+  - **Setup**: [OpenAI Setup Guide]({{ site.baseurl }}/setup.md#openai)

#### Anthropic
- **Models**:

@@ -34,7 +34,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: 200K+ context, vision support
- **Pricing**: Pay per token
- **Best for**: Long context, careful reasoning
37-  - **Setup**: [Anthropic Setup Guide](setup.md#anthropic)
37+  - **Setup**: [Anthropic Setup Guide]({{ site.baseurl }}/setup.md#anthropic)

#### Google AI Studio
- **Models**:

@@ -45,7 +45,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Multimodal, 1M+ context, JSON mode
- **Pricing**: Free tier available
- **Best for**: Multimodal tasks, long context
48-  - **Setup**: [Google Setup Guide](setup.md#google)
48+  - **Setup**: [Google Setup Guide]({{ site.baseurl }}/setup.md#google)

#### Mistral
- **Models**:

@@ -56,7 +56,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: European hosting, function calling
- **Pricing**: Pay per token
- **Best for**: EU compliance, multilingual
59-  - **Setup**: [Mistral Setup Guide](setup.md#mistral)
59+  - **Setup**: [Mistral Setup Guide]({{ site.baseurl }}/setup.md#mistral)

### ⚡ Fast Inference Providers

@@ -69,7 +69,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Ultra-fast LPU inference, 10x faster
- **Pricing**: Pay per token
- **Best for**: Real-time applications, low latency
72-  - **Setup**: [Groq Setup Guide](setup.md#groq)
72+  - **Setup**: [Groq Setup Guide]({{ site.baseurl }}/setup.md#groq)

#### Together AI
- **Models**:

@@ -82,7 +82,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Open source models, custom fine-tunes
- **Pricing**: Simple per-token pricing
- **Best for**: Open source models, research
85-  - **Setup**: [Together Setup Guide](setup.md#together)
85+  - **Setup**: [Together Setup Guide]({{ site.baseurl }}/setup.md#together)

#### Fireworks
- **Models**:

@@ -94,7 +94,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Optimized inference, function calling
- **Pricing**: Competitive per-token
- **Best for**: Fast open model serving
97-  - **Setup**: [Fireworks Setup Guide](setup.md#fireworks)
97+  - **Setup**: [Fireworks Setup Guide]({{ site.baseurl }}/setup.md#fireworks)

#### Anyscale
- **Models**:

@@ -105,7 +105,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Ray integration, schema-based JSON
- **Pricing**: $1/million tokens flat rate
- **Best for**: Scale-out workloads
108-  - **Setup**: [Anyscale Setup Guide](setup.md#anyscale)
108+  - **Setup**: [Anyscale Setup Guide]({{ site.baseurl }}/setup.md#anyscale)

### 🌐 Specialized Providers

@@ -117,7 +117,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: 128K context window
- **Pricing**: Premium
- **Best for**: Large context, reasoning
120-  - **Setup**: [X.AI Setup Guide](setup.md#xai)
120+  - **Setup**: [X.AI Setup Guide]({{ site.baseurl }}/setup.md#xai)

#### Perplexity
- **Models**:

@@ -127,7 +127,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Real-time web access, citations
- **Pricing**: Pay per request
- **Best for**: Current information, research
130-  - **Setup**: [Perplexity Setup Guide](setup.md#perplexity)
130+  - **Setup**: [Perplexity Setup Guide]({{ site.baseurl }}/setup.md#perplexity)

#### DeepSeek
- **Models**:

@@ -137,7 +137,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Chinese/English bilingual
- **Pricing**: Competitive
- **Best for**: Chinese language, coding
140-  - **Setup**: [DeepSeek Setup Guide](setup.md#deepseek)
140+  - **Setup**: [DeepSeek Setup Guide]({{ site.baseurl }}/setup.md#deepseek)

#### Moonshot
- **Models**:

@@ -148,7 +148,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Long-context (200K+ tokens), Chinese/English bilingual, vision support
- **Pricing**: Cost-effective (~5x cheaper than Claude/Gemini)
- **Best for**: Long-context processing, Chinese language, document analysis
151-  - **Setup**: [Moonshot Setup Guide](setup.md#moonshot)
151+  - **Setup**: [Moonshot Setup Guide]({{ site.baseurl }}/setup.md#moonshot)

#### GLM (Zhipu AI)
- **Models**:

@@ -158,7 +158,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Chinese/English bilingual, streaming, function calling, vision
- **Pricing**: Competitive
- **Best for**: Chinese language tasks, cost-effective inference
161-  - **Setup**: [GLM Setup Guide](setup.md#glm)
161+  - **Setup**: [GLM Setup Guide]({{ site.baseurl }}/setup.md#glm)

#### Cohere
- **Models**:

@@ -167,7 +167,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: RAG optimization, embeddings
- **Pricing**: Enterprise/startup plans
- **Best for**: Enterprise NLP, search
170-  - **Setup**: [Cohere Setup Guide](setup.md#cohere)
170+  - **Setup**: [Cohere Setup Guide]({{ site.baseurl }}/setup.md#cohere)

### 🌍 Multi-Provider Gateways

@@ -179,7 +179,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Unified billing, free models
- **Pricing**: Small markup on provider prices
- **Best for**: Model exploration, fallbacks
182-  - **Setup**: [OpenRouter Setup Guide](setup.md#openrouter)
182+  - **Setup**: [OpenRouter Setup Guide]({{ site.baseurl }}/setup.md#openrouter)

#### Vercel AI Gateway
- **Models**:

@@ -192,7 +192,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Unified billing, streaming, function calling, vision
- **Pricing**: Provider passthrough with optional markup
- **Best for**: Production deployments, unified billing
195-  - **Setup**: [Vercel Setup Guide](setup.md#vercel)
195+  - **Setup**: [Vercel Setup Guide]({{ site.baseurl }}/setup.md#vercel)

### ☁️ Enterprise Cloud

@@ -205,7 +205,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Enterprise SLA, VNet integration
- **Pricing**: Same as OpenAI
- **Best for**: Enterprise, compliance
208-  - **Setup**: [Azure Setup Guide](setup.md#azure)
208+  - **Setup**: [Azure Setup Guide]({{ site.baseurl }}/setup.md#azure)

#### AWS Bedrock
- **Models**:

@@ -217,7 +217,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: AWS integration, multiple providers
- **Pricing**: Pay per use
- **Best for**: AWS ecosystem
220-  - **Setup**: [Bedrock Setup Guide](setup.md#bedrock)
220+  - **Setup**: [Bedrock Setup Guide]({{ site.baseurl }}/setup.md#bedrock)

#### Google Vertex AI
- **Models**:

@@ -227,7 +227,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: MLOps platform, enterprise
- **Pricing**: Enterprise pricing
- **Best for**: GCP ecosystem
230-  - **Setup**: [Vertex AI Setup Guide](setup.md#vertex)
230+  - **Setup**: [Vertex AI Setup Guide]({{ site.baseurl }}/setup.md#vertex)

### 💻 Local Providers

@@ -241,7 +241,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Local hosting, model management
- **Pricing**: Free (self-hosted)
- **Best for**: Privacy, offline use
244-  - **Setup**: [Ollama Setup Guide](setup.md#ollama)
244+  - **Setup**: [Ollama Setup Guide]({{ site.baseurl }}/setup.md#ollama)

#### llama.cpp
- **Models**:

@@ -253,7 +253,7 @@ OneLLM supports 21 providers, giving you access to 300+ language models through
- **Features**: Direct inference, GPU support
- **Pricing**: Free (self-hosted)
- **Best for**: Maximum control, embedded
256-  - **Setup**: [llama.cpp Setup Guide](setup.md#llama-cpp)
256+  - **Setup**: [llama.cpp Setup Guide]({{ site.baseurl }}/setup.md#llama-cpp)

## Provider Comparison

@@ -412,7 +412,7 @@ response = client.chat.completions.create(

## Next Steps

415-  - [Provider Setup](setup.md) - Detailed setup instructions
416-  - [Provider Capabilities](capabilities.md) - Feature comparison matrix
417-  - [Examples](../examples/providers.md) - Provider-specific examples
418-  - [Best Practices](../guides/best-practices.md) - Choosing providers
415+  - [Provider Setup]({% link providers/setup.md %}) - Detailed setup instructions
416+  - [Provider Capabilities]({% link providers/capabilities.md %}) - Feature comparison matrix
417+  - [Examples]({% link examples/providers.md %}) - Provider-specific examples
418+  - [Best Practices]({% link guides/best-practices.md %}) - Choosing providers
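Note that this last file differs from the rest of the commit: the Next Steps links in docs/providers/README.md use Jekyll's `{% link %}` tag instead of the `{{ site.baseurl }}` prefix. A minimal sketch of the distinction, with the version-dependent behavior hedged in the comments (it is not stated anywhere in this commit):

```markdown
<!-- The link tag resolves a document by its source path and fails the build if the file is missing. -->
<!-- On Jekyll 4.0 and later, its output already includes site.baseurl: -->
[Provider Setup]({% link providers/setup.md %})

<!-- On Jekyll 3.x, the baseurl has to be prepended manually: -->
[Provider Setup]({{ site.baseurl }}{% link providers/setup.md %})
```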
