# Models.dev

A comprehensive database of AI model specifications, pricing, and capabilities.

## Contributing

We welcome contributions to expand our model database! Follow these steps to add a new model:

### Adding a New Model

#### 1. Create Provider (if it doesn't exist)

If the AI provider doesn't already exist in the `providers/` directory:

1. Create a new folder in `providers/` named after the provider's ID (e.g., `providers/newprovider/`)
2. Add a `provider.toml` file containing the provider information:

```toml
name = "Provider Name"
```
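
This results in a layout like the following, where `newprovider` is a hypothetical ID used purely for illustration (the `models/` directory is filled in the next step):

```
providers/
└── newprovider/
    ├── provider.toml
    └── models/
```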

#### 2. Add Model Definition

Create a new TOML file in the provider's `models/` directory, where the filename is the model ID:

```toml
name = "Model Display Name"
attachment = true # or false - supports file attachments
reasoning = false # or true - supports reasoning/chain-of-thought
temperature = true # or false - supports temperature parameter

[cost]
input = 3.00 # Cost per million input tokens (USD)
output = 15.00 # Cost per million output tokens (USD)
inputCached = 0.30 # Cost per million cached input tokens (USD)
outputCached = 0.30 # Cost per million cached output tokens (USD)

[limit]
context = 200_000 # Maximum context window (tokens)
output = 8_192 # Maximum output tokens
```
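
For example, a model with the hypothetical ID `example-model-1` for the provider above would live at `providers/newprovider/models/example-model-1.toml`.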

#### 3. Submit Pull Request

1. Fork this repository
2. Create a new branch for your changes
3. Add your provider and/or model files
4. Open a pull request with a clear description

### Validation

GitHub Actions will automatically validate your submission against our schema to ensure:

- All required fields are present
- Data types are correct
- Values are within acceptable ranges
- TOML syntax is valid
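
For instance, a model file like the following (illustrative only) would fail validation: `attachment` should be a boolean, and the `[limit]` table from the schema below is missing entirely:

```toml
name = "Broken Model"
attachment = "yes" # Wrong type: must be a boolean
reasoning = false
temperature = true

[cost]
input = 3.00
output = 15.00
inputCached = 0.30
outputCached = 0.30

# Missing [limit] table with context and output
```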

### Schema Reference

Models must conform to the following schema, defined in `app/schemas.ts` (a rough TypeScript sketch of these shapes follows the field lists):

**Provider Schema:**
- `name`: String - Display name of the provider

**Model Schema:**
- `name`: String - Display name of the model
- `attachment`: Boolean - Whether the model supports file attachments
- `reasoning`: Boolean - Whether the model supports reasoning capabilities
- `temperature`: Boolean - Whether the model supports temperature control
- `cost.input`: Number - Cost per million input tokens (USD)
- `cost.output`: Number - Cost per million output tokens (USD)
- `cost.inputCached`: Number - Cost per million cached input tokens (USD)
- `cost.outputCached`: Number - Cost per million cached output tokens (USD)
- `limit.context`: Number - Maximum context window in tokens
- `limit.output`: Number - Maximum output tokens
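
As an informal illustration only (the authoritative definitions live in `app/schemas.ts` and may be structured differently), these fields map onto TypeScript shapes roughly like this:

```typescript
// Illustrative sketch of the documented fields; not the actual contents of app/schemas.ts.
export interface Provider {
  name: string; // Display name of the provider
}

export interface Model {
  name: string;         // Display name of the model
  attachment: boolean;  // Supports file attachments
  reasoning: boolean;   // Supports reasoning / chain-of-thought
  temperature: boolean; // Supports the temperature parameter
  cost: {
    input: number;        // USD per million input tokens
    output: number;       // USD per million output tokens
    inputCached: number;  // USD per million cached input tokens
    outputCached: number; // USD per million cached output tokens
  };
  limit: {
    context: number; // Maximum context window, in tokens
    output: number;  // Maximum output tokens
  };
}
```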

### Examples

See existing providers in the `providers/` directory for reference:

- `providers/anthropic/` - Anthropic Claude models
- `providers/openai/` - OpenAI GPT models
- `providers/google/` - Google Gemini models

### Questions?

Open an issue if you need help or have questions about contributing.