A fast, interactive web app for comparing AI models across the metrics that actually matter when choosing what to use: price, context window, speed, benchmark performance, vision support, and free-tier availability.
Live demo: ai.tsatsin.com
When you're comparing models across providers, the information is usually scattered across pricing pages, benchmark sites, and launch posts. This project pulls the useful bits into one place so you can scan tradeoffs quickly instead of opening ten tabs.
Useful when you want to answer questions like:
- Which model gives the best price/performance tradeoff right now?
- How much more do you pay for a larger context window or better coding scores?
- Which providers have vision support, free tiers, or unusually fast output?
- Which model is worth testing before wiring it into a product or workflow?
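The price/performance questions above reduce to simple arithmetic over per-million-token rates. A minimal sketch, assuming illustrative prices rather than any real provider's rates:

```typescript
// Estimated cost of one request, given per-million-token prices.
// All numbers below are illustrative, not actual provider pricing.
function requestCost(
  inputPricePerM: number,   // USD per 1M input tokens
  outputPricePerM: number,  // USD per 1M output tokens
  inputTokens: number,
  outputTokens: number,
): number {
  return (inputTokens / 1_000_000) * inputPricePerM
       + (outputTokens / 1_000_000) * outputPricePerM;
}

// e.g. a 2,000-token prompt with an 800-token reply at $3 / $15 per 1M tokens
const cost = requestCost(3, 15, 2000, 800); // ≈ $0.018
```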
For each model, the comparison covers:
- Arena ELO scores (general and coding)
- Processing speed (tokens/second)
- Context window size
- Input and output pricing per million tokens
- Vision capabilities
- Free tier availability
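The metrics above can be pictured as one row per model. A hypothetical sketch of that shape; the field names are illustrative, not the app's actual data model:

```typescript
// Hypothetical shape of one comparison row (not the app's real schema).
interface ModelRow {
  name: string;
  provider: string;
  arenaElo: number;         // general Arena ELO
  codingElo: number;        // coding Arena ELO
  tokensPerSecond: number;  // output speed
  contextWindow: number;    // in tokens
  inputPricePerM: number;   // USD per 1M input tokens
  outputPricePerM: number;  // USD per 1M output tokens
  vision: boolean;
  freeTier: boolean;
}

const example: ModelRow = {
  name: "example-model",
  provider: "example-provider",
  arenaElo: 1250,
  codingElo: 1230,
  tokensPerSecond: 95,
  contextWindow: 128_000,
  inputPricePerM: 3,
  outputPricePerM: 15,
  vision: true,
  freeTier: false,
};
```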
The interface includes:
- Sortable columns for fast comparison
- Tooltips that explain less obvious metrics
- Direct links to official provider pricing pages
- Regular data updates as models and pricing change
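Column sorting of this kind can be sketched as a generic comparator over row objects. This is an illustrative implementation under assumed row/key names, not the app's real code:

```typescript
// Minimal column-sort sketch: order rows by any numeric key.
// Row shape and key names are illustrative.
type Row = { name: string; arenaElo: number; inputPricePerM: number };

function sortBy<T>(rows: T[], key: keyof T, ascending = false): T[] {
  // Copy before sorting so the original row order is preserved.
  return [...rows].sort((a, b) => {
    const diff = Number(a[key]) - Number(b[key]);
    return ascending ? diff : -diff;
  });
}

const rows: Row[] = [
  { name: "a", arenaElo: 1200, inputPricePerM: 5 },
  { name: "b", arenaElo: 1300, inputPricePerM: 2 },
];

const byElo = sortBy(rows, "arenaElo");               // highest ELO first
const byPrice = sortBy(rows, "inputPricePerM", true); // cheapest first
```

Sorting a copy rather than in place keeps the underlying data stable when the user toggles between columns.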
Data comes from:
- Public benchmark data
- Official provider pricing and documentation
- Ongoing manual updates as models evolve
Built with:
- TypeScript
- React
- Vite
- Tailwind CSS
To run locally:

```bash
npm install
npm run dev
```

Then open the local Vite URL shown in the terminal.
Contributions are welcome, especially for:
- pricing/data corrections
- newly released models
- UX improvements
- documentation fixes
License: MIT