
Mistral Small

Mistral Small can be used on any language-based task that requires high efficiency and low latency.
Context: 33k input · 4k output
Training date: Mar 2023

Mistral AI

Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency.

Mistral Small is:

  • A small model optimized for low latency. Very efficient for high-volume, low-latency workloads. Mistral Small is Mistral's smallest proprietary model; it outperforms Mixtral 8x7B and has lower latency.
  • Specialized in RAG. Crucial information is not lost in the middle of long context windows (up to 32K tokens).
  • Strong in coding. Code generation, review and comments. Supports all mainstream coding languages.
  • Multi-lingual by design. Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported.
  • Responsible AI. Efficient guardrails are baked into the model, with an additional safety layer available via the safe_mode option.
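As a minimal sketch of how the safe_mode option surfaces in practice, the snippet below builds a chat-completions request body for Mistral Small. It assumes an OpenAI-compatible endpoint such as Mistral AI's own API, where this option is exposed as the `safe_prompt` flag; the model identifier `mistral-small-latest` is also an assumption, so check your provider's documentation for the exact names.

```python
import json

def build_request(user_message: str, safe: bool = True) -> str:
    """Build a chat-completions request body for Mistral Small.

    The model name and the safe_prompt flag are assumptions based on
    Mistral AI's public API; adjust them for your provider.
    """
    payload = {
        "model": "mistral-small-latest",   # assumed model identifier
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 4000,                # stay within the 4k output limit
        "safe_prompt": safe,               # enables the extra safety layer
    }
    return json.dumps(payload)

body = build_request("Summarize this contract clause in one sentence.")
```

The resulting JSON string can be POSTed to the provider's chat-completions endpoint; with `safe_prompt` enabled, the service prepends its safety guardrail instructions before generating a response.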

Resources

For full details of this model, please read the release blog post.

Languages

French, German, Spanish, Italian, and English
