OpenRAG isn't guaranteed to be compatible with every Ollama model. Some models may produce unexpected results (such as JSON output instead of natural language), and others simply aren't suited to RAG tasks.

Recommended models:

Language models:

  • gpt-oss:20b (requires at least 16GB of RAM - consider using Ollama Cloud or a remote machine)
  • mistral-nemo:12b

Embedding models:

  • nomic-embed-text:latest
  • mxbai-embed-large:latest
  • embeddinggemma:latest
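Assuming a local Ollama installation, any of the models above can be fetched with the standard `ollama pull` command before pointing OpenRAG at them (a setup sketch; swap in whichever recommended model fits your hardware):

```shell
# Pull a recommended language model (the lighter of the two options)
ollama pull mistral-nemo:12b

# Pull a recommended embedding model for RAG indexing
ollama pull nomic-embed-text:latest

# Confirm both models are available locally
ollama list
```

Note that `gpt-oss:20b` needs at least 16GB of RAM to run locally, which is why a remote machine or Ollama Cloud is suggested for it.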

You can experiment with other models, but if you run into issues that RAG best practices (such as context filters and prompt engineering) can't resolve, try switching to one of the recommended models above.

If you need support for a specific model, please…

Answer selected by CharnaParkey
Category: Q&A · 3 participants