Description
I don't have any experience with it, but LocalAI might be more attractive for people working in environments where sending source code out to the interwebs is frowned upon.
LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing, so existing OpenAI-based tooling can usually be repointed at it by swapping the base URL (see the sketch after the feature list). It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. It does not require a GPU.
- Text generation with llama.cpp, gpt4all.cpp and more
- OpenAI functions
- Embeddings generation for vector databases
- Download models directly from Hugging Face
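
A minimal sketch of what the "drop-in replacement" part would look like, using the official `openai` Python client (v1+). It assumes LocalAI is serving on its default port 8080 and that a model named `ggml-gpt4all-j` has been placed in its models directory; both the port and the model name here are assumptions, not verified against any particular setup:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LocalAI endpoint instead of
# api.openai.com. LocalAI doesn't validate the API key, so a placeholder
# string is fine -- nothing leaves the machine.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Same chat-completions call shape as against the hosted OpenAI API;
# "ggml-gpt4all-j" is an illustrative model name.
response = client.chat.completions.create(
    model="ggml-gpt4all-j",
    messages=[{"role": "user", "content": "Summarize what this repo does."}],
)
print(response.choices[0].message.content)
```

Since only the base URL changes, the same pattern should apply to the embeddings endpoint for feeding a vector database, which is the main draw for code-search-style use cases where source can't be sent out.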