sinapsis-chatbots

A comprehensive monorepo for building and deploying AI-driven chatbots with support for multiple large language models

🐍 Installation • 📦 Packages • 🌐 Webapps • 📙 Documentation • 🔍 License

The sinapsis-chatbots module is a powerful toolkit designed to simplify the development of AI-driven chatbots and Retrieval-Augmented Generation (RAG) systems. It provides ready-to-use templates and utilities for configuring and running large language model (LLM) applications, enabling developers to integrate a wide range of LLMs with ease for natural, intelligent interactions.

Important

We now include support for Llama 4 models!

To use them, install the extra dependency (if you have not already installed sinapsis-llama-cpp[all]):

uv pip install sinapsis-llama-cpp[llama-four] --extra-index-url https://pypi.sinapsis.tech

You need a HuggingFace token. See the official instructions and set it using:

export HF_TOKEN=<token-provided-by-hf>

Then test it through the CLI or the webapp by changing the AGENT_CONFIG_PATH environment variable.

Note

Llama 4 models require large GPUs. Running on smaller consumer-grade GPUs is possible, although a single inference may take hours.
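
For reference, a minimal end-to-end setup could look like the following; the configuration path is a placeholder, so point AGENT_CONFIG_PATH at one of the Llama 4 configurations in the sinapsis-llama-cpp configs folder:

uv pip install sinapsis-llama-cpp[llama-four] --extra-index-url https://pypi.sinapsis.tech
export HF_TOKEN=<token-provided-by-hf>
export AGENT_CONFIG_PATH=<path-to-a-llama-4-config.yaml>
uv run webapps/llama_cpp_simple_chatbot.py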

🐍 Installation

This monorepo includes the following packages for building AI-driven chatbots with various LLM frameworks:

  • sinapsis-anthropic
  • sinapsis-chatbots-base
  • sinapsis-llama-cpp
  • sinapsis-llama-index
  • sinapsis-mem0

Install using your preferred package manager. We strongly recommend using uv. To install uv, refer to the official documentation.

Install with uv:

uv pip install sinapsis-llama-cpp --extra-index-url https://pypi.sinapsis.tech

Or with raw pip:

pip install sinapsis-llama-cpp --extra-index-url https://pypi.sinapsis.tech

Replace sinapsis-llama-cpp with the name of the package you intend to install.

Important

Templates in each package may require extra dependencies. For development, we recommend installing the package with all the optional dependencies:

With uv:

uv pip install sinapsis-llama-cpp[all] --extra-index-url https://pypi.sinapsis.tech

Or with raw pip:

pip install sinapsis-llama-cpp[all] --extra-index-url https://pypi.sinapsis.tech

Be sure to substitute sinapsis-llama-cpp with the appropriate package name.

Tip

You can also install all the packages within this project:

uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech
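
To quickly verify that a package was pulled from the Sinapsis index, inspect it with your package manager, for example:

uv pip show sinapsis-llama-cpp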

πŸ“¦ Packages

This repository is structured into modular packages, each facilitating the integration of AI-driven chatbots with various LLM frameworks. These packages provide flexible and easy-to-use templates for building and deploying chatbot solutions. Below is an overview of the available packages:

Sinapsis Anthropic

This package offers a suite of templates and utilities for building text-to-text and image-to-text conversational chatbots using Anthropic's Claude models.

  • AnthropicTextGeneration: Template for text and code generation with Claude models using the Anthropic API.

  • AnthropicMultiModal: Template for multimodal chat processing using Anthropic's Claude models.

For specific instructions and further details, see the README.md.
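
As with other Sinapsis packages, these templates are assembled through a YAML agent configuration, which is what the AGENT_CONFIG_PATH variable points to. The sketch below only illustrates the general shape of such a file; the attribute shown is a placeholder, so use the configurations in the package's configs folder as the working reference:

agent:
  name: claude_chat

templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}

- template_name: TextGeneration
  class_name: AnthropicTextGeneration
  template_input: InputTemplate
  attributes:
    llm_model_name: <claude-model-name>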

Sinapsis Chatbots Base

This package provides core functionality for LLM chat completion tasks.

  • QueryContextualizeFromFile: Template that adds context to the query by searching for keywords in the Documents stored in the generic_data field of the DataContainer.

For specific instructions and further details, see the README.md.

Sinapsis llama-cpp

This package offers a suite of templates and utilities for running LLMs using llama-cpp.

  • LLama4MultiModal: Template for multimodal chat processing using the LLama 4 model.

  • LLaMATextCompletionWithContext: Template to initialize a LLaMA-based text completion model with context added in the prompt.

  • LLaMATextCompletion: Configures and initializes a chat completion model, supporting LLaMA, Mistral, and other compatible models.

  • LLama4TextToText: Template for text-to-text chat processing using the LLama 4 model.

For specific instructions and further details, see the README.md.

Sinapsis llama-index

This package provides support for various llama-index modules for text completion, including making calls to LLMs and processing and generating embeddings and Nodes.

  • CodeEmbeddingNodeGenerator: Template to generate nodes for a code base.

  • EmbeddingNodeGenerator: Template for generating text embeddings using a HuggingFace model.

  • LLaMAIndexInsertNodes: Template for inserting embeddings (nodes) into a PostgreSQL vector database using the LlamaIndex PGVectorStore to store vectorized data.

  • LLaMAIndexNodeRetriever: Template for retrieving nodes from a database using embeddings.

  • LLaMAIndexRAGTextCompletion: Template for configuring and initializing a LLaMA-based Retrieval-Augmented Generation (RAG) system.

For specific instructions and further details, see the README.md.
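
Note that the PostgreSQL-backed templates (such as LLaMAIndexInsertNodes and LLaMAIndexNodeRetriever) expect a PostgreSQL instance with the pgvector extension. If you do not already have one, a local instance can be started with the official pgvector image; the connection values below are placeholders, so match them to your agent configuration:

docker run -d --name pgvector-db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=vectordb -p 5432:5432 pgvector/pgvector:pg16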

Sinapsis Mem0

This package provides persistent memory functionality for Sinapsis agents using Mem0, supporting both managed (Mem0 platform) and self-hosted backends.

  • Mem0Add: Ingests and stores prompts, responses, and facts into memory.
  • Mem0Get: Retrieves individual or grouped memory records.
  • Mem0Search: Fetches relevant memories and injects them into the current prompt.
  • Mem0Delete: Removes stored memories selectively or in bulk.
  • Mem0Reset: Fully clears memory within a defined scope.

For specific instructions and further details, see the README.md.
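
Because Sinapsis templates pass a shared DataContainer along the chain, the memory templates can be wrapped around any chat template. A minimal ordering sketch (attributes omitted; the required ones are documented in the package README):

templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}

- template_name: RecallMemories
  class_name: Mem0Search
  template_input: InputTemplate
  attributes: {}

- template_name: Chat
  class_name: LLaMATextCompletion
  template_input: RecallMemories
  attributes: {}

- template_name: StoreMemories
  class_name: Mem0Add
  template_input: Chat
  attributes: {}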

🌐 Webapps

The webapps included in this project showcase the modularity of the templates, in this case for AI-driven chatbots.

Important

To run the apps, you first need to clone this repository:

git clone git@github.com:Sinapsis-ai/sinapsis-chatbots.git
cd sinapsis-chatbots

Note

If you'd like to enable external app sharing in Gradio, export GRADIO_SHARE_APP=True

Important

If you run into an Out of Memory (OOM) error, you can change the model name or reduce the number of gpu_layers used by the model.

Important

Anthropic requires an API key to interact with the API. To get started, visit the official website to create an account. If you already have an account, go to the API keys page to generate a token.

Important

Set your API key env var using export ANTHROPIC_API_KEY='your-api-key'

Note

Agent configuration can be changed through the AGENT_CONFIG_PATH env var. You can check the available configurations in each package's configs folder.
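
Putting those notes together, a typical local launch of the Claude webapp could look like this (the configuration path is illustrative; see the 💻 UV section below for the full environment setup):

export ANTHROPIC_API_KEY='your-api-key'
export GRADIO_SHARE_APP=True
export AGENT_CONFIG_PATH=<path-to-config.yaml>
uv run webapps/claude_chatbot.py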

🐳 Docker

IMPORTANT: This Docker image depends on the sinapsis-nvidia:base image. For detailed instructions, please refer to the Sinapsis README.

  1. Build the sinapsis-chatbots image:
docker compose -f docker/compose.yaml build
  2. Start the app container:
  • For Anthropic text-to-text chatbot:
docker compose -f docker/compose_apps.yaml up sinapsis-claude-chatbot -d
  • For llama-cpp text-to-text chatbot:
docker compose -f docker/compose_apps.yaml up sinapsis-simple-chatbot -d
  • For llama-index RAG chatbot:
docker compose -f docker/compose_apps.yaml up sinapsis-rag-chatbot -d
  3. Check the logs:
  • For Anthropic text-to-text chatbot:
docker logs -f sinapsis-claude-chatbot
  • For llama-cpp text-to-text chatbot:
docker logs -f sinapsis-simple-chatbot
  • For llama-index RAG chatbot:
docker logs -f sinapsis-rag-chatbot
  4. The logs will display the URL to access the webapp, e.g.:
Running on local URL:  http://127.0.0.1:7860

NOTE: The URL may be different; check the log output.

  5. To stop the app:
docker compose -f docker/compose_apps.yaml down

To use a different chatbot configuration (e.g., an OpenAI-based chat), update the AGENT_CONFIG_PATH environment variable to point to the desired YAML file.

For example, to use OpenAI chat:

environment:
 AGENT_CONFIG_PATH: webapps/configs/openai_simple_chat.yaml
 OPENAI_API_KEY: your_api_key
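
The environment block above typically goes under the corresponding service in docker/compose_apps.yaml, for instance for the simple chatbot service used in the commands above:

services:
  sinapsis-simple-chatbot:
    environment:
      AGENT_CONFIG_PATH: webapps/configs/openai_simple_chat.yaml
      OPENAI_API_KEY: your_api_key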

💻 UV

To run the webapp using the uv package manager, follow these steps:

  1. Export the environment variables needed to build the llama-cpp Python bindings with CUDA support:
export CMAKE_ARGS="-DGGML_CUDA=on"
export FORCE_CMAKE="1"
  2. Export CUDACXX:
export CUDACXX=$(command -v nvcc)
  3. Sync the virtual environment:
uv sync --frozen
  4. Install the wheel:
uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech
  5. Run the webapp:
  • For Anthropic text-to-text chatbot:
export ANTHROPIC_API_KEY=your_api_key
uv run webapps/claude_chatbot.py
  • For llama-cpp text-to-text chatbot:
uv run webapps/llama_cpp_simple_chatbot.py
  • For OpenAI text-to-text chatbot:
export AGENT_CONFIG_PATH=webapps/configs/openai_simple_chat.yaml
export OPENAI_API_KEY=your_api_key
uv run webapps/llama_cpp_simple_chatbot.py
  • For llama-index RAG chatbot:
uv run webapps/llama_index_rag_chatbot.py
  6. The terminal will display the URL to access the webapp, e.g.:
Running on local URL:  http://127.0.0.1:7860

NOTE: The URL may vary; check the terminal output for the correct address.

📙 Documentation

Documentation for this and other sinapsis packages is available on the sinapsis website.

Tutorials for different projects within sinapsis are available on the sinapsis tutorials page.

πŸ” License

This project is licensed under the AGPLv3 license, which encourages open collaboration and sharing. For more details, please refer to the LICENSE file.

For commercial use, please refer to our official Sinapsis website for information on obtaining a commercial license.
