Commit 481ad18

Merge branch 'main' into fix/websocket-event-loop-blocking
2 parents d1ae75e + 28462fe commit 481ad18

File tree

2 files changed: +67 −0 lines

README.md

Lines changed: 8 additions & 0 deletions
@@ -51,6 +51,14 @@ curl -X POST http://127.0.0.1:8000/api/chatbot/sessions
See [docs/README.md](docs/README.md) for detailed explanations.

## 🎥 Setup Video Tutorial

[![Local Setup Video Tutorial](https://img.youtube.com/vi/1DnMNA4aLyE/0.jpg)](https://youtu.be/1DnMNA4aLyE)

The tutorial shows how to fork the repo, set up the backend, download the LLM model, run the frontend, and verify the chatbot works.

## Troubleshooting

**llama-cpp-python installation fails**: Ensure build tools are installed and use Python 3.11+.
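One quick way to diagnose this failure is to check the build prerequisites directly. A minimal sketch, assuming a POSIX shell: `llama-cpp-python` compiles a native extension, so it needs a C/C++ compiler and CMake in addition to a recent Python.

```shell
# Check build prerequisites for llama-cpp-python (illustrative; package
# names and install commands vary by platform).
cc --version >/dev/null 2>&1 && echo "C compiler: ok" || echo "C compiler: missing"
cmake --version >/dev/null 2>&1 && echo "cmake: ok" || echo "cmake: missing"
python3 -c 'import sys; print("python:", "ok" if sys.version_info >= (3, 11) else "too old")'
```

On Debian/Ubuntu systems, the `build-essential` and `cmake` packages typically cover the compiler and CMake requirements.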

docs/setup.md

Lines changed: 59 additions & 0 deletions
@@ -43,6 +43,15 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
```bash
pip install -r requirements.txt
```

> **Note:** The backend requires `python-multipart` for multipart form handling.
> This dependency is included in the requirements file, but if you encounter
> runtime errors related to multipart requests, ensure it is installed:
>
> ```bash
> pip install python-multipart
> ```
5. **Set the `PYTHONPATH` to the current directory (`chatbot-core/`)**

```bash
export PYTHONPATH=$(pwd)
```
@@ -57,6 +66,14 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
* Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
* Place the downloaded file in `api/models/mistral/`

By default, the backend attempts to load the local GGUF model during startup. If the model file is missing, the server will fail to start.

Contributors who do not need local inference can run the backend without a model by using test mode (see "Running without a local LLM model (test mode)" below).
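Because a missing model file prevents startup, it can help to verify the file before launching. A hypothetical pre-flight check; the path comes from the steps above, while the function name is illustrative and not part of the backend:

```python
from pathlib import Path

# Path from the download steps above (relative to chatbot-core/).
MODEL_PATH = Path("api/models/mistral/mistral-7b-instruct-v0.2.Q4_K_M.gguf")

def model_available(path: Path = MODEL_PATH) -> bool:
    """Return True if the local GGUF model file is present."""
    return path.is_file()

if not model_available():
    print(f"Model missing at {MODEL_PATH}; download it or use test mode.")
```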
## Installation Guide for Windows

This guide provides step-by-step instructions for installing and running the Jenkins Chatbot on Windows systems.
@@ -103,6 +120,14 @@ This guide provides step-by-step instructions for installing and running the Jen
```bash
pip install -r requirements-cpu.txt
```

> **Note:** The backend requires `python-multipart` for multipart form handling.
> This dependency is included in the requirements file, but if you encounter
> runtime errors related to multipart requests, ensure it is installed:
>
> ```powershell
> pip install python-multipart
> ```

> **Note**: If you encounter any dependency issues, especially with NVIDIA packages, use the `requirements-cpu.txt` file which excludes GPU-specific dependencies.

5. **Set the PYTHONPATH**
@@ -123,6 +148,13 @@ This guide provides step-by-step instructions for installing and running the Jen
* Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
* Place the downloaded file in `api\models\mistral\`

By default, the backend attempts to load the local GGUF model during startup. If the model file is missing, the server will fail to start.

Contributors who do not need local inference can run the backend without a model by using test mode (see "Running without a local LLM model (test mode)" below).

## Automatic setup

To avoid running all the steps each time, we have provided a target in the `Makefile` to automate the setup process.
@@ -141,6 +173,33 @@ make setup-backend IS_CPU_REQ=1
> **Note:** The target **does not** include the installation of the LLM.

### What does `setup-backend` do?

The `setup-backend` Makefile target prepares the Python backend by:

- Creating a virtual environment in `chatbot-core/venv`
- Installing backend dependencies from `requirements.txt` (or `requirements-cpu.txt` when `IS_CPU_REQ=1` is set)

You usually do not need to run this manually. The `make api` target automatically runs `setup-backend` if the backend has not already been set up.
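The requirements-file switch described above can be sketched roughly as follows; this is a hedged approximation, `select_requirements` is an illustrative name, and the Makefile recipe remains authoritative:

```shell
# Approximate what setup-backend does: pick the dependency file based on
# IS_CPU_REQ, then (conceptually) create the venv and install into it.
select_requirements() {
    if [ "${IS_CPU_REQ:-0}" = "1" ]; then
        echo "requirements-cpu.txt"
    else
        echo "requirements.txt"
    fi
}

# The target then roughly runs:
#   python3 -m venv chatbot-core/venv
#   chatbot-core/venv/bin/pip install -r "$(select_requirements)"
echo "Would install from: $(select_requirements)"
```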
## Running without a local LLM model (test mode)

By default, the backend loads a local GGUF model on startup. For contributors who do not need local inference, a test configuration is available.

The backend includes a `config-testing.yml` file that disables local LLM loading. This configuration is activated when the `PYTEST_VERSION` environment variable is set.

Example:

```bash
PYTEST_VERSION=1 make api
```
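The switch can be pictured as a simple environment-variable check. A hypothetical sketch: the function name and the default file name `config.yml` are assumptions; only `config-testing.yml` and `PYTEST_VERSION` come from the docs above.

```python
import os

def active_config() -> str:
    """Pick the config file based on the PYTEST_VERSION env var."""
    if os.environ.get("PYTEST_VERSION"):
        return "config-testing.yml"  # test mode: no local LLM loading
    return "config.yml"              # assumed default config name
```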
## Common Troubleshooting

This section covers common issues encountered during setup, especially when installing
