docs/setup.md: 59 additions & 0 deletions
@@ -43,6 +43,15 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
```bash
pip install -r requirements.txt
```

> **Note:** The backend requires `python-multipart` for multipart form handling.
> This dependency is included in the requirements file, but if you encounter
> runtime errors related to multipart requests, ensure it is installed:
>
> ```bash
> pip install python-multipart
> ```

5. **Set the `PYTHONPATH` to the current directory (`chatbot-core/`)**

```bash
export PYTHONPATH=$(pwd)
```
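After exporting, you can sanity-check that the project root actually landed on Python's import path. This check is an addition for illustration, not part of the original steps (it assumes `python3` is on your `PATH`):

```shell
# Sanity check: every PYTHONPATH entry is prepended to sys.path,
# so the project root should now be importable
export PYTHONPATH=$(pwd)
python3 -c 'import os, sys; print(os.environ["PYTHONPATH"] in sys.path)'
```

If the variable took effect, this prints `True`.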
@@ -57,6 +66,14 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
* Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
* Place the downloaded file in `api/models/mistral/`

By default, the backend attempts to load the local GGUF model during startup. If the model file is missing, the server will fail to start.

Contributors who do not need local inference can run the backend without a model by using test mode (see “Running without a local LLM model (test mode)” below).
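Since a missing model file prevents the server from starting, a small preflight check can save a confusing failure later. This helper is illustrative and not part of the repository:

```shell
# Illustrative preflight check (not part of the repo): confirm the GGUF
# model file exists before starting the backend
check_model() {
    if [ -f "$1" ]; then
        echo "model found"
    else
        echo "model missing"
    fi
}

check_model "api/models/mistral/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
```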
## Installation Guide for Windows
This guide provides step-by-step instructions for installing and running the Jenkins Chatbot on Windows systems.
@@ -103,6 +120,14 @@ This guide provides step-by-step instructions for installing and running the Jen
```bash
pip install -r requirements-cpu.txt
```

> **Note:** The backend requires `python-multipart` for multipart form handling.
> This dependency is included in the requirements file, but if you encounter
> runtime errors related to multipart requests, ensure it is installed:
>
> ```powershell
> pip install python-multipart
> ```
> **Note**: If you encounter any dependency issues, especially with NVIDIA packages, use the `requirements-cpu.txt` file which excludes GPU-specific dependencies.

5. **Set the PYTHONPATH**
@@ -123,6 +148,13 @@ This guide provides step-by-step instructions for installing and running the Jen
* Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
* Place the downloaded file in `api\models\mistral\`

By default, the backend attempts to load the local GGUF model during startup. If the model file is missing, the server will fail to start.

Contributors who do not need local inference can run the backend without a model by using test mode (see “Running without a local LLM model (test mode)” below).
## Automatic setup
To avoid running all the steps each time, we have provided a target in the `Makefile` to automate the setup process.
@@ -141,6 +173,33 @@ make setup-backend IS_CPU_REQ=1
> **Note:** The target **does not** include the installation of the LLM.

### What does `setup-backend` do?

The `setup-backend` Makefile target prepares the Python backend by:

- Creating a virtual environment in `chatbot-core/venv`
- Installing backend dependencies from `requirements.txt` (or `requirements-cpu.txt` when `IS_CPU_REQ=1` is set)

You usually do not need to run this manually. The `make api` target automatically runs `setup-backend` if the backend has not already been set up.
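In shell terms, the target boils down to roughly the following. This is a sketch based on the description above, not the real recipe; the exact commands live in the `Makefile`:

```shell
# Rough sketch of setup-backend (see the Makefile for the real recipe)
pick_requirements() {
    if [ "$IS_CPU_REQ" = "1" ]; then
        echo "requirements-cpu.txt"
    else
        echo "requirements.txt"
    fi
}

# The target then approximately runs:
#   python -m venv chatbot-core/venv
#   chatbot-core/venv/bin/pip install -r "$(pick_requirements)"
pick_requirements
```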
## Running without a local LLM model (test mode)

By default, the backend loads a local GGUF model on startup. For contributors who do not need local inference, a test configuration is available.

The backend includes a `config-testing.yml` file that disables local LLM loading. This configuration is activated when the `PYTEST_VERSION` environment variable is set.

Example:

```bash
PYTEST_VERSION=1 make api
```
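The configuration switch can be pictured as the following shell equivalent. This is a sketch: `config.yml` as the default file name is an assumption, not something this document confirms:

```shell
# Sketch of the startup config choice (default file name assumed)
select_config() {
    if [ -n "$PYTEST_VERSION" ]; then
        echo "config-testing.yml"   # test mode: no local LLM loading
    else
        echo "config.yml"           # default: load the local GGUF model
    fi
}

select_config
```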
## Common Troubleshooting
This section covers common issues encountered during setup, especially when installing