Commit 21bdfaf (1 parent: 2f36c14)

doc: enhance TSFM support and deployment documentation (#34609)

1 file changed: docs/en/06-advanced/06-tdgpt/09-dev/04-tsfm/index.md (+242 lines, -55 lines)
# Deploy a Time-Series Foundation Model

A number of research institutions and enterprises have released open-source time-series foundation models (TSFMs), greatly simplifying time-series data analysis. Beyond traditional data analysis algorithms, machine learning, and deep learning models, TSFMs offer a new and powerful option for advanced time-series analytics.

TDgpt (since version 3.3.6.4) provides native support for six time-series foundation models (TSFMs): TDtsfm v1.0, Time-MoE, Chronos, Moirai, TimesFM, and Moment. All of these models are deployed as local services to which TDgpt connects.

### Deployment Details

The server scripts for all six TSFM services are located in the `<TDgpt_root_directory>/lib/taosanalytics/tsfmservice/` directory.

TDgpt distinguishes between models that are configured by default and those that require manual configuration:

* **Default models**: `TDtsfm` and `Time-MoE` are configured by default in `taosanode.ini`. You only need to start their respective server scripts to use them.
* **Additional models**: `Chronos`, `Moirai`, `TimesFM`, and `Moment` require you to start their server scripts and add their service URLs to `taosanode.ini` before use.

TDgpt has been adapted to specific features of each model. If a function is unavailable, this may be a limitation of the model itself, or TDgpt may not yet have been adapted to support that feature for that model.

<table>
<tr><th rowspan="2">Model</th> <th rowspan="2">Server Script</th> <th colspan="3">Model Details</th><th colspan="5">Supported Functions</th></tr>
<tr><th>Name</th><th>Parameters (&times;100M)</th><th>Model Size (MiB)</th><th>Forecast</th><th>Covariate Forecast</th><th>Multivariate Forecast</th><th>Anomaly Detection</th><th>Imputation</th></tr>
<tr><td rowspan="2">timemoe</td><td rowspan="2">timemoe-server.py</td><td>Maple728/TimeMoE-50M</td><td>0.50</td><td align="right">227</td><td rowspan="2">✔</td><td rowspan="2">✘</td><td rowspan="2">✘</td><td rowspan="2">✘</td><td rowspan="2">✘</td></tr>
<tr><td>Maple728/TimeMoE-200M</td><td>4.53</td><td align="right">906</td></tr>
<tr><td rowspan="2">moirai</td><td rowspan="2">moirai-server.py</td><td>Salesforce/moirai-moe-1.0-R-small</td><td>1.17</td><td align="right">469</td><td rowspan="2">✔</td><td rowspan="2">✔</td><td rowspan="2">✘</td><td rowspan="2">✘</td><td rowspan="2">✘</td></tr>
<tr><td>Salesforce/moirai-moe-1.0-R-base</td><td>9.35</td><td align="right">3,740</td></tr>
<tr><td rowspan="4">chronos</td><td rowspan="4">chronos-server.py</td><td>amazon/chronos-bolt-tiny</td><td>0.09</td><td align="right">35</td><td rowspan="4">✔</td><td rowspan="4">✘</td><td rowspan="4">✘</td><td rowspan="4">✘</td><td rowspan="4">✘</td></tr>
<tr><td>amazon/chronos-bolt-mini</td><td>0.21</td><td align="right">85</td></tr>
<tr><td>amazon/chronos-bolt-small</td><td>0.48</td><td align="right">191</td></tr>
<tr><td>amazon/chronos-bolt-base</td><td>2.05</td><td align="right">821</td></tr>
<tr><td>timesfm</td><td>timesfm-server.py</td><td>google/timesfm-2.0-500m-pytorch</td><td>4.99</td><td align="right">2,000</td><td>✔</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td></tr>
<tr><td rowspan="3">moment</td><td rowspan="3">moment-server.py</td><td>AutonLab/MOMENT-1-small</td><td>0.38</td><td align="right">152</td><td rowspan="3">✘</td><td rowspan="3">✘</td><td rowspan="3">✘</td><td rowspan="3">✘</td><td rowspan="3">✔</td></tr>
<tr><td>AutonLab/MOMENT-1-base</td><td>1.13</td><td align="right">454</td></tr>
<tr><td>AutonLab/MOMENT-1-large</td><td>3.46</td><td align="right">1,039</td></tr>
</table>

This document describes how to integrate an independent TSFM service into TDengine, using [Time-MoE](https://github.com/Time-MoE/Time-MoE) as an example, and how to use the model in SQL statements for time-series forecasting.

You can use the virtual Python environment installed by TDgpt or a separate environment.
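If you opt for a separate environment, the setup can be as simple as the following sketch (the directory name is an illustrative assumption; install the packages listed in the following sections into it afterwards):

```shell
# Create a dedicated virtual environment (the path is an example)
python3 -m venv ./venv-tsfm
source ./venv-tsfm/bin/activate

# Confirm the interpreter now resolves inside the new environment
python -c "import sys; print(sys.prefix)"
```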

## Configure TSFM Service Path & Port

The `lib/taosanalytics/time-moe.py` file (renamed to `lib/taosanalytics/tsfmservice/timemoe-server.py` since version 3.3.6.4) in the TDgpt root directory deploys and serves the Time-MoE model. Modify this file to set an appropriate URL.

```python
@app.route('/ds_predict', methods=['POST'])
```

Change `ds_predict` to the URL that you want to use in your environment.

```python
app.run(
    host='0.0.0.0',
    port=6037,
    threaded=True,
    debug=False
)
```

In this section, you can update the port if desired. After you have set your URL and port number, restart the service.
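Taken together, a service file has roughly the following shape. This is a sketch only: the real `timemoe-server.py` loads the Time-MoE model and runs inference inside the handler, while this stub merely echoes a zero forecast of the requested length to illustrate the route and startup configuration shown above.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/ds_predict', methods=['POST'])
def ds_predict():
    # The real timemoe-server.py runs Time-MoE inference here; this stub
    # only returns a zero forecast of the requested length.
    payload = request.get_json(force=True)
    next_len = int(payload.get('next_len', 0))
    return jsonify({'output': [0.0] * next_len})

def main():
    # In the real script, main() is invoked at startup.
    app.run(
        host='0.0.0.0',
        port=6037,
        threaded=True,
        debug=False
    )
```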

## Run the Python Script (Available Before 3.3.8.x)

> ⚠️ NOTE: The following method is only available before version 3.3.8.x. If you are using a later version, refer to [Dynamic Model Download](#dynamic-model-download).

```shell
nohup python time-moe.py > service_output.out 2>&1 &
```

Check the `service_output.out` file to confirm that the model has been loaded:

```shell
Running on all addresses (0.0.0.0)
Running on http://127.0.0.1:6037
```

## Verify the Service

Verify that the service is running normally:

```shell
curl 127.0.0.1:6037/ds_predict
```

The following indicates that Time-MoE has been deployed:
## Load the Model into TDgpt

You can modify the [timemoe.py](https://github.com/taosdata/TDengine/blob/main/tools/tdgpt/taosanalytics/algo/fc/timemoe.py) file and use it in TDgpt. In this example, Time-MoE is adapted to provide forecasting.

```python
class _TimeMOEService(AbstractForecastService):
    def __init__(self):
        super().__init__()

        # Use the default address if the service URL is not specified in the
        # taosanode.ini configuration file.
        if self.service_host is None:
            self.service_host = 'http://127.0.0.1:6037/timemoe'

    def execute(self):
        # Verify support for past covariate analysis; raise an exception if
        # unsupported. (Time-MoE lacks this support, so the exception is raised.)
        if len(self.past_dynamic_real):
            raise ValueError("covariate forecast is not supported yet")

        super().execute()
```
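For context, the base class's `execute()` talks to the service using a simple JSON contract: a request body of the form `{"input": [...], "next_len": n}` and a response of the form `{"output": [...]}` (the format used by earlier versions of this adapter). A stdlib-only sketch of that exchange, with the network call replaced by a canned response:

```python
import json

def build_request(values, rows):
    # Request body expected by the forecasting endpoint:
    # "input" is the history window, "next_len" the number of points to forecast.
    return json.dumps({"input": values, "next_len": rows})

def parse_response(body):
    # The service answers with {"output": [...]} containing the forecast values.
    return json.loads(body)["output"]

req = build_request([1.0, 2.0, 3.0], 2)
# A canned response standing in for the real service:
resp = json.dumps({"output": [4.0, 5.0]})
forecast = parse_response(resp)
```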

Add your code to `/usr/local/taos/taosanode/lib/taosanalytics/algo/fc/`. In fact, a prepared `timemoe.py` file is already located in that directory.

TDgpt has built-in support for Time-MoE. You can run `SHOW ANODES FULL` and see that forecasting based on Time-MoE is listed as `timemoe-fc`.

Modify the `[tsfm-service]` section of `/etc/taos/taosanode.ini`:

```ini
[tsfm-service]
timemoe-fc = http://127.0.0.1:6037/ds_predict
```

Add the path for your deployment. The key is the name of the model defined in your Python code, and the value is the URL of the Time-MoE service on your local machine.

```sql
SELECT FORECAST(i32, 'algo=timemoe-fc')
FROM foo;
```

## Deploying Other Time-Series Foundation Models

The logic for registering models in TDgpt after deploying them locally is similar across all model types: you only need to modify the class name and the model service name (key) and set the correct service address. Adaptation files for **Chronos**, **TimesFM**, and **Moirai** are provided by default; users of version 3.3.6.4 and later only need to start the corresponding services locally.

The deployment and startup methods are as follows:

### Starting the Moirai Service

To avoid dependency conflicts, it is recommended to prepare a clean Python virtual environment and install the libraries there.

```bash
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install uni2ts
pip install flask
```

Configure the service address in the `moirai-server.py` file (see above for the method) and set the model to be loaded (if necessary).

```python
_model_list = [
    'Salesforce/moirai-moe-1.0-R-small',  # small model with 117M parameters
    'Salesforce/moirai-moe-1.0-R-base',   # base model with 205M parameters
]

pretrained_model = MoiraiMoEModule.from_pretrained(
    _model_list[0]  # Loads the 'small' model by default; change the index to 1 to load 'base'
).to(device)
```

Execute the command to start the service. The model files are downloaded automatically during the first startup. If the download speed is too slow, you can use a mirror site (see above for setup).

```bash
nohup python moirai-server.py > service_output.out 2>&1 &
```

Follow the same steps as above to check the service status.

### Starting the Chronos Service

Install dependencies in a clean Python virtual environment:

```bash
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install chronos-forecasting
pip install flask
```

Set the service address and model in `chronos-server.py`. You can also use the default values.

```python
def main():
    app.run(
        host='0.0.0.0',
        port=6038,
        threaded=True,
        debug=False
    )
```

```python
_model_list = [
    'amazon/chronos-bolt-tiny',   # 9M parameters, based on t5-efficient-tiny
    'amazon/chronos-bolt-mini',   # 21M parameters, based on t5-efficient-mini
    'amazon/chronos-bolt-small',  # 48M parameters, based on t5-efficient-small
    'amazon/chronos-bolt-base',   # 205M parameters, based on t5-efficient-base
]

model = BaseChronosPipeline.from_pretrained(
    _model_list[0],  # Loads the 'tiny' model by default; modify the index to change the model
    device_map=device,
    torch_dtype=torch.bfloat16,
)
```

Execute the following in the shell to start the service:

```bash
nohup python chronos-server.py > service_output.out 2>&1 &
```

### Starting the TimesFM Service

Install dependencies in a clean Python virtual environment:

```bash
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install timesfm
pip install jax
pip install flask==3.0.3
```

Adjust the service address in `timesfm-server.py` if necessary, then execute the command below:

```bash
nohup python timesfm-server.py > service_output.out 2>&1 &
```

### Starting the Moment Service

Install dependencies in a clean Python virtual environment:

```bash
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install transformers==4.33.3
pip install numpy==1.25.2
pip install matplotlib
pip install pandas==1.5
pip install scikit-learn
pip install flask==3.0.3
pip install momentfm
```

Adjust the service address in `moment-server.py` if necessary, then execute the command below:

```bash
nohup python moment-server.py > service_output.out 2>&1 &
```

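Once these services are running, the additional models must also be registered in the `[tsfm-service]` section of `taosanode.ini`, as described above for Time-MoE. The key name and port in the second entry below are illustrative assumptions modeled on the `timemoe-fc` entry and the default Chronos port; use the service names defined in your adapter files and the ports you actually configured:

```ini
[tsfm-service]
timemoe-fc = http://127.0.0.1:6037/ds_predict
# Hypothetical entry for an additional model -- adjust the name and port:
chronos-fc = http://127.0.0.1:6038/ds_predict
```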
---

## Service Management Scripts (Start and Stop)

To simplify management, TDgpt (v3.4.0.0+) provides unified scripts, `start-model.sh` and `stop-model.sh`, which let you start or stop specific or all foundation model services with a single command.

### Start Script

The `start-model.sh` script loads the corresponding Python virtual environment and launches the model service script for the specified model name.

After a `root` installation, the script is located in `<tdgpt_root>/bin/`. A symbolic link is automatically created at `/usr/bin/start-model` for global access.

Logs are output to `/var/log/taos/taosanode/taosanode_service_<model_name>.log` by default.

**Usage**:

```bash
Usage: /usr/bin/start-model [-c config_file] [model_name|all] [other_params...]

Supported models: tdtsfm, timesfm, timemoe, moirai, chronos, moment
```

**Options**:

* `-c config_file`: Specifies the configuration file (default: `/etc/taos/taosanode.ini`).
* `-h, --help`: Displays help information.

**Examples**:

1. Start all services in the background: `/usr/bin/start-model all`
2. Start a specific service (e.g., TimesFM): `/usr/bin/start-model timesfm`
3. Specify a custom config file: `/usr/bin/start-model -c /path/to/custom_taosanode.ini`

### Stop Script

`stop-model.sh` terminates specified or all model services. It automatically identifies and kills the relevant Python processes.

**Examples**:

1. Stop the TimesFM service: `/usr/bin/stop-model timesfm`
2. Stop all model services: `/usr/bin/stop-model all`

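Before or after using `stop-model.sh`, you can inspect which model services are running yourself. The following is a rough manual equivalent of the script's process matching, assuming the server scripts keep the default `*-server.py` names used in this guide:

```shell
# List running TSFM service processes, or report that none are found
pgrep -af "server\.py" | grep -E "tdtsfm|timesfm|timemoe|moirai|chronos|moment" \
  || echo "no model services running"
```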
---

## Dynamic Model Download

In versions 3.3.8.x and later, you can specify different model scales at startup. If no parameters are provided, the driver file (`[xxx]-server.py`) automatically loads the model with the smallest parameter scale.

Additionally, if you have manually downloaded model files, you can run them by specifying the local path.

```bash
# Run the chronos-bolt-tiny model located at /var/lib/taos/taosanode/model/chronos.
# If the directory doesn't exist, the model is downloaded automatically to that path.
# The third parameter (True) enables the mirror site for faster downloads (recommended for users in China).
python chronos-server.py /var/lib/taos/taosanode/model/chronos/ amazon/chronos-bolt-tiny True
```

## Transformers Version Requirements

The models require different versions of the `transformers` library:

| Model Name | Transformers Version |
| --- | --- |
| time-moe, moirai, tdtsfm | 4.40 |
| chronos | 4.55 |
| moment | 4.33 |
| timesfm | N/A |
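Because these requirements conflict, the services cannot all share one Python environment. One way to confirm which `transformers` version a given environment provides, as a stdlib-only sketch:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    # Return the installed version string, or None if the package is absent.
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Prints the transformers version available in the current environment (or None).
print(installed_version("transformers"))
```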

### References