---
sidebar_label: Deploy a Time-Series Foundation Model
---
A number of research institutions and enterprises have released open-source time-series foundation models (TSFMs), greatly simplifying time-series data analysis. Beyond traditional data analysis algorithms, machine learning, and deep learning models, TSFMs offer a new and powerful option for advanced time-series analytics.
TDgpt (since version 3.3.6.4) provides native support for six time-series foundation models (TSFMs): TDtsfm v1.0, Time-MoE, Chronos, Moirai, TimesFM, and Moment. All of these models are deployed as local services to which TDgpt connects.
### Deployment Details
The server scripts for all six TSFM services are located in the `<TDgpt_root_directory>/lib/taosanalytics/tsfmservice/` directory.
TDgpt distinguishes between models that are configured by default and those that require manual configuration:
* **Default Models**: `TDtsfm` and `Time-MoE` are configured by default in `taosanode.ini`. You only need to start their respective server scripts to use them.
* **Additional Models**: `Chronos`, `Moirai`, `TimesFM`, and `Moment` require you to start their server scripts and add their service URLs to `taosanode.ini` before use.
TDgpt has been adapted to interface with specific features of these models. If a certain function is unavailable, it may be due to a limitation of the model itself or because TDgpt has not yet been adapted to support that specific feature for that model.
This document describes how to integrate an independent TSFM service into TDengine, using [Time-MoE](https://github.com/Time-MoE/Time-MoE) as an example, and how to use the model in SQL statements for time-series forecasting.
```shell
pip install accelerate
```
You can use the virtual Python environment installed by TDgpt or a separate environment.
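If you opt for a separate environment, a minimal sketch follows (the environment path is illustrative, not a TDgpt convention):

```shell
# Create and activate a dedicated virtual environment (path is illustrative)
python3 -m venv /var/lib/taos/taosanode/venv-tsfm
source /var/lib/taos/taosanode/venv-tsfm/bin/activate
pip install accelerate
```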
## Configure TSFM Service Path & Port
The `lib/taosanalytics/time-moe.py` file (renamed to `lib/taosanalytics/tsfmservice/timemoe-service.py` since version 3.3.6.4) in the TDgpt root directory deploys and serves the Time-MoE model. Modify this file to set an appropriate URL.
```python
@app.route('/ds_predict', methods=['POST'])
```

Change `ds_predict` to the URL that you want to use in your environment.
```python
app.run(
    host='0.0.0.0',
    port=6037,
    threaded=True,
    debug=False
)
```
In this section, you can update the port if desired. After you have set your URL and port number, restart the service.
## Run the Python Script (available before 3.3.8.x)
> ⚠️ NOTE: The following method is only available before version 3.3.8.x. If you are using a later version, refer to [Dynamic Model Download](#dynamic-model-download).
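As a sketch, launching the service script in the background could look like the following (the script name follows the 3.3.6.4 layout described above; adjust names and paths to your installation):

```shell
# Start the Time-MoE service script and capture its output (illustrative paths)
nohup python3 timemoe-service.py > service-output.out 2>&1 &
```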
Check the `service-output.out` file to confirm that the model has been loaded:
```shell
Running on all addresses (0.0.0.0)
Running on http://127.0.0.1:6037
```
## Verify the Service
Verify that the service is running normally:
```shell
curl 127.0.0.1:6037/ds_predict
```
The following indicates that Time-MoE has been deployed:
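For scripted checks, a small Python sketch can exercise the endpoint as well. Note that the payload fields below (`input`, `next_len`) are illustrative assumptions, not the documented Time-MoE request schema; consult the service script for the exact fields it expects.

```python
import json
import urllib.request


def build_payload(series, rows=10):
    # Illustrative request body; the field names are assumptions,
    # not the documented Time-MoE schema.
    return {"input": list(series), "next_len": rows}


def request_forecast(url, series, rows=10):
    # POST the payload to the locally deployed TSFM service.
    data = json.dumps(build_payload(series, rows)).encode()
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires the service to be running):
# request_forecast("http://127.0.0.1:6037/ds_predict", [1.0, 2.0, 3.0], rows=5)
```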
## Load the Model into TDgpt
You can modify the [timemoe.py](https://github.com/taosdata/TDengine/blob/main/tools/tdgpt/taosanalytics/algo/fc/timemoe.py) file and use it in TDgpt. In this example, Time-MoE is adapted to provide forecasting.
```python
class _TimeMOEService(AbstractForecastService):
    def __init__(self):
        super().__init__()
        self.table_name = None

        # find service address in taosanode.ini or use default if not found
```
Add your code to `/usr/local/taos/taosanode/lib/taosanalytics/algo/fc/`. A prepared `timemoe.py` file is already provided in this directory.
TDgpt has built-in support for Time-MoE. You can run `SHOW ANODES FULL` and see that forecasting based on Time-MoE is listed as `timemoe-fc`.
Modify the `[tsfm-service]` section of `/etc/taos/taosanode.ini`:
```ini
[tsfm-service]
timemoe-fc = http://127.0.0.1:6037/ds_predict
```
Add the path for your deployment. The key is the name of the model defined in your Python code, and the value is the URL of Time-MoE on your local machine.
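To illustrate how an adaptation class might resolve its service address, here is a hedged sketch of reading the key from `taosanode.ini` with a fallback. The function name and defaults are illustrative, not the actual TDgpt implementation:

```python
import configparser


def load_service_url(model_key,
                     ini_path="/etc/taos/taosanode.ini",
                     default="http://127.0.0.1:6037/ds_predict"):
    # Find the service address in taosanode.ini, or use the default
    # if the file or key is missing.
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)  # yields an empty config if the file is absent
    if cfg.has_option("tsfm-service", model_key):
        return cfg.get("tsfm-service", model_key)
    return default
```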
The logic for registering models in TDgpt after deploying them locally is similar for all model types. You only need to modify the class name and the model service name (the key) and set the correct service address. Adaptation files for **Chronos**, **TimesFM**, and **Moirai** are provided by default; users of version 3.3.6.4 and later only need to start the corresponding services locally.
The deployment and startup methods are as follows:
### Starting the Moirai Service
To avoid dependency conflicts, it is recommended to prepare a clean Python virtual environment and install the libraries there.
```python
# ...
    _model_list[0]  # Loads the 'small' model by default; change to 1 to load 'base'
).to(device)
```
Execute the command to start the service. The model files are downloaded automatically during the first startup. If the download speed is too slow, you can use a mirror site (see above for setup).
To simplify management, TDgpt (v3.4.0.0+) provides unified scripts: `start-model.sh` and `stop-model.sh`. These allow you to start or stop specific or all foundation model services with a single command.
### Start Script
The `start-model.sh` script loads the corresponding Python virtual environment and initiates the model service script based on the specified model name.
After a `root` installation, the script is located in `<tdgpt_root>/bin/`. A symbolic link is automatically created at `/usr/bin/start-model` for global access.
Logs are output to `/var/log/taos/taosanode/taosanode_service_<model_name>.log` by default.
```
Supported models: tdtsfm, timesfm, timemoe, moirai, chronos, moment
```
**Options**:
* `-c config_file`: Specifies the configuration file (default: `/etc/taos/taosanode.ini`).
* `-h, --help`: Displays help information.
**Examples**:
1. Start all services in the background: `/usr/bin/start-model all`
2. Start a specific service (e.g., TimesFM): `/usr/bin/start-model timesfm`
3. Specify a custom config file: `/usr/bin/start-model -c /path/to/custom_taosanode.ini`
### Stop Script
`stop-model.sh` is used to terminate specified or all model services. It automatically identifies and kills the relevant Python processes.
**Examples**:
1. Stop the TimesFM service: `/usr/bin/stop-model timesfm`
2. Stop all model services: `/usr/bin/stop-model all`
---
## Dynamic Model Download
In versions 3.3.8.x and later, you can specify different model scales during startup. If no parameters are provided, the driver file (`[xxx]-server.py`) will automatically load the model with the smallest parameter scale.
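A hedged sketch of the scale-selection logic a driver script might use follows. The scale names, their order, and the function are assumptions for illustration; the actual `[xxx]-server.py` files may differ:

```python
import sys

# Assumed scale names, smallest first; real model lists differ per family.
SCALES = ["tiny", "small", "base"]


def choose_scale(argv):
    # With an explicit scale argument, use it; otherwise fall back to the
    # smallest parameter scale, mirroring the documented default behavior.
    if len(argv) > 1 and argv[1] in SCALES:
        return argv[1]
    return SCALES[0]


if __name__ == "__main__":
    print(choose_scale(sys.argv))
```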
Additionally, if you have manually downloaded model files, you can run them by specifying the local path.
```bash
# Run the chronos-bolt-tiny model located at /var/lib/taos/taosanode/model/chronos.
# If the directory doesn't exist, it will download automatically to that path.
# The third parameter (True) enables the mirror site for faster downloads (recommended for users in China).
```
You can add more open-source or proprietary TSFMs to TDgpt by following the process described in this document. Ensure that the class and service names have been configured appropriately and that the service URL is reachable.