Commit 1f3bec4

Fix windows service demo (#3972) (#3974)
Cherry-pick of: 0d82dfa Ticket: CVS-180981
1 parent ce41e54

2 files changed: +5 −5 lines


docs/windows_service.md

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ ovms --list_models
 
 ### Pull models
 ```bat
-ovms --pull --model_name OpenVINO/Qwen3-8B-int4-ov --task text_generation --target_device
+ovms --pull --model_name OpenVINO/Qwen3-8B-int4-ov --task text_generation --target_device CPU
 ```
 
 ### Start a model by adding it to the config.json
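The fix above completes a truncated command: `--target_device` was left without a value, so the documented pull command would fail to parse. A minimal smoke-test sketch of the corrected flow, assuming `ovms` is installed and on PATH (`--list_models` is the option shown in this hunk's own context line):

```bat
ovms --pull --model_name OpenVINO/Qwen3-8B-int4-ov --task text_generation --target_device CPU
ovms --list_models
```

If the pull succeeded, the pulled model should appear in the `--list_models` output.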

extras/llama_swap/README.md

Lines changed: 4 additions & 4 deletions

@@ -15,10 +15,10 @@ While this tool was implemented for llama-cpp project, it can be easily enabled
 ## Pull the models needed for the deployment
 
 ```bat
-ovms pull --task embeddings --model_name OpenVINO/Qwen3-Embedding-0.6B-int8-ov --target_device GPU --cache_dir .ov_cache --pooling LAST
-ovms pull --task text_generation --model_name OpenVINO/Qwen3-4B-int4-ov --target_device GPU --cache_dir .ov_cache --tool_parser hermes3
-ovms pull --task text_generation --model_name OpenVINO/InternVL2-2B-int4-ov --target_device GPU --cache_dir .ov_cache
-ovms pull --task text_generation --model_name OpenVINO/Mistral-7B-Instruct-v0.3-int4-ov --target_device GPU --cache_dir .ov_cache --tool_parser mistral
+ovms --pull --task embeddings --model_name OpenVINO/Qwen3-Embedding-0.6B-int8-ov --target_device GPU --cache_dir .ov_cache --pooling LAST
+ovms --pull --task text_generation --model_name OpenVINO/Qwen3-4B-int4-ov --target_device GPU --cache_dir .ov_cache --tool_parser hermes3
+ovms --pull --task text_generation --model_name OpenVINO/InternVL2-2B-int4-ov --target_device GPU --cache_dir .ov_cache
+ovms --pull --task text_generation --model_name OpenVINO/Mistral-7B-Instruct-v0.3-int4-ov --target_device GPU --cache_dir .ov_cache --tool_parser mistral
 ```
 
 ## Configure config.yaml for llama_swap
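This hunk ends where the README moves on to the llama_swap configuration. A hypothetical sketch of what such a config.yaml entry might look like, assuming llama-swap's `models`/`cmd`/`proxy` schema with its `${PORT}` placeholder; the serve command, model name, and paths below are illustrative assumptions, not taken from this commit:

```yaml
# Hypothetical llama-swap entry; flags and paths are illustrative, not from this commit.
models:
  "qwen3-4b":
    # llama-swap substitutes ${PORT} with the port it manages for this model.
    cmd: ovms --rest_port ${PORT} --model_name OpenVINO/Qwen3-4B-int4-ov --model_path models/OpenVINO/Qwen3-4B-int4-ov
    proxy: http://127.0.0.1:${PORT}
```

Consult the llama-swap and OVMS documentation for the exact serve flags before using such an entry.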

0 commit comments