
Commit 0ce06f6

Update doc links to 2026 (openvinotoolkit#3491)
## Description

Update doc links to 2026, as several docs still point to older versions.

## Checklist:
- [x] This PR follows [GenAI Contributing guidelines](https://github.com/openvinotoolkit/openvino.genai?tab=contributing-ov-file#contributing).
- [N/A] Tests have been updated or added to cover the new code.
- [x] I have made corresponding changes to the documentation. https://whitneyfoster.github.io/openvino.genai/
1 parent: 995fa02 · commit: 0ce06f6

File tree

32 files changed: +45 −45 lines

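A mechanical change of this shape, the same version token bumped across 32 files, is commonly scripted rather than edited by hand. A minimal sketch, assuming every affected link has a `docs.openvino.ai/<year>/` segment (`bump_docs_links` is a hypothetical helper, not part of this repo):

```python
import re

def bump_docs_links(text: str, old: str = "2025", new: str = "2026") -> str:
    """Rewrite the version segment of docs.openvino.ai links, leaving all
    other URLs (and unrelated occurrences of the year) untouched."""
    return re.sub(rf"(docs\.openvino\.ai/){old}(?=/)", rf"\g<1>{new}", text)

line = "See [here](https://docs.openvino.ai/2025/openvino-workflow-generative.html)"
print(bump_docs_links(line))
# -> See [here](https://docs.openvino.ai/2026/openvino-workflow-generative.html)
```

In practice the same substitution would be applied in bulk (for example with `grep -rl` piped to `sed`); the Python form keeps the example self-contained.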

README.md

Lines changed: 3 additions & 3 deletions

@@ -73,7 +73,7 @@ Library efficiently supports LoRA adapters for Text and Image generation scenari
 - Select active adapters for every generation
 - Mix multiple adapters with coefficients via alpha blending

-All scenarios are run on top of OpenVINO Runtime that supports inference on CPU, GPU and NPU. See [here](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html) for platform support matrix.
+All scenarios are run on top of OpenVINO Runtime that supports inference on CPU, GPU and NPU. See [here](https://docs.openvino.ai/2026/about-openvino/release-notes-openvino/system-requirements.html) for platform support matrix.

 <a id="optimization-methods"></a>

@@ -87,12 +87,12 @@ OpenVINO™ GenAI library provides a transparent way to use state-of-the-art gen
 Additionally, OpenVINO™ GenAI library implements a continuous batching approach to use OpenVINO within LLM serving. The continuous batching library could be used in LLM serving frameworks and supports the following features:
 - Prefix caching that caches fragments of previous generation requests and corresponding KVCache entries internally and uses them in case of repeated query.

-Continuous batching functionality is used within OpenVINO Model Server (OVMS) to serve LLMs, see [here](https://docs.openvino.ai/2025/openvino-workflow/model-server/ovms_what_is_openvino_model_server.html) for more details.
+Continuous batching functionality is used within OpenVINO Model Server (OVMS) to serve LLMs, see [here](https://docs.openvino.ai/2026/model-server/ovms_what_is_openvino_model_server.html) for more details.

 ## Additional Resources

-- [OpenVINO Generative AI workflow](https://docs.openvino.ai/2025/openvino-workflow-generative.html)
+- [OpenVINO Generative AI workflow](https://docs.openvino.ai/2026/openvino-workflow-generative.html)
 - [Optimum Intel and OpenVINO](https://huggingface.co/docs/optimum/intel/openvino/export)
 - [OpenVINO Notebooks with GenAI](https://openvinotoolkit.github.io/openvino_notebooks/?libraries=OpenVINO+GenAI)

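Prefix caching, as described in the README hunk above, keys saved KVCache entries by previously seen token prefixes so a repeated query only recomputes the unseen tail. A toy sketch of the idea (an illustrative data structure; the library's real KVCache management is internal):

```python
class PrefixCache:
    """Toy prefix cache: maps token-ID prefixes to opaque cached entries."""

    def __init__(self):
        self._store = {}  # tuple of token IDs -> cached "KV" entry

    def put(self, tokens, kv_entry):
        self._store[tuple(tokens)] = kv_entry

    def longest_prefix(self, tokens):
        """Return (matched_length, entry) for the longest cached prefix, or (0, None)."""
        for end in range(len(tokens), 0, -1):
            entry = self._store.get(tuple(tokens[:end]))
            if entry is not None:
                return end, entry
        return 0, None

cache = PrefixCache()
cache.put([1, 2, 3], "kv-for-prefix-123")        # saved from an earlier request
matched, kv = cache.longest_prefix([1, 2, 3, 4, 5])
print(matched)  # -> 3  (only tokens [4, 5] still need prefill)
```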
samples/cpp/image_generation/README.md

Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --task sta

 ## Run text to image

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.

 `stable_diffusion ./dreamlike_anime_1_0_ov/FP16 'cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting'`

samples/cpp/rag/README.md

Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ optimum-cli export openvino --task text-classification --model cross-encoder/ms-

 ## Run

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.

 ### 1. Text Embedding Sample (`text_embeddings.cpp`)
 - **Description:**

samples/cpp/speech_generation/README.md

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ python create_speaker_embedding.py

 ## Run Text-to-speech sample

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html)
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html)
 to run the sample.

 `text-to-speech speecht5_tts "Hello OpenVINO GenAI" speaker_embedding.bin`

samples/cpp/text_generation/README.md

Lines changed: 2 additions & 2 deletions

@@ -32,7 +32,7 @@ and architectures, we still recommend converting the model to the IR format usin

 ## Sample Descriptions
 ### Common information
-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
 Follow [build instruction](../../../src/docs/BUILD.md) to build GenAI samples

 GPUs usually provide better performance compared to CPUs. Modify the source code to change the device for inference to the GPU.

@@ -64,7 +64,7 @@ The following template can be used as a default, but it may not work properly wi
 #### NPU support

 NPU device is supported with some limitations. See [NPU inference of
-LLMs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html) documentation. In particular:
+LLMs](https://docs.openvino.ai/2026/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html) documentation. In particular:

 - Models must be exported with symmetric INT4 quantization (`optimum-cli export openvino --weight-format int4 --sym --model <model> <output_folder>`).
 For models with more than 4B parameters, channel wise quantization should be used (`--group-size -1`).
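The symmetric INT4 requirement in the hunk above can be illustrated numerically. This is an illustrative sketch of per-channel symmetric quantization, not NNCF's actual algorithm: a symmetric scheme fixes the zero-point at 0, so each channel (or group) is described by a single scale.

```python
def quantize_channel_sym_int4(weights):
    """Symmetric INT4 quantization of one channel: zero-point is 0 and the
    largest magnitude maps onto the quantized range (clamped here to [-7, 7])."""
    scale = max(abs(w) for w in weights) / 7
    return [max(-7, min(7, round(w / scale))) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

q, scale = quantize_channel_sym_int4([0.7, -0.35, 0.14])
restored = dequantize(q, scale)  # each value lands within scale/2 of the original
```

With `--group-size -1` the same idea is applied channel-wise (one scale per output channel) rather than per group, which the NPU guidance above recommends for models over 4B parameters.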

samples/cpp/video_generation/README.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ pipeline.save_pretrained(output_dir)

 ### Common Information

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
 Follow [build instruction](../../../src/docs/BUILD.md) to build GenAI samples.

 GPUs usually provide better performance compared to CPUs. Modify the source code to change the device for inference to the GPU.

samples/cpp/visual_language_chat/README.md

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ pip install --upgrade-strategy eager -r ../../requirements.txt
 optimum-cli export openvino --model openbmb/MiniCPM-V-2_6 --trust-remote-code MiniCPM-V-2_6
 ```

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run samples.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run samples.

 ## Run image-to-text chat sample:

samples/cpp/whisper_speech_recognition/README.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ You can download example audio file: https://storage.openvinotoolkit.org/models_

 ## Run

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to run the sample.

 `whisper_speech_recognition whisper-base how_are_you_doing_today.wav`

samples/python/text_generation/README.md

Lines changed: 2 additions & 2 deletions

@@ -57,7 +57,7 @@ and architectures, we still recommend converting the model to the IR format usin

 ## Sample Descriptions
 ### Common information
-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
 Follow [build instruction](../../../src/docs/BUILD.md) to build GenAI samples

 GPUs usually provide better performance compared to CPUs. Modify the source code to change the device for inference to the GPU.

@@ -89,7 +89,7 @@ The following template can be used as a default, but it may not work properly wi
 #### NPU support

 NPU device is supported with some limitations. See [NPU inference of
-LLMs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html) documentation. In particular:
+LLMs](https://docs.openvino.ai/2026/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html) documentation. In particular:

 - Models must be exported with symmetric INT4 quantization (`optimum-cli export openvino --weight-format int4 --sym --model <model> <output_folder>`).
 For models with more than 4B parameters, channel wise quantization should be used (`--group-size -1`).

samples/python/video_generation/README.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ pipeline.save_pretrained(output_dir)

 ### Common Information

-Follow [Get Started with Samples](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
+Follow [Get Started with Samples](https://docs.openvino.ai/2026/get-started/learn-openvino/openvino-samples/get-started-demos.html) to get common information about OpenVINO samples.
 Follow [build instruction](../../../src/docs/BUILD.md) to build GenAI samples.

 GPUs usually provide better performance compared to CPUs. Modify the source code to change the device for inference to the GPU.
