Intel will not provide or guarantee development of or support for this project, including, but not limited to, maintenance, bug fixes, new releases, or updates.
Patches to this project are no longer accepted by Intel.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.
This repository contains NLP use cases built using various Intel AI components, with a focus on the OpenVINO™ toolkit. Each use case is accompanied by detailed documentation in its respective folder.
| Use Case Name | Description | Folder Name |
|---|---|---|
| Quantization-Aware Training and Inference using the OpenVINO™ toolkit | An end-to-end NLP workflow with quantization-aware training using Optimum-Intel*, and inference using Optimum-Intel*, OpenVINO™ Model Server, and Optimum ONNX Runtime with the OpenVINO™ Execution Provider | question-answering-bert-qat |