This guide provides step-by-step instructions for deploying the Chat Question-and-Answer Core Sample Application using Helm.
Before you begin, ensure that you have the following prerequisites:
- Kubernetes cluster set up and running.
- The cluster must support dynamic provisioning of Persistent Volumes (PV). Refer to the Kubernetes Dynamic Provisioning Guide for more details.
- Install `kubectl` on your system and ensure you have access to the Kubernetes cluster. Refer to the kubectl Installation Guide.
- Install `helm` on your system. Refer to the Helm Installation Guide.
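A quick way to verify these prerequisites from your shell is sketched below; the StorageClass check simply confirms that at least one storage class (ideally one marked as default) is available for dynamic PV provisioning.

```bash
# Confirm the client tools are installed and the cluster is reachable
kubectl version --client
helm version
kubectl get nodes

# Dynamic provisioning requires a StorageClass; look for one marked "(default)"
kubectl get storageclass
```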
You can deploy the ChatQ&A Core application using Helm in two ways: by pulling the Helm chart from Docker Hub or by building it from the source code. Follow the steps below based on your preferred method.
Note: Steps 1–3 differ depending on whether you choose to pull the chart or build it from source.
Option 1: Pull the Helm chart from Docker Hub. Use the following command:
```bash
helm pull oci://registry-1.docker.io/intel/chat-question-and-answer-core --version <version-no>
```
Refer to the Docker Hub tags page for details on the latest version number to use for the sample application.
Unpack the downloaded .tgz file and change into the extracted directory:
```bash
tar -xvf chat-question-and-answer-core-<version-no>.tgz
cd chat-question-and-answer-core
```
Edit the `values.yaml` file to set the necessary environment variables. Ensure you set the `huggingface.apiToken` and proxy settings as required.
Note: Do not use special characters in configuration values.
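To quickly locate the settings that typically need editing, you can search the unpacked values.yaml as sketched below; alternatively, any value listed in the table further down can be overridden at install time with Helm's `--set` flag instead of editing the file.

```bash
# List the lines in values.yaml that mention the Hugging Face token or proxy settings
grep -nEi 'apitoken|proxy' values.yaml
```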
Next, choose the appropriate values*.yaml file based on the model framework you want to use:
- OpenVINO toolkit: Use `values-openvino.yaml`
- Ollama: Use `values-ollama.yaml`
For the OpenVINO toolkit framework, the models (embedding, reranker, and LLM) are downloaded from the Hugging Face Hub. For the Ollama framework, the models (embedding and LLM) are pulled from the Ollama model registry.
To enable GPU support, set the configuration parameter `gpu.enabled` to true and provide the corresponding `gpu.key` assigned to your cluster node in the `values.yaml` file.
GPU support is only available for the OpenVINO toolkit framework.
For detailed information on supported and validated hardware platforms and configurations, please refer to the Validated Hardware Platform section.
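To identify the label to use for `gpu.key`, you can inspect the GPU node directly; a minimal sketch, assuming the Intel GPU device plugin is already deployed and <your-gpu-node> is the node hosting the GPU:

```bash
# Labels/resources such as gpu.intel.com/i915 or gpu.intel.com/xe in the output
# are candidates for the gpu.key value in values.yaml
kubectl describe node <your-gpu-node> | grep -i 'gpu.intel.com'
```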
| Key | Description | Example Value | Required When | Supported Framework (OpenVINO/Ollama) |
|---|---|---|---|---|
| `configmap.enabled` | Enable use of ConfigMap for model configuration. Set to true to use the ConfigMap; otherwise, defaults in the application are used. (true/false) | true | Always. Defaults to true in values.yaml | Both |
| `global.huggingface.apiToken` | Hugging Face API token | `<your-huggingface-token>` | Always | OpenVINO |
| `global.EMBEDDING_MODEL` | Embedding model name | OpenVINO: BAAI/bge-small-en-v1.5; Ollama: bge-large | If configmap.enabled = true | Both |
| `global.LLM_MODEL` | LLM model for OVMS | OpenVINO: microsoft/Phi-3.5-mini-instruct; Ollama: phi3 | If configmap.enabled = true | Both |
| `global.RERANKER_MODEL` | Reranker model name | BAAI/bge-reranker-base | If configmap.enabled = true | OpenVINO |
| `global.PROMPT_TEMPLATE` | RAG template for formatting input to the LLM. Supports {context} and {question}. Leave empty to use the default. | See values.yaml for an example | Optional | Both |
| `global.UI_NODEPORT` | Static port for the UI service (30000–32767). Leave empty for automatic assignment. | | Optional | Both |
| `global.keeppvc` | Persist storage (true/false) | false | Optional. Defaults to false in values.yaml | Both |
| `global.EMBEDDING_DEVICE` | Device for embedding (CPU/GPU) | CPU | Always. Defaults to CPU in values.yaml | OpenVINO |
| `global.RERANKER_DEVICE` | Device for reranker (CPU/GPU) | CPU | Always. Defaults to CPU in values.yaml | OpenVINO |
| `global.LLM_DEVICE` | Device for LLM (CPU/GPU) | CPU | Always. Defaults to CPU in values.yaml | OpenVINO |
| `global.MAX_TOKENS` | Number of output tokens | 1024 | Optional. Defaults to 1024; must not exceed 1024. | Both |
| `global.keep_alive` | Controls how long a loaded model remains in memory after use. | "1h" (str) = 1 hour; "30m" (str) = 30 minutes; 1800 (int) = 1800 seconds (30 minutes); 0 (int) = unload immediately after use; -1 (int) = keep loaded forever | Optional. Defaults to -1 in values-ollama.yaml | Ollama |
| `gpu.enabled` | Deploy on GPU (true/false) | false | Optional | OpenVINO |
| `gpu.key` | Label assigned to the GPU node on the Kubernetes cluster by the device plugin, e.g. gpu.intel.com/i915 or gpu.intel.com/xe. Identify it by running `kubectl describe node`. | `<your-node-key-on-cluster>` | If gpu.enabled = true | OpenVINO |
NOTE:
- If `configmap.enabled` is set to false, the application will use its default internal configuration. You can view the default configuration template here.
- If `gpu.enabled` is set to false, the parameters `global.EMBEDDING_DEVICE`, `global.RERANKER_DEVICE`, and `global.LLM_DEVICE` must not be set to GPU. A validation check is included and will throw an error if any of these parameters is set to GPU while GPU support is disabled.
- When `gpu.enabled` is set to true, the default value for these device parameters is GPU. On systems with an integrated GPU, the device ID is always 0 (i.e., GPU.0), and GPU is treated as an alias for GPU.0. On systems with multiple GPUs (e.g., both integrated and discrete Intel GPUs), you can specify the desired devices using comma-separated IDs such as GPU.0,GPU.1, and so on.
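As a concrete illustration, the GPU-related values can also be passed as `--set` overrides on the helm install command used in the deployment step further below; this is only a sketch, and gpu.intel.com/i915 is an example label that must be replaced with the key reported for your GPU node.

```bash
# Sketch: deploy with GPU enabled (OpenVINO toolkit only)
helm install chatqna-core . \
  -f values.yaml -f values-openvino.yaml \
  --set gpu.enabled=true \
  --set gpu.key=gpu.intel.com/i915 \
  --namespace <your-namespace>
```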
Option 2: Clone the repository containing the Helm chart:
```bash
# Clone the latest on mainline
git clone https://github.com/open-edge-platform/edge-ai-libraries.git edge-ai-libraries

# Alternatively, clone a specific release branch
git clone https://github.com/open-edge-platform/edge-ai-libraries.git edge-ai-libraries -b <release-tag>
```
Navigate to the chart directory:
```bash
cd edge-ai-libraries/sample-applications/chat-question-and-answer-core/chart
```
Edit the `values.yaml` file located in the chart directory to set the necessary environment variables. Refer to the table in Option 1, Step 3 for the list of keys and example values.
Note: Do not use special characters in configuration values.
Navigate to the chart directory and build the Helm dependencies using the following command:
```bash
helm dependency build
```
Deploy the Chat Question-and-Answer Core Helm chart:
- Deploy with OpenVINO toolkit:

  ```bash
  helm install chatqna-core -f values.yaml -f values-openvino.yaml . --namespace <your-namespace>
  ```

- Deploy with Ollama:

  ```bash
  helm install chatqna-core -f values.yaml -f values-ollama.yaml . --namespace <your-namespace>
  ```
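To confirm that the release itself was created, you can also query Helm directly; a minimal check, assuming the release name chatqna-core used above:

```bash
helm list -n <your-namespace>
helm status chatqna-core -n <your-namespace>
```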
Check the status of the deployed resources to ensure everything is running correctly:
```bash
kubectl get pods -n <your-namespace>
kubectl get services -n <your-namespace>
```
An Nginx service, running as a reverse proxy in one of the pods, provides access to the application. To reach the UI, you need the host IP of the node on which the Nginx pod is running and the port exposed by the Nginx service.
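Optionally, you can wait until the Nginx pod reports Ready before querying its host IP and port; a minimal sketch using the same pod label as the commands below:

```bash
kubectl wait --for=condition=Ready pod -l app=chatqna-core-nginx \
  -n <your-namespace> --timeout=300s
```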
Run the following commands, replacing $my_namespace with your own namespace, to get the host IP of the node and the port exposed by the Nginx service:
```bash
chatqna_hostip=$(kubectl get pods -l app=chatqna-core-nginx -n $my_namespace -o jsonpath='{.items[0].status.hostIP}')
chatqna_port=$(kubectl get service chatqna-core-nginx -n $my_namespace -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${chatqna_hostip}:${chatqna_port}"
```
Copy the output of the above bash snippet and paste it into your browser to access the application UI.
If any changes are made to the subcharts, update the Helm dependencies using the following command:
```bash
helm dependency update
```
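After updating dependencies, the changes can be rolled out to an existing release with helm upgrade; a minimal sketch, assuming the release name chatqna-core and the OpenVINO toolkit values file from the deployment step above:

```bash
helm upgrade chatqna-core . \
  -f values.yaml -f values-openvino.yaml \
  --namespace <your-namespace>
```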
To uninstall the deployed Helm chart, use the following command:
```bash
helm uninstall <name> -n <your-namespace>
```
- Ensure that all pods are running and the services are accessible.
- Access the application dashboard and verify that it is functioning as expected.
- If you encounter any issues during the deployment process, check the Kubernetes logs for errors (a more detailed inspection sketch follows this list):

  ```bash
  kubectl logs <pod_name>
  ```
- If the PVC created during a Helm chart deployment is not removed or auto-deleted due to a deployment failure or being stuck, it must be deleted manually using the following commands:

  ```bash
  # List the PVCs present in the given namespace
  kubectl get pvc -n <namespace>

  # Delete the required PVC from the namespace
  kubectl delete pvc <pvc-name> -n <namespace>
  ```
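The following is a minimal inspection sketch referenced in the first troubleshooting item above; kubectl describe surfaces scheduling and image-pull problems, and --previous shows logs from a crashed container instance.

```bash
# Show events, resource requests, and image-pull status for a failing pod
kubectl describe pod <pod_name> -n <your-namespace>

# If the container has restarted, inspect the logs of the previous instance
kubectl logs <pod_name> -n <your-namespace> --previous
```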