For deploying large models, such as large language models (LLMs), {productname-long} includes a single-model serving platform that is based on the KServe component. Because each model is deployed on its own model server, the single-model serving platform helps you deploy, monitor, scale, and maintain large models that require more resources.
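As an illustration, deploying a model on KServe comes down to creating an `InferenceService` resource that names the model format, the serving runtime, and the model location. The following is a minimal sketch only; the metadata name, runtime name, storage URI, and resource values are hypothetical and depend on your environment:

[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-llm                         # hypothetical deployment name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM                     # model format served by the runtime
      runtime: vllm-runtime            # hypothetical ServingRuntime name
      storageUri: s3://models/my-llm   # hypothetical model storage location
      resources:
        limits:
          nvidia.com/gpu: "1"          # request one GPU for inference
----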
modules/about-the-single-model-serving-platform.adoc
modules/about-kserve-deployment-modes.adoc
modules/installing-kserve.adoc
modules/deploying-models-using-the-single-model-serving-platform.adoc
modules/enabling-the-single-model-serving-platform.adoc
modules/adding-a-custom-model-serving-runtime-for-the-single-model-serving-platform.adoc
modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc
modules/deploying-models-on-the-single-model-serving-platform.adoc
modules/deploying-models-using-multiple-gpu-nodes.adoc
modules/setting-timeout-for-kserve.adoc
modules/customizing-parameters-serving-runtime.adoc
modules/customizable-model-serving-runtime-parameters.adoc
modules/using-accelerators-with-vllm.adoc
modules/using-oci-containers-for-model-storage.adoc
modules/storing-a-model-in-oci-image.adoc
modules/deploying-model-stored-in-oci-image.adoc
modules/accessing-authentication-token-for-model-deployed-on-single-model-serving-platform.adoc
modules/accessing-inference-endpoint-for-model-deployed-on-single-model-serving-platform.adoc
You can view performance metrics for a specific model that is deployed on the single-model serving platform.
You can optionally enhance the preinstalled model-serving runtimes available in {productname-short} to leverage additional benefits and capabilities, such as optimized inferencing, reduced latency, and fine-tuned resource allocation.
Certain performance issues might require you to tune the parameters of your inference service or model-serving runtime.
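For example, one common form of tuning is passing additional arguments to the runtime container in the `ServingRuntime` specification. The following is a hedged sketch, not a definitive configuration: the resource name is hypothetical, and the arguments shown are vLLM engine options with illustrative values that you would adjust for your hardware:

[source,yaml]
----
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-runtime                 # hypothetical runtime name
spec:
  containers:
  - name: kserve-container
    args:
    - --max-model-len=4096           # cap the context length to fit GPU memory
    - --gpu-memory-utilization=0.9   # fraction of GPU memory vLLM may allocate
----

Lowering `--max-model-len` or `--gpu-memory-utilization` can resolve out-of-memory failures at the cost of shorter context or throughput.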
modules/ref-supported-runtimes.adoc
modules/ref-tested-verified-runtimes.adoc
modules/ref-inference-endpoints.adoc
modules/about-the-NVIDIA-NIM-model-serving-platform.adoc
modules/enabling-the-nvidia-nim-model-serving-platform.adoc
modules/deploying-models-on-the-NVIDIA-NIM-model-serving-platform.adoc
modules/enabling-metrics-for-existing-nim-deployment.adoc
modules/viewing-nvidia-nim-metrics-for-a-nim-model.adoc
modules/viewing-performance-metrics-for-a-nim-model.adoc