Overview of accelerators

If you work with large data sets, you can use accelerators to optimize the performance of your data science models in {productname-short}. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in {productname-short} to assist your data scientists in the following tasks:

  • Natural language processing (NLP)

  • Inference

  • Training deep neural networks

  • Data cleansing and data processing

You can use the following accelerators with {productname-short}:

  • NVIDIA graphics processing units (GPUs)

    • To run compute-heavy workloads in your models, you can enable NVIDIA GPUs in {productname-short}.

    • To enable NVIDIA GPUs on OpenShift, you must install the NVIDIA GPU Operator. A minimal sketch for checking that an enabled GPU is visible from a workbench appears after this list.

  • AMD graphics processing units (GPUs)

    • Use the AMD GPU Operator to enable AMD GPUs for workloads such as AI/ML training and inference.

    • To enable AMD GPUs on OpenShift, you must install the AMD GPU Operator and complete the deployment and driver configuration steps described in the AMD GPU Operator documentation.

    • After the AMD GPU Operator is installed, you can use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs. A corresponding verification sketch appears after this list.

  • Intel Gaudi AI accelerators

    • Intel provides hardware accelerators intended for deep learning workloads.

    • Before you can enable Intel Gaudi AI accelerators in {productname-short}, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment.

    • A workbench image for Intel Gaudi accelerators is not included in {productname-short} by default. Instead, you must create and configure a custom notebook to enable Intel Gaudi AI support.

    • You can enable Intel Gaudi AI accelerators on-premises or with AWS DL1 compute nodes on an AWS instance. A minimal device check appears after this list.

Before you can use an accelerator in {productname-short}, you must enable GPU support in {productname-short}. This includes installing the Node Feature Discovery operator and the NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs.

In addition, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for the accelerator in context. You can create an accelerator profile from the Settings → Accelerator profiles page on the {productname-short} dashboard (see the sketch at the end of this section). If your deployment contains existing accelerators that already had associated accelerator profiles configured, an accelerator profile is created automatically after you upgrade to the latest version of {productname-short}.
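
The following is a minimal sketch for confirming that an enabled NVIDIA GPU is visible from a workbench. It assumes a workbench image that includes a CUDA-enabled build of PyTorch; neither the image nor the library is prescribed by this overview.

[source,python]
----
# Minimal check that an NVIDIA GPU is visible from a workbench.
# Assumes the workbench image ships a CUDA-enabled build of PyTorch.
import torch

if torch.cuda.is_available():
    print(f"NVIDIA GPUs visible: {torch.cuda.device_count()}")
    print(f"Device 0: {torch.cuda.get_device_name(0)}")
else:
    print("No NVIDIA GPU visible; verify the NVIDIA GPU Operator and the accelerator profile.")
----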
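
Similarly, a minimal sketch for an AMD GPU, assuming one of the ROCm workbench images with a ROCm build of PyTorch (which reuses the torch.cuda API on AMD hardware):

[source,python]
----
# Minimal check that an AMD GPU is visible from a ROCm workbench image.
# Assumes a ROCm build of PyTorch; on ROCm, the torch.cuda API targets AMD GPUs.
import torch

print(f"ROCm/HIP runtime: {torch.version.hip}")  # None on non-ROCm builds of PyTorch
if torch.cuda.is_available():
    print(f"AMD GPUs visible: {torch.cuda.device_count()}")
    print(f"Device 0: {torch.cuda.get_device_name(0)}")
else:
    print("No AMD GPU visible; verify the AMD GPU Operator installation.")
----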
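
For Intel Gaudi AI accelerators, a minimal device check might look like the following. It assumes a custom workbench image that bundles the Intel Gaudi software stack, including the habana_frameworks PyTorch bridge; the module name and the hpu device string come from that stack, not from {productname-short}.

[source,python]
----
# Minimal check that an Intel Gaudi accelerator (HPU) is usable from a custom
# workbench image that bundles the Intel Gaudi PyTorch bridge.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

device = torch.device("hpu")
x = torch.ones(2, 2, device=device)  # allocate a small tensor on the Gaudi device
y = (x + x).cpu()                    # compute on the HPU, copy the result back
htcore.mark_step()                   # flush any ops still queued in lazy mode
print(f"HPU round trip OK: {y.tolist()}")
----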
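
The dashboard page described above is the documented way to create an accelerator profile. Purely as an illustration, the following sketch creates a comparable profile as a custom resource with the Kubernetes Python client; the API group, version, namespace, plural, and spec fields are assumptions that you would need to verify against the AcceleratorProfile CRD in your own cluster.

[source,python]
----
# Illustrative only: create an accelerator profile as a custom resource rather than
# through the dashboard. The API group/version, namespace, plural, and spec fields
# are assumptions; verify them against the AcceleratorProfile CRD in your cluster.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

profile = {
    "apiVersion": "dashboard.opendatahub.io/v1",  # assumed API group/version
    "kind": "AcceleratorProfile",
    "metadata": {"name": "nvidia-gpu-example"},
    "spec": {
        "displayName": "NVIDIA GPU (example)",
        "enabled": True,
        "identifier": "nvidia.com/gpu",           # resource name the profile maps to
        "tolerations": [],
    },
}

api.create_namespaced_custom_object(
    group="dashboard.opendatahub.io",             # assumed
    version="v1",
    namespace="redhat-ods-applications",          # assumed dashboard namespace
    plural="acceleratorprofiles",
    body=profile,
)
print("Accelerator profile created.")
----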