CUDA and ROCm GPU HAL Driver

IREE can accelerate model execution on NVIDIA GPUs using CUDA and on AMD GPUs using ROCm. Due to the similarity of CUDA and ROCm APIs and infrastructure, the CUDA and ROCm backends share much of their implementation in IREE:

  • The IREE compiler uses a similar GPU code generation pipeline for each, but generates PTX for CUDA and hsaco for ROCm
  • The IREE runtime HAL driver for ROCm mirrors the one for CUDA, except for the command buffer implementations - CUDA has "direct", "stream", and "graph" command buffers, while ROCm has only "direct" command buffers

Prerequisites

To drive the GPU with CUDA or ROCm, you need a functional CUDA or ROCm environment. You can verify it with the following steps:

For NVIDIA GPUs, run the following command in a shell:

nvidia-smi | grep CUDA

If nvidia-smi does not exist, you will need to install the latest CUDA Toolkit.

For AMD GPUs, run the following command in a shell:

rocm-smi | grep rocm

If rocm-smi does not exist, you will need to install the latest ROCm Toolkit.

Get runtime and compiler

Get IREE runtime

Next you will need to get an IREE runtime that includes the CUDA (for NVIDIA hardware) or ROCm (for AMD hardware) HAL driver.

Build runtime from source

Please make sure you have followed the Getting started page to build IREE from source, then enable the CUDA HAL driver with the IREE_HAL_DRIVER_CUDA option or the experimental ROCm HAL driver with the IREE_EXTERNAL_HAL_DRIVERS=rocm option.
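
For example, a CMake configuration enabling both drivers might look like the following (a minimal sketch following the layout from the Getting started page; the build directory and generator are illustrative, so adjust them to your setup):

cmake -G Ninja -B ../iree-build/ \
    -DIREE_HAL_DRIVER_CUDA=ON \
    -DIREE_EXTERNAL_HAL_DRIVERS=rocm \
    .
cmake --build ../iree-build/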

Download compiler as Python package

Python packages for various IREE functionalities are regularly published to PyPI. See the Python Bindings page for more details. The core iree-compiler package includes the CUDA compiler:

python -m pip install iree-compiler

Tip

iree-compile is installed under your Python module installation path. If you pip install with the --user flag, that is ${HOME}/.local/bin on Linux/macOS, or %APPDATA%\Python on Windows. You may want to add this path to your system's PATH environment variable:

export PATH=${HOME}/.local/bin:${PATH}

Currently ROCm is NOT supported for the Python interface.

Build compiler from source

Please make sure you have followed the Getting started page to build the IREE compiler, then enable the CUDA compiler target with the IREE_TARGET_BACKEND_CUDA option or the ROCm compiler target with the IREE_TARGET_BACKEND_ROCM option.
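
For example (a minimal sketch mirroring the runtime configuration above; both compiler targets can be enabled in a single build):

cmake -G Ninja -B ../iree-build/ \
    -DIREE_TARGET_BACKEND_CUDA=ON \
    -DIREE_TARGET_BACKEND_ROCM=ON \
    .
cmake --build ../iree-build/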

Compile and run the model

With the compiler and runtime ready, we can now compile a model and run it on the GPU.

Compile the model

The IREE compiler transforms a model into its final deployable format through many sequential steps. A model authored in a Python ML framework must first be converted, using that framework's import tool, into the MLIR format expected by the IREE compiler.

Using MobileNet v2 as an example, you can download the SavedModel with trained weights from TensorFlow Hub and convert it using IREE's TensorFlow importer, as sketched below.
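
An import invocation along these lines produces the MLIR input used in the next step (a sketch, assuming the SavedModel was downloaded to mobilenet_v2/; exact importer flags may vary between releases):

iree-import-tf \
    --tf-import-type=savedmodel_v1 \
    --tf-savedmodel-exported-names=predict \
    mobilenet_v2/ -o iree_input.mlir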

Compile using the command-line

Let iree_input.mlir be the model's initial MLIR representation generated by IREE's TensorFlow importer. We can now compile it for each GPU by running one of the following commands. For CUDA:

iree-compile \
    --iree-hal-target-backends=cuda \
    --iree-hal-cuda-llvm-target-arch=<...> \
    --iree-hal-cuda-disable-loop-nounroll-wa \
    --iree-input-type=mhlo \
    iree_input.mlir -o mobilenet-cuda.vmfb

Note that a CUDA target architecture (iree-hal-cuda-llvm-target-arch) of the form sm_<arch_number> is needed to compile towards each GPU architecture. If no architecture is specified, it defaults to sm_35.

Here is a table of commonly used architectures:

CUDA GPU       Target Architecture
NVIDIA K80     sm_35
NVIDIA P100    sm_60
NVIDIA V100    sm_70
NVIDIA A100    sm_80
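
If you are unsure which architecture your GPU has, recent NVIDIA drivers can report the compute capability directly (a sketch; older drivers may not support this query field):

nvidia-smi --query-gpu=compute_cap --format=csv,noheader

A reported value of 8.0 corresponds to sm_80.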

For ROCm:

iree-compile \
    --iree-hal-target-backends=rocm \
    --iree-rocm-target-chip=<...> \
    --iree-rocm-link-bc=true \
    --iree-rocm-bc-dir=<...> \
    --iree-input-type=mhlo \
    iree_input.mlir -o mobilenet-rocm.vmfb

Note that the ROCm bitcode directory (iree-rocm-bc-dir) is required. If the system you are compiling on has ROCm installed, the default value of /opt/rocm/amdgcn/bitcode will usually suffice. If you intend to build for ROCm on a system without ROCm installed, set iree-rocm-bc-dir to the absolute path where you have saved the amdgcn bitcode.

Note that a ROCm target chip (iree-rocm-target-chip) of the form gfx<arch_number> is needed to compile towards each GPU architecture. If no architecture is specified, it defaults to gfx908. Here is a table of commonly used architectures:

AMD GPU        Target Chip
AMD MI25       gfx900
AMD MI50       gfx906
AMD MI60       gfx906
AMD MI100      gfx908
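
On a machine with ROCm installed, the target chip of the local GPU can usually be read from rocminfo output (a sketch; the exact line format depends on the ROCm version):

rocminfo | grep gfx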

Run the model

Run using the command-line

Run one of the following commands, matching the backend the module was compiled for. For CUDA:

iree-run-module \
    --device=cuda \
    --module=mobilenet-cuda.vmfb \
    --function=predict \
    --input="1x224x224x3xf32=0"

For ROCm:

iree-run-module \
    --device=rocm \
    --module=mobilenet-rocm.vmfb \
    --function=predict \
    --input="1x224x224x3xf32=0"

The above assumes the exported function in the model is named predict and that it expects a single 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see iree-run-module --help for the format used to specify concrete values.
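
For example, a small tensor with explicit element values can be passed like this (a sketch of the value syntax on a hypothetical 2x2 input to a hypothetical module; consult --help for the authoritative format):

iree-run-module \
    --device=cuda \
    --module=some-module.vmfb \
    --function=some_function \
    --input="2x2xf32=1.0 2.0 3.0 4.0"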