Update ORT doc for ROCM 6.0 (#1862)
* Update ORT doc for ROCM 6.0

* Update amdgpu.mdx
mht-sharma authored May 28, 2024
1 parent 6d56c5f commit f300865
Showing 1 changed file with 8 additions and 8 deletions.
16 changes: 8 additions & 8 deletions docs/source/onnxruntime/usage_guides/amdgpu.mdx
@@ -7,11 +7,11 @@ Our testing involved AMD Instinct GPUs, and for specific GPU compatibility, plea
This guide will show you how to run inference with the `ROCMExecutionProvider`, the execution provider that ONNX Runtime supports for AMD GPUs.

## Installation
-The following setup installs the ONNX Runtime support with ROCM Execution Provider with ROCm 5.7.
+The following setup installs ONNX Runtime with the ROCm Execution Provider, built against ROCm 6.0.

#### 1 ROCm Installation

-Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 5.7.
+Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 6.0.

#### 2 Installing `onnxruntime-rocm`

@@ -26,11 +26,11 @@ docker build -f Dockerfile -t ort/rocm .
**Local Installation Steps:**

##### 2.1 PyTorch with ROCm Support
-Optimum ONNX Runtime integration relies on some functionalities of Transformers that require PyTorch. For now, we recommend to use Pytorch compiled against RoCm 5.7, that can be installed following [PyTorch installation guide](https://pytorch.org/get-started/locally/):
+Optimum's ONNX Runtime integration relies on some Transformers functionality that requires PyTorch. For now, we recommend using PyTorch compiled against ROCm 6.0, which can be installed following the [PyTorch installation guide](https://pytorch.org/get-started/locally/):

```diff
-pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
-# Use 'rocm/pytorch:latest' as the preferred base image when using Docker for PyTorch installation.
+pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
+# Use 'rocm/pytorch:rocm6.0.2_ubuntu22.04_py3.10_pytorch_2.1.2' as the preferred base image when using Docker for PyTorch installation.
```
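Once PyTorch is installed, it is worth confirming that the local build actually targets ROCm. The check below is an illustrative sketch, not part of the commit: ROCm builds of PyTorch expose the HIP version via `torch.version.hip`, which is absent or `None` on CUDA/CPU builds, so the guarded import degrades gracefully on any machine.

```python
def torch_rocm_status():
    """Report whether the installed PyTorch (if any) is a ROCm/HIP build."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    # ROCm wheels set torch.version.hip; CUDA/CPU wheels leave it None.
    hip_version = getattr(torch.version, "hip", None)
    if hip_version:
        return f"ROCm/HIP build detected (HIP {hip_version})"
    return "torch is installed, but it is not a ROCm build"

print(torch_rocm_status())
```

On a machine provisioned per the steps above, this should report a ROCm/HIP build; any other result suggests the wrong wheel index was used.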

##### 2.2 ONNX Runtime with ROCm Execution Provider
@@ -42,13 +42,13 @@ pip install cmake onnx
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install ONNXRuntime from source
-git clone --recursive https://github.com/ROCmSoftwarePlatform/onnxruntime.git
+git clone --single-branch --branch main --recursive https://github.com/Microsoft/onnxruntime onnxruntime
cd onnxruntime
-git checkout rocm5.7_internal_testing_eigen-3.4.zip_hash

-./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
+./build.sh --config Release --build_wheel --allow_running_as_root --update --build --parallel --cmake_extra_defines CMAKE_HIP_ARCHITECTURES=gfx90a,gfx942 ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
pip install build/Linux/Release/dist/*
```
Note: These instructions build ORT for `MI210`/`MI250`/`MI300` GPUs. To target other architectures, update `CMAKE_HIP_ARCHITECTURES` in the build command.
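After installing the wheel, a quick way to confirm the build exposes the ROCm provider is to request `ROCMExecutionProvider` with a CPU fallback. The helper below is a minimal, illustrative sketch of that selection logic (the provider names are the real ONNX Runtime identifiers, but the function itself is hypothetical):

```python
def pick_execution_provider(
    available,
    preferred=("ROCMExecutionProvider", "CPUExecutionProvider"),
):
    """Return the first preferred provider present in `available`.

    `available` would normally come from onnxruntime.get_available_providers().
    """
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError(f"None of {preferred} is available (got {available})")
```

With a ROCm-enabled build, `pick_execution_provider(onnxruntime.get_available_providers())` should return `"ROCMExecutionProvider"`; a CPU-only wheel would fall back to `"CPUExecutionProvider"`.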

<Tip>
To avoid conflicts between `onnxruntime` and `onnxruntime-rocm`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing `onnxruntime-rocm`.
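The conflict the tip warns about can also be detected programmatically. The sketch below is an illustration, not part of the original doc; it uses the standard library's `importlib.metadata` to check whether both distributions are present in an environment:

```python
from importlib import metadata


def conflicting_ort_installs(installed=None):
    """Return ONNX Runtime distributions that conflict when installed together.

    If `installed` is None, inspect the current environment.
    """
    if installed is None:
        installed = {
            (dist.metadata["Name"] or "").lower()
            for dist in metadata.distributions()
        }
    variants = {"onnxruntime", "onnxruntime-rocm"}
    found = variants & {name.lower() for name in installed}
    # One variant is fine; both at once indicates a conflict.
    return sorted(found) if len(found) > 1 else []
```

An empty result means the environment is clean; a two-element result means `pip uninstall onnxruntime` should be run before using `onnxruntime-rocm`.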
