diff --git a/docs/source/onnxruntime/usage_guides/amdgpu.mdx b/docs/source/onnxruntime/usage_guides/amdgpu.mdx
index acd8d732ac3..575f7700ce9 100644
--- a/docs/source/onnxruntime/usage_guides/amdgpu.mdx
+++ b/docs/source/onnxruntime/usage_guides/amdgpu.mdx
@@ -7,11 +7,11 @@ Our testing involved AMD Instinct GPUs, and for specific GPU compatibility, plea
 This guide will show you how to run inference on the `ROCMExecutionProvider` execution provider that ONNX Runtime supports for AMD GPUs.
 
 ## Installation
-The following setup installs the ONNX Runtime support with ROCM Execution Provider with ROCm 5.7.
+The following setup installs ONNX Runtime support with the ROCm Execution Provider for ROCm 6.0.
 
 #### 1 ROCm Installation
 
-Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 5.7.
+Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 6.0.
 
 #### 2 Installing `onnxruntime-rocm`
 
@@ -26,11 +26,11 @@ docker build -f Dockerfile -t ort/rocm .
 **Local Installation Steps:**
 
 ##### 2.1 PyTorch with ROCm Support
-Optimum ONNX Runtime integration relies on some functionalities of Transformers that require PyTorch. For now, we recommend to use Pytorch compiled against RoCm 5.7, that can be installed following [PyTorch installation guide](https://pytorch.org/get-started/locally/):
+Optimum's ONNX Runtime integration relies on some functionalities of Transformers that require PyTorch. For now, we recommend using PyTorch compiled against ROCm 6.0, which can be installed following the [PyTorch installation guide](https://pytorch.org/get-started/locally/):
 
 ```bash
-pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
-# Use 'rocm/pytorch:latest' as the preferred base image when using Docker for PyTorch installation.
+pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
+# Use 'rocm/pytorch:rocm6.0.2_ubuntu22.04_py3.10_pytorch_2.1.2' as the preferred base image when using Docker for PyTorch installation.
 ```
 
 ##### 2.2 ONNX Runtime with ROCm Execution Provider
 
@@ -42,13 +42,13 @@ pip install cmake onnx
 curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
 
 # Install ONNXRuntime from source
-git clone --recursive https://github.com/ROCmSoftwarePlatform/onnxruntime.git
+git clone --single-branch --branch main --recursive https://github.com/Microsoft/onnxruntime onnxruntime
 cd onnxruntime
-git checkout rocm5.7_internal_testing_eigen-3.4.zip_hash
-./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
+./build.sh --config Release --build_wheel --allow_running_as_root --update --build --parallel --cmake_extra_defines CMAKE_HIP_ARCHITECTURES=gfx90a,gfx942 ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
 pip install build/Linux/Release/dist/*
 ```
 
+Note: These instructions build ONNX Runtime for `MI210/MI250/MI300` GPUs. To support other architectures, update `CMAKE_HIP_ARCHITECTURES` in the build command.
 To avoid conflicts between `onnxruntime` and `onnxruntime-rocm`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing `onnxruntime-rocm`.
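
After applying this change, a quick way to confirm that the source build actually registered the ROCm Execution Provider, and to run a first inference through Optimum, is a sketch along the following lines. This snippet is not part of the diff; it assumes the `optimum` and `transformers` packages are installed, and `distilbert-base-uncased-finetuned-sst-2-english` is used purely as an example checkpoint.

```python
# Sanity-check sketch (illustrative only, not part of the diff above).
import onnxruntime

# A source build with --use_rocm should register the ROCm EP; if this
# assertion fails, the build or the wheel installation went wrong.
assert "ROCMExecutionProvider" in onnxruntime.get_available_providers()

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# Example checkpoint only; any model supported by Optimum's ONNX export works.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(
    model_id,
    export=True,                       # export the PyTorch checkpoint to ONNX on the fly
    provider="ROCMExecutionProvider",  # run on the AMD GPU rather than the CPU
)

inputs = tokenizer("ONNX Runtime on ROCm works!", return_tensors="pt")
print(model(**inputs).logits)
```

If `ROCMExecutionProvider` is missing from the available providers, the stock `onnxruntime` wheel is most likely still installed, which the last paragraph of the guide addresses.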