diff --git a/README.md b/README.md
index c86fa19c5a6..d18cc041012 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,7 @@ If you'd like to use the accelerator-specific features of 🤗 Optimum, you can
 | [ONNX Runtime](https://onnxruntime.ai/docs/) | `pip install --upgrade-strategy eager optimum[onnxruntime]` |
 | [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `pip install --upgrade-strategy eager optimum[neural-compressor]` |
 | [OpenVINO](https://docs.openvino.ai/latest/index.html) | `pip install --upgrade-strategy eager optimum[openvino,nncf]` |
+| [AMD Instinct GPUs and Ryzen AI NPU](https://www.amd.com/en/graphics/instinct-server-accelerators) | `pip install --upgrade-strategy eager optimum[amd]` |
 | [Habana Gaudi Processor (HPU)](https://habana.ai/training/) | `pip install --upgrade-strategy eager optimum[habana]` |
 | [FuriosaAI](https://www.furiosa.ai/) | `pip install --upgrade-strategy eager optimum[furiosa]` |
diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index 0c9a9c87f1e..d3c7399a4c7 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -33,9 +33,9 @@ As such, Optimum enables developers to efficiently use any of these platforms wi
 >
 Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia
-Available soon AMD Instinct GPUs
+Enable performance optimizations for AMD Instinct GPUs and AMD Ryzen AI NPUs
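
For anyone applying this patch, the new `amd` extra introduced in the README table can be exercised as below. This is a minimal sketch, not part of the diff itself; it assumes `pip` and a Python 3.8+ environment, and quotes the extra so shells like zsh don't treat the brackets as a glob pattern.

```shell
# Install Optimum with the AMD extra added by this diff.
# --upgrade-strategy eager upgrades dependencies as well, matching the table rows.
pip install --upgrade-strategy eager "optimum[amd]"

# Sanity check: the base package should be importable after installation.
python -c "import optimum"
```

The same `--upgrade-strategy eager` flag appears on every row of the table, so that eager dependency upgrades also apply to the accelerator-specific packages the extra pulls in.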