diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index b9eefcd9999..52e35d61d7c 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -23,27 +23,27 @@ As such, Optimum enables developers to efficiently use any of these platforms wi
-Maximize training throughput and efficiency with Habana's Gaudi processor
+Maximize training throughput and efficiency with Habana's Gaudi processor
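For context, a minimal sketch of what the Gaudi card points to, assuming optimum-habana's `GaudiTrainer`/`GaudiTrainingArguments` API (argument names may differ across releases; the model, tiny dataset, and output directory are placeholders):

```python
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tiny illustrative dataset; real training data goes here
train_dataset = Dataset.from_dict({"text": ["great movie", "terrible movie"], "label": [1, 0]})
train_dataset = train_dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32),
    batched=True,
)

training_args = GaudiTrainingArguments(
    output_dir="gaudi_out",                        # placeholder output directory
    use_habana=True,                               # run on HPU devices
    use_lazy_mode=True,                            # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # Gaudi config hosted on the Hub
)

trainer = GaudiTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```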
-Optimize your model to speedup inference with OpenVINO and Neural Compressor
+Optimize your model to speed up inference with OpenVINO and Neural Compressor
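On the OpenVINO side, a minimal sketch with optimum-intel's `OVModelForSequenceClassification`; `export=True` converts the PyTorch checkpoint to OpenVINO IR on the fly (older releases used `from_transformers=True`), and the model id is a placeholder:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Export the Transformers checkpoint to OpenVINO IR and load it for inference
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# OVModel instances plug directly into Transformers pipelines
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes OpenVINO inference easy."))
```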
-Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia
+Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia
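For Inferentia, a sketch assuming optimum-neuron's `NeuronModelForSequenceClassification`; the static-shape export kwargs and the model id are assumptions based on the optimum-neuron docs, not a definitive recipe:

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Compile the model for Neuron devices; static input shapes are fixed at export time
model = NeuronModelForSequenceClassification.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=128,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Pad to the compiled sequence length before running on the Neuron device
inputs = tokenizer("Runs on Inferentia.", return_tensors="pt", padding="max_length", max_length=128)
print(model(**inputs).logits)
```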
-Fast and efficient inference on FuriosaAI WARBOY
+Fast and efficient inference on FuriosaAI WARBOY
-Apply quantization and graph optimization to accelerate Transformers models training and inference with ONNX Runtime
+Apply quantization and graph optimization to accelerate the training and inference of Transformers models with ONNX Runtime
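As a concrete example of the quantization path, a minimal sketch of dynamic quantization with optimum's ONNX Runtime tools (the model id and save directory are placeholders); graph optimization follows the same pattern with `ORTOptimizer` and an optimization config:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the Transformers checkpoint to ONNX and load it with ONNX Runtime
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

# Apply dynamic quantization targeting AVX512-VNNI CPUs
quantizer = ORTQuantizer.from_pretrained(ort_model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="distilbert_sst2_quantized", quantization_config=qconfig)
```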
-A one-liner integration to use PyTorch's BetterTransformer with Transformers models
+A one-liner integration to use PyTorch's BetterTransformer with Transformers models
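The "one-liner" in question is `BetterTransformer.transform`, which swaps supported layers for PyTorch's fast kernel implementations; the model id below is a placeholder and any supported architecture works the same way:

```python
from transformers import AutoModelForSequenceClassification
from optimum.bettertransformer import BetterTransformer

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
model = BetterTransformer.transform(model)  # the one-line conversion
```

After the transform, the model is used exactly as before; per the Optimum docs, `BetterTransformer.reverse(model)` converts it back to the original implementation when needed.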