diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index b9eefcd9999..52e35d61d7c 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -23,27 +23,27 @@ As such, Optimum enables developers to efficiently use any of these platforms wi
Habana

Maximize training throughput and efficiency with Habana's Gaudi processor


Intel

Optimize your model to speed up inference with OpenVINO and Neural Compressor


AWS Trainium/Inferentia

Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia


FuriosaAI

Fast and efficient inference on FuriosaAI WARBOY


ONNX Runtime

Apply quantization and graph optimization to accelerate training and inference of Transformers models with ONNX Runtime


BetterTransformer

A one-liner integration to use PyTorch's BetterTransformer with Transformers models
