From ff3c09a47c849ba0d7b1da5f55ea0397a3772467 Mon Sep 17 00:00:00 2001
From: adismort14 <104080429+adismort14@users.noreply.github.com>
Date: Mon, 13 Nov 2023 23:20:40 +0530
Subject: [PATCH] Fixed typo in pipelines.md documentation (#27455)

Update pipelines.md
---
 docs/source/en/main_classes/pipelines.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/main_classes/pipelines.md b/docs/source/en/main_classes/pipelines.md
index b105cb544ffcbb..3bb3f0d4524218 100644
--- a/docs/source/en/main_classes/pipelines.md
+++ b/docs/source/en/main_classes/pipelines.md
@@ -225,7 +225,7 @@ For users, a rule of thumb is:
 
 - **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the only way to go.**
-- If you are latency constrained (live product doing inference), don't batch
+- If you are latency constrained (live product doing inference), don't batch.
 - If you are using CPU, don't batch.
 - If you are using throughput (you want to run your model on a bunch of static data), on GPU, then:
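
As a minimal sketch of the rule of thumb touched by this patch (when to batch with the `pipeline` API), assuming a GPU at `device=0`; the model name, dataset, and `batch_size` below are illustrative assumptions, not values taken from the documentation:

```python
from transformers import pipeline

# Illustrative model choice; any text-classification checkpoint would do.
pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=0,
)

# Latency constrained (live product) or CPU-only: answer each request as it
# arrives, without batching.
print(pipe("This single request is answered immediately."))

# Throughput on GPU over static data: pass the whole dataset with a batch_size,
# then measure on your own load and hardware before settling on a value.
texts = ["some static text to classify"] * 256  # placeholder static dataset
for result in pipe(texts, batch_size=8):
    print(result)
```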