From e28f97da71f3ab910ea9cc5c078ce758295c162e Mon Sep 17 00:00:00 2001
From: Tom Aarsen <37621491+tomaarsen@users.noreply.github.com>
Date: Fri, 15 Nov 2024 10:53:02 +0100
Subject: [PATCH] [`docs`] Fix the prompt link to the training script (#3060)

This automatic linking doesn't work in an .rst section

---
 examples/training/prompts/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/training/prompts/README.md b/examples/training/prompts/README.md
index 81779e78a..860e0bd2e 100644
--- a/examples/training/prompts/README.md
+++ b/examples/training/prompts/README.md
@@ -96,7 +96,7 @@ Additionally, some research papers (`INSTRUCTOR `_
-* `training_nq_prompts.py `_: This script finetunes `mpnet-base `_ on 100k query-answer pairs from the `natural-questions `_ dataset using the :class:`~sentence_transformers.losses.CachedMultipleNegativesRankingLoss` loss. The model is evaluated during training using the :class:`~sentence_transformers.evaluation.NanoBEIREvaluator`.
+* `training_nq_prompts.py `_: This script finetunes `mpnet-base `_ on 100k query-answer pairs from the `natural-questions `_ dataset using the :class:`~sentence_transformers.losses.CachedMultipleNegativesRankingLoss` loss. The model is evaluated during training using the :class:`~sentence_transformers.evaluation.NanoBEIREvaluator`.
 This script has two variables that affect 1) whether prompts are used and 2) whether prompts are included in the pooling. I have finetuned both ``mpnet-base`` and ``bert-base-uncased`` under the various different settings, resulting in 0.66% and 0.90% relative improvements on ``NDCG@10`` at no extra cost.
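
For context on the README line this patch touches: below is a minimal, illustrative sketch of the kind of training setup it describes, written against the sentence-transformers v3.3+ Trainer API. It is not the actual training_nq_prompts.py; the toggle names (USE_PROMPTS, INCLUDE_PROMPT_IN_POOLING), the prompt strings, and the hyperparameters are assumptions for illustration.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import NanoBEIREvaluator
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Hypothetical names for the "two variables" the README line refers to.
USE_PROMPTS = True                # 1) whether prompts are used
INCLUDE_PROMPT_IN_POOLING = True  # 2) whether prompts are included in the pooling

model = SentenceTransformer("microsoft/mpnet-base")
# Toggle whether the prompt tokens contribute to the (mean) pooling.
model.set_pooling_include_prompt(INCLUDE_PROMPT_IN_POOLING)

# 100k query-answer pairs; columns are "query" and "answer".
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset = dataset.train_test_split(test_size=1_000, seed=12)
train_dataset, eval_dataset = dataset["train"], dataset["test"]

# In-batch negatives loss with gradient caching, so large batches fit in memory.
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=32)

args = SentenceTransformerTrainingArguments(
    output_dir="models/mpnet-base-nq-prompts",  # illustrative path
    num_train_epochs=1,
    per_device_train_batch_size=256,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
    eval_strategy="steps",
    eval_steps=250,
    # Map dataset columns to prompts; the prompt strings are illustrative.
    prompts={"query": "query: ", "answer": "document: "} if USE_PROMPTS else None,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
    evaluator=NanoBEIREvaluator(),  # NanoBEIR retrieval evaluation during training
)
trainer.train()

With this setup, flipping USE_PROMPTS controls whether the per-column prompts are prepended to the inputs at training time, and INCLUDE_PROMPT_IN_POOLING controls whether those prompt tokens are included when embeddings are pooled, which is the pair of settings the README's reported NDCG@10 comparison varies.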