diff --git a/examples/training/prompts/README.md b/examples/training/prompts/README.md
index 81779e78a..860e0bd2e 100644
--- a/examples/training/prompts/README.md
+++ b/examples/training/prompts/README.md
@@ -96,7 +96,7 @@ Additionally, some research papers (`INSTRUCTOR `_
-* `training_nq_prompts.py `_: This script finetunes `mpnet-base `_ on 100k query-answer pairs from the `natural-questions `_ dataset using the :class:`~sentence_transformers.losses.CachedMultipleNegativesRankingLoss` loss. The model is evaluated during training using the :class:`~sentence_transformers.evaluation.NanoBEIREvaluator`.
+* `training_nq_prompts.py `_: This script finetunes `mpnet-base `_ on 100k query-answer pairs from the `natural-questions `_ dataset using the :class:`~sentence_transformers.losses.CachedMultipleNegativesRankingLoss` loss. The model is evaluated during training using the :class:`~sentence_transformers.evaluation.NanoBEIREvaluator`. This script has two variables that affect 1) whether prompts are used and 2) whether prompts are included in the pooling. I have finetuned both ``mpnet-base`` and ``bert-base-uncased`` under the various settings, resulting in 0.66% and 0.90% relative improvements on ``NDCG@10`` at no extra cost.
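The two toggles in the bullet above can be illustrated with a toy sketch (this is not code from `training_nq_prompts.py`; the function name, dimensions, and values are hypothetical): a prompt changes the input text fed to the encoder, while the pooling setting controls whether the prompt's token embeddings participate in mean pooling.

```python
def mean_pool(token_embeddings, num_prompt_tokens, include_prompt):
    """Mean-pool token embeddings; optionally drop the prompt's tokens first.

    This mimics the idea behind the script's second toggle: with
    include_prompt=False, the prompt still conditions the encoder, but its
    tokens are excluded from the pooled sentence embedding.
    """
    tokens = token_embeddings if include_prompt else token_embeddings[num_prompt_tokens:]
    dim = len(tokens[0])
    return [sum(tok[d] for tok in tokens) / len(tokens) for d in range(dim)]

# Toy 1-dimensional "embeddings": one prompt token followed by two content tokens.
embs = [[0.0], [1.0], [3.0]]
pooled_with_prompt = mean_pool(embs, num_prompt_tokens=1, include_prompt=True)
pooled_without_prompt = mean_pool(embs, num_prompt_tokens=1, include_prompt=False)
```

In the real training script these settings roughly correspond, if I recall the v3 API correctly, to passing a ``prompts`` mapping to the training arguments and to the pooling module's ``include_prompt`` option; check the script itself for the exact variable names.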