Commit

Remove some dead code from distillation script
tomaarsen committed on Nov 19, 2024
1 parent ef439ab · commit 090327f
Showing 1 changed file with 0 additions and 10 deletions.
examples/training/distillation/model_distillation.py (0 additions, 10 deletions)
@@ -56,16 +56,6 @@
 inference_batch_size = 64
 train_batch_size = 64
 
-# We use AllNLI as a source of sentences for the distillation
-nli_dataset_path = "datasets/AllNLI.tsv.gz"
-
-# Further, we use sentences extracted from the English Wikipedia to train the distillation
-wikipedia_dataset_path = "datasets/wikipedia-en-sentences.txt.gz"
-
-# We use the STS benchmark dataset to see how much performance we loose
-sts_dataset_path = "datasets/stsbenchmark.tsv.gz"
-
-
 logging.info("Load the AllNLI dataset")
 # Load the AllNLI dataset: https://huggingface.co/datasets/sentence-transformers/all-nli
 nli_train_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="train")
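For context: the deleted variables pointed at local .tsv.gz copies of the corpora, but the script now loads its data directly from the Hugging Face Hub, which left the path constants unused. Below is a minimal sketch of the replacement pattern, assuming the datasets library; the all-nli call is taken from the diff, while the sentence-transformers/stsb repo id and its validation split are assumptions for illustration.

from datasets import load_dataset

# AllNLI from the Hub replaces the local datasets/AllNLI.tsv.gz copy (as in the diff above).
nli_train_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="train")

# STS benchmark for measuring how much performance the distilled model loses.
# The "sentence-transformers/stsb" repo id and "validation" split are assumptions,
# not confirmed by this commit.
sts_eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")

print(nli_train_dataset)
print(sts_eval_dataset)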
