---
license: apache-2.0
base_model: distilbert-base-uncased
---
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the yelp-dataset dataset. It achieves the following results on the evaluation set:
- Loss: 0.2995
- Accuracy: 0.8930
- F1: 0.7863
- Precision: 0.7976
- Recall: 0.7768
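As a quick sanity check, the reported precision and recall are consistent with the reported F1 via the harmonic mean; the small residual suggests the published scores were rounded, or that F1 was computed with a different averaging mode. A minimal sketch using the numbers above:

```python
# Check that the reported F1 is (approximately) the harmonic mean of the
# reported precision and recall. Values are taken from the list above.
precision = 0.7976
recall = 0.7768

# F1 = 2 * P * R / (P + R)
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # within rounding of the reported 0.7863
```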
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
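With `lr_scheduler_type: linear`, the learning rate decays from 2e-05 toward 0 over the full run. A minimal sketch of that decay, assuming zero warmup steps (the card does not list a warmup value) and taking the 19290 total steps from the results table below:

```python
# Sketch of the linear LR schedule used during training, assuming no warmup
# (the card does not list a warmup value).
initial_lr = 2e-05
total_steps = 19290  # 2 epochs x 9645 steps per epoch (from the results table)

def linear_lr(step: int) -> float:
    """Learning rate at a given optimizer step under linear decay to 0."""
    return initial_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))      # 2e-05 at the start
print(linear_lr(9645))   # 1e-05 halfway through (end of epoch 1)
print(linear_lr(19290))  # 0.0 at the end of training
```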
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2996        | 1.0   | 9645  | 0.2995          | 0.8930   | 0.7863 | 0.7976    | 0.7768 |
| 0.2233        | 2.0   | 19290 | 0.3381          | 0.8966   | 0.7957 | 0.8015    | 0.7907 |
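The headline evaluation metrics match the epoch-1 row, which also has the lower validation loss. This is consistent with best-checkpoint selection by eval loss, although the card does not state that explicitly; a small sketch of that selection over the rows above:

```python
# Per-epoch results from the table above: (epoch, validation_loss, accuracy, f1)
rows = [
    (1.0, 0.2995, 0.8930, 0.7863),
    (2.0, 0.3381, 0.8966, 0.7957),
]

# Pick the row with the lowest validation loss, as best-checkpoint
# selection by eval loss would do.
best = min(rows, key=lambda r: r[1])
print(best)  # the epoch-1 row, matching the headline metrics
```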
### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2