# distilbert-base-cased-distilled-squad-v2
This model is a fine-tuned version of distilbert/distilbert-base-cased-distilled-squad on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.9145
## Model description
More information needed
## Intended uses & limitations
More information needed
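Since the card documents no usage, here is a minimal extractive question-answering sketch using the `pipeline` API; the Hub repo id below is an assumption based on the model name and may need to be replaced with the actual path:

```python
# Minimal QA sketch; the model id is an assumption based on the card's
# title, not a confirmed Hub path.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad-v2",  # hypothetical repo id
)

result = qa(
    question="What architecture does the model use?",
    context="The model is a fine-tuned DistilBERT checkpoint for extractive question answering.",
)
print(result["answer"], result["score"])  # answer span plus confidence score
```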
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
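As a reproducibility aid, the hyperparameters above map onto `TrainingArguments` roughly as follows; the output directory is a placeholder, and the dataset and `Trainer` wiring are omitted because the card does not specify them:

```python
# A sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; the training data and Trainer setup are not
# documented on this card and are therefore omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-cased-distilled-squad-v2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # Adam epsilon
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,           # Native AMP mixed-precision training
)
```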
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.969         | 0.17  | 2500  | 0.8847          |
| 0.9411        | 0.34  | 5000  | 0.8974          |
| 0.9072        | 0.51  | 7500  | 0.8331          |
| 0.9098        | 0.68  | 10000 | 0.8146          |
| 0.866         | 0.85  | 12500 | 0.8371          |
| 0.6918        | 1.02  | 15000 | 0.8752          |
| 0.6142        | 1.19  | 17500 | 0.8580          |
| 0.6348        | 1.36  | 20000 | 0.8042          |
| 0.604         | 1.53  | 22500 | 0.8274          |
| 0.5953        | 1.7   | 25000 | 0.8006          |
| 0.6046        | 1.87  | 27500 | 0.8022          |
| 0.4395        | 2.04  | 30000 | 0.8887          |
| 0.4461        | 2.21  | 32500 | 0.9536          |
| 0.4254        | 2.38  | 35000 | 0.9380          |
| 0.4234        | 2.55  | 37500 | 0.9079          |
| 0.396         | 2.72  | 40000 | 0.9392          |
| 0.4161        | 2.89  | 42500 | 0.9145          |
### Framework versions
- Transformers 4.37.2
- PyTorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1