# vxnli-v0
This model is a fine-tuned version of [microsoft/tapex-base-finetuned-wtq](https://huggingface.co/microsoft/tapex-base-finetuned-wtq) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3409
- Exact Match: 0.6371
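
The base checkpoint is a TAPEX model (BART-based, table question answering), so the fine-tuned weights can most likely be loaded with the standard `TapexTokenizer` / `BartForConditionalGeneration` classes. The snippet below is a minimal usage sketch, not taken from this repository: the repo id `vxnli-v0` is a placeholder, and the table and question are illustrative only.

```python
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

# Placeholder repo id; point this at the actual checkpoint location.
model_id = "vxnli-v0"

tokenizer = TapexTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# TAPEX expects the table as a pandas DataFrame of strings;
# the tokenizer linearizes it together with the query.
table = pd.DataFrame({
    "city": ["Paris", "Berlin", "Madrid"],
    "population": ["2.1 million", "3.6 million", "3.3 million"],
})
query = "Which city has the largest population?"

encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```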
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a rough Trainer-API sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
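
The original training script is not included in this card; as a rough reconstruction, the settings above map onto the Transformers `Seq2SeqTrainingArguments` roughly as sketched below. The argument names come from the Trainer API, not from the actual run, and the evaluation/generation flags are assumptions based on the per-epoch results table.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate mirror of the hyperparameters listed above (Adam betas/epsilon are the defaults).
training_args = Seq2SeqTrainingArguments(
    output_dir="vxnli-v0",            # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",      # assumed: the results table reports one evaluation per epoch
    predict_with_generate=True,       # assumed: Exact Match is computed on generated answers
)
```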
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 0.2238 | 1.0 | 400 | 0.3173 | 0.5495 |
| 0.0189 | 2.0 | 800 | 0.3454 | 0.5769 |
| 0.0117 | 3.0 | 1200 | 0.3820 | 0.5509 |
| 0.0077 | 4.0 | 1600 | 0.3753 | 0.5769 |
| 0.0063 | 5.0 | 2000 | 0.3912 | 0.5935 |
| 0.0054 | 6.0 | 2400 | 0.3932 | 0.5906 |
| 0.0052 | 7.0 | 2800 | 0.3967 | 0.5740 |
| 0.0049 | 8.0 | 3200 | 0.3899 | 0.6101 |
| 0.004 | 9.0 | 3600 | 0.4111 | 0.5978 |
| 0.0045 | 10.0 | 4000 | 0.4381 | 0.5870 |
| 0.0038 | 11.0 | 4400 | 0.5001 | 0.5617 |
| 0.0036 | 12.0 | 4800 | 0.4930 | 0.5834 |
| 0.0044 | 13.0 | 5200 | 0.4405 | 0.5639 |
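
The Exact Match column is presumably the fraction of evaluation examples whose generated answer matches the reference string exactly; the helper below is an illustrative sketch of such a metric, not the evaluation code used for this run.

```python
def exact_match(predictions, references):
    """Fraction of predictions matching their reference after light normalization."""
    def normalize(text: str) -> str:
        return " ".join(text.lower().strip().split())
    matches = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return matches / len(references)

# Example: 2 of 3 predictions match -> ~0.667
print(exact_match(["paris", "3.6 million", "berlin"], ["Paris", "3.6 million", "madrid"]))
```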
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2