---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-da
results: []
---
# xlmr-base-texas-squad-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-da dataset.
It achieves the following results on the evaluation set:
- Exact match: 63.95%
- F1-score: 68.39%
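For inference, a minimal sketch using the 🤗 Transformers `pipeline` API is shown below. The repo id `saattrupdan/xlmr-base-texas-squad-da` is an assumption based on this card's name, and the Danish question/context pair is purely illustrative:
```python
from transformers import pipeline

# Load the model through the question-answering pipeline.
# NOTE: the repo id below is an assumption based on this card's name.
qa = pipeline("question-answering", model="saattrupdan/xlmr-base-texas-squad-da")

# Illustrative Danish example: "What is the capital of Denmark called?"
result = qa(
    question="Hvad hedder Danmarks hovedstad?",
    context="København er Danmarks hovedstad og landets største by.",
)
print(result["answer"])  # e.g. "København"
```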
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; an equivalent `TrainingArguments` sketch follows the list:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
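As a hedged illustration only, the list above corresponds roughly to the 🤗 `TrainingArguments` below; the actual training script is not part of this card, and the model/dataset setup is omitted. The Adam betas/epsilon and linear schedule listed above are the `Trainer` defaults:
```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters above (not the original script).
training_args = TrainingArguments(
    output_dir="xlmr-base-texas-squad-da",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",     # Trainer default; listed for clarity
)
```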
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6438 | 1.0 | 4183 | 1.4711 |
| 1.4079 | 2.0 | 8366 | 1.4356 |
| 1.2532 | 3.0 | 12549 | 1.4509 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3