---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - type: f1
      value: 32.3397
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzQ4MzMyZDgzOGRlMjI5MGI2NTU5M2FkMWI5ZDFmMTc0MjczZDU0MzU3YjE2YzRmNjgyMDhjZWI2MTljNGRjNCIsInZlcnNpb24iOjF9.fhrHqXSNMRf79P-fz_uF9zu-q1kmgRrUrwpArmbeUbsBzghFMNlixjGBj0DjRSqNowZx-rPOJEjUfmy6IoKRBA
    - type: exact_match
      value: 21.0333
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk0ZmFmZjA2NThhMjUyYTgwMzM0MjY1Nzg4ZDZmMDkxOTAyOGU0MTI1ZGE3YjMzOTQ3OThlNjFjMTA1NTNmOCIsInZlcnNpb24iOjF9.B3Z30EVq3nftd2fymhabm0rsSop2HvWfnqDl46oyw20jRFwxuKJE3oF72iCGEAworlhC0hurbVMt-WgGj5XyBA
    - type: loss
      value: 3.934098243713379
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzg2MWQxZDY2MDE5M2RkZWNkZDkyOWVjNGNjMDg2ZWE0NTA2ZDFhZTEzYjNiN2YyMDAwZTQyZGJlOTc1NWQ0OSIsInZlcnNpb24iOjF9.i1npIhsmBnPp7HjlBTzl4q0sg1d25aYSy75ui47Fi9VU7oen50LSDoqn9FXvaU42zjXbsaoMX8CyV1PQe4MsBw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
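The checkpoint can be used for extractive question answering through the `transformers` pipeline API. The snippet below is a minimal sketch (not part of the auto-generated card); the repo id is a placeholder for wherever this checkpoint is hosted on the Hub.

```python
# Minimal inference sketch; the repo id below is a placeholder.
from transformers import pipeline

qa_pipeline = pipeline("question-answering", model="<namespace>/bert-finetuned-squad")

result = qa_pipeline(
    question="What architecture is the model based on?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased "
            "trained on the SQuAD dataset for extractive question answering.",
)
print(result)  # dict with 'score', 'start', 'end', and 'answer'
```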
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
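As a rough sketch (not from the original card), the training set and the adversarial evaluation split referenced in the metadata above could be loaded with the `datasets` library:

```python
# Sketch only: mirrors the dataset ids in the card metadata (squad for
# training, adversarial_qa/adversarialQA for evaluation), not the exact
# preprocessing used to produce this checkpoint.
from datasets import load_dataset

train_data = load_dataset("squad", split="train")
eval_data = load_dataset("adversarial_qa", "adversarialQA", split="validation")

print(train_data[0]["question"], train_data[0]["answers"])
```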
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
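
A minimal sketch, assuming the standard `transformers` `Trainer` setup, of how these settings map onto `TrainingArguments` (the output directory is a placeholder; this is not the exact training script):

```python
# Not the original training script: a sketch mapping the listed
# hyperparameters onto transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",   # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                           # Native AMP mixed precision
)
```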
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1