lauraparra28 committed
Commit 7ed1bde
1 Parent(s): 410429c

Update README.md

Files changed (1): README.md (+27, -7)
README.md CHANGED
@@ -2,12 +2,30 @@
  license: apache-2.0
  base_model: albert-base-v2
  tags:
- - generated_from_trainer
+ - generated_from_trainer
  datasets:
- - squad
+ - squad
  model-index:
- - name: albert-base-v2-finetuned-squad
-   results: []
+ - name: albert-base-v2-finetuned-squad
+   results:
+   - task:
+       name: Question Answering
+       type: question-answering
+     dataset:
+       type: squad_v2
+       name: The Stanford Question Answering Dataset
+       args: en
+     metrics:
+     - type: eval_exact
+       value: 76.263
+     - type: eval_f1
+       value: 84.734
+ language:
+ - en
+ metrics:
+ - exact_match
+ - f1
+
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,10 +36,12 @@ should probably proofread and complete it, then remove this comment. -->
  This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
  It achieves the following results on the evaluation set:
  - Loss: 1.4539
+ - Exact Match: 80.60548722800378
+ - F1 score: 88.76870326468953

  ## Model description

- More information needed
+ This model is fine-tuned for the extractive question answering task on the Stanford Question Answering Dataset (SQuAD2.0).

  ## Intended uses & limitations

@@ -29,7 +49,7 @@ More information needed

  ## Training and evaluation data

- More information needed
+ Training and evaluation were done on SQuAD2.0.

  ## Training procedure

@@ -60,4 +80,4 @@ The following hyperparameters were used during training:
  - Transformers 4.34.0
  - Pytorch 1.12.1
  - Datasets 2.14.5
- - Tokenizers 0.14.1
+ - Tokenizers 0.14.1
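
The updated card describes an extractive question answering model. Below is a minimal usage sketch with the `transformers` question-answering pipeline; the repository id `lauraparra28/albert-base-v2-finetuned-squad` is an assumption based on the committer name and the `model-index` name, and the question/context are purely illustrative.

```python
# Minimal usage sketch for the fine-tuned QA model.
# The repository id below is an assumption; adjust it to the actual repo.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lauraparra28/albert-base-v2-finetuned-squad",  # assumed repository id
)

context = (
    "The Stanford Question Answering Dataset (SQuAD) is a reading comprehension "
    "dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles."
)
result = qa(
    question="What kind of dataset is SQuAD?",
    context=context,
    handle_impossible_answer=True,  # relevant for SQuAD2.0-style unanswerable questions
)
print(result["answer"], result["score"])
```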
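
The `eval_exact` and `eval_f1` entries in the metadata correspond to the exact-match and F1 scores of the SQuAD2.0 evaluation. The sketch below shows how such numbers can be computed with the `evaluate` library's `squad_v2` metric; the ids and answers are toy values, and this is not the evaluation script actually used for this model.

```python
# Sketch: computing SQuAD2.0 exact-match and F1 with the `evaluate` library.
# Toy predictions/references only; not the author's evaluation pipeline.
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

predictions = [
    {
        "id": "example-1",                    # hypothetical example id
        "prediction_text": "Denver Broncos",
        "no_answer_probability": 0.0,         # required by the squad_v2 metric
    }
]
references = [
    {
        "id": "example-1",
        "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
    }
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # these keys match eval_exact / eval_f1 in the card
```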