nickrwu committed
Commit aafccb9
Parent: 1915a6f

End of training

Files changed (5)
  1. README.md +13 -15
  2. model.safetensors +1 -1
  3. tokenizer.json +2 -2
  4. tokenizer_config.json +7 -0
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,6 +1,4 @@
  ---
- license: mit
- base_model: LIAMF-USP/roberta-large-finetuned-race
  tags:
  - generated_from_trainer
  metrics:
@@ -18,13 +16,13 @@ should probably proofread and complete it, then remove this comment. -->
  
  # roberta-mqa
  
- This model is a fine-tuned version of [LIAMF-USP/roberta-large-finetuned-race](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race) on an unknown dataset.
+ This model was trained from scratch on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.6094
- - Accuracy: 0.2120
- - F1: 0.1945
- - Precision: 0.2027
- - Recall: 0.2057
+ - Loss: 1.4631
+ - Accuracy: 0.3793
+ - F1: 0.3774
+ - Precision: 0.3819
+ - Recall: 0.3760
  
  ## Model description
  
@@ -44,8 +42,8 @@ More information needed
  
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 16
+ - train_batch_size: 28
+ - eval_batch_size: 28
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -54,11 +52,11 @@ The following hyperparameters were used during training:
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
- |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
- | 1.6124 | 1.0 | 3712 | 1.6094 | 0.2079 | 0.1810 | 0.1981 | 0.2006 |
- | 1.613 | 2.0 | 7424 | 1.6094 | 0.2077 | 0.0871 | 0.1713 | 0.1975 |
- | 1.61 | 3.0 | 11136 | 1.6094 | 0.2120 | 0.1945 | 0.2027 | 0.2057 |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 1.5076 | 1.0 | 1061 | 1.4901 | 0.3372 | 0.3328 | 0.3366 | 0.3321 |
+ | 1.4244 | 2.0 | 2122 | 1.4584 | 0.3594 | 0.3560 | 0.3615 | 0.3545 |
+ | 1.3553 | 3.0 | 3183 | 1.4631 | 0.3793 | 0.3774 | 0.3819 | 0.3760 |
  
  
  ### Framework versions
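The updated model card above reports the new evaluation metrics and the larger batch sizes. As a minimal usage sketch (not part of the repo itself): assuming the checkpoint carries a multiple-choice head as the name roberta-mqa and the original base model LIAMF-USP/roberta-large-finetuned-race suggest, it could be loaded roughly as below. The repo id `nickrwu/roberta-mqa`, the example question, and the candidate answers are assumptions for illustration only.

```python
# Minimal sketch, assuming the checkpoint is a multiple-choice model and the
# hub path is "nickrwu/roberta-mqa" (inferred from the commit author and card
# title). The question and candidate answers below are made up.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo_id = "nickrwu/roberta-mqa"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMultipleChoice.from_pretrained(repo_id)
model.eval()

question = "What is the capital of France?"
choices = ["Berlin", "Madrid", "Paris", "Rome", "Lisbon"]

# Pair the question with every candidate, then add a batch dimension so the
# inputs have shape (batch=1, num_choices, seq_len), as the model expects.
enc = tokenizer(
    [question] * len(choices),
    choices,
    truncation=True,
    padding="max_length",
    max_length=128,   # matches the tokenizer settings in this commit
    return_tensors="pt",
)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print("predicted choice:", choices[logits.argmax(dim=-1).item()])
```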
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:355975a656b3dd01f2004661fdb8602866177f506299c5266560d02baa036844
+ oid sha256:a69d4df6a1517a588ac45ca0ba0ca3dffc7ce35a95e241e259c47da914da8517
  size 1421491284
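Only the LFS object hash changes; the file size stays at 1,421,491,284 bytes, which is consistent with a roberta-large-sized checkpoint stored in fp32 (roughly 355M parameters at 4 bytes each). A hedged sketch for inspecting the new weights locally, again with the repo id assumed:

```python
# Hedged sketch: download the updated checkpoint and count its parameters.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download("nickrwu/roberta-mqa", "model.safetensors")  # assumed repo id
state_dict = load_file(path)                        # tensor name -> torch.Tensor
print(sum(t.numel() for t in state_dict.values()))  # total parameter count
```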
tokenizer.json CHANGED
@@ -2,13 +2,13 @@
    "version": "1.0",
    "truncation": {
      "direction": "Right",
-     "max_length": 256,
+     "max_length": 128,
      "strategy": "LongestFirst",
      "stride": 0
    },
    "padding": {
      "strategy": {
-       "Fixed": 256
+       "Fixed": 128
      },
      "direction": "Right",
      "pad_to_multiple_of": null,
tokenizer_config.json CHANGED
@@ -48,10 +48,17 @@
    "eos_token": "</s>",
    "errors": "replace",
    "mask_token": "<mask>",
+   "max_length": 128,
    "model_max_length": 512,
+   "pad_to_multiple_of": null,
    "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
    "sep_token": "</s>",
+   "stride": 0,
    "tokenizer_class": "RobertaTokenizer",
    "trim_offsets": true,
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
    "unk_token": "<unk>"
  }
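The keys added to tokenizer_config.json (max_length, padding_side, truncation_side, stride, and so on) are tokenizer kwargs that Transformers persists on save_pretrained and restores on from_pretrained. A hedged sketch of checking a few of them after reloading, repo id assumed:

```python
# Hedged sketch: persisted tokenizer settings resurface as attributes.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("nickrwu/roberta-mqa")  # assumed repo id
print(tok.model_max_length)  # 512
print(tok.padding_side)      # "right"
print(tok.truncation_side)   # "right"
```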
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3a1586777c2c3d51f3d19369f0acd98471143cffee8b778dc238892df728228d
+ oid sha256:29833ff135368169c490c02f10b2e299fbd028f85b3ec2ea7cc8875ffdebb575
  size 4920
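training_args.bin is the TrainingArguments object that the Trainer pickles next to the checkpoint; here only its LFS hash changes. A hedged sketch for inspecting it, repo id assumed (it is a pickle rather than a tensor file, so only load it from a source you trust):

```python
# Hedged sketch: download and unpickle the saved TrainingArguments.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download("nickrwu/roberta-mqa", "training_args.bin")  # assumed repo id
args = torch.load(path, weights_only=False)  # pickled TrainingArguments, not tensors
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```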