datarpit committed on
Commit
cd848d6
1 Parent(s): b60f997

update model card README.md

Files changed (1): README.md (+8 -59)
README.md CHANGED
@@ -2,8 +2,6 @@
 license: apache-2.0
 tags:
 - generated_from_trainer
-datasets:
-- natural_questions
 model-index:
 - name: toy-qa
   results: []
@@ -14,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # toy-qa
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the natural_questions dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2284
+- Loss: 0.2410
 
 ## Model description
 
@@ -36,67 +34,18 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 50
+- num_epochs: 1
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 5.3253        | 1.0   | 14   | 3.3435          |
-| 3.1799        | 2.0   | 28   | 0.6626          |
-| 2.8858        | 3.0   | 42   | 0.7105          |
-| 2.7921        | 4.0   | 56   | 0.5784          |
-| 2.3648        | 5.0   | 70   | 0.5843          |
-| 2.0496        | 6.0   | 84   | 0.3544          |
-| 1.5883        | 7.0   | 98   | 0.3406          |
-| 0.829         | 8.0   | 112  | 0.3717          |
-| 0.5334        | 9.0   | 126  | 0.4049          |
-| 0.4915        | 10.0  | 140  | 0.4101          |
-| 0.57          | 11.0  | 154  | 0.3464          |
-| 0.5129        | 12.0  | 168  | 0.3892          |
-| 0.4334        | 13.0  | 182  | 0.2455          |
-| 0.5406        | 14.0  | 196  | 0.2647          |
-| 0.2201        | 15.0  | 210  | 0.2422          |
-| 0.0917        | 16.0  | 224  | 0.2218          |
-| 0.1753        | 17.0  | 238  | 0.2453          |
-| 0.0231        | 18.0  | 252  | 0.2061          |
-| 0.1107        | 19.0  | 266  | 0.2102          |
-| 0.1182        | 20.0  | 280  | 0.2496          |
-| 0.0829        | 21.0  | 294  | 0.2151          |
-| 0.0775        | 22.0  | 308  | 0.2490          |
-| 0.0067        | 23.0  | 322  | 0.2369          |
-| 0.0058        | 24.0  | 336  | 0.2334          |
-| 0.005         | 25.0  | 350  | 0.2288          |
-| 0.0042        | 26.0  | 364  | 0.2148          |
-| 0.0071        | 27.0  | 378  | 0.2582          |
-| 0.0043        | 28.0  | 392  | 0.2232          |
-| 0.0028        | 29.0  | 406  | 0.2170          |
-| 0.0034        | 30.0  | 420  | 0.2290          |
-| 0.0021        | 31.0  | 434  | 0.2299          |
-| 0.0028        | 32.0  | 448  | 0.2214          |
-| 0.0019        | 33.0  | 462  | 0.2312          |
-| 0.002         | 34.0  | 476  | 0.2263          |
-| 0.0024        | 35.0  | 490  | 0.2397          |
-| 0.0019        | 36.0  | 504  | 0.2272          |
-| 0.0087        | 37.0  | 518  | 0.2321          |
-| 0.0016        | 38.0  | 532  | 0.2425          |
-| 0.0016        | 39.0  | 546  | 0.2449          |
-| 0.0014        | 40.0  | 560  | 0.2208          |
-| 0.0013        | 41.0  | 574  | 0.2185          |
-| 0.0013        | 42.0  | 588  | 0.2246          |
-| 0.0016        | 43.0  | 602  | 0.2344          |
-| 0.0016        | 44.0  | 616  | 0.2411          |
-| 0.0015        | 45.0  | 630  | 0.2345          |
-| 0.0015        | 46.0  | 644  | 0.2317          |
-| 0.0014        | 47.0  | 658  | 0.2289          |
-| 0.0011        | 48.0  | 672  | 0.2288          |
-| 0.0012        | 49.0  | 686  | 0.2288          |
-| 0.0013        | 50.0  | 700  | 0.2284          |
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 1.9537        | 1.0   | 20415 | 0.2410          |
 
 
 ### Framework versions
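The updated hyperparameters (learning_rate 2e-05, lr_scheduler_type linear, one epoch of 20415 steps per the results table) imply a per-step learning rate that decays linearly from the base value down to zero by the final step. A minimal sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value):

```python
def linear_lr(step, base_lr=2e-05, total_steps=20415, warmup_steps=0):
    """Linear learning-rate schedule: ramp up over `warmup_steps`,
    then decay linearly to zero at `total_steps`.
    Zero warmup is an assumption; the model card does not specify it."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * remaining

# The schedule starts at the configured base rate and hits zero at the last step.
print(linear_lr(0))       # 2e-05
print(linear_lr(20415))   # 0.0
```

With num_epochs reduced from 50 to 1, the whole decay now happens inside that single epoch, which is why the run ends mid-descent at a validation loss of 0.2410 rather than converging over many epochs as in the removed table.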