MiuN2k3 committed on
Commit
879947c
1 Parent(s): bd4cd86

End of training

Files changed (2)
  1. README.md +19 -19
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,21 +1,21 @@
 ---
 license: mit
-base_model: xlm-roberta-base
+base_model: xlm-roberta-large
 tags:
 - generated_from_trainer
 model-index:
-- name: mtl-xlmr-base-viwiki-v2
+- name: mtl-xlmr-large-viwiki-v2
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# mtl-xlmr-base-viwiki-v2
+# mtl-xlmr-large-viwiki-v2
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
+This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6919
+- Loss: 1.6167
 
 ## Model description
 
@@ -35,8 +35,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -45,18 +45,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 0.7756 | 1.0 | 960 | 0.7972 |
-| 0.6104 | 2.0 | 1920 | 0.6775 |
-| 0.5942 | 3.0 | 2880 | 0.6227 |
-| 0.6037 | 4.0 | 3840 | 0.6349 |
-| 0.5208 | 5.0 | 4800 | 0.5975 |
-| 0.347 | 6.0 | 5760 | 0.6008 |
-| 0.415 | 7.0 | 6720 | 0.6142 |
-| 0.3473 | 8.0 | 7680 | 0.6252 |
-| 0.3312 | 9.0 | 8640 | 0.6748 |
-| 0.2134 | 10.0 | 9600 | 0.6919 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 0.5611 | 1.0 | 1920 | 0.5882 |
+| 0.3039 | 2.0 | 3840 | 0.5782 |
+| 0.2045 | 3.0 | 5760 | 0.5083 |
+| 0.2969 | 4.0 | 7680 | 0.7146 |
+| 0.0895 | 5.0 | 9600 | 0.8017 |
+| 0.0781 | 6.0 | 11520 | 1.0214 |
+| 0.0002 | 7.0 | 13440 | 1.1289 |
+| 0.0029 | 8.0 | 15360 | 1.4217 |
+| 0.041 | 9.0 | 17280 | 1.5223 |
+| 0.0 | 10.0 | 19200 | 1.6167 |
 
 
 ### Framework versions
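For orientation only, a minimal sketch of how the hyperparameters listed in the updated card would map onto a Hugging Face `TrainingArguments` object. This is not the author's training script: the numeric values are copied from the card (the 10 epochs come from the training-results table), while `output_dir` and everything about the dataset, tokenizer, and task head are assumptions.

```python
# Sketch only: reconstructs the TrainingArguments implied by the card's
# "Training hyperparameters" section; dataset and model head are not shown
# in this commit, so the surrounding Trainer setup is left out.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mtl-xlmr-large-viwiki-v2",  # assumed output path
    learning_rate=1e-5,                     # learning_rate: 1e-05
    per_device_train_batch_size=8,          # train_batch_size: 8
    per_device_eval_batch_size=4,           # eval_batch_size: 4
    seed=42,                                # seed: 42
    num_train_epochs=10,                    # training table runs to epoch 10.0
    lr_scheduler_type="linear",             # lr_scheduler_type: linear
    adam_beta1=0.9,                         # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                      # and epsilon=1e-08
)
```

Note the change in effective setup between the two revisions: the large checkpoint is trained with half the batch size (8 vs. 16), which is why each epoch now spans 1920 steps instead of 960.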
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f5b487b16ec0678ba65e3b4143299fd341a7b4804bcf1b022cb4ff0a72d568b
+oid sha256:a78132a1676af0bcb34260650a05b9271ff9518387f30911cfa32a422de86fcb
 size 2235424492
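The change above swaps the git-LFS pointer for model.safetensors to a new object of the same 2,235,424,492-byte size, so users pick up the retrained weights on their next download. A minimal loading sketch follows; the repository id `MiuN2k3/mtl-xlmr-large-viwiki-v2` is inferred from the committer and model name, and because the card does not state which task head the multi-task model carries, the bare encoder is loaded with `AutoModel`.

```python
# Sketch only: repo id is an assumption based on the commit metadata above;
# the card does not specify a task head, so we load the plain XLM-R encoder
# rather than a task-specific AutoModelFor* class.
from transformers import AutoModel, AutoTokenizer

repo_id = "MiuN2k3/mtl-xlmr-large-viwiki-v2"  # assumed from committer + model name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Ví dụ một câu tiếng Việt.", return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # (1, seq_len, 1024) for xlm-roberta-large
```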