jfmatos-isq committed on
Commit 89a6db1
1 Parent(s): 6e02601

update model card README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -19,7 +19,7 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.0
+      value: 0.8597727272727272
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,8 +29,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
 It achieves the following results on the evaluation set:
-- Loss: nan
-- F1: 0.0
+- Loss: 0.1363
+- F1: 0.8598
 
 ## Model description
 
@@ -50,8 +50,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 64
-- eval_batch_size: 64
+- train_batch_size: 24
+- eval_batch_size: 24
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -59,11 +59,11 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1  |
-|:-------------:|:-----:|:----:|:---------------:|:---:|
-| 0.0           | 1.0   | 197  | nan             | 0.0 |
-| 0.0           | 2.0   | 394  | nan             | 0.0 |
-| 0.0           | 3.0   | 591  | nan             | 0.0 |
+| Training Loss | Epoch | Step | Validation Loss | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 0.2552        | 1.0   | 525  | 0.1783          | 0.8162 |
+| 0.1286        | 2.0   | 1050 | 0.1390          | 0.8473 |
+| 0.0821        | 3.0   | 1575 | 0.1363          | 0.8598 |
 
 
 ### Framework versions
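
For context: this commit replaces the earlier card entries (batch size 64, NaN loss, F1 0.0) with the results of a rerun at batch size 24. The hyperparameters listed above map onto a `transformers` `TrainingArguments` object roughly as sketched below. This is a minimal illustration under stated assumptions, not the author's actual training script: the token-classification head, the PAN-X label count, the `output_dir`, and the per-epoch `evaluation_strategy` are all assumptions, since the card only says that xlm-roberta-base was fine-tuned on the xtreme dataset.

```python
# Minimal sketch of the training configuration recorded in this commit,
# expressed as transformers TrainingArguments. The token-classification
# head and the 7-label tag set are assumptions; the card only states that
# xlm-roberta-base was fine-tuned on the xtreme dataset.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=7,  # assumption: PAN-X NER tags (O, B/I-PER, B/I-ORG, B/I-LOC)
)

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-xtreme",  # hypothetical output path
    learning_rate=5e-5,               # from the card
    per_device_train_batch_size=24,   # value introduced by this commit
    per_device_eval_batch_size=24,    # value introduced by this commit
    num_train_epochs=3,               # inferred from the 3-epoch results table
    seed=42,                          # from the card
    lr_scheduler_type="linear",       # from the card
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # and epsilon=1e-08
    evaluation_strategy="epoch",      # assumption: metrics are reported per epoch
)
```

The epoch count is taken from the results table (three evaluation rows, 525 steps per epoch); everything not listed in the card is left at the `Trainer` defaults.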