vonewman committed on
Commit
df66d27
1 Parent(s): 4dcb128

Training completed!

Files changed (3):
  1. README.md +9 -10
  2. pytorch_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.8143486469477659
+      value: 0.7235926628716003
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the masakhaner2 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0663
-- F1: 0.8143
+- Loss: 0.0815
+- F1: 0.7236
 
 ## Model description
 
@@ -53,21 +53,20 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 48
-- eval_batch_size: 48
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 4
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| No log        | 1.0   | 96   | 0.1062          | 0.6522 |
-| 0.1995        | 2.0   | 192  | 0.0733          | 0.7662 |
-| 0.1995        | 3.0   | 288  | 0.0681          | 0.8065 |
-| 0.0549        | 4.0   | 384  | 0.0663          | 0.8143 |
+| No log        | 1.0   | 72   | 0.1272          | 0.5887 |
+| 0.2412        | 2.0   | 144  | 0.0916          | 0.6888 |
+| 0.2412        | 3.0   | 216  | 0.0815          | 0.7236 |
 
 
 ### Framework versions
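
The updated hyperparameters map one-to-one onto `transformers.TrainingArguments` (the exact values used are serialized in `training_args.bin`). A minimal sketch of the new configuration under that assumption; the `output_dir` name and the per-epoch evaluation strategy are assumptions, not taken from this commit:

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the updated README.
# output_dir and evaluation_strategy are assumptions; the authoritative
# values are serialized in training_args.bin.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-masakhaner2",  # hypothetical directory name
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumed: the results table reports one eval per epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the TrainingArguments defaults.
)
```

The step counts in the diff are consistent with this change: roughly 4,600 training examples yield 96 steps per epoch at batch size 48 and 72 steps per epoch at batch size 64, which is what the old and new results tables show.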
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:08d500734e860f3b420162d2697a218594134714b7998d404c654f560ec0d7ff
+oid sha256:3f119c9a86763fdebfad7fda7fc259a24affc268b63ceb0a70dad7f3eaf81442
 size 1109908201
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:134a2b8f42e1eb7845e2a7085b1c0306ab8cc8032ec7ff46db2fef45dc4ef6d1
+oid sha256:2318f8626d0254cf88abcbd7e0d589faf7b32ce13156a5059d7b2d3a94d277cc
 size 4091
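
Both `.bin` entries are Git LFS pointer files, so the diff only records a new `sha256` object id while the payload sizes stay the same. A minimal sketch for checking a downloaded weight file against the pointer's oid; the expected digest is taken from the pytorch_model.bin pointer above, and the local path is assumed:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its hex-encoded SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected oid for pytorch_model.bin after this commit (from the LFS pointer in the diff).
EXPECTED = "3f119c9a86763fdebfad7fda7fc259a24affc268b63ceb0a70dad7f3eaf81442"
assert sha256_of("pytorch_model.bin") == EXPECTED, "checksum mismatch: stale or corrupted download"
```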