Prasadrao committed
Commit 10547de
1 Parent(s): 5a0bb37

Update README.md

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -11,6 +11,8 @@ base_model: Prasadrao/xlm-roberta-large-go-emotions-v2
 model-index:
 - name: xlm-roberta-large-go-emotions-v3
   results: []
+datasets:
+- go_emotions
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xlm-roberta-large-go-emotions-v3
 
-This model is a fine-tuned version of [Prasadrao/xlm-roberta-large-go-emotions-v2](https://huggingface.co/Prasadrao/xlm-roberta-large-go-emotions-v2) on an unknown dataset.
+This model is a fine-tuned version of [Prasadrao/xlm-roberta-large-go-emotions-v2](https://huggingface.co/Prasadrao/xlm-roberta-large-go-emotions-v2) on the GoEmotions dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.0953
 - Accuracy: 0.4534
@@ -44,8 +46,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -64,4 +66,4 @@ The following hyperparameters were used during training:
 - Transformers 4.37.0
 - Pytorch 2.1.2
 - Datasets 2.15.0
-- Tokenizers 0.15.1
+- Tokenizers 0.15.1
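As a companion to the corrected hyperparameters, here is a minimal sketch of how they map onto `transformers.TrainingArguments` (Transformers 4.37.0). The output directory is a placeholder, and anything the card does not list, such as the epoch count, is left at its default:

```python
from transformers import TrainingArguments

# Sketch of the configuration reported in the card after this commit.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-go-emotions-v3",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,  # corrected from 32 in this commit
    per_device_eval_batch_size=16,   # corrected from 32 in this commit
    seed=42,
    adam_beta1=0.9,    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
)
```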
 
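Since the card now points at the multi-label GoEmotions dataset, a short usage sketch follows. The repo id is inferred from the model-index name above and is an assumption, not something this commit confirms:

```python
from transformers import pipeline

# Repo id inferred from the model-index name in this commit -- an assumption.
classifier = pipeline(
    "text-classification",
    model="Prasadrao/xlm-roberta-large-go-emotions-v3",
    top_k=None,  # return a score for every emotion label, not just the top one
)

print(classifier("I can't believe this worked, thank you so much!"))
```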