jlvila committed
Commit be246e3
1 Parent(s): e64e320

update model card README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -23,10 +23,10 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.5
+      value: 0.85
     - name: F1
       type: f1
-      value: 0.6666666666666666
+      value: 0.8524590163934426
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -36,9 +36,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6932
-- Accuracy: 0.5
-- F1: 0.6667
+- Loss: 0.3323
+- Accuracy: 0.85
+- F1: 0.8525
 
 ## Model description
 
@@ -57,13 +57,13 @@ More information needed
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.002
+- learning_rate: 2e-05
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 2
 
 ### Training results
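
For context, a minimal sketch of a fine-tuning run matching the updated hyperparameters, using the Transformers `Trainer`. This is illustrative only and not part of this commit; the `output_dir` name, the tokenization step, and `num_labels=2` are assumptions.

```python
# Hypothetical sketch of the fine-tuning setup implied by the updated card; not part of this commit.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # binary sentiment labels assumed for imdb
)

# Tokenize the imdb dataset (truncation only; Trainer pads batches dynamically via the tokenizer).
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

args = TrainingArguments(
    output_dir="distilbert-imdb",   # assumed output directory
    learning_rate=2e-5,             # updated from 0.002 in this commit
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,             # updated from 1 in this commit
    seed=42,
    lr_scheduler_type="linear",     # Adam defaults match the card: betas=(0.9, 0.999), eps=1e-8
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
```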