StatsGary committed on
Commit 1ed9509
1 Parent(s): 9804bce

update model card README.md

Files changed (1):
  1. README.md +7 -41
README.md CHANGED
@@ -4,36 +4,9 @@ tags:
 - generated_from_trainer
 datasets:
 - wnut_17
-metrics:
-- precision
-- recall
-- f1
-- accuracy
 model-index:
 - name: token_classification_wnut
-  results:
-  - task:
-      name: Token Classification
-      type: token-classification
-    dataset:
-      name: wnut_17
-      type: wnut_17
-      config: wnut_17
-      split: train
-      args: wnut_17
-    metrics:
-    - name: Precision
-      type: precision
-      value: 0.5846994535519126
-    - name: Recall
-      type: recall
-      value: 0.39666357738646896
-    - name: F1
-      type: f1
-      value: 0.47266703478741023
-    - name: Accuracy
-      type: accuracy
-      value: 0.947714933093925
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -41,13 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # token_classification_wnut
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.2932
-- Precision: 0.5847
-- Recall: 0.3967
-- F1: 0.4727
-- Accuracy: 0.9477
+This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the wnut_17 dataset.
 
 ## Model description
 
@@ -66,20 +33,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- learning_rate: 4e-05
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 1
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 107 | 0.2419 | 0.5242 | 0.4319 | 0.4736 | 0.9469 |
-| No log | 2.0 | 214 | 0.2932 | 0.5847 | 0.3967 | 0.4727 | 0.9477 |
+| No log | 1.0 | 213 | 0.3717 | 0.6279 | 0.3707 | 0.4662 | 0.9481 |
 
 
 ### Framework versions
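
For context, the updated hyperparameters in this diff map onto the Hugging Face `Trainer` API roughly as in the sketch below. This is not the author's training script (which is not part of this commit): the `output_dir`, the per-epoch evaluation strategy, the choice of the validation split, and the tokenization/label-alignment step are assumptions, while the learning rate, batch sizes, epoch count, seed, and linear scheduler come straight from the card. The card's optimizer line (Adam with betas=(0.9,0.999) and epsilon=1e-08) matches the `Trainer` defaults, so it is not set explicitly.

```python
# Minimal sketch of reproducing the updated hyperparameters with transformers.
# Dataset preprocessing and evaluation setup are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

dataset = load_dataset("wnut_17")
label_names = dataset["train"].features["ner_tags"].feature.names  # 13 WNUT-17 tags

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-large-uncased", num_labels=len(label_names)
)

def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's tag to its first sub-token;
    # remaining sub-tokens and special tokens get -100 so the loss ignores them.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        previous = None
        row = []
        for word_id in tokenized.word_ids(batch_index=i):
            row.append(-100 if word_id is None or word_id == previous else tags[word_id])
            previous = word_id
        labels.append(row)
    tokenized["labels"] = labels
    return tokenized

encoded = dataset.map(tokenize_and_align, batched=True)

# Mirrors the hyperparameters listed in the updated card; output_dir and
# evaluation_strategy are assumptions.
args = TrainingArguments(
    output_dir="token_classification_wnut",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```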
 
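
Once the checkpoint is on the Hub, it can be loaded for inference with the `pipeline` API. A minimal sketch, assuming the repository id is `StatsGary/token_classification_wnut` (inferred from the committer and model name, not confirmed by this commit; substitute the actual path if it differs):

```python
from transformers import pipeline

# Load the fine-tuned token-classification model; the repo id is an assumption.
ner = pipeline(
    "token-classification",
    model="StatsGary/token_classification_wnut",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)

print(ner("Lionel Messi flew to Miami to meet fans at Hard Rock Stadium"))
# Each prediction is a dict with entity_group, score, word, start, and end.
```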