Kriyans committed on
Commit
97ebcdb
1 Parent(s): e7500ad

End of training

Files changed (1)
  1. README.md +19 -19
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model: distilbert-base-cased
+base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
 datasets:
@@ -20,21 +20,21 @@ model-index:
 name: ner
 type: ner
 config: indian_names
-split: test
+split: train
 args: indian_names
 metrics:
 - name: Precision
 type: precision
-value: 0.9779481031086752
+value: 0.982625089167431
 - name: Recall
 type: recall
-value: 0.950199700449326
+value: 0.9665213251140179
 - name: F1
 type: f1
-value: 0.96387423507069
+value: 0.9745066828368578
 - name: Accuracy
 type: accuracy
-value: 0.977337411889879
+value: 0.9865003101950782
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -42,13 +42,13 @@ should probably proofread and complete it, then remove this comment. -->

 # Bert-NER

-This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the ner dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0518
-- Precision: 0.9779
-- Recall: 0.9502
-- F1: 0.9639
-- Accuracy: 0.9773
+- Loss: 0.0404
+- Precision: 0.9826
+- Recall: 0.9665
+- F1: 0.9745
+- Accuracy: 0.9865

 ## Model description

@@ -67,7 +67,7 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 5e-05
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
@@ -79,11 +79,11 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 438 | 0.0725 | 0.9691 | 0.9325 | 0.9505 | 0.9693 |
-| 0.0435 | 2.0 | 876 | 0.0635 | 0.9687 | 0.9392 | 0.9537 | 0.9711 |
-| 0.039 | 3.0 | 1314 | 0.0569 | 0.9790 | 0.9416 | 0.9599 | 0.9751 |
-| 0.0392 | 4.0 | 1752 | 0.0542 | 0.9744 | 0.9490 | 0.9615 | 0.9758 |
-| 0.0378 | 5.0 | 2190 | 0.0518 | 0.9779 | 0.9502 | 0.9639 | 0.9773 |
+| No log | 1.0 | 469 | 0.0541 | 0.9760 | 0.9580 | 0.9669 | 0.9826 |
+| 0.0887 | 2.0 | 938 | 0.0503 | 0.9767 | 0.9620 | 0.9693 | 0.9839 |
+| 0.0519 | 3.0 | 1407 | 0.0464 | 0.9799 | 0.9627 | 0.9712 | 0.9849 |
+| 0.0467 | 4.0 | 1876 | 0.0430 | 0.9806 | 0.9652 | 0.9728 | 0.9856 |
+| 0.0427 | 5.0 | 2345 | 0.0404 | 0.9826 | 0.9665 | 0.9745 | 0.9865 |


 ### Framework versions
@@ -91,4 +91,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
-- Tokenizers 0.14.0
+- Tokenizers 0.14.1
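For reference, the sketch below shows how a run with the hyperparameters reported in the updated card (learning rate 5e-05, train/eval batch size 32, seed 42, 5 epochs) would typically be wired up with the Hugging Face `Trainer`. This is a minimal illustration rather than the author's training script: the label set, dataset loading, and output directory are assumptions; only the hyperparameter values and library versions come from the card.

```python
# Minimal sketch, not the author's script: fine-tuning DistilBERT for token
# classification with the hyperparameters listed in the updated model card.
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

model_name = "distilbert-base-uncased"  # base model after this commit
tokenizer = AutoTokenizer.from_pretrained(model_name)

# num_labels depends on the indian_names tag set; 3 (O / B-PER / I-PER) is an assumption.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="bert-ner",            # placeholder output directory
    learning_rate=5e-5,               # values reported in the card
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",
)

# The train/eval datasets would come from the indian_names NER data; they are
# omitted here because the card does not describe how they are built.
trainer = Trainer(
    model=model,
    args=args,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
    # train_dataset=...,
    # eval_dataset=...,
)
```

Once trained and pushed to the Hub, the checkpoint could be queried through a token-classification pipeline; the repository id below is a placeholder, not the actual model id.

```python
from transformers import pipeline

# Placeholder repo id; substitute the real model repository.
ner = pipeline(
    "token-classification",
    model="your-username/Bert-NER",
    aggregation_strategy="simple",
)
print(ner("Rahul Sharma met Priya in Mumbai."))
```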