Akaash committed on
Commit c8be9cb
2 Parent(s): 5787486 fa29e74

Merge branch 'main' of https://huggingface.co/akaashp15/distilbert-base-uncased-finetuned-ner

Files changed (6):
  1. README.md +57 -5
  2. special_tokens_map.json +7 -0
  3. tf_model.h5 +3 -0
  4. tokenizer.json +0 -0
  5. tokenizer_config.json +13 -0
  6. vocab.txt +0 -0
README.md CHANGED
@@ -1,7 +1,59 @@
  ---
- pipeline_tag: token-classification
- datasets:
- - conll2003
- library_name: transformers
+ license: apache-2.0
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: akaashp15/distilbert-base-uncased-finetuned-ner
+   results: []
  ---
- Hello
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # akaashp15/distilbert-base-uncased-finetuned-ner
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 0.0344
+ - Validation Loss: 0.0597
+ - Train Precision: 0.9253
+ - Train Recall: 0.9356
+ - Train F1: 0.9304
+ - Train Accuracy: 0.9836
+ - Epoch: 2
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
+ |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
+ | 0.1990    | 0.0712          | 0.8974          | 0.9226       | 0.9098   | 0.9790         | 0     |
+ | 0.0544    | 0.0612          | 0.9148          | 0.9318       | 0.9232   | 0.9822         | 1     |
+ | 0.0344    | 0.0597          | 0.9253          | 0.9356       | 0.9304   | 0.9836         | 2     |
+
+
+ ### Framework versions
+
+ - Transformers 4.30.2
+ - TensorFlow 2.13.0-rc2
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
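The optimizer entry in the README above uses a Keras `PolynomialDecay` schedule with `power=1.0`, i.e. a linear ramp from 2e-05 down to 0.0 over 2631 steps. A minimal pure-Python sketch of that schedule (the function name is illustrative, not part of this repo):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=2631,
                     end_lr=0.0, power=1.0):
    """Sketch of Keras PolynomialDecay with cycle=False: the rate
    falls from initial_lr to end_lr over decay_steps, then holds at
    end_lr. With power=1.0 the decay is linear."""
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

# Step 0 gives the initial rate; step 2631 and beyond give 0.0.
print(polynomial_decay(0))     # 2e-05
print(polynomial_decay(2631))  # 0.0
```

With 3 epochs (epochs 0-2 in the results table), 2631 steps implies roughly 877 optimizer steps per epoch.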
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a08b21329becde3f2ec6e5d8cac973462c83d4ee6ca9decf5e920e07a005739
+ size 265606416
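What the commit stores for tf_model.h5 is not the weights themselves but a Git LFS pointer: a three-line text stub giving the spec version, the SHA-256 object id, and the byte size (about 266 MB here); `git lfs pull` in a clone replaces the stub with the real file. A small sketch of parsing such a pointer (the helper name is illustrative):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict.

    A pointer is space-separated key/value lines:
    'version <url>', 'oid sha256:<hex>', 'size <bytes>'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:9a08b21329becde3f2ec6e5d8cac973462c83d4ee6ca9decf5e920e07a005739
size 265606416
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 265606416
```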
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
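The tokenizer config added above is plain JSON, so its key settings (uncased DistilBERT tokenizer, 512-token limit) can be inspected without loading the tokenizer. A quick sketch, with the file contents inlined here for illustration (in a checkout you would read tokenizer_config.json from disk instead):

```python
import json

# Inlined copy of the committed tokenizer_config.json, for illustration.
config = json.loads("""{
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "DistilBertTokenizer",
  "unk_token": "[UNK]"
}""")

# DistilBERT-style uncased setup: 512-token limit, lowercasing enabled.
print(config["tokenizer_class"], config["model_max_length"])
```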
vocab.txt ADDED
The diff for this file is too large to render. See raw diff