davidliu1110 committed on
Commit
694d848
1 Parent(s): 0512d0b

End of training

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: bert-base-chinese-david-ner
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-chinese-david-ner
+
+ This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2217
+ - Precision: 0.8020
+ - Recall: 0.8379
+ - F1: 0.8196
+ - Accuracy: 0.9471
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 0.2487        | 1.4   | 500  | 0.2446          | 0.8138    | 0.8138 | 0.8138 | 0.9417   |
+ | 0.0668        | 2.8   | 1000 | 0.2217          | 0.8020    | 0.8379 | 0.8196 | 0.9471   |
+
+
+ ### Framework versions
+
+ - Transformers 4.29.0.dev0
+ - Pytorch 1.10.1+cu113
+ - Datasets 2.11.0
+ - Tokenizers 0.13.3
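
For quick reference, a minimal sketch of how the fine-tuned model could be run through the Transformers token-classification pipeline. The repo id `davidliu1110/bert-base-chinese-david-ner` is an assumption based on the committer and model name, and the example sentence is purely illustrative.

```python
from transformers import pipeline

# Assumed repo id (committer username + model name); adjust if the model is hosted elsewhere.
MODEL_ID = "davidliu1110/bert-base-chinese-david-ner"

# Token-classification pipeline; aggregation_strategy="simple" merges sub-token
# predictions into whole entity spans with averaged scores.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

# Illustrative Chinese sentence: "Wang Xiaoming works in Taipei City."
print(ner("王小明在台北市工作。"))
```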
.ipynb_checkpoints/tokenizer_config-checkpoint.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b744bcbe3a646bf785ecfd9a36ab7c4bcb9efc11ec3a8fdc603e954705cab64d
+ oid sha256:66898f39dcdcd1c6f0f13de736fe2a1c655775827395fee70d1f1b56b07de824
  size 406794033
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
  {
    "version": "1.0",
-   "truncation": {
-     "direction": "Right",
-     "max_length": 512,
-     "strategy": "LongestFirst",
-     "stride": 0
-   },
+   "truncation": null,
    "padding": null,
    "added_tokens": [
      {
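
Because the serialized tokenizer no longer carries a fixed truncation setting, long inputs would need to be truncated at encode time. A minimal sketch, assuming the same repo id as above:

```python
from transformers import AutoTokenizer

# Assumed repo id; see the note above.
tokenizer = AutoTokenizer.from_pretrained("davidliu1110/bert-base-chinese-david-ner")

# With "truncation": null in tokenizer.json, long inputs are not cut by default,
# so request truncation per call (model_max_length is 512 per tokenizer_config.json).
enc = tokenizer("這是一段很長的文字。" * 200, truncation=True, max_length=512)
print(len(enc["input_ids"]))  # at most 512
```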
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:15f345c3f615ddcc71b7037eb8a4230825e214082dea733e556f625211b3136e
+ oid sha256:c8ef90f23e45ebc1c1c65869144db764835c72d78386f25b7bce2796d6d37c57
  size 3567