ctlin committed
Commit 296083b
1 Parent(s): 8b6ce97

upload model files

README.md CHANGED
@@ -1,3 +1,49 @@
  ---
+ language:
+ - zh
+ thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
+ tags:
+ - pytorch
+ - lm-head
+ - bert
+ - zh
  license: gpl-3.0
  ---
+
+ # CKIP Oldhan BERT Base Chinese
+
+ Pretrained model on Old Han (ancient) Chinese using a masked language modeling (MLM) objective.
+
+ ## Homepage
+ * [ckiplab/han-transformers](https://github.com/ckiplab/han-transformers)
+
+ ## Training Datasets
+ The copyright of the datasets belongs to the Institute of Linguistics, Academia Sinica.
+ * [中央研究院上古漢語標記語料庫 (Academia Sinica Tagged Corpus of Old Chinese)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/akiwi/kiwi.sh?ukey=-406192123&qtype=-1)
+ * [中央研究院中古漢語語料庫 (Academia Sinica Middle Chinese Corpus)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/dkiwi/kiwi.sh?ukey=852967425&qtype=-1)
+ * [中央研究院近代漢語語料庫 (Academia Sinica Early Modern Chinese Corpus)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/pkiwi/kiwi.sh?ukey=-299696128&qtype=-1)
+ * [中央研究院現代漢語語料庫 (Academia Sinica Modern Chinese Corpus)](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/mkiwi/kiwi.sh)
+
+ ## Contributors
+ * Chin-Tung Lin at [CKIP](https://ckip.iis.sinica.edu.tw/)
+
+ ## Usage
+
+ * Using our model in your script
+ ```python
+ from transformers import (
+     AutoTokenizer,
+     AutoModel,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained("ckiplab/oldhan-bert-base-chinese")
+ model = AutoModel.from_pretrained("ckiplab/oldhan-bert-base-chinese")
+ ```
+
+ * Using our model for inference
+ ```python
+ >>> from transformers import pipeline
+ >>> unmasker = pipeline('fill-mask', model='ckiplab/oldhan-bert-base-chinese')
+ >>> unmasker("黎民[MASK]變時雍")
+
+ ```
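
Note on the inference example above: the `fill-mask` pipeline returns the highest-scoring replacements for the `[MASK]` token. Below is a minimal sketch (not part of the original README) of the same lookup done directly with the model, assuming standard `transformers` and `torch` APIs; the top-5 cutoff is arbitrary.

```python
# Minimal sketch: reproduce the fill-mask lookup with the raw model and
# inspect the top predictions for the [MASK] position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ckiplab/oldhan-bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("ckiplab/oldhan-bert-base-chinese")

inputs = tokenizer("黎民[MASK]變時雍", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Position(s) of the [MASK] token in the input sequence.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
# Top 5 candidate tokens for the first [MASK].
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```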
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "ckiplab/bert-base-chinese",
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "directionality": "bidi",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizerFast",
+   "transformers_version": "4.7.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 26140
+ }
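
The configuration above describes a standard 12-layer, 768-hidden, 12-head BERT-base with an MLM head and a 26,140-entry vocabulary. A minimal sketch of loading and inspecting it, assuming the hub id from the README above and a standard `transformers` install:

```python
# Minimal sketch: load and inspect the configuration shown above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ckiplab/oldhan-bert-base-chinese")
print(config.model_type)           # bert
print(config.num_hidden_layers)    # 12
print(config.num_attention_heads)  # 12
print(config.hidden_size)          # 768
print(config.vocab_size)           # 26140
```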
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5c471a38e8956981b1c1c71478534e15e5f19e9660d02dfd2d5f57a7ba17fc4
+ size 424662955
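
The weights file is stored as a Git LFS pointer: the three lines record the LFS spec version, the SHA-256 digest (oid) of the actual weights, and their size in bytes (roughly 425 MB). A minimal sketch, assuming `pytorch_model.bin` has already been fetched locally (e.g. via `git lfs pull` or `huggingface_hub`), of verifying the download against the pointer:

```python
# Minimal sketch: check a locally downloaded pytorch_model.bin against the
# oid and size recorded in the LFS pointer above. The local path is assumed.
import hashlib
import os

path = "pytorch_model.bin"  # hypothetical local path
expected_oid = "c5c471a38e8956981b1c1c71478534e15e5f19e9660d02dfd2d5f57a7ba17fc4"
expected_size = 424662955

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha256.hexdigest() == expected_oid, "sha256 mismatch"
print("pytorch_model.bin matches the LFS pointer")
```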
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "ckiplab/bert-base-chinese", "special_tokens_map_file": "/home/cindy666/.cache/huggingface/transformers/d8a1a1b7a3de221ae53bf9d55154b9df9c4cda18409b393ee0fda4bce4ca7818.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "do_basic_tokenize": true, "never_split": null}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff