rightyonghu committed on
Commit
2c14c99
1 Parent(s): 1a733bd
Files changed (4)
  1. README.md +44 -0
  2. config.json +20 -0
  3. pytorch_model.bin +3 -0
  4. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ language: zh
+ ---
+
+ # ERNIE-3.0-micro-zh
+
+ ## Introduction
+
+ ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
+ More details: https://arxiv.org/abs/2107.02137
+
+ ## Released Model Info
+
+ This released PyTorch model was converted from the officially released PaddlePaddle ERNIE model,
+ and a series of experiments were conducted to verify the accuracy of the conversion.
+
+ - Official PaddlePaddle ERNIE repo: https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html
+ - PyTorch conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
+
+ ## How to use
+ To use the ERNIE 3.0 series models, you need to add `task_type_id` to the BERT model, following this [PR](https://github.com/huggingface/transformers/pull/18686/files),
+ **or** you can reinstall transformers from my modified branch:
+ ```bash
+ pip uninstall transformers  # optional: remove any existing installation first
+ pip install git+https://github.com/nghuyong/transformers@add_task_type_id  # reinstall from the patched branch
+ ```
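+ As a quick sanity check (a minimal sketch; `AutoConfig` simply exposes the extra ERNIE fields stored in this repo's config.json), you can confirm the loaded config carries the task-type fields that the patch consumes:
+ ```python
+ from transformers import AutoConfig
+
+ # use_task_id and task_type_vocab_size come straight from config.json in this repo.
+ config = AutoConfig.from_pretrained("nghuyong/ernie-3.0-micro-zh")
+ print(config.use_task_id, config.task_type_vocab_size)  # expected: True 16
+ ```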
+ Then you can load the ERNIE-3.0 model as before:
+ ```python
+ from transformers import BertTokenizer, BertModel
+
+ tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-micro-zh")
+ model = BertModel.from_pretrained("nghuyong/ernie-3.0-micro-zh")
+ ```
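+ As a quick usage sketch (the sample sentence is arbitrary, not from the original README), you can run a forward pass and check the hidden size against config.json:
+ ```python
+ import torch
+
+ # Encode a short Chinese sentence and inspect the encoder output.
+ inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+ print(outputs.last_hidden_state.shape)  # (1, sequence_length, 384); hidden_size is 384
+ ```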
+
+ ## Citation
+
+ ```bibtex
+ @article{sun2021ernie,
+   title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation},
+   author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others},
+   journal={arXiv preprint arXiv:2107.02137},
+   year={2021}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "intermediate_size": 1536,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 2048,
+   "num_attention_heads": 12,
+   "num_hidden_layers": 4,
+   "task_type_vocab_size": 16,
+   "type_vocab_size": 4,
+   "use_task_id": true,
+   "vocab_size": 40000,
+   "layer_norm_eps": 1e-05,
+   "model_type": "bert",
+   "architectures": [
+     "BertModel"
+   ]
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36a4ae1eb13f0c51cf0d83c0972063dc6dbe7c1ab33c19a7bfa1ba4b540b9716
+ size 93032531
vocab.txt ADDED
The diff for this file is too large to render. See raw diff