---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-668M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

Erlangshen-ZEN2-668M-Chinese is an open-source Chinese pre-trained model from the ZEN team, released as part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). Based on the [source code](https://github.com/sinovation/ZEN2) and [paper](https://arxiv.org/abs/2105.01279) of ZEN2.0, IDEA-CCNL provides results and code samples for ZEN2.0 on Chinese classification and extraction tasks. Going forward, we will work with the ZEN team on optimizing the pre-trained model and continue to improve its performance on classification and extraction tasks.
## Usage

ZEN2 is not yet supported in [Transformers](https://github.com/huggingface/transformers), so clone [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) to get the model implementation:

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
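
The snippets below import `fengshen` as a package. If you have not installed it, a minimal approach (assuming the `fengshen` package sits at the repository root, which is the current layout) is to put the clone on your import path:

```python
import sys

# Assumes Fengshenbang-LM was cloned into the current working directory.
# The repo root contains the `fengshen` package, so adding it to sys.path
# lets `import fengshen` resolve without installing anything.
sys.path.insert(0, "./Fengshenbang-LM")
```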

## Load model

```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification

pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'

tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model = ZenForSequenceClassification.from_pretrained(pretrain_path)
# for extraction tasks, use the token-classification head instead:
# model = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
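
ZEN2 consumes n-gram features alongside the usual BERT inputs, so a plain `model(input_ids)` call is not enough. The sketch below is illustrative rather than the official pipeline: the lexicon attributes (`ngram_to_id_dict`, `max_ngram_in_seq`) and the forward arguments (`input_ngram_ids`, `ngram_position_matrix`) are assumptions carried over from the original ZEN code, so verify them against `fengshen/models/zen2` before relying on it.

```python
import torch

# Illustrative sketch: run the loaded classifier on one sentence.
text = "今天天气真好"
tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Scan for n-grams (lengths 2..7, as in ZEN) found in the pretrained lexicon.
# The lookup key format (joined string vs. token tuple) may differ in your
# checkout; see ngram_utils.py.
matches = []
for n in range(2, 8):
    for start in range(len(tokens) - n + 1):
        ngram = ''.join(tokens[start:start + n])
        if ngram in ngram_dict.ngram_to_id_dict:
            matches.append((ngram_dict.ngram_to_id_dict[ngram], start, n))
matches = matches[:ngram_dict.max_ngram_in_seq]

# Pad the matched n-gram ids and mark, per n-gram, which tokens it spans.
input_ngram_ids = torch.zeros(1, ngram_dict.max_ngram_in_seq, dtype=torch.long)
ngram_position_matrix = torch.zeros(1, len(tokens), ngram_dict.max_ngram_in_seq)
for i, (ngram_id, start, n) in enumerate(matches):
    input_ngram_ids[0, i] = ngram_id
    ngram_position_matrix[0, start:start + n, i] = 1.0

model.eval()
with torch.no_grad():
    logits = model(input_ids=input_ids,
                   input_ngram_ids=input_ngram_ids,
                   ngram_position_matrix=ngram_position_matrix)
```

The position matrix is what ties each matched n-gram to the tokens it covers, which is the core of ZEN's n-gram-enhanced encoding.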

You can find classification and extraction examples below.

[classification example on fengshen]()

[extraction example on fengshen]()

## Evaluation

### Classification

| Model (Acc) | AFQMC | TNEWS | IFLYTEK | OCNLI | CMNLI |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |

### Extraction

| Model (F1) | WEIBO (test) | Resume (test) | MSRA (test) | OntoNotes 4.0 (test) | CMeEE (dev) | CLUENER (dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |

## Citation

If you find this resource useful, please cite the following paper:

```bibtex
@article{Sinovation2021ZEN2,
  title={ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders},
  author={Yan Song and Tong Zhang and Yonggang Wang and Kai-Fu Lee},
  journal={arXiv preprint arXiv:2105.01279},
  year={2021}
}
```