Joelzhang committed · Commit 793acbc · 1 parent: 6da09aa

Update README.md

Files changed (1):
  1. README.md +69 -30
README.md CHANGED
@@ -10,65 +10,104 @@ tags:
 inference: false
 
 ---
- # Erlangshen-ZEN2-668M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
 
- Erlangshen-ZEN2-668M-Chinese is an open-source Chinese pre-trained model from the ZEN team, released on [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). IDEA-CCNL builds on the [ZEN2.0 source code](https://github.com/sinovation/ZEN2) and the [ZEN2.0 paper](https://arxiv.org/abs/2105.01279), and provides results and code samples for ZEN2.0 on Chinese classification and extraction tasks. In the future, we will work with the ZEN team to explore optimizations of the pre-trained model and continue to improve its performance on classification and extraction tasks.
 
- ## Usage
- The ZEN2 architecture is not available in [Transformers](https://github.com/huggingface/transformers); run the following code to get it from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
 
 ```shell
 git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
 ```
 
- ## Load model
 ```python
-
 from fengshen.models.zen2.ngram_utils import ZenNgramDict
 from fengshen.models.zen2.tokenization import BertTokenizer
- from fengshen.models.zen2.modeling import ZenForSequenceClassification
 
 pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'
 
 tokenizer = BertTokenizer.from_pretrained(pretrain_path)
- model = ZenForSequenceClassification.from_pretrained(pretrain_path)
- # model = ZenForTokenClassification.from_pretrained(pretrain_path)
 ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
 
 ```
 
 You can get classification and extraction examples below.
 
- [classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_large_tnews.sh)
 
- [extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_large_ontonotes4.sh)
 
- ## Evaluation
 
- ### Classification
 
- | Model (Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
- | :--------: | :-----: | :----: | :-----: | :----: | :----: |
- | Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
- | Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
 
- ### Extraction
 
- | Model (F1) | WEIBO (test) | Resume (test) | MSRA (test) | OntoNote4.0 (test) | CMeEE (dev) | CLUENER (dev) |
- | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
- | Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
- | Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
 
-
- ## Citation
- If you find this resource useful, please cite the following paper:
- ```
- @article{Sinovation2021ZEN2,
-   title={ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders},
-   author={Yan Song and Tong Zhang and Yonggang Wang and Kai-Fu Lee},
-   journal={arXiv preprint arXiv:2105.01279},
    year={2021},
  }
  ```
 inference: false
 
 ---
+ # Erlangshen-ZEN2-668M-Chinese
 
+ - GitHub: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+ - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
 
+ ## 简介 Brief Introduction
+
+ 善于处理NLU任务,使用了N-gram编码增强文本语义,6.68亿参数量的ZEN2
+
+ The ZEN2 model, which uses N-gram encodings to enhance text semantics and has 668M parameters, is adept at NLU tasks.
+
+ ## 模型分类 Model Taxonomy
+
+ | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+ | :----: | :----: | :----: | :----: | :----: | :----: |
+ | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN2 | 668M | Chinese |
+
+ ## 模型信息 Model Information
+
+ 我们与[ZEN团队](https://github.com/sinovation/ZEN)合作,使用我们的封神框架,开源发布了ZEN2模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN2使用大规模数据集和特殊的预训练策略对N-gram增强编码器进行预训练。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。
+
+ We open-source and publicly release ZEN2 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN). Specifically, by incorporating knowledge extracted through unsupervised learning, ZEN learns information at different textual granularities via N-gram methods. ZEN2 pre-trains the N-gram-enhanced encoder with large-scale datasets and special pre-training strategies. As a next step, we will continue working with the ZEN team to explore optimizations of the pre-trained model and improve performance on downstream tasks.
+
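As a concrete illustration of the N-gram idea (a toy sketch, unrelated to the model's released lexicon): a character-level Chinese encoder sees single characters, while a ZEN-style encoder additionally represents multi-character n-grams such as words.

```python
# Toy illustration: character tokens vs. the 2-grams a ZEN-style encoder can
# additionally attend over. Not tied to the model's actual n-gram lexicon.
sentence = "今天天气真好"  # "The weather is really nice today"
chars = list(sentence)                                 # ['今', '天', '天', '气', '真', '好']
bigrams = [sentence[i:i + 2] for i in range(len(sentence) - 1)]
print(chars)
print(bigrams)                                         # ['今天', '天天', '天气', '气真', '真好']
```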
+ ### 下游效果 Performance
+
+ **分类任务 Classification**
+
+ | Model (Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
+ | :--------: | :-----: | :----: | :-----: | :----: | :----: |
+ | Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
+ | Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
+
+ **抽取任务 Extraction**
+
+ | Model (F1) | WEIBO (test) | Resume (test) | MSRA (test) | OntoNote4.0 (test) | CMeEE (dev) | CLUENER (dev) |
+ | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
+ | Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
+ | Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
+
+ ## 使用 Usage
+
+ 因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN2相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
+
+ Since the ZEN2 architecture is not available in the [transformers library](https://github.com/huggingface/transformers), you can find the ZEN2 code and run it in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
 
 ```shell
 git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
 ```
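The cloned repository is a plain source tree rather than an installed package, so the `fengshen` imports below resolve only if the repository root is on the Python path. A minimal sketch, assuming the default clone directory name:

```python
import sys

# Make the cloned Fengshenbang-LM checkout importable; adjust the path if
# the repository was cloned somewhere else.
sys.path.insert(0, "./Fengshenbang-LM")

import fengshen  # should now resolve to the cloned source tree
```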
 
 ```python
 from fengshen.models.zen2.ngram_utils import ZenNgramDict
 from fengshen.models.zen2.tokenization import BertTokenizer
+ from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
 
 pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'
 
 tokenizer = BertTokenizer.from_pretrained(pretrain_path)
+ model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
+ model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
 ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
 
 ```
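With the tokenizer, models, and n-gram dictionary loaded, the snippet below sketches how an input sentence can be inspected. It is a hedged illustration, not the official API: the `ngram_to_id_dict` attribute mirrors the original ZEN implementation and is an assumption here, and the full input construction (n-gram position matrix and attention masks) is defined in `fengshen.models.zen2` and demonstrated in the fine-tuning scripts linked below.

```python
import torch

text = "今天天气真好"  # "The weather is really nice today"
tokens = ["[CLS]"] + tokenizer.tokenize(text) + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Collect the character spans that occur in the pretrained n-gram lexicon.
# `ngram_to_id_dict` follows the original ZEN code and may differ in ZEN2;
# check fengshen.models.zen2.ngram_utils for the authoritative interface.
matches = []
for i in range(len(tokens)):
    for j in range(i + 2, min(i + 8, len(tokens)) + 1):
        span = "".join(tokens[i:j])
        if span in ngram_dict.ngram_to_id_dict:
            matches.append((span, i, j))
print(matches)  # matched n-grams with their token offsets
```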
 
+ 你可以从下方的链接获得我们做分类和抽取的详细示例。
+
 You can get classification and extraction examples below.
 
+ [分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_base_tnews.sh)
 
+ [抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh)
 
+ ## 引用 Citation
 
+ 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
 
+ If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
 
+ ```text
+ @article{fengshenbang,
+   author  = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+   title   = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+   journal = {CoRR},
+   volume  = {abs/2209.02970},
+   year    = {2022}
+ }
+ ```
 
+ 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
 
+ You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
 
+ ```text
+ @misc{Fengshenbang-LM,
+   title={Fengshenbang-LM},
+   author={IDEA-CCNL},
    year={2021},
+   howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
  }
  ```