---
language: 
  - zh
license: apache-2.0

tags:
- ZEN
- chinese

inference: false

---
# Erlangshen-ZEN1-224M-Chinese

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

## 简介 Brief Introduction

善于处理NLU任务,使用了N-gram编码增强文本语义,2.24亿参数量的ZEN1

The ZEN1 model, which uses N-gram representations to enhance text semantics and has 224M parameters, is adept at NLU tasks.

## 模型分类 Model Taxonomy

|  需求 Demand  | 任务 Task       | 系列 Series      | 模型 Model    | 参数 Parameter | 额外 Extra |
|  :----:  | :----:  | :----:  | :----:  | :----:  | :----:  |
| 通用 General  | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN1 |      224M      |     中文-Chinese     |

## 模型信息 Model Information

我们与[ZEN团队](https://github.com/sinovation/ZEN)合作,使用我们的封神框架,开源发布了ZEN1模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN1可以通过仅在单个小语料库(低资源场景)上进行训练来获得良好的性能增益。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。

We open-source and release ZEN1 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN). Specifically, by incorporating knowledge extracted through unsupervised learning, ZEN learns textual information at different granularities via N-gram methods. ZEN1 can obtain good performance gains when trained on only a single small corpus (low-resource scenarios). As a next step, we will continue to work with the ZEN team to explore optimizations of the PLM and to improve performance on downstream tasks.
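
To make the N-gram idea concrete, here is a minimal, self-contained sketch (not ZEN1's actual preprocessing code; the lexicon entries and sentence are made up for illustration) of how character n-grams in a Chinese sentence can be matched against an n-gram lexicon, which is the kind of extra signal ZEN feeds into the encoder alongside individual tokens:

```python
# Illustrative sketch only: a toy n-gram lexicon and a toy sentence.
# The real ZEN1 pipeline uses ZenNgramDict and builds an n-gram position
# matrix; this just shows what "matching n-grams in the text" means.
ngram_lexicon = {"天气": 0, "真不错": 1, "天气预报": 2}  # hypothetical entries

def match_ngrams(text, lexicon, max_len=4):
    """Return (start, end, ngram) spans of lexicon n-grams found in text."""
    spans = []
    for start in range(len(text)):
        for end in range(start + 2, min(start + max_len, len(text)) + 1):
            ngram = text[start:end]
            if ngram in lexicon:
                spans.append((start, end, ngram))
    return spans

print(match_ngrams("今天天气真不错", ngram_lexicon))
# [(2, 4, '天气'), (4, 7, '真不错')]
```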

### 下游效果 Performance

**分类任务 Classification**

|  model   | dataset  | Acc |
|  ----  | ----  | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | Tnews | 56.82% |

**抽取任务 Extraction**

|  model   | dataset  | F1 |
|  ----  | ----  | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | OntoNote4.0 | 80.8% | 


## 使用 Usage

因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN1相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。

Since the ZEN1 model architecture is not included in the [transformers](https://github.com/huggingface/transformers) library, you can find the model code and run it in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
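
If you do not install the cloned repository into your Python environment, one simple option (an assumption about your local layout, not an official install step) is to put the clone on `sys.path` so that the `fengshen` imports below resolve:

```python
import sys

# Assumes the repository was cloned into ./Fengshenbang-LM (adjust the path otherwise);
# this makes the top-level `fengshen` package importable without installing it.
sys.path.insert(0, "./Fengshenbang-LM")
```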

```python
from fengshen.models.zen1.ngram_utils import ZenNgramDict
from fengshen.models.zen1.tokenization import BertTokenizer
from fengshen.models.zen1.modeling import ZenForSequenceClassification, ZenForTokenClassification

pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese'

# BERT-style tokenizer plus ZEN1 heads for sequence classification and token-level extraction
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
# N-gram dictionary used to build the n-gram inputs that ZEN1 expects alongside token ids
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
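
As a quick sanity check, the snippet below tokenizes a sentence with the BERT-style tokenizer loaded above (a sketch assuming the usual `tokenize`/`convert_tokens_to_ids` interface); building the full n-gram ids and n-gram position matrix that ZEN1 consumes is handled by the fine-tuning scripts linked below:

```python
text = "今天天气真不错"  # example sentence, chosen arbitrarily

# BERT-style tokenization (Chinese text is mostly split into single characters)
tokens = tokenizer.tokenize(text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)
print(input_ids)

# The example scripts below show how n-gram ids and the n-gram position matrix
# are built from ngram_dict and passed to the model together with input_ids.
```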

你可以从下方的链接获得我们做分类和抽取的详细示例。

You can find detailed classification and extraction examples at the links below.

[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen1_finetune/fs_zen1_tnews.sh)

[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen1_finetune/ner_zen1_ontonotes4.sh)

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们关于该模型的论文:

If you are using this resource for your work, please cite our paper for this model:

```text
@inproceedings{diao-etal-2020-zen,
    title = "ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations",
    author = "Diao, Shizhe and Bai, Jiaxin and Song, Yan and Zhang, Tong and Wang, Yonggang",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    pages = "4729--4740",
}
```

如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):

If you are using this resource for your work, please also cite our [overview paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```