---
language: 
  - zh
license: apache-2.0

tags:
- ZEN
- chinese

inference: false

---
# Erlangshen-ZEN2-668M-Chinese, a model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

Erlangshen-ZEN2-668M-Chinese is an open-source Chinese pre-trained model developed with the ZEN team as part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). IDEA-CCNL builds on the [source code of ZEN2.0](https://github.com/sinovation/ZEN2) and the [ZEN2.0 paper](https://arxiv.org/abs/2105.01279), and provides results and code samples for ZEN2.0 on Chinese classification and extraction tasks. Going forward, we will work with the ZEN team to explore optimization directions for the pre-trained model and continue to improve its performance on classification and extraction tasks.

## Usage
ZEN2 is not implemented in [Transformers](https://github.com/huggingface/transformers), so you need to fetch the ZEN2 model code from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
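
Cloning alone does not install `fengshen` as a package, so the imports in the next snippet need to resolve against the cloned repository. A minimal sketch of one way to do that, assuming the `fengshen` package sits at the repository root and you cloned into the current directory (adjust the path otherwise):

```python
# A minimal sketch: make the cloned repository importable.
# Assumes Fengshenbang-LM was cloned into the current directory and that
# the `fengshen` package sits at the repository root; adjust as needed.
import sys
sys.path.append('./Fengshenbang-LM')
```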

## Load the model
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification

pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'

tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model = ZenForSequenceClassification.from_pretrained(pretrain_path)
# model = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
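
As a quick smoke test that loading worked, you can run the tokenizer on a sample sentence. Building the n-gram tensors that ZEN2's forward pass expects from `ngram_dict` is beyond this sketch; the examples linked below cover the full pipeline.

```python
# Minimal smoke test for the tokenizer; constructing ZEN2's n-gram inputs
# from ngram_dict is covered by the fengshen examples linked below.
text = '今天天气真不错'  # "The weather is really nice today"
tokens = tokenizer.tokenize(text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)
print(input_ids)
```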

Full classification and extraction examples are linked below.

[classification example on fengshen]()

[extraction example on fengshen]()

## Evaluation

### Classification

| Model (Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------:    | :-----:  | :----:  | :-----:   | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741      |   0.584    | 0.599      |   0.788    | 0.80    |
| Erlangshen-ZEN2-668M-Chinese | 0.75      |   0.60    | 0.589      |   0.81    | 0.82    |

### Extraction

| Model (F1) | WEIBO (test) | Resume (test) | MSRA (test) | OntoNote4.0 (test) | CMeEE (dev) | CLUENER (dev) |
| :--------:    | :-----:  | :----:  | :-----:   | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |


## Citation
If you find this resource useful, please cite the following paper:
```
@article{Sinovation2021ZEN2,
  title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
  author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
  journal={arXiv preprint arXiv:2105.01279},
  year={2021},
}
```