JiahuanCao committed on
Commit
57e5006
1 Parent(s): 3c455ff

Create README_en.md

<div align="center">
<img src="./images/通古logo.png" width="400"/>
</div>

# TongGu LLM

## Introduction

TongGu is a classical Chinese LLM developed by the Deep Learning and Visual Computing Laboratory (SCUT-DLVCLab) at South China University of Technology, with strong capabilities for understanding and processing ancient texts. TongGu is trained with multi-stage instruction fine-tuning and introduces a Redundancy-Aware Tuning (RAT) method, which largely preserves the capabilities of the base model while improving performance on downstream tasks.

<div align="center">
<img src="./images/model_training.png">
</div>

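This page does not spell out how RAT decides which parameters to tune. The snippet below is only a minimal sketch of one plausible reading: score each transformer layer's redundancy by how little it changes its hidden states on a small probe set, then fine-tune only the most redundant layers while freezing everything else. The scoring criterion, the probe text, the number of tuned layers `k`, and the tune-vs-freeze direction are all illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative redundancy-aware layer-freezing sketch (assumptions, NOT the official RAT recipe).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "SCUT-DLVCLab/TongGu-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Hypothetical probe text used to measure how much each layer transforms its input.
probe_text = "大学之道,在明明德,在亲民,在止于至善。"
inputs = tokenizer(probe_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states  # (n_layers + 1) tensors

# Score decoder layer i by the similarity between its input and output hidden states:
# a layer that barely changes the representation is treated as "redundant".
scores = [
    F.cosine_similarity(hidden[i].float(), hidden[i + 1].float(), dim=-1).mean().item()
    for i in range(len(hidden) - 1)
]

k = 8  # hypothetical number of layers left trainable
redundant = set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])

# Freeze everything except the k most redundant decoder layers before fine-tuning.
for name, param in model.named_parameters():
    layer_id = next((int(tok) for tok in name.split(".") if tok.isdigit()), None)
    param.requires_grad = layer_id in redundant
```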

## Evaluation

TongGu outperforms existing models across a wide range of classical Chinese understanding and processing tasks. A comparison with its base model, Baichuan2-7B-Chat, demonstrates the effectiveness of TongGu's training process and methods. TongGu will continue to be updated and to benefit from ever more powerful base models.

<div align="center">
<img src="./images/evaluation_table.png">
</div>

<div align="center">
<img src="./images/evaluation_table2.png" width="600">
</div>

# Open-source List

## Model

[**TongGu-7B-Instruct**](https://huggingface.co/SCUT-DLVCLab/TongGu-7B-Instruct): A 7B classical Chinese language model based on Baichuan2-7B-Base. It has undergone unsupervised incremental pre-training on a 2.41B classical Chinese corpus and fine-tuning on 4 million classical Chinese dialogue samples, and it supports tasks such as ancient-text annotation, translation, and appreciation.

## Data

**ACCN-INS**: 4 million classical Chinese instruction samples, covering 24 tasks across three dimensions: ancient-text understanding, generation, and knowledge.

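The exact schema of ACCN-INS is not published on this page. Purely as a hypothetical illustration (the field names and sample below are invented, and the dialogue template is the one shown in the Inference section further down), a single instruction record might look like this:

```python
# Hypothetical ACCN-INS-style record; the real field names and content may differ.
record = {
    "task": "文白翻译",  # classical-to-vernacular translation, one of the 24 task types
    "instruction": "翻译成白话文:大学之道,在明明德,在亲民,在止于至善。",
    "output": "大学的宗旨,在于彰明光明的德行,在于亲近爱抚民众,在于达到善的最高境界。",
}

system_message = "你是通古,由华南理工大学DLVCLab训练而来的古文大模型。你具备丰富的古文知识,为用户提供有用、准确的回答。"

# Rendered with the same <用户>/<通古> dialogue template used in the Inference section below.
training_text = f"{system_message}\n<用户> {record['instruction']}\n<通古> {record['output']}"
print(training_text)
```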

The ACCN-INS dataset may only be used for non-commercial research purposes. Scholars or organizations who wish to use the ACCN-INS dataset should first fill in this [Application Form](https://github.com/SCUT-DLVCLab/TongGu-LLM/blob/main/application-form/Application-Form-for-Using-ACCN-INS.docx) and email it to us. When submitting the application form, please list or attach 1-2 of your publications from the past 6 years to show that you (or your team) work in research fields related to classical Chinese.
We will send you the download link and the decompression password once your application has been received and approved.
All users must follow all use conditions; otherwise, the authorization will be revoked.

# News

- 2024/9/21: The TongGu paper has been accepted to EMNLP 2024.
- 2024/9/26: The TongGu model and instruction data have been open-sourced.

# Examples

<details><summary><b>Punctuation (句读)</b></summary>

![image](./images/标点.png)

</details>

<details><summary><b>Idiom explanation (成语解释)</b></summary>

![image](./images/成语解释.png)

</details>

<details><summary><b>Classical-to-vernacular translation (文白翻译)</b></summary>

![image](./images/文白翻译.png)

</details>

<details><summary><b>Vernacular-to-classical translation (白文翻译)</b></summary>

![image](./images/白文翻译.png)

</details>

<details><summary><b>Poetry composition (诗词创作)</b></summary>

![image](./images/词创作.png)

</details>

# Inference

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "SCUT-DLVCLab/TongGu-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map='auto', torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# The prompt is the system message followed by <用户> (user) and <通古> (TongGu) turn tags.
system_message = "你是通古,由华南理工大学DLVCLab训练而来的古文大模型。你具备丰富的古文知识,为用户提供有用、准确的回答。"
user_query = "翻译成白话文:大学之道,在明明德,在亲民,在止于至善。"
prompt = f"{system_message}\n<用户> {user_query}\n<通古> "
inputs = tokenizer(prompt, return_tensors='pt')
generate_ids = model.generate(
    inputs.input_ids.cuda(),
    max_new_tokens=128
)
# Decode the full sequence and strip the prompt so only the model's reply remains.
generate_text = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0][len(prompt):]

print(generate_text)
```
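The model card does not document a multi-turn conversation format. Continuing from the variables defined in the snippet above, the following is a minimal sketch under the assumption that earlier turns are simply concatenated with the same `<用户>`/`<通古>` tags; this chaining scheme is an assumption, not an official API.

```python
# Sketch: multi-turn chat by chaining <用户>/<通古> turns (assumed format, not confirmed here).
def build_prompt(system_message, history, user_query):
    """history is a list of (user, assistant) pairs from earlier turns."""
    prompt = system_message
    for past_user, past_reply in history:
        prompt += f"\n<用户> {past_user}\n<通古> {past_reply}"
    prompt += f"\n<用户> {user_query}\n<通古> "
    return prompt

history = [(user_query, generate_text)]  # reuse the first exchange from above
prompt = build_prompt(system_message, history, "请为这句话拟一个标题。")
inputs = tokenizer(prompt, return_tensors='pt')
generate_ids = model.generate(inputs.input_ids.cuda(), max_new_tokens=128)
reply = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0][len(prompt):]
print(reply)
```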

# Citation

```
@article{cao2024tonggu,
  title={TongGu: Mastering Classical Chinese Understanding with Knowledge-Grounded Large Language Models},
  author={Cao, Jiahuan and Peng, Dezhi and Zhang, Peirong and Shi, Yongxin and Liu, Yang and Ding, Kai and Jin, Lianwen},
  journal={EMNLP 2024},
  year={2024}
}
```

# Statement

After extensive incremental pre-training and instruction fine-tuning, TongGu has strong capabilities for processing ancient texts, such as punctuation and translation. However, due to limitations in model size and the autoregressive generation paradigm, TongGu may still produce misleading replies that contain factual errors, or harmful content involving bias or discrimination. Please use it cautiously and assess its output critically. Do not spread harmful content generated by TongGu on the Internet; anyone who disseminates such content bears responsibility for any adverse consequences.