---
license: apache-2.0
datasets:
- zd21/SciInstruct
language:
- en
---
# SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning

<p align="center">
📃 <a href="https://arxiv.org/abs/2401.07950" target="_blank">[SciGLM]</a> <a href="https://github.com/THUDM/SciGLM" target="_blank">[GitHub]</a> <br>
</p>

**SciGLM** is a suite of scientific language models capable of college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework that addresses the data scarcity challenge in the science domain. The framework leverages existing LLMs to generate step-by-step reasoning for unlabeled scientific questions, followed by a self-reflective critic-and-revise process. Applying this framework, we curated SciInstruct, a diverse and high-quality dataset spanning physics, chemistry, math, and formal proofs.
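
The critic-and-revise loop can be summarized roughly as below. This is an illustrative sketch of the idea described in the paper, not the authors' implementation; all interface and helper names (`AnnotatorLLM`, `generate_solution`, `critique`, `revise`) are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Critique:
    is_correct: bool
    feedback: str


class AnnotatorLLM(Protocol):
    """Interface an existing LLM must provide; names are illustrative."""
    def generate_solution(self, question: str) -> str: ...
    def critique(self, question: str, solution: str) -> Critique: ...
    def revise(self, question: str, solution: str, critique: Critique) -> str: ...


def annotate(question: str, llm: AnnotatorLLM, max_rounds: int = 3) -> Optional[str]:
    """Self-reflective critic-and-revise loop for one unlabeled question."""
    solution = llm.generate_solution(question)               # step-by-step draft
    for _ in range(max_rounds):
        critique = llm.critique(question, solution)          # LLM checks its own work
        if critique.is_correct:
            return solution                                  # keep as instruction data
        solution = llm.revise(question, solution, critique)  # revise and re-check
    return None  # drop questions that never pass the critic
```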

## **SciInstruct**

We constructed SciInstruct as follows:

| Subject   | Math   | Physics & Chemistry | Formal Proofs (Lean) | Total   |
| --------- | ------ | ------------------- | -------------------- | ------- |
| # Samples | 89,934 | 123,869             | 40,248               | 254,051 |

We release our data and model for public use. You can download SciInstruct and SciGLM from the links below.

Download data:
[[Google Drive](https://drive.google.com/file/d/1UlvMEau9659BoBxbMG6sk-oKaiIIO-hJ/view?usp=sharing)]
[[Tsinghua Cloud](https://cloud.tsinghua.edu.cn/d/da691b9466544d55be8e/)]

Download model:
[[Hugging Face](https://huggingface.co/zd21/SciGLM-6B)]
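
The dataset is also listed on the Hugging Face Hub (see the metadata header above). A minimal loading sketch, assuming the Hub copy `zd21/SciInstruct` mirrors the released files:

```python
from datasets import load_dataset

# Load SciInstruct from the Hugging Face Hub.
# Assumes the Hub dataset id `zd21/SciInstruct` (from the metadata above)
# mirrors the files released via Google Drive / Tsinghua Cloud.
ds = load_dataset("zd21/SciInstruct")
print(ds)  # inspect splits and fields before fine-tuning
```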

## **Training & Inference**

### **Fine-tuning**
To fine-tune SciGLM, first clone the repository and install its dependencies:

```bash
git clone https://github.com/THUDM/SciGLM.git
cd SciGLM
pip install -r requirements.txt
```

To train the 6B model, run:
```bash
bash /path/training/finetune.sh
```

### **Inference**
To launch the command-line demo:
```bash
cd /path/to/inference
python cli_demo.py
```
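
You can also load the released checkpoint directly through Hugging Face's Transformers library. A minimal sketch, assuming SciGLM-6B exposes the ChatGLM-style `chat` interface via its remote code (verify against the model card if the interface differs):

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code is required because ChatGLM-style models ship custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("zd21/SciGLM-6B", trust_remote_code=True)
model = AutoModel.from_pretrained("zd21/SciGLM-6B", trust_remote_code=True).half().cuda()
model = model.eval()

# ChatGLM-style single-turn chat; `history` carries multi-turn context.
response, history = model.chat(tokenizer, "What is the speed of light in vacuum?", history=[])
print(response)
```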

## **Citation**

If you find our work helpful, please cite our paper:

```bibtex
@misc{zhang2024sciglm,
      title={SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning}, 
      author={Dan Zhang and Ziniu Hu and Sining Zhoubian and Zhengxiao Du and Kaiyu Yang and Zihan Wang and Yisong Yue and Yuxiao Dong and Jie Tang},
      year={2024},
      eprint={2401.07950},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```