---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- llm
- code
---

# CrystalChat

We present CrystalChat, an instruction-following model finetuned from [LLM360/CrystalCoder](https://huggingface.co/LLM360/CrystalCoder). Following the release of [LLM360/AmberChat](https://huggingface.co/LLM360/AmberChat) and [LLM360/AmberSafe](https://huggingface.co/LLM360/AmberSafe) in December 2023, CrystalChat is the next and most performant chat model released under LLM360. CrystalChat is trained on a carefully selected mix of publicly available language and code datasets.

As always, the training data, training code, and metrics are publicly available.

## About LLM360
LLM360 is an initiative for comprehensive and fully open-sourced LLMs, 
where all training details, model checkpoints, intermediate results, and 
additional analyses are made available to the community. Our goal is to advance 
the field by inviting the community to deepen the understanding of LLMs 
together. As the first step of the LLM360 project, we release all intermediate 
model checkpoints, our fully prepared pre-training dataset, all source code and
configurations, and training details. We are
committed to continually pushing the boundaries of LLMs through this open-source 
effort.

Get access now at the [LLM360 site](https://www.llm360.ai/).

# Instruction Tuning Training

**CrystalChat** uses the last phase 2 **CrystalCoder** checkpoint ([CrystalCoder_phase2_checkpoint_214387](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase2_checkpoint_214387)) as its initialization checkpoint. We then finetune the model on the datasets described below.

We also performed the same finetuning on the last phase 3 **CrystalCoder** checkpoint ([CrystalCoder_phase3_checkpoint_027728](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase3_checkpoint_027728)). The phase 2 and phase 3 finetuning results are very similar, but the phase 2 finetune performs slightly better on the English language benchmarks, so we chose it as the final **CrystalChat** model.

# Instruction Tuning Data 

The instruction tuning data is a mix of publicly available language and code datasets, plus an originally created dataset called **WebAlpaca**, which we built specifically for this instruction tuning run. We will release the WebAlpaca dataset in a separate repository.

The summary of the instruction tuning data is as follows:

<center><img src="data_table.jpg" alt="Instruction Data"/></center>

# Instruction Format

We added new special tokens to the CrystalCoder tokenizer to support instruction tuning.

The special tokens used in instruction tuning are:

```
bos: <s> 
eos: </s>
system_start: <|sys_start|>
system_end: <|sys_end|>
user_start: <|im_start|>
user_end: <|im_end|>
```

The instruction format is as follows:

```
<s> <|sys_start|> system prompt <|sys_end|> <|im_start|> first user utterance <|im_end|> first model response <|im_start|> next user utterance <|im_end|> next model response </s>
```
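For reference, here is a minimal sketch of how a multi-turn conversation could be flattened into this format. The `build_prompt` helper and the `turns` structure are illustrative, not part of the released code:

```python
def build_prompt(system_prompt, turns, closed=False):
    """Flatten a system prompt and (user, assistant) turns into CrystalChat's format.

    `turns` is a list of (user_utterance, assistant_response) pairs; pass None as
    the last response when prompting the model for a new reply. Set closed=True
    to terminate a finished conversation with </s>.
    """
    prompt = f"<s> <|sys_start|> {system_prompt} <|sys_end|>"
    for user, assistant in turns:
        prompt += f" <|im_start|> {user} <|im_end|>"
        if assistant is not None:
            prompt += f" {assistant}"
    if closed:
        prompt += " </s>"
    return prompt

# Example: a single-turn prompt awaiting the model's reply
prompt = build_prompt(
    "You are a helpful assistant.",
    [("Write a haiku about compilers.", None)],
)
```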

# Reproducing the Results

We will release the training code and the training data soon. Our training code is based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), with modifications to support our training data format and Maximal Update Parametrization (μP).

# CrystalChat Performance

|           Model          | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. |  ARC  | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande (5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
|:------------------------:|:--------------:|:------------:|:-------------:|:-----------:|:-----:|:---------:|:-------------:|:-----:|:------------------:|:----------:|:------------------:|:-------------:|
| CrystalChat 7B           | 1.275T         | 44.96        | 53.29         | 36.62       | 51.71 | 76.12     | 53.22         | 28.05 | 70.64              | 47.29      | 34.12              | 39.11         |
| Mistral-7B-Instruct-v0.1 | -              | 44.34        | 54.86         | 30.62       | 58.05 | 75.71     | 55.56         | 32.00 | 74.27              | 55.90      | 29.27              | 31.96         |
| CodeLlama-7b-Instruct    | 2.5T           | 40.91        | 45.29         | 36.52       | 43.35 | 66.14     | 42.75         | 15.92 | 64.33              | 39.23      | 34.12              | 38.91         |
| Llama-2-7b-Chat          | 2T             | 34.11        | 52.86         | 15.35       | 53.07 | 78.39     | 48.42         | 18.88 | 73.09              | 45.30      | 13.26              | 17.43         |
| AmberChat 7B             | 1.25T          |     -        | 44.76         |     -       | 42.83 | 74.03     | 38.88         | 5.31  | 66.77              | 40.72      |     -              |       -       |



| Combined Language and Coding Ability           |
|------------------------------------------------|
<img src="CC-Compare.jpg" alt="arc" width="800"/>

| Performance on Standard Benchmarks             |
|------------------------------------------------|
<img src="cc-eval-std-benchmarks.png" alt="std-bench" width="800"/>

| Performance on Language Benchmarks                      |
|---------------------------------------------------------|
<img src="cc-eval-lang-compare.png" alt="arc" width="800"/>

## Model Description

- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Training Code](https://github.com/LLM360/crystalcoder-train)
  - [Data Preparation](https://github.com/LLM360/crystalcoder-data-prep)
  - [Metrics](https://github.com/LLM360/Analysis360)
  - [Fully processed CrystalCoder pretraining data](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets)

# Loading CrystalChat 

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; trust_remote_code is required for the custom CrystalChat architecture
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("LLM360/CrystalChat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LLM360/CrystalChat", trust_remote_code=True).to(device)

# Build a prompt using the special tokens described above
prompt = '<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed and long answer. <|sys_end|> <|im_start|> Write a python function that takes a list of integers and returns the squared sum of the list. <|im_end|>'

# Tokenize the prompt and sample a completion
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)

print("-" * 20 + "Output for model" + "-" * 20)
print(tokenizer.batch_decode(gen_tokens)[0])
```

Response:
````
Here's a Python function named `squared_sum_list` that takes a list of integers as input and returns the squared sum of the list:

```python
def squared_sum_list(lst):
    return sum([num ** 2 for num in lst])
```

The function `squared_sum_list` uses a list comprehension to iterate over each number in the input list `lst` and calculate its square. Then, it uses the `sum` function to accumulate all the squared numbers in a single value - the squared sum of the list.

For example:

```python
numbers = [1, 2, 3, 4, 5]
print(squared_sum_list(numbers))  # Outputs: 55
```

In the above code, the list `[1, 2, 3, 4, 5]` is passed as an argument to the `squared_sum_list` function. The function calculates the sum of the squares of the elements in the list, which is `1 + 4 + 9 + 16 + 25 = 55`. The function then returns this result, which is printed to the console.</s>
````
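Since `generate` returns the prompt tokens followed by the completion, a small post-processing step (a sketch, assuming the variables from the loading example above) can strip the echoed prompt and the special tokens to keep only the model's reply:

```python
# Decode only the newly generated tokens, skipping the echoed prompt and special tokens
reply_tokens = gen_tokens[0, input_ids.shape[-1]:]
reply = tokenizer.decode(reply_tokens, skip_special_tokens=True)
print(reply)
```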

<!-- ## CrystalChat DataMix
| Subset      | Tokens (Billion) |
| ----------- | ----------- |
| OASST1-guanaco      | 4.46       |
| SlimOrca   | 225.63        |
| ShareGPT   | 112.91        |
| Evol-ShareGPT   | 85.95        |
| ChatLogs   | 29.34        |
| CodeAlpaca   | 2.62        |
| Rosetta Code   | 7.99        |
| Evol-CodeAlpaca 1   | 73.80        |
| Evol-CodeAlpaca 2   | 34.91        |
| HTML Instruction   | 43.67        |
| General Textbooks   | 85.59        |
| Programming Books   | 395.63        |
| Total | 1102.52 | -->

# Evaluation

Coming Soon!

# Bias, Risks, and Limitations
CrystalChat has not been aligned to human preferences for safety through an RLHF phase, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The training data is known and made available [here](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets). It primarily consists of the SlimPajama, StarCoder, and WebCrawl datasets.

# Citation

**BibTeX:**

```bibtex
@misc{liu2023llm360,
      title={LLM360: Towards Fully Transparent Open-Source LLMs}, 
      author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
      year={2023},
      eprint={2312.06550},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```