01-ai-bot committed
Commit 81dc865
1 Parent(s): 3959ccd

Sync README

Files changed (1):
  1. README.md +39 -208
README.md CHANGED
@@ -1,64 +1,40 @@
 <div align="center">
- <p align="center">
- <img src="https://github.com/01-ai/Yi/raw/main/assets/img/Yi.svg?sanitize=true" width="200px">
- </p>
- <a href="https://github.com/01-ai/Yi/actions/workflows/ci.yml">
- <img src="https://github.com/01-ai/Yi/actions/workflows/ci.yml/badge.svg">
- </a>
- <a href="https://huggingface.co/01-ai">
- <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-01--ai-blue">
- </a>
- <a href="https://www.modelscope.cn/organization/01ai/">
- <img src="https://img.shields.io/badge/ModelScope-01--ai-blue">
- </a>
- <a href="https://github.com/01-ai/Yi/blob/main/LICENSE">
- <img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue">
- </a>
- <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
- <img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue">
- </a>
- <a href="mailto:oss@01.ai">
- <img src="https://img.shields.io/badge/✉️-yi@01.ai-FFE01B">
- </a>
 </div>

 ## Introduction

 The **Yi** series models are large language models trained from scratch by
- developers at [01.AI](https://01.ai/).

 ## News

- <details open>
- <summary>🔥 <b>2023/11/08</b>: Invited test of the Yi-34B chat model.</summary>
-
- Application form:
-
- - [English](https://cn.mikecrm.com/l91ODJf)
- - [Chinese](https://cn.mikecrm.com/gnEZjiQ)
-
- </details>
-
- <details>
- <summary>🎯 <b>2023/11/05</b>: The base models of <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>.</summary>
-
- This release contains two base models with the same parameter sizes as the
- previous release, except that the context window is extended to 200K.
-
- </details>
-
- <details>
- <summary>🎯 <b>2023/11/02</b>: The base models of <code>Yi-6B</code> and <code>Yi-34B</code>.</summary>
-
- The first public release contains two bilingual (English/Chinese) base models
- with parameter sizes of 6B and 34B. Both are trained with a 4K sequence
- length, which can be extended to 32K at inference time.
-
- </details>

 ## Model Performance

-
 | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
 | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
 | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
@@ -100,169 +76,24 @@ Falcon-180B's performance was not underestimated.

 ## Usage

- Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you
- encounter any problems when using the **Yi** series models.
-
- ### 1. Prepare the development environment
-
- The easiest way to try the **Yi** series models is through Docker with GPU
- support. We provide the following Docker image to help you get started:
-
- - `registry.lingyiwanwu.com/ci/01-ai/yi:latest`
-
- Note that the `latest` tag always points to the latest code in the `main`
- branch. To test a stable version, please replace it with a specific
- [tag](https://github.com/01-ai/Yi/tags).
-
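- As a concrete starting point, a run might look like the following minimal
- sketch; the `/models` mount point and the demo invocation are illustrative
- assumptions, not documented parts of the image:
-
- ```bash
- # Minimal sketch: mount a local model directory and run the bundled demo.
- # The mount path and model directory name are placeholders.
- docker run -it --gpus all \
-   -v /path/to/models:/models \
-   registry.lingyiwanwu.com/ci/01-ai/yi:latest \
-   python demo/text_generation.py --model /models/Yi-34B
- ```
-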
- If you prefer to use your local development environment instead, first create
- a virtual environment and clone this repo. Then install the dependencies with
- `pip install -r requirements.txt`. For the best performance, we recommend that
- you also install the latest version (`>=2.3.3`) of
- [flash-attention](https://github.com/Dao-AILab/flash-attention#installation-and-features).
-
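- A local setup sketch, assuming standard Python tooling (the virtual-environment
- name and the `flash-attn` package name are conventional choices, not mandated
- by this repo; flash-attention may need extra build prerequisites):
-
- ```bash
- # Clone the repo and install dependencies inside a fresh virtual environment.
- git clone https://github.com/01-ai/Yi.git && cd Yi
- python -m venv .venv && source .venv/bin/activate
- pip install -r requirements.txt
- pip install "flash-attn>=2.3.3"  # optional, for best performance
- ```
-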
- ### 2. Download the model (optional)
-
- By default, the model weights and tokenizer will be downloaded from
- [HuggingFace](https://huggingface.co/01-ai) automatically in the next step. You
- can also download them manually from the following places:
-
- - [ModelScope](https://www.modelscope.cn/organization/01ai/)
- - [WiseModel](https://wisemodel.cn/models) (search for `Yi`)
- - Mirror site (remember to extract the content with `tar`; see the sketch after this list)
-   - [Yi-6B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-6B.tar)
-   - [Yi-6B-200K.tar](https://storage.lingyiwanwu.com/yi/models/Yi-6B-200K.tar)
-   - [Yi-34B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-34B.tar)
-   - [Yi-34B-200K.tar](https://storage.lingyiwanwu.com/yi/models/Yi-34B-200K.tar)
-
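- For the mirror route, the download and extraction might look like this sketch
- (the target directory is an arbitrary choice):
-
- ```bash
- # Fetch one archive from the mirror and unpack it into a local directory.
- mkdir -p /path/to/models
- wget https://storage.lingyiwanwu.com/yi/models/Yi-6B.tar
- tar -xf Yi-6B.tar -C /path/to/models
- ```
-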
- ### 3. Examples
-
- #### 3.1 Use the base model
-
- ```bash
- python demo/text_generation.py
- ```
-
- To reuse the models downloaded in the previous step, provide the extra
- `--model` argument:
-
- ```bash
- python demo/text_generation.py --model /path/to/model
- ```
-
- Or, if you'd like to get your hands dirty:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load the model across the available GPUs in its native dtype.
- model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto", torch_dtype="auto", trust_remote_code=True)
- tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)
- inputs = tokenizer("There's a place where time stands still. A place of breath taking wonder, but also", return_tensors="pt")
- max_length = 256
-
- # Sample a continuation, with mild penalties to discourage repetition.
- outputs = model.generate(
-     inputs.input_ids.cuda(),
-     max_length=max_length,
-     eos_token_id=tokenizer.eos_token_id,
-     do_sample=True,
-     repetition_penalty=1.3,
-     no_repeat_ngram_size=5,
-     temperature=0.7,
-     top_k=40,
-     top_p=0.8,
- )
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
-
- <details>
-
- <summary>Output</summary>
-
- **Prompt**: There's a place where time stands still. A place of breath taking wonder, but also
-
- **Generation**: There's a place where time stands still. A place of breath taking wonder, but also of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared.
- The place is called the Arctic.
- The Arctic is a vast, frozen wilderness. It is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
- The Arctic is also a place of great beauty. The ice and snow are a pristine white. The sky is a deep blue. The sunsets are spectacular.
- But the Arctic is also a place of great danger. The ice can be treacherous. The winds can be deadly. The sun can be blinding.
- The Arctic is a place where the only way to survive is to be prepared.
- The Arctic is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
- The Arctic is a place of great beauty. The ice and snow are a
-
- </details>
-
- For more advanced usage, please refer to the
- [doc](https://github.com/01-ai/Yi/tree/main/demo).
-
- #### 3.2 Finetune from the base model
-
- ```bash
- bash finetune/scripts/run_sft_Yi_6b.sh
- ```
-
- Once finished, you can compare the finetuned model and the base model with the following command:
-
- ```bash
- bash finetune/scripts/run_eval.sh
- ```
-
- For more advanced usage, such as fine-tuning on your custom data, please refer
- to the [doc](https://github.com/01-ai/Yi/tree/main/finetune).
-
- #### 3.3 Quantization
-
- ##### GPT-Q
-
- ```bash
- python quantization/gptq/quant_autogptq.py \
-   --model /base_model \
-   --output_dir /quantized_model \
-   --trust_remote_code
- ```
-
- Once finished, you can then evaluate the resulting model as follows:
-
- ```bash
- python quantization/gptq/eval_quantized_model.py \
-   --model /quantized_model \
-   --trust_remote_code
- ```
-
- For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq).
-
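- As a rough sketch of consuming the quantized output with the `auto-gptq`
- library (the loading API here is an assumption about that library, not part
- of this repo):
-
- ```python
- from auto_gptq import AutoGPTQForCausalLM
- from transformers import AutoTokenizer
-
- # Load the GPTQ-quantized weights produced by the step above.
- model = AutoGPTQForCausalLM.from_quantized("/quantized_model", device="cuda:0", trust_remote_code=True)
- tokenizer = AutoTokenizer.from_pretrained("/quantized_model", trust_remote_code=True)
- ```
-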
- ##### AWQ
-
- ```bash
- python quantization/awq/quant_autoawq.py \
-   --model /base_model \
-   --output_dir /quantized_model \
-   --trust_remote_code
- ```
-
- Once finished, you can then evaluate the resulting model as follows:
-
- ```bash
- python quantization/awq/eval_quantized_model.py \
-   --model /quantized_model \
-   --trust_remote_code
- ```
-
- For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq).

 ## Disclaimer

- We use data compliance checking algorithms during the training process to
- ensure the compliance of the trained model to the best of our ability. Due to
- the complexity of the data and the diversity of language model usage scenarios,
- we cannot guarantee that the model will generate correct and reasonable output
- in all scenarios. Please be aware that there is still a risk of the model
- producing problematic outputs. We will not be responsible for any risks and
- issues resulting from misuse, misguidance, illegal usage, and related
- misinformation, as well as any associated data security concerns.
 
 ## License

- The source code in this repo is licensed under the [Apache 2.0
- license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models
- are fully open for academic research and free commercial usage with permission
- via applications. All usage must adhere to the [Model License
- Agreement 2.0](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
- To apply for the official commercial license, please contact us
- ([yi@01.ai](mailto:yi@01.ai)).
 
+ ---
+ license: other
+ license_name: yi-license
+ license_link: LICENSE
+ widget:
+   - text: "There's a place where time stands still. A place of breath taking wonder, but also"
+     output:
+       text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?"
+ pipeline_tag: text-generation
+ ---
 <div align="center">
+
+ <img src="./Yi.svg" width="200px">
+
 </div>

 ## Introduction

 The **Yi** series models are large language models trained from scratch by
+ developers at [01.AI](https://01.ai/). The first public release contains two
+ bilingual (English/Chinese) base models with parameter sizes of 6B ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B))
+ and 34B ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both are trained
+ with a 4K sequence length, which can be extended to 32K at inference time.
+ [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
+ and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models
+ with a 200K context length.
 
 ## News

+ - 🎯 **2023/11/06**: The base models of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
+ and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
+ - 🎯 **2023/11/02**: The base models of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
+ [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).
 
 ## Model Performance

 | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
 | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
 | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
 

 ## Usage

+ Please visit our [GitHub repository](https://github.com/01-ai/Yi) for general
+ guidance on how to use this model.
 
 ## Disclaimer

+ Although we use data compliance checking algorithms during the training process
+ to ensure the compliance of the trained model to the best of our ability, due to
+ the complexity of the data and the diversity of language model usage scenarios,
+ we cannot guarantee that the model will generate correct and reasonable output
+ in all scenarios. Please be aware that there is still a risk of the model
+ producing problematic outputs. We will not be responsible for any risks and
+ issues resulting from misuse, misguidance, illegal usage, and related
+ misinformation, as well as any associated data security concerns.
 
 ## License

+ The Yi series models are fully open for academic research and free commercial
+ usage with permission via applications. All usage must adhere to the [Model
+ License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE). To
+ apply for the official commercial license, please contact us
+ ([yi@01.ai](mailto:yi@01.ai)).