yi-01-ai committed on
Commit e2a03e2
1 Parent(s): 6d26c6c

Auto Sync from git://github.com/01-ai/Yi.git/commit/0e82bda9b71966a20ad86286b69f5038c2e4fa41

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -115,7 +115,7 @@ pipeline_tag: text-generation
 
 ## 📌 Introduction
 
-- 🤖 The Yi series models are the next generation of open source large language models trained from strach by [01.AI](https://01.ai/).
+- 🤖 The Yi series models are the next generation of open source large language models trained from scratch by [01.AI](https://01.ai/).
 
 - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
 
@@ -142,7 +142,7 @@ Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat)
 Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary)
 Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary)
 
-<sub><sup> - 4 bits series models are quantized by AWQ. <br> - 8 bits series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).</sup></sub>
+<sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).</sup></sub>
 
 ### Base models
 
@@ -176,7 +176,7 @@ For chat models and base models:
 <details>
 <summary>🎯 <b>2023/11/23</b>: The chat models are open to public.</summary>
 
-This release contains two chat models based on previous released base models, two 8-bits models quantized by GPTQ, two 4-bits models quantized by AWQ.
+This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
 
 - `Yi-34B-Chat`
 - `Yi-34B-Chat-4bits`
@@ -208,7 +208,7 @@ Application form:
 <details>
 <summary>🎯 <b>2023/11/05</b>: The base model of <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>.</summary>
 
-This release contains two base models with the same parameter sizes of previous
+This release contains two base models with the same parameter sizes as the previous
 release, except that the context window is extended to 200K.
 
 </details>
@@ -405,7 +405,7 @@ Everyone! 🙌 ✅
 
 - The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt)
 
-- For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get Yi Model Commercial License.
+- For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get a Yi Model Commercial License.
 
 <div align="right"> [ <a href="#building-the-next-generation-of-open-source-and-bilingual-llms">Back to top ⬆️ </a> ] </div>
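
The quantized checkpoints referenced in this diff (AWQ for the 4-bit models, GPTQ for the 8-bit models) are intended to run on consumer-grade GPUs such as an RTX 3090 or 4090. As a minimal sketch of how such a checkpoint is typically loaded with Hugging Face `transformers` (assuming `transformers`, `accelerate`, and the matching quantization backend such as `autoawq` are installed; the prompt and generation parameters below are illustrative, not taken from the README):

```python
# Minimal sketch, not part of this commit: load the 4-bit AWQ chat model and
# generate a reply. Assumes transformers + accelerate + autoawq are installed
# and a single consumer-grade GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"  # 8-bit GPTQ variant: "01-ai/Yi-34B-Chat-8bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the quantized weights on the available GPU(s)
    torch_dtype="auto",
)

# The chat models expect the chat template shipped with the tokenizer.
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```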