flyingfishinwater committed
Commit b0514bc
Parent(s): 1cf09d0
Update README.md

README.md CHANGED
@@ -178,7 +178,7 @@ OpenChat is an innovative library of open-source language models, fine-tuned wit
 **Prompt Format:**

 ```
-GPT4
+GPT4 User: {{prompt}}<|end_of_turn|>GPT4 Assistant:
 ```

 **Template Name:** Mistral

@@ -231,23 +231,27 @@ The Phi-3 4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open m

 ---

-# Yi 6B Chat
+# Yi 1.5 6B Chat

-
+Yi-1.5 is an upgraded version of Yi that delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI, and are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more.

-**Model Intention:** It's a 6B model and can understand English and Chinese. It's good for
+**Model Intention:** It's a 6B model and can understand English and Chinese. It's good for coding, math, reasoning, and language understanding.

-**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/
+**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Yi-1.5-6B-Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Yi-1.5-6B-Q3_K_M.gguf?download=true)

-**Model Info URL:** [https://huggingface.co/01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat)
+**Model Info URL:** [https://huggingface.co/01-ai/Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat)

 **Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

-**Model Description:**
+**Model Description:** Yi-1.5 is an upgraded version of Yi that delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI, and are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more.

 **Developer:** [https://01.ai/](https://01.ai/)

-**
+**Update Date:** 2024-05-12
+
+**Update History:** Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
+
+**File Size:** 2990 MB

 **Context Length:** 4096 tokens

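The first hunk above replaces a placeholder with the full OpenChat-style chat template, where `{{prompt}}` marks where the user's message is substituted. A minimal sketch of that substitution; the `format_prompt` helper and the sample prompt are illustrative and not part of this repository:

```python
# Template string taken from the updated README hunk.
TEMPLATE = "GPT4 User: {{prompt}}<|end_of_turn|>GPT4 Assistant:"


def format_prompt(user_prompt: str, template: str = TEMPLATE) -> str:
    """Fill the {{prompt}} placeholder with the user's message.

    A plain string replace is enough here because the template
    contains exactly one placeholder.
    """
    return template.replace("{{prompt}}", user_prompt)


if __name__ == "__main__":
    # The model continues generating after "GPT4 Assistant:".
    print(format_prompt("What is the capital of France?"))
```

The `<|end_of_turn|>` token is part of the template itself, so the caller only supplies the raw user message.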