yi-01-ai committed
Commit • edb527e
1 Parent(s): d4addc1
Auto Sync from git://github.com/01-ai/Yi.git/commit/731b2af8583cba38d6544ebf909d7c85545f75a8
README.md
CHANGED
@@ -100,6 +100,7 @@ pipeline_tag: text-generation
 - [Fine-tuning](#fine-tuning)
 - [Quantization](#quantization)
 - [Deployment](#deployment)
+- [FAQ](#faq)
 - [Learning hub](#learning-hub)
 - [Why Yi?](#why-yi)
 - [Ecosystem](#ecosystem)
@@ -337,6 +338,7 @@ Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K)
 - [Fine-tuning](#fine-tuning)
 - [Quantization](#quantization)
 - [Deployment](#deployment)
+- [FAQ](#faq)
 - [Learning hub](#learning-hub)
 
 ## Quick start
@@ -1024,6 +1026,44 @@ Below are detailed minimum VRAM requirements under different batch use cases.
 <a href="#top">Back to top ⬆️ </a> ]
 </p>
 
+### FAQ
+<details>
+<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
+<br>
+
+#### 💡Fine-tuning
+- <strong>Base model or Chat model - which to fine-tune?</strong>
+  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
+  - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
+  - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
+  - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
+- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
+  <br>
+  The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
+  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
+  - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
+  - If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to.
+  - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet.
+
+#### 💡Quantization
+- <strong>Quantized model versus original model - what is the performance gap?</strong>
+  - The performance gap largely depends on the quantization method employed and the specific use cases of these models. For instance, for the officially released AWQ-quantized models, benchmark results show that quantization might result in a minor performance drop of a few percentage points.
+  - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results.
+
+#### 💡General
+- <strong>Where can I source fine-tuning question answering datasets?</strong>
+  - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available.
+  - Additionally, GitHub offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets.
+
+- <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong>
+  <br>
+  The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full-parameter fine-tuning, you'll need 8 GPUs, each with 80 GB of memory; however, more economical solutions like LoRA require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance.
+
+- <strong>Are there any third-party platforms that support chat functionality for the Yi-34B-200K model?</strong>
+  <br>
+  If you're looking for third-party chat platforms, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat).
+</details>
+
 ### Learning hub
 
 <details>
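
A quick way to eyeball the quantization gap discussed in the FAQ above is to run the same prompt through the full-precision and the quantized chat model and compare the answers. The sketch below is illustrative only: it assumes the 4-bit AWQ build is published as `01-ai/Yi-34B-Chat-4bits`, that the `autoawq` package is installed, and that the tokenizer ships a chat template — verify all three before relying on it.

```python
# Illustrative sketch (not from the commit): load an assumed AWQ 4-bit Yi chat
# checkpoint and generate once, so its output can be compared against the
# full-precision Yi-34B-Chat on the same prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"  # assumed repo id for the AWQ build

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the trade-offs of 4-bit quantization."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Swapping `model_id` for the full-precision checkpoint and re-running the same prompt gives a crude single-sample comparison; the benchmark-level gap the FAQ refers to naturally needs a proper evaluation suite.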
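
The dataset-sourcing answer in the FAQ can be made concrete with a short loading script. This is a minimal sketch, assuming the `datasets` library is installed and that the COIG-CQIA rows follow the usual instruction/input/output layout; the subset name used here is an assumption, so check the dataset card for the configurations that actually exist.

```python
# Minimal sketch: fetch a COIG-CQIA subset from Hugging Face and dump it into
# Alpaca-style JSON, the format fine-tuning frameworks such as LLaMA-Factory
# typically consume. Subset and field names are assumptions to verify.
import json
from datasets import load_dataset

ds = load_dataset("m-a-p/COIG-CQIA", "chinese_traditional", split="train")  # assumed subset

records = [
    {
        "instruction": row.get("instruction", ""),
        "input": row.get("input", ""),
        "output": row.get("output", ""),
    }
    for row in ds
]

with open("yi_finetune_data.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)

print(f"wrote {len(records)} samples")
```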
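
The 8 × 80 GB figure for full-parameter fine-tuning quoted in the FAQ can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes mixed-precision training with FP32 Adam optimizer states and ignores activation memory, which grows with batch size and sequence length, so treat it as a lower bound rather than a prescription.

```python
# Rough VRAM budget for full-parameter fine-tuning of a ~34B-parameter model.
# Assumptions (not from the README): BF16/FP16 weights and gradients, plus an
# FP32 master copy of the weights and FP32 Adam moments; activations not counted.
params = 34e9

weights_gb = params * 2 / 1e9   # BF16/FP16 weights          -> ~68 GB
grads_gb   = params * 2 / 1e9   # gradients, same precision  -> ~68 GB
master_gb  = params * 4 / 1e9   # FP32 master weights        -> ~136 GB
adam_gb    = params * 8 / 1e9   # two FP32 Adam moments      -> ~272 GB

total_gb = weights_gb + grads_gb + master_gb + adam_gb
print(f"~{total_gb:.0f} GB before activations")            # ~544 GB
print(f"needs at least {total_gb / 80:.1f} x 80 GB GPUs")  # ~6.8 -> 8 GPUs in practice
```

LoRA sidesteps most of this budget by freezing the base weights and training only small low-rank adapters, which is why the FAQ points to it as the more economical route.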
|