Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# Faro-Yi-9B - GGUF
- Model creator: https://huggingface.co/wenbopan/
- Original model: https://huggingface.co/wenbopan/Faro-Yi-9B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Faro-Yi-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q2_K.gguf) | Q2_K | 3.12GB |
| [Faro-Yi-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_XS.gguf) | IQ3_XS | 3.46GB |
| [Faro-Yi-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_S.gguf) | IQ3_S | 3.64GB |
| [Faro-Yi-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_S.gguf) | Q3_K_S | 3.63GB |
| [Faro-Yi-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ3_M.gguf) | IQ3_M | 3.78GB |
| [Faro-Yi-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K.gguf) | Q3_K | 4.03GB |
| [Faro-Yi-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_M.gguf) | Q3_K_M | 4.03GB |
| [Faro-Yi-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q3_K_L.gguf) | Q3_K_L | 4.37GB |
| [Faro-Yi-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ4_XS.gguf) | IQ4_XS | 4.50GB |
| [Faro-Yi-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_0.gguf) | Q4_0 | 4.69GB |
| [Faro-Yi-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.IQ4_NL.gguf) | IQ4_NL | 4.73GB |
| [Faro-Yi-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K_S.gguf) | Q4_K_S | 4.72GB |
| [Faro-Yi-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K.gguf) | Q4_K | 4.96GB |
| [Faro-Yi-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [Faro-Yi-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q4_1.gguf) | Q4_1 | 5.19GB |
| [Faro-Yi-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_0.gguf) | Q5_0 | 5.69GB |
| [Faro-Yi-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K_S.gguf) | Q5_K_S | 5.69GB |
| [Faro-Yi-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K.gguf) | Q5_K | 5.83GB |
| [Faro-Yi-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_K_M.gguf) | Q5_K_M | 5.83GB |
| [Faro-Yi-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q5_1.gguf) | Q5_1 | 6.19GB |
| [Faro-Yi-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-gguf/blob/main/Faro-Yi-9B.Q6_K.gguf) | Q6_K | 6.75GB |
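
As a rough sanity check on the table above, you can estimate the effective bits per weight of each file from its size. This is only a sketch: the ~8.8B parameter count assumed for Yi-9B is a round figure, and each file mixes quant types across tensors and includes metadata, so treat the results as ballpark numbers.

```python
# Rough bits-per-weight estimate for a few of the GGUF files above.
# PARAMS is an assumed round figure for Yi-9B; verify against the model card.
PARAMS = 8.8e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in gigabytes to average bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in [("Q2_K", 3.12), ("Q4_K_M", 4.96), ("Q6_K", 6.75)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
# Q2_K: ~2.8 bits/weight
# Q4_K_M: ~4.5 bits/weight
# Q6_K: ~6.1 bits/weight
```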



Original model description:
---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**

# Faro-Yi-9B
Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained stronger capabilities in various downstream tasks and in long-context modeling, thanks to the large-scale synthetic data in Fusang-V1.

Just like Yi-9B-200K, Faro-Yi-9B supports a context length of up to 200K tokens.

## How to Use

Faro-Yi-9B uses the chatml template and performs well in both short and long contexts. For longer inputs under **24GB of VRAM**, I recommend using vLLM, which supports a maximum prompt of about 32K tokens. Setting `kv_cache_dtype="fp8_e5m2"` extends the usable input length to about 48K tokens, and 4-bit AWQ quantization on top of that can push the input length to 160K, albeit with some performance impact. Adjust the `max_model_len` argument in vLLM or `config.json` to avoid OOM errors.
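
The VRAM arithmetic behind those context limits can be sketched as follows. The architecture figures below (48 layers, 4 key/value heads, head dimension 128) are assumptions for Yi-9B-200K based on its published config; verify them against the model's `config.json` and treat the numbers as estimates.

```python
# Estimate KV-cache memory for a grouped-query-attention model.
# Layer/head figures are assumed for Yi-9B-200K; check config.json.

def kv_cache_gb(tokens: int, bytes_per_elem: int,
                layers: int = 48, kv_heads: int = 4, head_dim: int = 128) -> float:
    """KV cache size in GB: 2 (K and V) * layers * kv_heads * head_dim * bytes, per token."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return tokens * per_token / 1e9

print(f"32K ctx, fp16 cache: {kv_cache_gb(32_000, 2):.1f} GB")  # 3.1 GB
print(f"48K ctx, fp8 cache:  {kv_cache_gb(48_000, 1):.1f} GB")  # 2.4 GB
```

Since the fp16 weights already consume most of the 24GB budget, halving the per-element cache cost is what lets the longer fp8 context fit in the remaining memory.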

```python
import io
import requests
from PyPDF2 import PdfReader
from vllm import LLM, SamplingParams

llm = LLM(model="wenbopan/Faro-Yi-9B", kv_cache_dtype="fp8_e5m2", max_model_len=100000)

pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages)  # 100 pages

question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
messages = [{"role": "user", "content": question}]  # 83K tokens
prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500))
print(output[0].outputs[0].text)
# Yi-9B-200K: 175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ...
# Faro-Yi-9B: GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ...
```

<details> <summary>Or With Transformers</summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-9B', device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-9B')
messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]

input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)  # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ...
```

</details>
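
Under the hood, `apply_chat_template` renders the messages with the chatml format mentioned above. The sketch below hand-rolls that rendering for illustration only; the tokenizer's own template is authoritative, so confirm the exact special tokens against this model's `tokenizer_config.json`.

```python
# Hand-rolled chatml rendering for illustration; use the tokenizer's
# apply_chat_template in real code.

def render_chatml(messages: list, add_generation_prompt: bool = True) -> str:
    """Render messages as chatml: each turn wrapped in <|im_start|>/<|im_end|>."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
```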

## Performance

Faro-Yi-9B improves on Yi-9B-200K in most dimensions, especially in long-context modeling and bilingual (English, Chinese) understanding. Faro is competitive among open-source models of around 9B parameters.

<details> <summary>Benchmark Results</summary>

### Fact-based Evaluation (Open LLM Leaderboard)

| **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** |
| -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- |
| **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | **71.67** |
| **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 |

### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench))

| **Name** | **Average_zh** | **Average_en** | **Code Completion** |
|----------------|----------------|----------------|---------------------|
| **Yi-9B-200K** | 30.29 | 36.71 | **72.2** |
| **Faro-Yi-9B** | **41.09** | **40.95** | 46.0 |

<details>
<summary>Score breakdown</summary>

| **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
|----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
| **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | **38.9** | 25.8 | **46.5** | 28.0 | 49.6 | 17.7 | 9.7 |
| **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** |

</details>

### Performance on Preference (MT-Bench)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/M0Kc64sIsbNyCCvrRk1Lv.png)

### Bilingual Ability (CMMLU & MMLU)

| **Name** | **MMLU** | **CMMLU** |
| -------------- | --------- | --------- |
| **Yi-9B-200K** | 65.73 | 71.97 |
| **Faro-Yi-9B** | **68.80** | **73.28** |

</details>