Update README.md
README.md
**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**

# Faro-Yi-34B-200K

Faro-Yi-34B-200K is an improved [Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-34B-200K, Faro-Yi-34B-200K gains greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

## How to Use

Faro-Yi-34B-200K uses the chatml template, which makes it easy to set up a system prompt and multi-turn conversations. It truly excels when analyzing long documents or instructions. I recommend using vLLM for long inputs.
```python
import io

import requests
from PyPDF2 import PdfReader
from vllm import LLM, SamplingParams

llm = LLM(model="wenbopan/Faro-Yi-34B-200K")

pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages)  # 100 pages

question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
messages = [{"role": "user", "content": question}]  # 83K tokens
prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500))
print(output[0].outputs[0].text)
# Yi-9B-200K: 175B. GPT-4 has 175B parameters. How many models were combined to create GPT-4? Answer: 6. ...
# Faro-Yi-9B-200K: GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ...
```
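Even with a 200K window, it can be worth checking that a scraped document actually fits before calling the model. The helper below is a hypothetical sketch, not part of this model card: it assumes any tokenizer exposing `encode`/`decode` (e.g. the one returned by `llm.get_tokenizer()` above) and truncates the document while reserving room for the question and the generated answer.

```python
def fit_to_context(tokenizer, document, reserve_tokens=2000, context_len=200_000):
    """Truncate `document` so that prompt plus generation fit in the context window.

    Illustrative helper (not shipped with the model); `reserve_tokens` is the
    budget kept free for the question text and the model's reply.
    """
    budget = context_len - reserve_tokens
    ids = tokenizer.encode(document)
    if len(ids) <= budget:
        return document  # already fits, return unchanged
    return tokenizer.decode(ids[:budget])  # keep only the leading `budget` tokens
```

Token counts, not characters, are what matter here, which is why the helper round-trips through the tokenizer instead of slicing the string directly.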
<details> <summary>Or With Transformers</summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-34B-200K', device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-34B-200K')

messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)  # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ...
```
</details>
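Both code paths rely on chatml formatting under the hood. As a rough illustration of what `apply_chat_template` produces for chatml-style models (the authoritative template ships with the tokenizer's config, so treat this as a sketch rather than the exact model prompt): each turn is wrapped in `<|im_start|>role … <|im_end|>` markers, and the generation prompt opens an assistant turn for the model to complete.

```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render {"role", "content"} messages in chatml markup (illustrative only)."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # open turn the model will complete
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

This is why system prompts and multi-turn histories are simply additional entries in the `messages` list: the template flattens them into one tagged string.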