---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles a variety of downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**

# Faro-Yi-34B
Faro-Yi-34B is an improved [Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-34B-200K, Faro-Yi-34B performs better on various downstream tasks and in long-context modeling, thanks to the large-scale synthetic data in Fusang-V1.

Just like Yi-34B-200K, Faro-Yi-34B supports a context length of up to 200K tokens.
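
The full 200K window is memory-hungry, so on a smaller GPU you may want to cap the context at load time. A minimal sketch with vLLM (the `max_model_len` and `gpu_memory_utilization` values below are illustrative, not recommendations):

```python
from vllm import LLM

# Load Faro-Yi-34B but cap the usable context at 64K tokens so the KV cache
# fits on a smaller GPU; raise max_model_len (up to 200K) if you have the
# memory for longer documents.
llm = LLM(
    model="wenbopan/Faro-Yi-34B",
    max_model_len=65536,          # illustrative cap; the model supports up to 200K
    gpu_memory_utilization=0.90,  # illustrative; tune for your hardware
)
```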

## How to Use

Faro-Yi-34B uses the ChatML template. I recommend using vLLM for long inputs.

```python
import io
import requests
from PyPDF2 import PdfReader
from vllm import LLM, SamplingParams

llm = LLM(model="wenbopan/Faro-Yi-34B")

pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages) # 100 pages

question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
messages = [ {"role": "user", "content": question} ] # 83K tokens
prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500))
print(output[0].outputs[0].text)
# Yi-34B-200K:  175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ...
# Faro-Yi-34B:  GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ...
```
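
For serving rather than offline inference, vLLM's OpenAI-compatible server also works. A minimal sketch, assuming the server runs locally on the default port (the endpoint URL, `api_key` value, and sampling settings below are placeholders):

```python
# Start the server first (shell):
#   python -m vllm.entrypoints.openai.api_server --model wenbopan/Faro-Yi-34B
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="wenbopan/Faro-Yi-34B",
    messages=[{"role": "user", "content": "Summarize the Pythagorean theorem in one sentence."}],
    temperature=0.8,
    max_tokens=200,
)
print(response.choices[0].message.content)
```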

<details> <summary>Or With Transformers</summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-34B', device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-34B')
messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]

input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5) # do_sample=True so temperature takes effect
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True) # decode only the new tokens: Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ...
```

</details>

For more information, please refer to [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B).