### Run the Hugging Face RWKV5 World Model


#### CPU

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the RWKV-5 World 3B checkpoint on CPU; trust_remote_code is required
# because the model ships its own architecture and tokenizer code.
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-5-world-3b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-5-world-3b", trust_remote_code=True)

text = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
# Wrap the raw text in a Question/Answer instruction prompt.
prompt = f'Question: {text.strip()}\n\nAnswer:'

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=256)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output:

```shell
Question: In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.

Answer: The researchers were shocked to discover that the dragons in the valley were not only intelligent but also spoke perfect Chinese. This discovery has opened up new possibilities for cultural exchange and understanding between China and Tibet.
```
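
To see tokens as they are generated instead of waiting for the full completion, `transformers` offers `TextStreamer`, which can be passed to `generate`. A minimal sketch that reuses the `model`, `tokenizer`, and `prompt` from the CPU example above (assumes a `transformers` version recent enough to include `TextStreamer`):

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer(prompt, return_tensors="pt")
model.generate(inputs["input_ids"], max_new_tokens=256, streamer=streamer)
```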

#### GPU

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and move it to the first CUDA device.
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-5-world-3b", trust_remote_code=True).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-5-world-3b", trust_remote_code=True)

text = "请介绍北京的旅游景点"  # "Please introduce Beijing's tourist attractions"
prompt = f'Question: {text.strip()}\n\nAnswer:'

# Keep the input tensors on the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
# Sampling with a low top_p keeps the output focused and nearly deterministic.
output = model.generate(inputs["input_ids"], max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.1, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output:

```shell
Question: 请介绍北京的旅游景点

Answer: 北京是中国的首都,拥有许多著名的旅游景点。以下是其中一些:
1. 故宫:位于北京市中心,是明清两代的皇宫,是中国最大的古代宫殿建筑群之一。
2. 天安门广场:位于北京市中心,是中国最著名的广场之一,是中国人民政治协商会议的旧址。
3. 颐和园:位于北京市西郊,是中国最著名的皇家园林之一,有许多美丽的湖泊和花园。
4. 长城:位于北京市西北部,是中国最著名的古代防御工程之一,有许多壮观的景点。
5. 北京大学:位于北京市东城区,是中国著名的高等教育机构之一,有许多知名的学者和教授。
6. 北京奥林匹克公园:位于北京市
```
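
In full precision the 3B checkpoint needs roughly 12 GB of GPU memory; loading the weights in half precision cuts that roughly in half. A sketch of the same GPU setup loaded with `torch_dtype=torch.float16` (half precision is optional here; bfloat16 works the same way on GPUs that support it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in float16 to roughly halve GPU memory usage.
model = AutoModelForCausalLM.from_pretrained(
    "RWKV/rwkv-5-world-3b", torch_dtype=torch.float16, trust_remote_code=True
).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-5-world-3b", trust_remote_code=True)

prompt = 'Question: 请介绍北京的旅游景点\n\nAnswer:'
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.1, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```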