KaleiNeely committed
Commit 48cb79c
1 Parent(s): bd5f1dd

Update README.md

Files changed (1):
  1. README.md +68 -14

README.md CHANGED
@@ -1,29 +1,60 @@
- ### Run Huggingface RWKV World Model
+ ### Run Huggingface RWKV5 World Model

- > This model is developed and converted through https://github.com/BBuf/RWKV-World-HF-Tokenizer. If you have any issues, you can raise them in this project. You're also welcome to star it to follow the subsequent development progress.

#### CPU

```python
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-world-7b")
+ def generate_prompt(instruction, input=""):
+     instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
+     input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
+     if input:
+         return f"""Instruction: {instruction}
+
+ Input: {input}
+
+ Response:"""
+     else:
+         return f"""User: hi
+
+ Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+
+ User: {instruction}
+
+ Assistant:"""
+
+
+ model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True)

- text = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
- prompt = f'Question: {text.strip()}\n\nAnswer:'
+ text = "请介绍北京的旅游景点"
+ prompt = generate_prompt(text)

inputs = tokenizer(prompt, return_tensors="pt")
- output = model.generate(inputs["input_ids"], max_new_tokens=256)
+ output = model.generate(inputs["input_ids"], max_new_tokens=333, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output:

```shell
- Question: In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.
+ User: hi
+
+ Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+
+ User: 请介绍北京的旅游景点

- Answer: 科学家在一个未曾探索过的山谷中发现了一群能说流利中文的龙。科学家惊讶地发现,这些龙是在一个完全未被探索的地区生活的。
+ Assistant: 北京是中国的首都,拥有众多的旅游景点,以下是其中一些著名的景点:
+ 1. 故宫:位于北京市中心,是明清两代的皇宫,内有大量的文物和艺术品。
+ 2. 天安门广场:是中国最著名的广场之一,是中国人民政治协商会议的旧址,也是中国人民政治协商会议的中心。
+ 3. 颐和园:是中国古代皇家园林之一,有着悠久的历史和丰富的文化内涵。
+ 4. 长城:是中国古代的一道长城,全长约万里,是中国最著名的旅游景点之一。
+ 5. 北京大学:是中国著名的高等教育机构之一,有着悠久的历史和丰富的文化内涵。
+ 6. 北京动物园:是中国最大的动物园之一,有着丰富的动物资源和丰富的文化内涵。
+ 7. 故宫博物院:是中国最著名的博物馆之一,收藏了大量的文物和艺术品,是中国最重要的文化遗产之一。
+ 8. 天坛:是中国古代皇家
```

#### GPU
@@ -32,22 +63,45 @@ Answer: 科学家在一个未曾探索过的山谷中发现了一群能说流利
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-world-7b", torch_dtype=torch.float16).to(0)
+ def generate_prompt(instruction, input=""):
+     instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
+     input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
+     if input:
+         return f"""Instruction: {instruction}
+
+ Input: {input}
+
+ Response:"""
+     else:
+         return f"""User: hi
+
+ Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+
+ User: {instruction}
+
+ Assistant:"""
+
+
+ model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True)

- text = "你叫什么名字?"
- prompt = f'Question: {text.strip()}\n\nAnswer:'
+ text = "乌兰察布"
+ prompt = generate_prompt(text)

inputs = tokenizer(prompt, return_tensors="pt").to(0)
- output = model.generate(inputs["input_ids"], max_new_tokens=40)
+ output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```

output:

```shell
- Question: 你叫什么名字?
+ User: hi
+
+ Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+
+ User: 乌兰察布

- Answer: 我是一个人工智能语言模型,没有具体的名字。
+ Assistant: 乌兰察布市是中国新疆维吾尔自治区的一个地级市,位于新疆维吾尔自治区西南部,毗邻青海省。乌兰察布市是新疆维吾尔自治区的重要城市之一,也是新疆维吾尔自治区的第二大城市。乌兰察布市是新疆的重要经济中心之一,拥有丰富的自然资源和人口密度,是新疆的重要交通枢纽和商
```
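Note: the updated card defines `generate_prompt` with two branches but only demonstrates the question-only branch. The following is a minimal sketch, not part of the commit, that reuses the helper verbatim to show both prompt layouts the model receives; it runs without downloading the model, and the instruction/input strings in the second call are made up for illustration.

```python
# Minimal sketch (not part of the commit): reuse the card's generate_prompt helper
# verbatim and print both prompt layouts; no model download required.

def generate_prompt(instruction, input=""):
    instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
    input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
    if input:
        return f"""Instruction: {instruction}

Input: {input}

Response:"""
    else:
        return f"""User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: {instruction}

Assistant:"""

# Question-only branch, as used by the CPU and GPU examples above.
print(generate_prompt("请介绍北京的旅游景点"))
print("-" * 40)
# Instruction + input branch (illustrative strings, not from the card).
print(generate_prompt("Summarize the passage in one sentence.",
                      input="RWKV combines an RNN-style recurrent state with transformer-style training."))
```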
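For long generations it may be more convenient to stream tokens as they are produced rather than waiting for the full decode. This is not part of the commit; it is a minimal sketch assuming the checkpoint's remote code goes through the standard `generate()` path so that a `transformers` `TextStreamer` can be attached. The model id, prompt, and sampling settings are copied from the CPU example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Assumption: the trust_remote_code model uses the standard generate() API,
# so a TextStreamer can print tokens incrementally as they are sampled.
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-world-7b", trust_remote_code=True)

# Same chat-style prompt that generate_prompt() builds for a bare question.
prompt = (
    "User: hi\n\n"
    "Assistant: Hi. I am your assistant and I will provide expert full response in full details. "
    "Please feel free to ask any question and I will always answer it.\n\n"
    "User: 请介绍北京的旅游景点\n\nAssistant:"
)

inputs = tokenizer(prompt, return_tensors="pt")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Sampling settings copied from the card; the streamer prints text as it is generated.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=333,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    top_k=0,
    streamer=streamer,
)
```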