### Run the Hugging Face RWKV World Model
#### CPU
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model on CPU. trust_remote_code=True is required for the
# tokenizer, since the World models use custom tokenizer code from the repo.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-430M")
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-430M", trust_remote_code=True)

text = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
# Wrap the input in the Question/Answer prompt format used by the World models.
prompt = f'Question: {text.strip()}\n\nAnswer:'

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=256)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
Output:
```shell
Question: In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.
Answer: The researchers discovered a mysterious finding in a remote, undisclosed valley, in a remote, undisclosed valley.
```
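The repetitive answer above is characteristic of greedy decoding, which `generate` uses by default. A minimal sketch of sampling-based decoding, continuing from the CPU snippet above; the `temperature` and `top_p` values are illustrative assumptions, not settings from this repo:
```python
# Continuing from the CPU snippet above; sampling settings are illustrative.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=1.0,  # softmax temperature
    top_p=0.3,        # nucleus sampling cutoff
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```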
#### GPU
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in float16 and move it to GPU 0.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-430M", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-430M", trust_remote_code=True)

text = "你叫什么名字?"  # "What is your name?"
prompt = f'Question: {text.strip()}\n\nAnswer:'

# Move the input tensors to the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
Output:
```shell
Question: 你叫什么名字?

Answer: 我是一个人工智能语言模型,没有具体的身份或者特征,也没有能力进行人类的任何任务
```
(In English: "Question: What is your name? Answer: I am an AI language model; I have no specific identity or characteristics, nor the ability to perform any human tasks" — the reply is cut off by `max_new_tokens=40`.)
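For interactive use, it can be nicer to print tokens as they are produced instead of waiting for generation to finish. A minimal sketch using `transformers`' `TextStreamer`, continuing from the GPU snippet above; this usage is a suggestion, not from the original README:
```python
from transformers import TextStreamer

# Continuing from the GPU snippet above: stream decoded tokens to stdout
# as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs["input_ids"], max_new_tokens=40, streamer=streamer)
```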