---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
- en
- zh
tags:
- qwen
- qwen1.5
- qwen2
- llama
inference: false
---
## Description
This repo contains the "LLaMAfied" version of [Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) by Alibaba Cloud. I used the excellent [llamafy_qwen_v2.py script](https://github.com/Minami-su/character_AI_open/blob/main/llamafy_qwen_v2.py) by [Minami-su](https://huggingface.co/Minami-su) to convert the model to the LLaMA architecture.

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("sayhan/Qwen1.5-72B-Chat-LLaMAfied")
model = AutoModelForCausalLM.from_pretrained(
    "sayhan/Qwen1.5-72B-Chat-LLaMAfied", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# apply_chat_template returns the prompt as token ids ready for generate()
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```

## Other LLaMAfied Qwen1.5 Models
The two other sizes of Qwen1.5 have been LLaMAfied by [Minami-su](https://huggingface.co/Minami-su):
- **0.5B:** [Minami-su/Qwen1.5-0.5B-Chat_llamafy](https://huggingface.co/Minami-su/Qwen1.5-0.5B-Chat_llamafy)
- **7B:** [Minami-su/Qwen1.5-7B-Chat_llamafy](https://huggingface.co/Minami-su/Qwen1.5-7B-Chat_llamafy)