---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- axolotl
- finetune
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-8B-Instruct-DPO-v0.4
quantized_by: MaziyarPanahi
datasets:
- argilla/ultrafeedback-binarized-preferences
- Intel/orca_dpo_pairs
---

<img src="./llama-3-merges.webp" alt="Llama-3-8B-Instruct-DPO-v0.4 logo" width="500" style="margin-left: auto; margin-right: auto; display: block;"/>

# Llama-3-8B-Instruct-DPO-v0.4

This model is a DPO fine-tune of the `meta-llama/Meta-Llama-3-8B-Instruct` model.

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
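
You normally don't need to assemble this format by hand: the tokenizer ships with a chat template. As a minimal sketch (assuming the tokenizer's built-in template is the ChatML format above), `apply_chat_template` renders a message list into it:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.4")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the messages with the tokenizer's built-in chat template;
# if that template is ChatML (as this card states), the output is the
# <|im_start|>...<|im_end|> structure shown above, ending with an open
# assistant turn because add_generation_prompt=True.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```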

# How to use

You can load this model with Hugging Face's `transformers` library by passing
`MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.4` as the model name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch

model_id = "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.4"

# Load the model in half precision, spreading it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer)

# Name the pipeline `pipe` so it does not shadow the imported `pipeline`
# function; the model is already instantiated above, so no model_kwargs
# are needed here.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer
)

# Then you can use the pipeline to generate text.

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the messages with the model's chat template and append the
# assistant header so generation starts from the assistant turn.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or the ChatML end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipe(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
# Strip the prompt from the returned text and print only the completion.
print(outputs[0]["generated_text"][len(prompt):])
```
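
If the float16 weights don't fit in your GPU memory, one possible fallback (not covered above; this sketch assumes `bitsandbytes` is installed) is to load the same checkpoint with on-the-fly 4-bit quantization:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Quantize the weights to 4-bit at load time to roughly quarter the
# memory footprint, at some cost in quality and speed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.4",
    quantization_config=bnb_config,
    device_map="auto",
)
```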