---
license: apache-2.0
language:
- de
tags:
- dpo
- alignment-handbook
- awq
- quantization
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6474c16e7d131daf633db8ad/-mL8PSG00X2lEw1lb8E1Q.png">
</div>

# AWQ-Version of Phoenix

| Bits | GS (group size) | AWQ Dataset | Seq Len |
| ---- | --------------- | ----------- | ------- |
| 4    | 128             | c4          | 4096    |
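
A quantization with these settings can be produced with the AutoAWQ library. The following is a minimal sketch, not the exact export recipe: the source model path, the `GEMM` kernel version, and the calibration setup are assumptions.

```python
# Minimal AutoAWQ quantization sketch. quant_config mirrors the table above
# (4 bits, group size 128); the source path, kernel version, and calibration
# defaults are assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "DRXD1000/Phoenix"  # assumed path of the unquantized model

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

quant_config = {"w_bit": 4, "q_group_size": 128, "zero_point": True, "version": "GEMM"}
model.quantize(tokenizer, quant_config=quant_config)  # calibrates on AutoAWQ's default data

model.save_quantized("Phoenix-AWQ")
tokenizer.save_pretrained("Phoenix-AWQ")
```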

# Model Card for Phoenix

**Phoenix** is a model trained with Direct Preference Optimization (DPO) for the German language. Its training procedure follows the alignment-handbook process from Hugging Face.
In contrast to Zephyr and Notus, this model has been trained on German instruction and DPO data. In detail, German translations of HuggingFaceH4/ultrachat_200k
and HuggingFaceH4/ultrafeedback_binarized were created, in addition to a series of already available instruction datasets. The LLM haoranxu/ALMA-13B was used for the translation, as sketched below.
While the Mistral model performs really well, it is not well suited for the German language. Therefore we used the excellent LeoLM/leo-mistral-hessianai-7b.
Thanks to this new type of training, Phoenix is not only able to compete with the Mistral model from LeoLM but also **beats the Llama-70b-chat model in 2 MT-Bench categories**.
This model **wouldn't have been possible without the amazing work of Hugging Face, LeoLM, OpenBMB, Argilla, the ALMA team, and many others in the AI community**.
I would like to personally thank all AI researchers who make the training of such models possible.
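
As a minimal sketch of how such translations can be produced with ALMA-13B (the prompt format follows the ALMA model card; the helper function and generation settings are illustrative assumptions):

```python
# Sketch of English-to-German translation with ALMA-13B. Batching, chunking of
# long chats, and quality filtering are omitted; generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B")

def translate_en_de(text: str) -> str:
    # ALMA's translation prompt format
    prompt = f"Translate this from English to German:\nEnglish: {text}\nGerman:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, num_beams=5, do_sample=False)
    # Decode only the newly generated tokens, not the prompt
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

print(translate_en_de("What is machine learning?"))
```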

## MT-Bench-DE Scores
Phoenix beats the LeoLM-Mistral model in all categories except coding and humanities.
It also beats LeoLM/Llama-2-70b-chat in roleplay and reasoning, which shows the power of DPO.

```json
{
  "first_turn": 6.39375,
  "second_turn": 5.1625,
  "categories": {
    "writing": 7.45,
    "roleplay": 7.9,
    "reasoning": 4.3,
    "math": 3.25,
    "coding": 2.5,
    "extraction": 5.9,
    "stem": 7.125,
    "humanities": 7.8
  },
  "average": 5.778124999999999
}
```
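
The reported `average` is the mean of the two turn scores, which coincides with the mean over the eight category scores. A quick check:

```python
# Sanity check of the reported MT-Bench-DE numbers
categories = [7.45, 7.9, 4.3, 3.25, 2.5, 5.9, 7.125, 7.8]
print((6.39375 + 5.1625) / 2)             # 5.778125
print(sum(categories) / len(categories))  # 5.778125
```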

## Other Evaluations

Florian Leuerer compared Phoenix to other LLMs. Check it out here:

[Evaluation of German LLMs](https://www.linkedin.com/posts/florian-leuerer-927479194_vermutlich-relativ-unbeobachtet-ist-gestern-activity-7151475428019388418-sAKR?utm_source=share&utm_medium=member_desktop)


## Model Details

### Model Description

- **Developed by:** Matthias Uhlig (based on the previous efforts and amazing work of Hugging Face H4, Argilla, and Mistral AI)
- **Shared by:** Matthias Uhlig
- **Model type:** A 7B GPT-like model fine-tuned with DPO
- **Language(s) (NLP):** German
- **License:** Apache 2.0 (same as alignment-handbook/zephyr-7b-dpo-full)
- **Finetuned from model:** [`LeoLM/leo-mistral-hessianai-7b`](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)

### Model Sources

- **Repository:** -
- **Paper:** in progress
- **Demo:** -

## Training Details

### Training Hardware

We used a VM with 8 x A100 80GB GPUs hosted on Runpod.io.

### Training Data

We used newly translated versions of [`HuggingFaceH4/ultrachat_200k`](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).

The data used for training will be made public after additional quality inspection.

## Prompt template
We use the same prompt template as [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

It is also possible to use the model in a multi-turn setup:
```
<|system|>
</s>
<|user|>
{prompt_1}</s>
<|assistant|>
{answer_1}</s>
<|user|>
{prompt_2}</s>
<|assistant|>
```
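
Instead of assembling this template by hand, you can let the tokenizer render it via `apply_chat_template` (a minimal sketch, assuming the tokenizer ships with the zephyr-style chat template shown above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix-AWQ")

# Standard chat-format messages; apply_chat_template renders them into the
# zephyr-style prompt, ending with <|assistant|> so the model continues from there.
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Erkläre mir was KI ist."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```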

## Usage
You will first need to install `transformers`, `accelerate` (just to ease the device placement) and `autoawq` (for the AWQ kernels), then you can run the following:
### Via `generate`
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# AWQ inference kernels run in float16
model = AutoModelForCausalLM.from_pretrained("DRXD1000/Phoenix-AWQ", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix-AWQ")

prompt = """<|system|>
</s>
<|user|>
Erkläre mir was KI ist.</s>
<|assistant|>
"""

# The prompt string is already fully formatted, so tokenize it directly
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Ethical Considerations and Limitations

As with all LLMs, the potential outputs of `DRXD1000/Phoenix-AWQ` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses
to user prompts. Therefore, before deploying any applications of `DRXD1000/Phoenix-AWQ`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training sketch using them follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
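
A minimal sketch of what a DPO run with these hyperparameters looks like using TRL's `DPOTrainer`, in the style of the alignment-handbook. The dataset path, `beta`, and the base checkpoint are assumptions, not the exact training script:

```python
# Minimal DPO training sketch with TRL; hyperparameters mirror the list above.
# The dataset (needs "prompt"/"chosen"/"rejected" columns) and beta are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "LeoLM/leo-mistral-hessianai-7b"  # in practice, the SFT checkpoint of this model
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = load_dataset("json", data_files="dpo_data.json", split="train")  # placeholder path

args = TrainingArguments(
    output_dir="phoenix-dpo",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    learning_rate=5e-7,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # TRL builds a frozen reference copy when None
    args=args,
    beta=0.1,         # assumed; the alignment-handbook default
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```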
160
+
161
+ ### Framework versions
162
+
163
+ - Transformers 4.35.0
164
+ - Pytorch 2.1.2+cu121
165
+ - Datasets 2.14.6
166
+ - Tokenizers 0.14.1