---
license: other
tags:
- dpo
base_model: mlabonne/NeuralDaredevil-8B-abliterated
datasets:
- mlabonne/orpo-dpo-mix-40k
model-index:
- name: Daredevil-8B-abliterated-dpomix
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.28
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.05
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.1
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.69
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.8
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix
      name: Open LLM Leaderboard
pipeline_tag: text-generation
---

# NeuralDaredevil-8B-abliterated-GGUF
This is a quantized version of [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated), created using llama.cpp.
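
Since the files in this repo are GGUF, you can also run them directly with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch, not part of the original card: the GGUF filename and quant level are assumptions, so substitute the file you actually downloaded from this repo.

```python
from llama_cpp import Llama

# Load a local GGUF file (filename below is hypothetical; use your download)
llm = Llama(
    model_path="NeuralDaredevil-8B-abliterated.Q4_K_M.gguf",
    n_ctx=8192,       # Llama 3 context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# llama-cpp-python applies the model's chat template for us
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```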

# Model Description
This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated), trained for one epoch on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
The DPO fine-tuning successfully recovers the performance lost to the abliteration process, making it an excellent uncensored model.

## 🔎 Applications

In my tests, NeuralDaredevil-8B-abliterated performs better than Llama 3 8B Instruct.

You can use it for any application that doesn't require alignment, such as role-playing. It was tested in LM Studio using the "Llama 3" preset; see the sketch below for querying it through LM Studio's local server.
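
Since the card mentions LM Studio, here is a hedged sketch of querying the loaded GGUF through LM Studio's OpenAI-compatible local server (it listens on http://localhost:1234/v1 by default; the model name string below is an assumption, so use whatever identifier LM Studio displays for the loaded model):

```python
from openai import OpenAI

# LM Studio ignores the API key, but the client requires a non-empty string
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="neuraldaredevil-8b-abliterated",  # hypothetical local model name
    messages=[
        {"role": "user",
         "content": "Stay in character as a grumpy medieval innkeeper and greet a traveler."}
    ],
    temperature=0.7,
)
print(completion.choices[0].message.content)
```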

## 🏆 Evaluation

### Open LLM Leaderboard

NeuralDaredevil-8B-abliterated is the best-performing uncensored 8B model on the Open LLM Leaderboard (by MMLU score).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/HQtd51mJfVRhJ0lJFLceM.png)

### Nous

Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |

## 🌳 Model family tree

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralDaredevil-8B-abliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the message list into the Llama 3 chat prompt format
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with temperature / top-k / nucleus sampling
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
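
For reference, the `apply_chat_template` call above (with `add_generation_prompt=True`) expands the message list into Llama 3's special-token prompt format. Roughly, the resulting `prompt` string looks like this sketch:

```python
# Approximate contents of `prompt` for the single user message above
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "What is a large language model?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```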