Update README.md
README.md CHANGED
@@ -10,6 +10,36 @@ pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# mlabonne/Chimera-8B AWQ

- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [Chimera-8B](https://huggingface.co/mlabonne/Chimera-8B)

## Model Summary

This model was built with the DARE-TIES merge method. The full list of source models and the merging path is coming soon.

It merges the "thickest" model weights from Mistral models trained with methods such as direct preference optimization (DPO) and reinforcement learning.
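
As a rough sketch of what a DARE-TIES merge does (an illustration of the general technique, not this model's actual recipe, which hasn't been published yet): DARE randomly drops a fraction of each fine-tune's weight delta against the base model and rescales the survivors, then TIES elects a per-parameter sign and keeps only the deltas that agree with it. A toy NumPy version:

```python
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, drop_rate):
    """DARE: randomly Drop delta entries And REscale survivors by 1/(1 - p)."""
    keep = rng.random(delta.shape) >= drop_rate
    return delta * keep / (1.0 - drop_rate)

def dare_ties(base, deltas, drop_rate=0.5):
    """Sparsify each delta with DARE, elect a per-parameter sign (TIES),
    then average only the deltas that agree with the elected sign."""
    stacked = np.stack([dare(d, drop_rate) for d in deltas])
    sign = np.sign(stacked.sum(axis=0))            # elected sign per parameter
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    n = np.maximum(agree.sum(axis=0), 1)           # avoid division by zero
    return base + np.where(agree, stacked, 0.0).sum(axis=0) / n

# Toy example: a 4-parameter "layer" from a base model and two fine-tunes.
base = np.zeros(4)
deltas = [np.array([0.4, -0.2, 0.1, 0.0]), np.array([0.6, 0.3, -0.1, 0.2])]
print(dare_ties(base, deltas))
```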

I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers, and optimized code until I achieved the best possible results.

Thank you, OpenChat 3.5, for showing me the way.

Here is my contribution.

## Prompt Template

Replace {system} with your system prompt, and {prompt} with your prompt instruction.

```
### System:
{system}

### User:
{prompt}

### Assistant:
```
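
For completeness, here is a minimal sketch of running the quantized weights with AutoAWQ and transformers. The repo path is a placeholder (this card doesn't name the final AWQ repository), and the prompt simply fills the template above.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder path: point this at the AWQ repo or local directory for this model.
quant_path = "path/to/Chimera-8B-AWQ"

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Fill the prompt template from above.
prompt = (
    "### System:\n"
    "You are a helpful assistant.\n\n"
    "### User:\n"
    "Summarize the DARE-TIES merge method in one paragraph.\n\n"
    "### Assistant:\n"
)

tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()  # AWQ inference expects a GPU
output = model.generate(tokens, max_new_tokens=256)

# Decode only the newly generated continuation.
print(tokenizer.decode(output[0][tokens.shape[1]:], skip_special_tokens=True))
```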