Tristan Druyen committed
Commit e02625e
1 parent: c9f7730

Improve README.md

Files changed (1): README.md (+29 −0)
---
base_model: Nous-Hermes-2-Mixtral-8x7B-DPO
license: apache-2.0
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: qwp4w3hyb/Nous-Hermes-2-Mixtral-8x7B-DPO-iMat-GGUF
  results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
---

# Nous-Hermes-2-Mixtral-8x7B-DPO-iMat-GGUF

Source model: [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34).

The imatrix was generated from the f16 GGUF via this command:

    ./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat

The imatrix dataset comes from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
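The generated imatrix feeds into llama.cpp's quantization step via the `--imatrix` flag. A minimal sketch of that step follows; the input/output file names and the `IQ4_XS` target type are illustrative assumptions, not taken from this card:

```shell
# Sketch only: file names and the quant type are placeholders, not confirmed paths.
# llama.cpp's quantize tool accepts an importance matrix via --imatrix.
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
  $out_path/model-f16.gguf \
  $out_path/model-iq4_xs.gguf \
  IQ4_XS
```

The importance matrix tells the quantizer which weights matter most on the calibration data, which typically improves quality at low bit widths.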
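The base model was fine-tuned with ChatML formatting (per the `chatml` tag above). As a hedged sketch, a prompt for this GGUF could be assembled like so; the system and user strings are illustrative placeholders:

```shell
# Build a ChatML-formatted prompt (the template Nous-Hermes-2 models expect).
# The message contents here are illustrative placeholders.
build_chatml_prompt() {
  printf '<|im_start|>system\n%s<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n' "$1" "$2"
}

build_chatml_prompt "You are a helpful assistant." "Hello!"
```

The trailing open `<|im_start|>assistant` turn is what prompts the model to generate its reply.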