Tristan Druyen committed
Commit
53e466e
1 Parent(s): 868c7bd

Improve README.md

Files changed (1):
  1. README.md +31 -0
README.md CHANGED
@@ -1,3 +1,34 @@
 ---
+base_model: NousResearch/Hermes-2-Pro-Mistral-7B
+tags:
+- Mistral
+- instruct
+- finetune
+- chatml
+- DPO
+- RLHF
+- gpt4
+- synthetic data
+- distillation
+- function calling
+- json mode
+model-index:
+- name: Hermes-2-Pro-Mistral-7B-iMat-GGUF
+  results: []
 license: apache-2.0
+language:
+- en
+datasets:
+- teknium/OpenHermes-2.5
 ---
+
+# Hermes-2-Pro-Mistral-7B-iMat-GGUF
+
+Source Model: [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
+Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34)
+Imatrix was generated from the f16 gguf via this command:
+
+./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
+
+Using the dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
+
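The command above only produces the importance matrix; the quantization step itself is not shown in this commit. As a minimal sketch of how such an imatrix is typically consumed, assuming llama.cpp's quantize binary from the same checkout and reusing the variable names above (the output filename and the Q4_K_M type here are illustrative only):

./quantize --imatrix $out_path/imat-f16-gmerged.dat $out_path/$base_quant_name $out_path/hermes-2-pro-mistral-7b-imat.Q4_K_M.gguf Q4_K_M

The --imatrix data biases the quantizer's per-block scale search toward the weights that mattered most on the calibration text, which is what distinguishes these iMat quants from plain GGUF quants of the same type.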