---
base_model: CohereForAI/c4ai-command-r-plus
tags:
- cohere
- commandr
- instruct
- finetune
- function calling
- importance matrix
- imatrix
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
model-index:
- name: c4ai-command-r-plus-iMat-GGUF
  results: []
license: cc-by-nc-4.0
---

# CohereForAI/c4ai-command-r-plus GGUFs created with an importance matrix

Source Model: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5dc9dd7152dedc6046b646855585bd070c91e8c8](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8) (master from 2024-04-09).
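
To reproduce the setup, a minimal sketch of checking out and building llama.cpp at that commit; the Makefile-based CPU-only build shown here is an assumption, add the usual GPU flags for your hardware:

```sh
# Clone llama.cpp and pin it to the commit used for these quants
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 5dc9dd7152dedc6046b646855585bd070c91e8c8

# Build the tools (main, quantize, imatrix); CPU-only build shown here
make -j
```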

The imatrix was generated from the f16 GGUF with the following command:

`./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat`

The calibration dataset (groups_merged.txt) is from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
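
For reference, a minimal sketch of how such an importance matrix is typically applied when quantizing, plus a quick test of the result; the file names and the IQ4_XS quant type below are illustrative placeholders, not necessarily the exact ones used for the files in this repo:

```sh
# Quantize the f16 GGUF, using the importance matrix to guide the quantization
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $out_path/c4ai-command-r-plus-f16.gguf \
    $out_path/c4ai-command-r-plus-IQ4_XS.gguf IQ4_XS

# Quick smoke test of the quantized model
./main -m $out_path/c4ai-command-r-plus-IQ4_XS.gguf -p "Hello, how are you?" -n 64
```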