---
base_model: AetherResearch/Cerebrum-1.0-8x7b
tags:
- Mixtral
- instruct
- finetune
- imatrix
model-index:
- name: Cerebrum-1.0-8x7b-iMat-GGUF
results: []
license: apache-2.0
---
# Cerebrum-1.0-8x7b-iMat-GGUF
Source Model: [AetherResearch/Cerebrum-1.0-8x7b](https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b)
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34)
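
For reproducibility, the pinned commit can be checked out and built before running the commands below. This is a minimal sketch and not part of the original card:

```bash
# Clone llama.cpp and check out the exact commit used for this quantization
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 46acb3676718b983157058aecf729a2064fc7d34

# Build the default targets, which at this commit include the imatrix and quantize tools
make
```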
The importance matrix (imatrix) was generated from the f16 GGUF with the following command:

`./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat`
The `groups_merged.txt` calibration dataset is taken from [this llama.cpp discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
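
The resulting imatrix file is then passed to llama.cpp's `quantize` tool when producing each quantized GGUF. The quant type and output file name below are illustrative placeholders, not the exact quants published in this repo:

```bash
# Apply the importance matrix while quantizing the f16 GGUF
# (IQ4_XS and the output name are placeholders; repeat per desired quant type)
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $out_path/$base_quant_name \
    $out_path/cerebrum-1.0-8x7b-IQ4_XS.gguf IQ4_XS
```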