---
base_model: MaziyarPanahi/calme-2.1-llama3.1-70b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
language:
- en
library_name: transformers
model_creator: MaziyarPanahi
model_name: calme-2.1-llama3.1-70b
quantized_by: mradermacher
tags:
- chat
- llama
- facebook
- llama3
- finetune
- chatml
---
|
## About |
|
|
|
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type:  -->
<!-- ### tags: nicoboss -->
|
weighted/imatrix quants of https://huggingface.co/MaziyarPanahi/calme-2.1-llama3.1-70b |
|
|
|
<!-- provided-files --> |
|
static quants are available at https://huggingface.co/mradermacher/calme-2.1-llama3.1-70b-GGUF |
|
## Usage |
|
|
|
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
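
As a concrete starting point, here is a minimal sketch of loading one of these quants with the `llama-cpp-python` bindings (an assumption on my part; any llama.cpp-based runtime works, and the file name is just the Q4_K_S quant from the table below):

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and enough
# RAM/VRAM for a ~40 GB 70B quant. The file name matches the Q4_K_S
# row in the quant table below.
from llama_cpp import Llama

llm = Llama(
    model_path="calme-2.1-llama3.1-70b.i1-Q4_K_S.gguf",
    n_ctx=8192,       # context window; lower this if you run out of memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain imatrix quantization."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```

Multi-part quants (files split into pieces such as `*.part1of2`) must first be joined byte-for-byte into a single `.gguf` before loading; a sketch of that, with hypothetical file names:

```python
# Sketch: byte-wise concatenation of a split quant (hypothetical names).
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```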
|
|
|
## Provided Quants |
|
|
|
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
|
|
|
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calme-2.1-llama3.1-70b-i1-GGUF/resolve/main/calme-2.1-llama3.1-70b.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/calme-2.1-llama3.1-70b-i1-GGUF/resolve/main/calme-2.1-llama3.1-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
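
If you would rather fetch a quant programmatically than through the links above, here is a small sketch using the `huggingface_hub` client (the file name is taken from the Q4_K_S row; swap in whichever quant you want):

```python
# Sketch: download a single quant file from this repo
# (assumes `pip install huggingface_hub`).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/calme-2.1-llama3.1-70b-i1-GGUF",
    filename="calme-2.1-llama3.1-70b.i1-Q4_K_S.gguf",
)
print(path)  # local path to the cached GGUF file
```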
|
|
|
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
|
|
|
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) |
|
|
|
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
|
|
|
## FAQ / Model Request |
|
|
|
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting quants of other models.
|
|
|
## Thanks |
|
|
|
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
|
|
|
<!-- end --> |
|
|