---
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
pipeline_tag: text-generation
base_model: Replete-AI/Llama-3-11.5B-Instruct-V2
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- instruct
- finetune
- frankenmerge
- merge
- gguf
- imatrix
- importance matrix
model-index:
- name: Llama-3-11.5B-Instruct-V2-iMat-GGUF
results: []
---
# Quant Infos
- quants done with an importance matrix to reduce quantization loss
- GGUFs & imatrix generated from the bf16 weights for minimal accuracy loss
- Wide coverage of GGUF quant types from Q8\_0 down to IQ1\_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [fabf30b4c4fca32e116009527180c252919ca922](https://github.com/ggerganov/llama.cpp/commit/fabf30b4c4fca32e116009527180c252919ca922) (master as of 2024-05-20)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
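For reference, a minimal sketch of the follow-up quantization step, assuming the `quantize` binary from the same llama.cpp commit and the imatrix file produced above; the target quant type and file names are illustrative:
```
# Illustrative: quantize the f16 GGUF using the imatrix generated above.
# IQ4_XS is only an example target; the repo covers types from Q8_0 down to IQ1_S.
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $model_name-f16.gguf $model_name-IQ4_XS.gguf IQ4_XS
```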
# Original Model Card:
## Llama-3-11.5B-v2
Thank you to Meta for the weights of Meta-Llama-3-8B.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png)
This is an upscaling of Meta-Llama-3-8B using techniques created for chargoddard/mistral-11b-slimorca. The model has been upscaled from 8B to 11.5B parameters without any continued pretraining or fine-tuning.
Unlike version 1, this model has no issues at fp16 or at any quantization level.
The model that was used to create this one is linked below:
https://huggingface.co/meta-llama/Meta-Llama-3-8B
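For illustration, this style of upscaling ("frankenmerge") is typically done with a mergekit passthrough config that duplicates a contiguous block of transformer layers. The sketch below is a hypothetical example in the spirit of chargoddard/mistral-11b-slimorca; the layer ranges and output path are assumptions, not the published recipe for this model:
```
# Hypothetical mergekit passthrough config; layer ranges are illustrative only.
cat > upscale.yml <<'EOF'
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [0, 24]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
EOF
mergekit-yaml upscale.yml ./Llama-3-11.5B-v2-merge
```
Duplicating 16 of the 32 layers yields a 48-layer model, which roughly accounts for the jump from 8B to 11.5B parameters.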