---
license: llama3
language:
- de
library_name: transformers
---
# Llama3_DiscoLeo_8B_DARE_Experimental_4bit_awq_glc
This model is a 4-bit quantization of [DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental),
created with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using a custom bilingual calibration dataset and `quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}`.
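For reference, here is a minimal sketch of how such a quantization can be produced with AutoAWQ. The actual bilingual calibration set used for this model is not published, so `calib_samples` below is a hypothetical stand-in, and the output path is a placeholder:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental"
quant_path = "./Llama3_DiscoLeo_8B_DARE_Experimental_4bit_awq"  # placeholder output dir
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Hypothetical stand-in for the custom bilingual calibration data;
# AutoAWQ accepts a list of raw text samples via `calib_data`.
# In practice you would use several hundred representative samples.
calib_samples = [
    "Die Hauptstadt von Deutschland ist Berlin.",
    "The quick brown fox jumps over the lazy dog.",
]

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantize the weights to 4 bit.
model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_samples)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```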
## Original Model Card
The following is a copy of the original model card:
[DiscoResearch/Llama3_German_8B_v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) is a large language model based on [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). It is specialized for the German language through continuous pretraining on 65 billion high-quality tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
This model is a merge of our instruct model with Meta's Llama-3-8B-Instruct model, created using [mergekit](https://github.com/cg123/mergekit). Contributed by [Damian B.](https://huggingface.co/damianb23)!
## Merge Details
### Merge Method
This model was merged with the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as the base model.
### Models Merged
The following models were included in the merge:
* [DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
```
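For reproducibility, here is a minimal sketch of applying such a configuration via mergekit's Python API (equivalently, the `mergekit-yaml` CLI can be pointed at the saved YAML file); the file paths below are placeholders:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./dare_ties_config.yaml"  # the YAML shown above, saved to disk
OUTPUT_PATH = "./merged"                # placeholder output directory

# Parse and validate the merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if available
        copy_tokenizer=True,             # copy the tokenizer into the output
    ),
)
```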