---
base_model:
- Casual-Autopsy/Llama-3-Yollow-SCE
- lmg-anon/vntl-llama3-8b-v2-qlora
library_name: transformers
tags:
- mergekit
- merge
language:
- en
- ja
pipeline_tag: translation
---

**Disclaimer:** Set the logit bias for `<|eot_id|>` to `5` and Top K to `1`.

This model uses the vntl-llama3-8b-v2 instruct formatting and prompting.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Casual-Autopsy/Llama-3-Yollow-SCE](https://huggingface.co/Casual-Autopsy/Llama-3-Yollow-SCE) as the base model.

### Models Merged

The following models were included in the merge:
* [Casual-Autopsy/Llama-3-Yollow-SCE](https://huggingface.co/Casual-Autopsy/Llama-3-Yollow-SCE) + [lmg-anon/vntl-llama3-8b-v2-qlora](https://huggingface.co/lmg-anon/vntl-llama3-8b-v2-qlora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Casual-Autopsy/Llama-3-Yollow-SCE+lmg-anon/vntl-llama3-8b-v2-qlora
    parameters:
      density: 0.85
      weight: 0.5
merge_method: ties
base_model: Casual-Autopsy/Llama-3-Yollow-SCE
parameters:
  normalize: false
dtype: bfloat16
```
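
The disclaimer above recommends a `+5` logit bias on `<|eot_id|>` and Top K of `1`. Below is a minimal, untested sketch of applying those settings with the 🤗 transformers `generate()` API on a recent library version; the `model_id` placeholder and the prompt string are hypothetical, and a real prompt should follow the vntl-llama3-8b-v2 instruct format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; substitute the actual id of this merged model.
model_id = "your-namespace/this-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Token id of <|eot_id|>, which the card suggests biasing upward.
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")

# Placeholder prompt; build this with the vntl-llama3-8b-v2 instruct format.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,                  # Top K of 1 is effectively greedy decoding
    sequence_bias={(eot_id,): 5.0},   # +5 logit bias on <|eot_id|>
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Frontends that expose a logit-bias field (for example, OpenAI-compatible servers for llama.cpp builds of this model) can apply the same `<|eot_id|>` bias and Top K setting directly in their sampling options.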