---
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- NousResearch/Meta-Llama-3-8B
- winglian/llama-3-8b-256k-PoSE
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- asiansoul/Llama-3-Open-Ko-Linear-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- nvidia/Llama3-ChatQA-1.5-8B
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- aaditya/Llama3-OpenBioLLM-8B
library_name: transformers
tags:
- mergekit
- merge
- llama
---

# YACHT-Llama-3-Ko-8B

[![DALL-E Yacht](https://i.ibb.co/hHr5xnh/DALL-E-2024-05-05-11-57-02-A-futuristic-yacht-boat-on-a-calm-ocean-at-dawn-featuring-sleek-curves-an.png)](https://ibb.co/92BXmfz)

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.

### Models Merged

The following models were included in the merge:

* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [winglian/llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation; no parameters needed
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: winglian/llama-3-8b-256k-PoSE
    parameters:
      density: 0.55
      weight: 0.15
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.2
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.1

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
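### Usage

Below is a minimal sketch of loading and prompting the merged model with `transformers`, the library this card declares. The repo id and the `<|eot_id|>` stop token are assumptions: point the repo id at wherever the merged weights are actually hosted, and adjust the stop tokens if the merged tokenizer config differs from stock Llama-3.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual merged checkpoint.
model_id = "YACHT-Llama-3-Ko-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was produced in
    device_map="auto",
)

# A Korean prompt, since the merge pulls in Korean-tuned models
# (Llama-3-Open-Ko, Llama-3-MAAL). It asks what to consider when
# planning a yacht trip.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "요트 여행을 계획할 때 고려할 점을 알려 주세요."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    # Llama-3 chat models emit <|eot_id|> at the end of each turn;
    # this assumes the merged tokenizer kept that special token.
    eos_token_id=[
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    ],
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

To reproduce the merge itself, the YAML configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./yacht-llama-3-ko-8b`).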