---
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- NousResearch/Meta-Llama-3-8B
- winglian/llama-3-8b-256k-PoSE
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- asiansoul/Llama-3-Open-Ko-Linear-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- nvidia/Llama3-ChatQA-1.5-8B
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- aaditya/Llama3-OpenBioLLM-8B
library_name: transformers
tags:
- mergekit
- merge
- llama
---

# YACHT-Llama-3-Ko-8B

[![DALL-E Yacht](https://i.ibb.co/hHr5xnh/DALL-E-2024-05-05-11-57-02-A-futuristic-yacht-boat-on-a-calm-ocean-at-dawn-featuring-sleek-curves-an.png)](https://ibb.co/92BXmfz)

🎵 *[JayLee LLMs Signature Tag] : ✍️ "I need a Jay Jay chat boy"* 🎵

✨ *Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs* ✨

## 🏟️ Merged Model Series Yacht Features

Welcome aboard the merged model series yacht! This section gives an overview of the features and functionality the series brings together, akin to a sleek, modern yacht sailing across the digital ocean.

### 1. Function Calling & JSON Outputs

- Offers precise function calling and structured JSON outputs via specialized tokens such as `<tools>`, `<tool_call>`, and `<tool_response>`. Streamlines system communication for developers.

### 2. Conversational Interaction

- Avoids excessive "SYSTEM MESSAGE" chatter while delivering seamless, friendly dialogue.
- Specializes in answering questions with precision, handling arithmetic and tabular data effortlessly.

### 3. Expanded Context Length

- Extends the context length to 256k tokens using PoSE, offering a broader field of data analysis.

### 4. Multilingual Capabilities

- Transfers instruction-following from English to Korean for reliable interaction across languages.

### 5. Optimized Dialogue & Safety

- Aligns with human preferences using supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), ensuring helpful and safe dialogues.

### 6. Precision Merging

- Merges foundational and preview models for the Korean language through task arithmetic, providing seamless integration.

### 7. Specialized Biomedical Knowledge

- Specializes in biomedical tasks, providing accurate responses for healthcare professionals and researchers.

### 8. Novel Training & Collaboration

- Combines the [ORPO method](https://arxiv.org/pdf/2403.07691) and dolphin preference datasets for high-quality conversation and collaboration.

The merged model series yacht offers unparalleled functionality, drawing together a fleet of specialized models. Whether you need precise function calling, multilingual capabilities, or conversational AI, this yacht has every deck optimized to navigate the digital ocean with style and precision.

## 👘 Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
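For intuition about what `dare_ties` does, here is a minimal, illustrative sketch (not the actual mergekit implementation) of DARE's drop-and-rescale step followed by a TIES-style sign election, applied to flattened parameter vectors. The `densities` and `weights` arguments mirror the `density` and `weight` values in the configuration further down.

```python
import numpy as np

def dare_ties_merge(base, finetuned, densities, weights, rng=None):
    """Toy illustration of DARE + TIES merging on flat parameter vectors.

    base:       1-D array of base-model parameters
    finetuned:  list of 1-D arrays, one per fine-tuned model
    densities:  fraction of each delta to keep (mergekit's `density`)
    weights:    per-model merge weights (mergekit's `weight`)
    """
    rng = rng or np.random.default_rng(0)
    deltas = []
    for ft, d in zip(finetuned, densities):
        delta = ft - base                       # task vector of this fine-tune
        mask = rng.random(delta.shape) < d      # DARE: randomly keep `density` of entries
        delta = np.where(mask, delta, 0.0) / d  # rescale survivors by 1/density
        deltas.append(delta)

    deltas = np.stack(deltas)
    w = np.asarray(weights, dtype=float)[:, None]

    # TIES-style sign election: keep only entries agreeing with the dominant
    # (weight-summed) sign at each position, then take their weighted mean.
    elected_sign = np.sign((w * deltas).sum(axis=0))
    agree = np.sign(deltas) == elected_sign
    weighted = np.where(agree, w * deltas, 0.0).sum(axis=0)
    norm = np.where(agree, w, 0.0).sum(axis=0)
    merged_delta = np.divide(weighted, norm, out=np.zeros_like(weighted), where=norm != 0)
    return base + merged_delta

# Toy usage with random 1-D "models"
base = np.zeros(16)
fts = [base + np.random.randn(16) * 0.1 for _ in range(3)]
merged = dare_ties_merge(base, fts, densities=[0.55] * 3, weights=[0.2, 0.1, 0.1])
```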
## 🩱 Models Merged

The following models were included in the merge:

* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [winglian/llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)

## 🪭 Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B  # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: winglian/llama-3-8b-256k-PoSE
    parameters:
      density: 0.55
      weight: 0.15
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.2
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.1

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
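## ⛵ Usage Example

The configuration above can typically be reproduced with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./output-dir`). Below is a minimal usage sketch with 🤗 Transformers: the repository id is a placeholder for wherever this merge is published, and it assumes the shipped tokenizer includes a Llama-3 chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hugging Face path of this merge.
model_id = "your-namespace/YACHT-Llama-3-Ko-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # Korean prompt: "I'm planning a yacht trip. Recommend a one-day itinerary."
    {"role": "user", "content": "요트 여행을 계획 중이야. 하루 일정 코스를 추천해줘."},
]

# Assumes a Llama-3 chat template is present; otherwise format the prompt manually.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```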