---
license: llama3
---
# Llama3-Prime
This [Llama 3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model is a merge of pretrained Llama 3 language models that were each optimized for user preference. As a result, the merged model should be strong at giving relevant answers to user queries; usability is prioritized here over benchmark scores.

- Input: text only
- Output: text only
- Prompt format: Llama 3 (see the usage sketch after this list)
- Language: English
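
Because the model uses the Llama 3 Instruct prompt format, the tokenizer's chat template can build prompts for you. Below is a minimal inference sketch with Hugging Face `transformers`; the repository id `agentlans/Llama3-Prime` and the generation settings are assumptions, so adjust them for your setup:

```python
# Minimal inference sketch; model id and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentlans/Llama3-Prime"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The chat template applies the Llama 3 Instruct prompt format automatically.
messages = [{"role": "user", "content": "Explain model merging in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```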
## Merge Details

This model was created by merging multiple models with equal weights using [MergeKit's](https://github.com/arcee-ai/mergekit) `model_stock` method (a representative configuration is sketched after the model list).

Base model: [Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B)

Models used:

- [Llama-3-Instruct-8B-SimPO-ExPO](https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO)
- [Llama-3-8B-Magpie-Pro-SFT-v0.1](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1)
- [SELM-Llama-3-8B-Instruct-iter-3](https://huggingface.co/ZhangShenao/SELM-Llama-3-8B-Instruct-iter-3)
- [LLaMA3-iterative-DPO-final-ExPO](https://huggingface.co/chujiezheng/LLaMA3-iterative-DPO-final-ExPO)
- [Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
- [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus)
- [Bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)
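
For reference, a mergekit `model_stock` configuration for this merge would look roughly like the following. This is a reconstruction from the description above, not the exact file used to produce the model:

```yaml
# Reconstructed model_stock merge config (illustrative, not the original file).
merge_method: model_stock
base_model: mlabonne/Daredevil-8B
models:
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
  - model: Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1
  - model: ZhangShenao/SELM-Llama-3-8B-Instruct-iter-3
  - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
  - model: jondurbin/bagel-8b-v1.0
dtype: bfloat16  # assumed precision
```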
## Training Details

After merging, the model was lightly fine-tuned with [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) on the `alpaca_en_demo` dataset to ensure it responds in the Llama 3 Instruct format. The training used a LoRA rank of 1, an alpha of 1, and a dropout rate of 0.3: deliberately weak training, chosen so the fine-tuning would not interfere with the merged model's capabilities.
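
A LLaMA Factory SFT configuration matching these settings might look like the sketch below. Only the LoRA rank, alpha, dropout, and dataset come from this card; the paths, batch size, learning rate, and epoch count are assumptions:

```yaml
# Illustrative LLaMA Factory LoRA SFT config. Only lora_rank, lora_alpha,
# lora_dropout, and dataset are taken from this card; the rest is assumed.
model_name_or_path: path/to/merged-model  # hypothetical path to the merge output
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 1
lora_alpha: 1
lora_dropout: 0.3
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024
output_dir: saves/llama3-prime-lora  # hypothetical output directory
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 1.0
```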