---
license: apache-2.0
---

# etri-xainlp/llama2-13b-lima-sft-dpo

## Model Details

**Model Developers**  ETRI xainlp team

**Input**  text only.

**Output**  text only.

**Model Architecture**  An auto-regressive language model based on the LLaMA-2 transformer architecture.

**Base Model**  [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)

**Training Dataset**

- full SFT: 650k instruction-following examples
- LIMA SFT: 280k instruction-following examples
- DPO + LoRA: 90k user-preference examples (a rough sketch of this stage appears below)
- Training was performed on 7 × A100 80GB GPUs.
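
## Usage sketch

The card above describes the model at a high level; the snippet below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id `etri-xainlp/llama2-13b-lima-sft-dpo` and loads with the standard `transformers` causal-LM classes. The prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-13b-lima-sft-dpo"

# Load the tokenizer and the 13B model; fp16 keeps it within a single 80GB GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Text in, text out: build a prompt, generate, and decode only the new tokens.
prompt = "Explain the difference between supervised fine-tuning and DPO in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

## DPO + LoRA stage (rough sketch)

The DPO + LoRA stage listed above is not detailed in this card; the following is only a rough sketch of how such a stage is commonly set up with the `trl` and `peft` libraries (keyword arguments differ across `trl` versions). The dataset records, hyperparameters, and output path are illustrative, not the team's actual settings.

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# A user-preference set pairs each prompt with a preferred and a rejected answer.
pref_data = Dataset.from_list([
    {
        "prompt": "Summarize the plot of Hamlet in one sentence.",
        "chosen": "A Danish prince avenges his father's murder at great personal cost.",
        "rejected": "Hamlet is a play.",
    },
])

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a LoRA adapter, the frozen base weights act as the reference policy
    args=TrainingArguments(
        output_dir="llama2-13b-dpo-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=5e-6,
        bf16=True,
    ),
    beta=0.1,  # strength of the preference regularization toward the reference policy
    train_dataset=pref_data,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
trainer.train()
```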