etri-xainlp/llama2-13b-sft-dpo

Model Details

Model Developers ETRI xainlp team

Input: text only.

Output: text only.

Model Architecture

Base Model meta-llama/Llama-13b-hf
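Below is a minimal loading and generation sketch using the transformers library. The repository id is taken from this card's title; the prompt and generation parameters are illustrative assumptions, not settings published by the team.

```python
# Minimal inference sketch for etri-xainlp/llama2-13b-sft-dpo (assumed usage,
# not official example code from the model developers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-13b-sft-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists FP16 weights
    device_map="auto",
)

prompt = "Explain the difference between SFT and DPO in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```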

Training Dataset

  • Full SFT: 650k instruction-following examples

  • DPO + LoRA: 90k user-preference pairs (see the training sketch after this list)

  • Training was run on 8 × A100 80GB GPUs.
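As a rough illustration of the second stage, here is a minimal DPO + LoRA sketch using the trl and peft libraries. This is not the team's actual training code: the SFT checkpoint path, the preference-data file, and all hyperparameters are assumptions, and DPOTrainer's argument names vary across trl versions.

```python
# Hypothetical DPO + LoRA fine-tuning sketch (assumed pipeline, not the
# ETRI xainlp team's published training script).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_checkpoint = "path/to/sft-checkpoint"  # placeholder for the 650k-example SFT output
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# Preference data with "prompt", "chosen", and "rejected" columns (placeholder path).
dataset = load_dataset("json", data_files="preference_90k.jsonl", split="train")

# LoRA adapters so only a small set of weights is updated during DPO.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="llama2-13b-dpo",
    beta=0.1,                      # weight of the implicit KL-style penalty in the DPO loss
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,           # newer trl releases take processing_class= instead
    peft_config=peft_config,
)
trainer.train()
```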

