
etri-xainlp/SOLAR-10.7B-merge-dpo_v1

Model Details

Model Developers ETRI xainlp team

Input: text only.

Output: text only.

Model Architecture

We used MergeKit to merge heavytail/kullm-solar into etri-xainlp/SOLAR-10.7B-merge-dpo, which served as the base model.

Base Model: etri-xainlp/SOLAR-10.7B-merge-dpo

Merge Model: davidkim205/komt-solar-10.7b-sft-v5
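
The card does not publish the actual MergeKit configuration, so the following is only an illustrative sketch of what such a merge config looks like; the merge method (`slerp`), interpolation factor, and dtype shown here are assumptions, not the values used for this model.

```yaml
# Hypothetical MergeKit config sketch -- method and parameters are assumed,
# not taken from this model card.
merge_method: slerp
base_model: etri-xainlp/SOLAR-10.7B-merge-dpo
models:
  - model: etri-xainlp/SOLAR-10.7B-merge-dpo
  - model: heavytail/kullm-solar
parameters:
  t: 0.5          # interpolation factor between the two models (assumed)
dtype: float16
```

A config like this would be run with MergeKit's `mergekit-yaml` CLI to produce the merged checkpoint.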

Training Dataset

  • DPO + LoRA: a 100k user-preference dataset

  • Training was performed on 8 × A100 80GB GPUs.
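
For reference, preference-pair training of this kind typically optimizes the standard DPO objective (the card does not state the exact loss or β used, so this is the textbook form, not a confirmed detail of this model's training):

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses for prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen reference policy, and $\beta$ controls the strength of the KL-style regularization toward the reference model.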

Format: Safetensors

Model size: 10.7B params

Tensor type: FP16
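
The parameter count and dtype above give a quick lower bound on the GPU memory needed just to hold the weights. This is back-of-the-envelope arithmetic only; real usage adds activations, the KV cache, and framework overhead.

```python
# Rough memory estimate for the 10.7B-parameter model stored in FP16.
params = 10.7e9          # parameter count from the model card
bytes_per_param = 2      # FP16 = 2 bytes per parameter
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.1f} GB for the weights alone")  # ~21.4 GB
```

So a single 80GB A100 holds the FP16 weights comfortably, while consumer GPUs would need quantization or multi-GPU sharding.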