
etri-xainlp/SOLAR-10.7B-merge-dpo_v1

Model Details

Model Developers ETRI xainlp team

Input: text only.

Output: text only.

Model Architecture

We used MergeKit to merge the model heavytail/kullm-solar into etri-xainlp/SOLAR-10.7B-merge-dpo, which served as the base.

Base Model etri-xainlp/SOLAR-10.7B-merge-dpo

Merge Model davidkim205/komt-solar-10.7b-sft-v5
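The card states only that MergeKit was used with the base and merge models above; the merge method, layer ranges, and interpolation weights are not published. A hypothetical MergeKit config of the kind typically used for two SOLAR-sized models might look like this (the `slerp` method, 48-layer range, and `t: 0.5` are illustrative assumptions, not the team's actual settings):

```yaml
# Illustrative MergeKit config (assumed values; the actual merge
# method and parameters used by the team are not documented).
slices:
  - sources:
      - model: etri-xainlp/SOLAR-10.7B-merge-dpo
        layer_range: [0, 48]
      - model: heavytail/kullm-solar
        layer_range: [0, 48]
merge_method: slerp
base_model: etri-xainlp/SOLAR-10.7B-merge-dpo
parameters:
  t: 0.5   # interpolation factor between the two models (assumed)
dtype: float16
```

Running `mergekit-yaml config.yml ./output-dir` with such a file would produce the merged checkpoint.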

Training Dataset

  • DPO + LoRA: a 100k-sample user preference set

  • Training was performed on 8 × A100 80GB GPUs.
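The card says the model was trained with DPO plus LoRA on preference pairs. As a minimal sketch of the objective involved (not the team's actual training code, which is not published), the per-pair DPO loss compares how much the policy model prefers the chosen response over the rejected one, relative to a frozen reference model; `beta` is a hypothetical temperature value:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of a response under
    the policy or the frozen reference model. beta=0.1 is an assumed
    value; the card does not state the hyperparameters used.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(x)) = log(1 + e^{-x}), computed stably with log1p
    return math.log1p(math.exp(-logits))
```

When the policy has not moved away from the reference, the margins cancel and the loss sits at log 2; it decreases as the policy assigns relatively more probability to the chosen response. In practice this objective is applied to a LoRA-adapted model (e.g. via the `trl` and `peft` libraries) rather than full fine-tuning.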

Model size: 10.7B parameters (Safetensors, FP16)
