# Model Card for Deita 7B V1.0

Deita is an open-source project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).

Deita 7B V1.0 is a fine-tuned + DPO version of Mistral-7B-v0.1. It was trained on **6K** automatically selected, lightweight, high-quality alignment SFT examples ([Deita 6K V0](https://huggingface.co/datasets/hkust-nlp/deita-6k-v0)) and **10K** alignment preference examples randomly sampled from Ultrafeedback.

## Model description