Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ license: apple-ascl
 <img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="800" style="margin-left:auto; margin-right:auto; display:block;"/>
 
 
-# Model Card for DCLM-IT
+# Model Card for DCLM-7B-IT
 
 DCLM-IT-7B is a 7 billion parameter language model trained on the DCLM-Baseline dataset and then further finetuned on our DCLM-IT finetuning mixture. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
 