---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
metrics:
pipeline_tag: text-generation
---

**MiniPLM-Qwen-200M** is a 200M-parameter model with the Qwen architecture, pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework, with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
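
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model should load through the standard `transformers` causal-LM API. A minimal sketch, assuming the hub id `MiniLLM/MiniPLM-Qwen-200M` (inferred from the model name, not stated in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed from the model name; adjust if the repository path differs.
model_id = "MiniLLM/MiniPLM-Qwen-200M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# MiniPLM-Qwen-200M is a base (non-chat) LM, so prompt it with raw text
# rather than a chat template.
inputs = tokenizer("The Pile is a large-scale dataset for", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```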

We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
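
The refined corpus is a regular dataset repository, so it should be loadable with the `datasets` library. A sketch; the `train` split name and streaming mode are assumptions, not details documented in this card:

```python
from datasets import load_dataset

# Stream to avoid downloading the full refined Pile at once.
ds = load_dataset(
    "MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
    split="train",   # split name assumed
    streaming=True,
)

# Peek at one example from the Difference-Sampling-refined corpus.
print(next(iter(ds)))
```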

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/2BqT0NgkmIXYlktovw9kG.png" width="1000">
</p>

MiniPLM models achieve better performance given the same computation and scale well across model sizes.

## Baseline Models

- [Conventional Pre-Training](https://huggingface.co/MiniLLM/Pretrain-Qwen-200M)
- [VanillaKD](https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-200M)

## Citation

TODO