afaji committed
Commit 8f17355
Parent: 6a43f3e

auto-rename

Files changed (1):
1. README.md (+9 -9)
README.md CHANGED
@@ -24,22 +24,22 @@ widget:
 should probably proofread and complete it, then remove this comment. -->
 
 <p align="center" width="100%">
- <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini/main/images/LaMnin.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
+ <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
 </p>
 
 # LaMini-GPT-1.5B
 
 [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]()
 
- This model is one of our LaMini model series in paper "[LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini)".
- This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini/).
- You can view other LaMini model series as follow. Note that not all models are performing as well. Models with ✩ are those with the best overall performance given their size/architecture. More details can be seen in our paper.
+ This model is one of our LaMini-LM model series, introduced in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
+ It is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
+ You can view the other models in the LaMini-LM series below. Note that not all models perform equally well; models marked with ✩ have the best overall performance for their size/architecture. More details can be found in our paper.
 
 <table>
 <thead>
 <tr>
 <th>Base model</th>
- <th colspan="4">LaMini series (#parameters)</th>
+ <th colspan="4">LaMini-LM series (#parameters)</th>
 </tr>
 </thead>
 <tbody>
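The renamed card above describes LaMini-GPT-1.5B as an instruction-following model obtained by fine-tuning gpt2-xl. A minimal generation sketch is shown below; the Hub model ID `MBZUAI/LaMini-GPT-1.5B` and the Alpaca-style instruction wrapper are assumptions based on the LaMini-GPT model cards rather than text quoted in this diff.

```python
# Minimal generation sketch (not quoted from the card itself).
# Assumptions: the checkpoint is published as "MBZUAI/LaMini-GPT-1.5B" and
# expects the Alpaca-style instruction wrapper used by the LaMini-GPT cards.
from transformers import pipeline

generator = pipeline("text-generation", model="MBZUAI/LaMini-GPT-1.5B")

instruction = "Explain, in one sentence, what instruction fine-tuning is."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

generated_text = generator(prompt, max_length=512, do_sample=True)[0]["generated_text"]
print("Response:", generated_text)
```

Per the license badge in the card, the model is released under CC BY-NC 4.0, so usage along these lines is limited to non-commercial purposes.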
@@ -121,10 +121,10 @@ print("Response": generated_text)
 ## Training Procedure
 
 <p align="center" width="100%">
- <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
+ <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
 </p>
 
- We initialize with [gpt2-xl](https://huggingface.co/gpt2-xl) and fine-tune it on our [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 1.5B.
+ We initialize with [gpt2-xl](https://huggingface.co/gpt2-xl) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 1.5B.
 
 ### Training Hyperparameters
 
@@ -142,8 +142,8 @@ More information needed
 
 ```bibtex
 @misc{lamini,
- title={LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions},
- author={},
+ title={LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
+ author={Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji},
 year={2023},
 publisher = {GitHub},
 journal = {GitHub repository},
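The training-procedure hunk above states that training initializes from gpt2-xl and fine-tunes on the 2.58M-sample LaMini-instruction dataset. A short sketch of that starting point follows, assuming the standard `transformers` and `datasets` APIs; the dataset's column layout is not shown in this diff, so the code only loads and inspects it.

```python
# Sketch of the starting point described in the training-procedure section:
# load gpt2-xl (roughly 1.5B parameters) and the LaMini-instruction dataset.
# The dataset schema is an assumption, so we only print an example row here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")
print(f"Base model parameters: {model.num_parameters() / 1e9:.2f}B")  # about 1.5B

dataset = load_dataset("MBZUAI/LaMini-instruction", split="train")
print(dataset)      # should report about 2.58M rows
print(dataset[0])   # one instruction/response example
```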
 