chiyuzhang committed on
Commit d502f23
1 Parent(s): c54d873

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -23,7 +23,7 @@ should probably proofread and complete it, then remove this comment. -->
  [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
  [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
 
- This model is one of our LaMini model series in paper "[LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions]()". This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on [LaMini dataset]() that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini/).
+ This model is one of our LaMini model series in the paper "[LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions]()". This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini/).
  You can view other LaMini model series as follows. Note that not all models perform equally well; more details can be found in our paper.
 
  <table>
@@ -86,7 +86,7 @@ You can view other LaMini model series as follows. Note that not all models p
  ## Use
 
  ### Intended use
- We recommend to use model to reponse to human instructions wrote in natural language.
+ We recommend using the model to respond to human instructions written in natural language.
 
  We now show you how to load and use our model using the HuggingFace `pipeline()` API.
 
@@ -105,7 +105,7 @@ print("Response", generated_text)
  ```
 
  ## Training Procedure
- We initialize with [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) and fine-tune it on our [LaMini dataset](). Its total number of parameters is 61M.
+ We initialize with [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) and fine-tune it on our [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 61M.
 
  ### Training Hyperparameters
 
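Only the closing fence of the README's quickstart snippet is visible in this diff. For orientation, below is a minimal sketch of the kind of HuggingFace `pipeline()` usage the card describes, assuming the standard `transformers` text2text API; the model id and generation settings are placeholder assumptions, not taken from this commit.

```python
# Minimal sketch of loading a LaMini checkpoint with the transformers pipeline.
# NOTE: the model id below is a placeholder assumption; substitute the actual
# repository id of this model card.
from transformers import pipeline

generator = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-61M")

instruction = "What are the steps to make a cup of tea?"
output = generator(instruction, max_length=512)
generated_text = output[0]["generated_text"]
print("Response", generated_text)
```

`text2text-generation` is the standard pipeline task for T5-family encoder-decoder checkpoints such as flan-t5-small.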
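The Training Procedure section states only that training starts from the flan-t5-small checkpoint. A generic sketch of that initialization, assuming the standard `transformers` API, is shown below; this is not the authors' training code, and data loading and hyperparameters are omitted.

```python
# Generic sketch: initialize from google/flan-t5-small prior to instruction
# fine-tuning. Not the authors' actual training code.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Inspect the total parameter count of the initialized model.
n_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {n_params / 1e6:.0f}M")
```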