Text Generation
Transformers
English
alpaca
bloom
LLM
mrm8488 committed
Commit 48bed71
1 Parent(s): b236b75

Update README.md

Files changed (1): README.md +5 -3
README.md CHANGED
@@ -17,10 +17,12 @@ tags:
 This adapter was created with the [PEFT](https://github.com/huggingface/peft) library, allowing the base model **BigScience/BLOOM 7B1** to be fine-tuned on **Stanford's Alpaca dataset** using the **LoRA** method.
 
 ## Model Description
- [BERTIN-GPT-J-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) is a Spanish finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
+ BigScience Large Open-science Open-access Multilingual Language Model
+ 
+ [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
 
 ## Training data
- Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
+ Alpaca is a dataset of **52,000** instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used for instruction tuning, making language models follow instructions better.
 
 The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
@@ -35,7 +37,7 @@ In a preliminary study, the authors also found the 52K generated data to be
 
 ### Supported Tasks and Leaderboards
 
- The Alpaca dataset designed for instruction training pretrained language models.
+ The Alpaca dataset is designed for instruction-tuning pre-trained language models.
 
 ### Training procedure
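
For readers who want to try the adapter this card describes, a minimal usage sketch follows. It is an illustration, not part of the commit: the adapter repository id is a placeholder (this commit does not name it), while `bigscience/bloom-7b1` and the `transformers`/`peft` calls (`AutoModelForCausalLM.from_pretrained`, `PeftModel.from_pretrained`) are the standard public APIs.

```python
# Sketch: load BLOOM 7B1 and apply this LoRA adapter with PEFT.
# ADAPTER_ID is a placeholder; substitute the actual adapter repo id.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL_ID = "bigscience/bloom-7b1"
ADAPTER_ID = "<namespace>/<this-adapter-repo>"  # placeholder, not named in this commit

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, device_map="auto")

# Wrap the frozen base model with the trained LoRA weights from the adapter repo.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Alpaca-style instruction prompt.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName the capital of France.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```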
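The Training data section can be made concrete with a short sketch for inspecting Alpaca. The Hub dataset id `tatsu-lab/alpaca` is an assumption (the card names no specific mirror); the `instruction`/`input`/`output` fields and the prompt template follow the public Alpaca release.

```python
# Sketch: inspect the Alpaca instruction data and build the standard prompt.
# The Hub id "tatsu-lab/alpaca" is an assumption; this card does not name one.
from datasets import load_dataset

dataset = load_dataset("tatsu-lab/alpaca", split="train")
example = dataset[0]  # fields: "instruction", "input", "output"

def build_prompt(ex: dict) -> str:
    """Assemble the Alpaca-style prompt used for instruction tuning."""
    if ex["input"]:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Input:\n{ex['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{ex['instruction']}\n\n"
        "### Response:\n"
    )

print(build_prompt(example) + example["output"])
```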
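The Training procedure section is left empty by this commit. As a rough illustration of how a LoRA adapter like this one is typically created with PEFT, consider the sketch below; every hyperparameter in it is an assumed, typical value rather than one taken from this adapter, and `query_key_value` is the usual LoRA target for BLOOM's fused attention projection.

```python
# Illustrative sketch of how such a LoRA adapter is typically created with PEFT.
# Hyperparameters are assumptions, not the values used for this adapter.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")

lora_config = LoraConfig(
    r=8,                                 # low-rank dimension (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    lora_dropout=0.05,                   # dropout on LoRA layers (assumed)
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    bias="none",
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable low-rank matrices.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters train
```

Training then proceeds as ordinary causal-language-model fine-tuning on the Alpaca prompts, with only the injected low-rank matrices receiving gradient updates.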