---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- alpaca
- bloom
- LLM
---

# AlpacOOM: Alpaca + BLOOM

## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library by fine-tuning the base model **BigScience/BLOOM-7B1** on **Stanford's Alpaca dataset** with the **LoRA** method.

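For context, the sketch below shows what wrapping BLOOM-7B1 with a LoRA adapter via PEFT generally looks like; the rank, alpha, dropout, and target modules are illustrative assumptions, not the configuration actually used to train this adapter (see the Training procedure section).

```py
# Illustrative only: the LoRA hyperparameters below are assumptions,
# not the exact values used to train this adapter.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                 # assumed LoRA rank
    lora_alpha=16,                       # assumed scaling factor
    lora_dropout=0.05,                   # assumed dropout
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank LoRA matrices are trainable
```
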
## Model Description

[BLOOM-7B1](https://huggingface.co/bigscience/bloom-7b1) is an open-access, decoder-only transformer language model with roughly 7.1 billion parameters, trained by the BigScience workshop on a large multilingual corpus. It serves as the frozen base model on top of which this LoRA adapter is applied.

## Training data

Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make them follow instructions better.

The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:

- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirements of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.

This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).

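As a quick look at what these records contain, the sketch below loads the data with the `datasets` library; the Hub id `tatsu-lab/alpaca` and the field names are assumptions about the community mirror of the dataset.

```py
# Minimal sketch: inspect the Alpaca data with the `datasets` library.
# The repo id "tatsu-lab/alpaca" is assumed to be a Hub mirror of the 52K examples.
from datasets import load_dataset

alpaca = load_dataset("tatsu-lab/alpaca", split="train")
print(len(alpaca))  # ~52K instruction-following examples

example = alpaca[0]
print(example["instruction"])  # the task description
print(example["input"])        # optional context; empty for many examples
print(example["output"])       # the text-davinci-003 generated response
```
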
### Supported Tasks and Leaderboards

The Alpaca dataset is designed for instruction-tuning pretrained language models.

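For reference, Stanford Alpaca formats each record with a fixed prompt template before training; the sketch below reproduces that template, and whether this adapter was trained with exactly this wording is an assumption.

```py
# Alpaca-style prompt templates; assumed, not confirmed, to match this adapter's training setup.
def build_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input that "
            "provides further context. Write a response that appropriately completes "
            "the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Classify the sentiment of this review.", "The battery life is amazing."))
```
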
### Training procedure

TBA

## How to use

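The snippet below is a minimal sketch of running the adapter for inference with `transformers` and `peft`; the placeholder adapter id, dtype, device map, prompt wording, and generation settings are assumptions to adjust for your setup.

```py
# Minimal sketch: load BLOOM-7B1 and attach this LoRA adapter with PEFT.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_ID = "<this-adapter-repo-id>"  # placeholder: the Hub id of this adapter repository

config = PeftConfig.from_pretrained(ADAPTER_ID)

# Load the frozen base model referenced by the adapter config (BigScience/BLOOM-7B1).
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA weights on top of the base model.
model = PeftModel.from_pretrained(model, ADAPTER_ID)
model.eval()

# Alpaca-style prompt (assumed template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling parameters such as `temperature`, `top_p`, or `repetition_penalty` can be passed to `generate` as usual.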