---
datasets:
- BatsResearch/ctga-v1
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- data generation
---

# Model Card for bonito

<!-- Provide a quick summary of what the model is/does. -->

Bonito is an open-source model for conditional task generation: the task of converting unannotated text into task-specific training datasets for instruction tuning.

![Bonito](https://raw.githubusercontent.com/BatsResearch/bonito/main/assets/workflow.jpg)

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Bonito can be used to create synthetic instruction tuning datasets to adapt large language models on users' specialized, private data.
In our [paper](https://github.com/BatsResearch/bonito), we show that Bonito can be used to adapt both pretrained and instruction-tuned models to tasks without any annotations.

- **Developed by:** Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
- **Model type:** MistralForCausalLM
- **Language(s) (NLP):** English
- **License:** TBD
- **Finetuned from model:** `mistralai/Mistral-7B-v0.1`

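Since the checkpoint is a standard `MistralForCausalLM` model, it can also be loaded directly with `transformers`. The snippet below is a minimal sketch; the repository ID `BatsResearch/bonito-v1` is an assumption, so substitute the ID of this model if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the checkpoint directly with transformers.
# The repository ID "BatsResearch/bonito-v1" is an assumption.
tokenizer = AutoTokenizer.from_pretrained("BatsResearch/bonito-v1")
model = AutoModelForCausalLM.from_pretrained("BatsResearch/bonito-v1")
```
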
### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/BatsResearch/bonito](https://github.com/BatsResearch/bonito)
- **Paper:** arXiv link (TBD)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
To easily generate synthetic instruction tuning datasets, we recommend using the [bonito](https://github.com/BatsResearch/bonito) package built on the `transformers` and `vllm` libraries.

```python
from bonito import Bonito, SamplingParams
from datasets import load_dataset

# Initialize the Bonito model
# (the constructor expects the checkpoint to load; "BatsResearch/bonito-v1" is assumed here)
bonito = Bonito("BatsResearch/bonito-v1")

# Load the dataset with unannotated text
unannotated_text = load_dataset(
    "BatsResearch/bonito-experiment",
    "unannotated_contract_nli"
)["train"].select(range(10))

# Generate synthetic instruction tuning dataset
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
    unannotated_text,
    context_col="input",
    task_type="nli",
    sampling_params=sampling_params
)
```
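
Continuing from the snippet above, the synthetic tasks can be inspected and saved before fine-tuning a target model. This is a minimal sketch that assumes `generate_tasks` returns a Hugging Face `Dataset` with `input` and `output` columns; check the schema of the object you actually get back.

```python
# Minimal sketch: inspect and persist the generated tasks.
# Assumes a Hugging Face Dataset with "input"/"output" columns (verify the schema).
print(synthetic_dataset)
for example in synthetic_dataset.select(range(3)):
    print(example["input"])
    print(example["output"])

# Save the synthetic tasks for a later fine-tuning run
synthetic_dataset.save_to_disk("synthetic_contract_nli_tasks")
```
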
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Our model is trained to generate the following task types: summarization, sentiment analysis, multiple-choice question answering, extractive question answering, topic classification, natural language inference, question generation, text generation, question answering without choices, paraphrase identification, sentence completion, yes-no question answering, word sense disambiguation, paraphrase generation, textual entailment, and coreference resolution.
The model might not produce accurate synthetic tasks beyond these task types.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

**Limitations**

Our work relies on the availability of large amounts of unannotated text.
If only a small quantity of unannotated text is present, the target language model, after adaptation, may experience a drop in performance.
While we demonstrate positive improvements on pretrained and instruction-tuned models, our observations are limited to the three task types (yes-no question answering, extractive question answering, and natural language inference) considered in our paper.

**Risks**

Bonito poses risks similar to those of any large language model.
For example, our model could be used to generate factually incorrect datasets in specialized domains.
Our model can exhibit the biases and stereotypes of the base model, Mistral-7B, even after extensive supervised fine-tuning.
Finally, our model does not include safety training and can potentially generate harmful content.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

We recommend that users thoroughly inspect the generated tasks and benchmark performance on critical datasets before deploying models trained on the synthetic tasks in real-world applications.

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
To train Bonito, we create a new dataset, conditional task generation with attributes, by remixing existing instruction tuning datasets.
See [ctga-v1](https://huggingface.co/datasets/BatsResearch/ctga-v1) for more details.

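For a quick look at this training data, the dataset can be pulled directly from the Hugging Face Hub. A minimal sketch (a configuration name may be required; see the dataset card for the exact layout):

```python
from datasets import load_dataset

# Minimal sketch: load the ctga-v1 training data from the Hugging Face Hub.
# A configuration or split name may be required; see the dataset card for details.
ctga = load_dataset("BatsResearch/ctga-v1")
print(ctga)
```
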
### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- **Training regime:** Q-LoRA fine-tuning <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

We train the model using [Q-LoRA](https://github.com/artidoro/qlora) by optimizing the cross-entropy loss over the output tokens.
The model is trained for 100,000 steps.
Training takes about four days on four GPUs to complete.

We use the following hyperparameters (an illustrative configuration sketch follows the list):
- Q-LoRA rank (r): 64
- Q-LoRA scaling factor ($\alpha$): 4
- Q-LoRA dropout: 0
- Optimizer: Paged AdamW
- Learning rate scheduler: linear
- Max. learning rate: 1e-04
- Min. learning rate: 0
- Weight decay: 0
- Dropout: 0
- Max. gradient norm: 0.3
- Effective batch size: 16
- Max. input length: 2,048
- Max. output length: 2,048
- Num. steps: 100,000

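As a rough illustration of how these values could map onto a `peft`/`transformers` Q-LoRA setup, here is a minimal sketch. It is not the exact training script: the adapter target modules, the 4-bit quantization setup, and the per-device batch size split across the four GPUs are assumptions.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Illustrative Q-LoRA adapter configuration mirroring the values above.
# Target modules and quantization setup are assumptions not stated in this card.
lora_config = LoraConfig(
    r=64,             # Q-LoRA rank
    lora_alpha=4,     # Q-LoRA scaling factor
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)

# Illustrative trainer arguments; 4 GPUs x per-device batch size 4 is assumed
# to reach the effective batch size of 16.
training_args = TrainingArguments(
    output_dir="bonito-qlora",
    max_steps=100_000,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    weight_decay=0.0,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
)
```
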
## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
TBD
```