bart-base-instructiongen-w-inputs
Use this text2text model to find out what LLM instruction (and inputs, if relevant) might have generated `<arbitrary input text>`!
- Check out a basic demo on Spaces
- An example of how to use instructiongen models in a CLI script can be found here
- You can find other models fine-tuned for instruction generation by searching for the instructiongen tag
About
This model is a fine-tuned version of facebook/bart-base on the pszemraj/fleece2instructions-inputs-alpaca-cleaned dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9579
- Rouge1: 62.3604
- Rouge2: 39.5109
- Rougel: 58.8843
- Rougelsum: 60.4494
- Gen Len: 24.9917
Example
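A minimal usage sketch with the `transformers` pipeline is below. It assumes the checkpoint is hosted as `pszemraj/bart-base-instructiongen-w-inputs` (matching the dataset namespace); the sample passage and generation settings are illustrative, not the settings used to produce the reported metrics.

```python
from transformers import pipeline

# Sketch: load this checkpoint as a text2text pipeline.
# Model ID assumes the same namespace as the dataset (pszemraj).
generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-base-instructiongen-w-inputs",
)

arbitrary_text = (
    "Labrador Retrievers are friendly, outgoing, and high-spirited companions "
    "who have more than enough affection to go around for a family looking for "
    "a medium-to-large dog."
)

result = generator(arbitrary_text, max_length=96, num_beams=4)
print(result[0]["generated_text"])
# Output is expected to look like "<instruction> ... <inputs> ..."
# (see the dataset notes under "Intended uses & limitations")
```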
Intended uses & limitations
This model is intended to be used to generate instructions from arbitrary text. You can then use these instructions + your data to fine-tune an LLM on instructions w.r.t. a specific domain. This model is primarily intended to enable low-resource domain adaptation, rather than "I want to generate even better prompts for the FLAN-V2 dataset!".
The fleece2instructions-inputs-alpaca-cleaned dataset, obtained from the alpaca-lora repo under the ODC-BY license, has been converted to a text2text format for use with language models. In this dataset, the original 'inputs' and 'instructions' columns are combined into a single 'instructions_inputs' column. To clearly separate the two types of content, each piece of text is prefixed with either an `<instruction>` or `<inputs>` token. These tokens not only facilitate model comprehension, but also allow for easy regex separation of model outputs during inference.

As such, users can expect the output of this model to be similarly structured with `<instruction>` and `<inputs>` tokens.
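One way to split such an output is a simple regex; the snippet below is a sketch with a hypothetical output string, not code from this repository.

```python
import re

# Hypothetical raw model output containing both prefix tokens
raw_output = (
    "<instruction> Summarize the following passage in one sentence. "
    "<inputs> Labrador Retrievers are friendly, outgoing, and high-spirited companions..."
)

# Capture the text after <instruction>; the <inputs> part may be absent.
match = re.search(r"<instruction>\s*(.*?)\s*(?:<inputs>\s*(.*))?$", raw_output, re.DOTALL)
if match:
    instruction = match.group(1).strip()
    inputs = (match.group(2) or "").strip()
    print("instruction:", instruction)
    print("inputs:", inputs)
```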
This is just the base model; for better performance (at the cost of speed and compute), see the bart-large version. Further exploration/data may lead to even better models!
Training and evaluation data
Refer to the fleece2instructions-inputs-alpaca-cleaned dataset
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent `Seq2SeqTrainingArguments` setup follows the list):
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
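As a rough illustration, the hyperparameters above map onto `transformers` training arguments roughly as follows; the output path and `predict_with_generate` flag are placeholders/assumptions, not taken from the original run, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the listed hyperparameters (not the original training script).
training_args = Seq2SeqTrainingArguments(
    output_dir="./bart-base-instructiongen-w-inputs",  # placeholder path
    learning_rate=8e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # 4 x 16 (x GPUs) -> total train batch size 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2.0,
    predict_with_generate=True,  # assumption: needed to compute ROUGE during eval
)
```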
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 1.1147 | 1.0 | 680 | 0.9901 | 61.8451 | 38.8293 | 58.3372 | 59.8658 | 25.2401 |
| 0.9565 | 2.0 | 1360 | 0.9579 | 62.3604 | 39.5109 | 58.8843 | 60.4494 | 24.9917 |