
๐Ÿฎ ๐Ÿฆ™ Flan-Alpaca: Instruction Tuning from Humans and Machines

📣 Curious about the performance of 🐮 🦙 Flan-Alpaca on the large-scale LLM evaluation benchmark InstructEval? Read our paper: https://arxiv.org/pdf/2306.04757.pdf. We evaluated more than 10 open-source instruction-tuned LLMs from various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: https://github.com/declare-lab/instruct-eval

Our repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5. The pretrained models and demos are available on Hugging Face 🤗:

Model              Parameters  Training GPUs
Flan-Alpaca-Base   220M        1x A6000
Flan-Alpaca-Large  770M        1x A6000
Flan-Alpaca-XL     3B          1x A6000
Flan-Alpaca-XXL    11B         4x A6000 (FSDP)
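
For quick experimentation, the full (non-LoRA) Flan-Alpaca checkpoints listed above can be loaded through the standard transformers text2text-generation pipeline. A minimal sketch, assuming the checkpoints follow the declare-lab/flan-alpaca-* naming on the Hub:

from transformers import pipeline

# Assumption: the base-size (220M) checkpoint is published as declare-lab/flan-alpaca-base.
generator = pipeline("text2text-generation", model="declare-lab/flan-alpaca-base")

prompt = "Write an email about an alpaca that likes flan"
print(generator(prompt, max_new_tokens=128, do_sample=True)[0]["generated_text"])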

Why?

Alpaca represents an exciting new direction for approximating the performance of large language models (LLMs) like ChatGPT cheaply and easily. Concretely, the authors leverage an LLM such as GPT-3 to generate instructions as synthetic training data. The synthetic data, which covers more than 50k tasks, can then be used to finetune a smaller model. However, the original implementation is less accessible due to the licensing constraints of the underlying LLaMA model. Furthermore, users have noted potential noise in the synthetic dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but less diverse) instructions, such as Flan-T5.
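
For reference, the Alpaca synthetic data consists of records with an instruction, an optional input, and a target output. A minimal sketch for inspecting it, assuming the tatsu-lab/alpaca copy of the roughly 52k examples on the Hub:

from datasets import load_dataset

# Assumption: the tatsu-lab/alpaca dataset mirrors the ~52k synthetic instructions,
# with "instruction", "input", and "output" columns.
data = load_dataset("tatsu-lab/alpaca", split="train")
example = data[0]
print(example["instruction"], example["input"], example["output"], sep="\n")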

Usage

This model uses the Hugging Face PEFT library for parameter-efficient fine-tuning. The snippet below loads the LoRA adapter weights on top of the Flan-T5-XL base model and generates from a prompt:

import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

# Base model and LoRA adapter weights on the Hugging Face Hub.
BASE_MODEL = "google/flan-t5-xl"
LORA_WEIGHTS = "declare-lab/flan-alpaca-xl-lora"

# Decoding hyperparameters.
TEMPERATURE = 1.0
TOP_P = 0.75
TOP_K = 40
NUM_BEAMS = 4
MAX_NEW_TOKENS = 128

device = "cuda" if torch.cuda.is_available() else "cpu"

if device == "cuda":
    # Let Accelerate place the base model across the available GPUs,
    # then attach the LoRA adapter on top of it.
    model = AutoModelForSeq2SeqLM.from_pretrained(
        BASE_MODEL,
        device_map="auto",
    )
    # force_download=True re-fetches the adapter weights even if they are cached.
    model = PeftModel.from_pretrained(model, LORA_WEIGHTS, force_download=True)
else:
    # CPU-only fallback: load both the base model and the adapter onto the CPU.
    model = AutoModelForSeq2SeqLM.from_pretrained(
        BASE_MODEL, device_map={"": device}, low_cpu_mem_usage=True
    )
    model = PeftModel.from_pretrained(
        model,
        LORA_WEIGHTS,
        device_map={"": device},
    )

prompt = "Write a short email to show that 42 is the optimal seed for training neural networks"

# Tokenize the prompt and move it to the same device as the model.
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(device)

# Decoding settings (num_beams=4 enables beam search; the sampling knobs
# temperature/top_p/top_k only take effect if do_sample=True).
generation_config = GenerationConfig(
    temperature=TEMPERATURE,
    top_p=TOP_P,
    top_k=TOP_K,
    num_beams=NUM_BEAMS,
)
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=MAX_NEW_TOKENS,
)
print(tokenizer.batch_decode(generation_output.sequences)[0])
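
After loading, the LoRA weights can optionally be folded back into the base model so the result behaves like a plain Flan-T5 checkpoint. A minimal sketch using PEFT's merge_and_unload (the output directory name here is arbitrary):

# Optional: merge the LoRA adapter into the base weights and save the result
# as a standalone Seq2Seq model that no longer needs PEFT at load time.
merged = model.merge_and_unload()
merged.save_pretrained("flan-alpaca-xl-merged")
tokenizer.save_pretrained("flan-alpaca-xl-merged")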