
Model Card for Flan-Alpaca-GPT4-base-3k

This model was obtained by fine-tuning the google/flan-t5-base model on the tatsu-lab/alpaca dataset with the max_source_length option set to 3048. Fine-tuning followed the instructions in this repository: https://github.com/declare-lab/flan-alpaca. The model was a learning exercise to determine whether setting a higher max_source_length allows the model to handle longer prompts during inference.
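For orientation, below is a minimal sketch of where max_source_length enters a standard seq2seq fine-tune of flan-t5-base on tatsu-lab/alpaca. It is not the exact script from the linked repository; the Seq2SeqTrainer setup, hyperparameters, and prompt formatting here are illustrative assumptions, and only the dataset field names ("instruction", "input", "output") follow tatsu-lab/alpaca.

# Sketch only: generic transformers Seq2SeqTrainer fine-tune, not the repository's script.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MAX_SOURCE_LENGTH = 3048  # the setting this model experimented with
MAX_TARGET_LENGTH = 512   # illustrative choice for the response side

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def preprocess(example):
    # Concatenate the instruction with the optional input into a single prompt.
    source = example["instruction"]
    if example["input"]:
        source += "\n" + example["input"]
    model_inputs = tokenizer(source, max_length=MAX_SOURCE_LENGTH, truncation=True)
    labels = tokenizer(text_target=example["output"],
                       max_length=MAX_TARGET_LENGTH, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-alpaca-gpt4-base-3k",
        per_device_train_batch_size=2,  # illustrative; the actual run may differ
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()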

Model Description

  • Language(s) (NLP): English
  • Finetuned from model: google/flan-t5-base

How to use

from transformers import pipeline

prompt = "Write an email about an alpaca that likes flan"
# The pipeline task (text2text-generation) is inferred from the model config.
model = pipeline(model="evolveon/flan-alpaca-gpt4-base-3k")
model(prompt, max_length=3048, do_sample=True)

# Dear AlpacaFriend,
# My name is Alpaca and I'm 10 years old.
# I'm excited to announce that I'm a big fan of flan!
# We like to eat it as a snack and I believe that it can help with our overall growth.
# I'd love to hear your feedback on this idea. 
# Have a great day! 
# Best, AL Paca
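If you want direct control over how a long prompt is tokenized (the point of the higher max_source_length), you can load the tokenizer and model explicitly instead of using the pipeline. This is a sketch using the standard transformers seq2seq API; the prompt and generation settings are placeholders.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("evolveon/flan-alpaca-gpt4-base-3k")
model = AutoModelForSeq2SeqLM.from_pretrained("evolveon/flan-alpaca-gpt4-base-3k")

long_prompt = "Summarize the following article about alpacas: ..."  # placeholder text
inputs = tokenizer(long_prompt, return_tensors="pt", truncation=True, max_length=3048)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))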
