
Model Card

This is an "Abstract to Tweet" model that crafts a tweet summarizing a research paper abstract. It was trained on a synthetic dataset of arXiv abstracts and tweets and serves as a demonstration of the DataDreamer 🤖💤 library.

Example Usage

from transformers import pipeline

# Load model
pipe = pipeline('text2text-generation', 'datadreamer-dev/abstracts_to_tweet_model')

# Generate a tweet from the abstract of the LoRA paper
abstract = "An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at this https URL."
generated_tweet = pipe(abstract)[0]['generated_text'] 

# Print the generated tweet
print(generated_tweet) 

# Output:
# "Exciting news in #NLP! We've developed Low-Rank Adaptation, or LoRA, to reduce the number of trainable parameters for downstream tasks. It reduces model weights by 10,000 times and GPU memory by 3 times. #AI #MachineLearning"

This model was trained on a synthetic dataset with DataDreamer 🤖💤. The synthetic dataset card and model card can be found here. The training arguments can be found here.
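
For orientation, the sketch below outlines an equivalent supervised fine-tuning setup using the 🤗 Transformers Seq2SeqTrainer rather than the DataDreamer training pipeline that actually produced this model. The dataset identifier, column names, base checkpoint, and hyperparameters are all assumptions for illustration; refer to the linked dataset card and training arguments for the real values.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

# Hypothetical dataset repo name; substitute the actual synthetic dataset linked on this card
dataset = load_dataset('datadreamer-dev/abstracts_and_tweets')

# Assumed base checkpoint (a T5-sized seq2seq model); the real base model is linked on this card
tokenizer = AutoTokenizer.from_pretrained('google/t5-v1_1-base')
model = AutoModelForSeq2SeqLM.from_pretrained('google/t5-v1_1-base')

def preprocess(batch):
    # Assumed column names 'abstracts' and 'tweets'
    model_inputs = tokenizer(batch['abstracts'], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch['tweets'], truncation=True, max_length=64)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset['train'].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir='abstracts_to_tweet_model',
        learning_rate=1e-4,              # illustrative hyperparameters,
        num_train_epochs=3,              # not the ones used for this model
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized['train'],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()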

Model size: 248M parameters · Tensor type: F32 · Format: Safetensors
