---
base_model: gpt2
library_name: peft
datasets:
  - clemsadand/quote_data
metrics:
  - bertscore
---

# Model Card for Quote Generator

This model is a LoRA (Low-Rank Adaptation) fine-tune of GPT-2, trained on a custom quote dataset and designed to generate meaningful and inspirational quotes.

## Model Details

### Model Description

The Quote Generator is built on top of the GPT-2 model, fine-tuned using the Low-Rank Adaptation (LoRA) technique to specialize in generating quotes. The training dataset comprises a curated collection of quotes from various sources, enabling the model to produce high-quality and contextually relevant quotes.

- **Developed by:** Clément Adandé
- **Model type:** Language Model (NLP)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** GPT-2
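
To illustrate the LoRA approach described above, here is a minimal configuration sketch using the `peft` library. The rank, alpha, dropout, and target modules shown are illustrative assumptions; the actual training hyperparameters are not published in this card.

```python
# Hypothetical LoRA configuration for GPT-2; r, lora_alpha, lora_dropout and
# target_modules are illustrative assumptions, not the published training setup.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",       # causal language modelling
    r=8,                         # rank of the low-rank update matrices (assumed)
    lora_alpha=16,               # scaling factor for the LoRA updates (assumed)
    lora_dropout=0.05,           # dropout applied to the LoRA layers (assumed)
    target_modules=["c_attn"],   # GPT-2 attention projection, a common LoRA target
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```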

### Model Sources

- **Repository:** https://huggingface.co/clemsadand/quote_generator

## Uses

### Direct Use

The model can be directly used to generate quotes for various applications, such as social media content, motivational messages, and creative writing.

### Downstream Use

The model can be further fine-tuned for specific contexts or integrated into applications requiring quote generation.
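
If you want to adapt the generator to a narrower domain, one option is to load the published adapter with its weights unfrozen and continue training on your own quotes. The sketch below is a minimal, hypothetical setup using the Hugging Face `Trainer`; the data file, batch size, and epoch count are placeholders, not recommendations from the model author.

```python
# Hedged sketch of continued fine-tuning; "my_quotes.txt" and all training
# arguments are placeholders chosen for illustration only.
from datasets import load_dataset
from peft import PeftModel
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no dedicated pad token

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
# is_trainable=True keeps the LoRA weights unfrozen so they can be updated further
model = PeftModel.from_pretrained(base_model, "clemsadand/quote_generator",
                                  is_trainable=True)

# One quote per line in a plain-text file (placeholder path)
dataset = load_dataset("text", data_files={"train": "my_quotes.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="quote_generator_ft",
                           per_device_train_batch_size=8,
                           num_train_epochs=3),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```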

### Out-of-Scope Use

The model should not be used for generating harmful, offensive, or misleading content. It may not perform well for generating quotes in languages other than English.

## Bias, Risks, and Limitations

The model may inherit biases present in the training data. Generated quotes may not always be factually accurate or appropriate for all contexts. Users should verify the content before use in sensitive applications.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. It is recommended to review and edit the generated quotes before public use.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the adapter configuration and attach the LoRA adapter to the GPT-2 base model
config = PeftConfig.from_pretrained("clemsadand/quote_generator")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, "clemsadand/quote_generator")

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def generate_quote(input_text):
    # Tokenize the prompt and generate with beam-sample decoding
    input_tensor = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        input_tensor["input_ids"],
        attention_mask=input_tensor["attention_mask"],
        max_length=64,
        num_beams=5,
        no_repeat_ngram_size=2,
        early_stopping=True,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True,
                            clean_up_tokenization_spaces=True)

input_text = "Generate a quote about kindness with the keywords compassion, empathy, help, generosity, care"

print(generate_quote(input_text))
```
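
A note on the generation settings above: the call combines beam search (`num_beams=5`) with sampling (`do_sample=True`, `temperature=0.7`), which `transformers` runs as beam-sample decoding, and `no_repeat_ngram_size=2` suppresses repeated bigrams in the output. The keyword-style prompt presumably mirrors the prompt format used during fine-tuning; other phrasings may produce weaker results.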