# SmolLM-prompt-generator
SmolLM-prompt-generator is a model fine-tuned from HuggingFaceTB/SmolLM-135M to generate high-quality prompts for text-to-image models.
## Datasets

This model is fine-tuned on CaptionEmporium/coyo-hd-11m-llavanext, using the caption_llava_short column with some preprocessing steps to obtain better training prompts.
## How to use

To use this model, load it directly from the Hugging Face Model Hub:
```python
# !pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "zenai-org/SmolLM-prompt-generation"
model = AutoModelForCausalLM.from_pretrained(model_path).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)

# starting the prompt with <|endoftext|> is recommended
prompt = "<|endoftext|>"
generated_caption = generator(
    prompt,
    max_length=77,
    min_length=10,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.7,
    eos_token_id=tokenizer.eos_token_id,
)
print(generated_caption[0]["generated_text"])
```
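The pipeline returns the `<|endoftext|>` prefix along with the generated continuation, and sampling may emit another `<|endoftext|>` mid-string. A small post-processing helper (hypothetical, not part of the model card) can strip these markers before the text is passed to a text-to-image model:

```python
def clean_prompt(generated_text: str, marker: str = "<|endoftext|>") -> str:
    """Strip the leading <|endoftext|> marker and cut at any later occurrence."""
    text = generated_text
    if text.startswith(marker):
        text = text[len(marker):]
    # the model may emit the marker again when it finishes; keep only the text before it
    end = text.find(marker)
    if end != -1:
        text = text[:end]
    return text.strip()

print(clean_prompt("<|endoftext|>a cute cat with a yellow nose<|endoftext|>"))
# → a cute cat with a yellow nose
```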
## Example
| Input Prompt | Generated Prompt |
|---|---|
| a cute cat | a cute cat with a yellow nose and ears sits on a white surface. |
| a potrait of a woman | a potrait of a woman with a contemplative expression, featuring a detailed hairstyle and a neutral background with a subtle pattern. |
| a picture of | a picture of a woman in a pink dress with a "T" logo, set against a dark background, representing the "Best Girl in Hollywood" series. |