---
license: apache-2.0
tags:
  - generated_from_trainer
  - stable diffusion
  - diffusion
  - text2image
  - prompt augment
  - prompt engineering
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
model-index:
  - name: distilgpt2-magicprompt-SD
    results: []
thumbnail: https://i.ibb.co/WkmTnZD/image.png
widget:
  - text: morning sun over Jakarta
    example_title: morning sun
  - text: 'WARNING: pip is'
    example_title: pip
  - text: sentient cheese
    example_title: sentient cheese
  - text: cheeps are
    example_title: cheeps
  - text: avocado armchair
    example_title: creative prompt
  - text: Landscape of
    example_title: landscape
parameters:
  min_length: 16
  max_new_tokens: 24
  no_repeat_ngram_size: 1
  do_sample: true
---

# distilgpt2-magicprompt-SD

*(Open in Colab badge)*

Generate/augment your prompt, Stable Diffusion style.

This model is a fine-tuned version of distilgpt2 on the Gustavosta/Stable-Diffusion-Prompts dataset. It achieves the following results on the evaluation set:

- Loss: 1.3089
- eval_steps_per_second: 17.201
- perplexity: 3.7022
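For context (not stated in the original card), the reported perplexity is simply the exponential of the evaluation loss, which you can verify directly:

```python
import math

# perplexity = exp(cross-entropy loss)
print(math.exp(1.3089))  # ≈ 3.70, matching the reported perplexity
```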

*(example: generated/augmented prompt)*

Results in (DALL-E, but you get the idea):

*(example: image generated from the augmented prompt)*


This distilgpt2 version should be small and fast enough to run locally on a CPU!

## Basic usage

Install `transformers` as needed:

```bash
pip install -U transformers
```

Load the model and query it through a `pipeline` object:

```python
from transformers import pipeline

model_tag = "pszemraj/distilgpt2-magicprompt-SD"
generator = pipeline(
    "text-generation",
    model=model_tag,
)

prompt = "The Answer to Why"
result = generator(
    prompt,
    max_new_tokens=24,
)  # generate; adjust/add kwargs as needed
print(result[0]["generated_text"])
```
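To mimic the hosted inference widget, you can pass the same sampling parameters listed in the metadata above. This is just an illustrative continuation of the snippet, reusing `generator`; the values are the widget defaults, not tuned recommendations:

```python
# sampling settings mirroring the widget defaults from the card metadata
result = generator(
    "morning sun over Jakarta",
    min_length=16,
    max_new_tokens=24,
    no_repeat_ngram_size=1,
    do_sample=True,
)
print(result[0]["generated_text"])
```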

## Training and evaluation data

Refer to the Gustavosta/Stable-Diffusion-Prompts dataset.
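If you want to inspect the data yourself, it can be loaded with the `datasets` library. A minimal sketch (not part of the original training code):

```python
from datasets import load_dataset

# download the prompt dataset used for fine-tuning
ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts")
print(ds)              # splits and row counts
print(ds["train"][0])  # first example prompt
```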

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
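Note that the effective train batch size of 256 follows from 16 per device × 2 GPUs × 8 gradient-accumulation steps. As a rough, hypothetical reconstruction (the actual training script is not included in this card), these settings map onto `transformers.TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# hypothetical mapping of the hyperparameters above; output_dir is illustrative
training_args = TrainingArguments(
    output_dir="distilgpt2-magicprompt-SD",
    learning_rate=1e-3,
    per_device_train_batch_size=16,  # 16 x 2 GPUs x 8 accumulation steps = 256 effective
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=10.0,
    seed=42,
)
```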

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7061        | 0.99  | 33   | 2.5859          |
| 2.08          | 1.99  | 66   | 1.9965          |
| 1.7623        | 2.99  | 99   | 1.7248          |
| 1.5408        | 3.99  | 132  | 1.5449          |
| 1.4147        | 4.99  | 165  | 1.4437          |
| 1.3593        | 5.99  | 198  | 1.3768          |
| 1.2703        | 6.99  | 231  | 1.3362          |
| 1.2528        | 7.99  | 264  | 1.3175          |
| 1.1981        | 8.99  | 297  | 1.3091          |
| 1.2117        | 9.99  | 330  | 1.3089          |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1