---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
widget:
  - text: morning sun over Jakarta
    example_title: morning sun
  - text: 'WARNING: pip is'
    example_title: pip
  - text: sentient cheese
    example_title: sentient cheese
  - text: cheeps are
    example_title: cheeps
parameters:
  min_length: 32
  max_length: 64
  no_repeat_ngram_size: 1
  do_sample: true
  top_k: 50
  top_p: 0.95
  repetition_penalty: 5.5
base_model: EleutherAI/gpt-neo-125M
model-index:
  - name: gpt-neo-125M-magicprompt-SD
    results: []
---

# gpt-neo-125M-magicprompt-SD

Generate or augment your prompt, Stable Diffusion style.

This model is a fine-tuned version of EleutherAI/gpt-neo-125M on the Gustavosta/Stable-Diffusion-Prompts dataset. It achieves the following results on the evaluation set:

- Loss: 1.8875
- Perplexity: 6.6028
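(The reported perplexity is simply the exponential of the evaluation loss: exp(1.8875) ≈ 6.6028.)

A minimal inference sketch using the `transformers` pipeline, with the generation parameters taken from the widget metadata above. The hub ID `pszemraj/gpt-neo-125M-magicprompt-SD` is assumed from the repository owner and model name:

```python
from transformers import pipeline

# Hub ID assumed from the repo owner and the model name in this card
generator = pipeline(
    "text-generation", model="pszemraj/gpt-neo-125M-magicprompt-SD"
)

prompt = "morning sun over Jakarta"
result = generator(
    prompt,
    min_length=32,
    max_length=64,
    no_repeat_ngram_size=1,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    repetition_penalty=5.5,
)
print(result[0]["generated_text"])
```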

## Training and evaluation data

Refer to the [Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts) dataset.
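For reference, the dataset can be pulled directly from the Hub with `datasets`; the split and column names below are assumptions to verify against the dataset card:

```python
from datasets import load_dataset

# Load the prompt dataset used for fine-tuning
ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts")
print(ds)  # inspect the available splits

# Column name assumed; check ds["train"].features if it differs
print(ds["train"][0]["Prompt"])
```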

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
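A sketch of how these settings map onto `transformers.TrainingArguments`. Note the effective train batch size of 256 is the per-device batch size of 16 × 2 GPUs × 8 gradient-accumulation steps; the exact arguments of the original run are not in this card, so treat this as an approximation:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the run's configuration.
# The listed Adam betas (0.9, 0.999) and epsilon (1e-8) match the
# Trainer defaults, so they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="gpt-neo-125M-magicprompt-SD",
    learning_rate=1e-4,
    per_device_train_batch_size=16,   # x2 GPUs x8 accumulation = 256
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=10.0,
)
```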

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2189        | 0.99  | 33   | 3.0051          |
| 2.5466        | 1.99  | 66   | 2.5215          |
| 2.2791        | 2.99  | 99   | 2.2881          |
| 2.107         | 3.99  | 132  | 2.1322          |
| 1.9458        | 4.99  | 165  | 2.0270          |
| 1.8664        | 5.99  | 198  | 1.9580          |
| 1.8083        | 6.99  | 231  | 1.9177          |
| 1.7631        | 7.99  | 264  | 1.8964          |
| 1.7369        | 8.99  | 297  | 1.8885          |
| 1.766         | 9.99  | 330  | 1.8875          |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1