---
license: other
tags:
- generated_from_trainer
- stable diffusion
- diffusion
- text2image
- prompt augment
- prompt engineering
datasets:
- Gustavosta/Stable-Diffusion-Prompts
widget:
- text: morning sun over Jakarta
  example_title: morning sun
- text: 'WARNING: pip is'
  example_title: pip
- text: sentient cheese
  example_title: sentient cheese
- text: cheeps are
  example_title: cheeps
- text: avocado armchair
  example_title: creative prompt
- text: Landscape of
  example_title: landscape
parameters:
  min_length: 16
  max_length: 96
  no_repeat_ngram_size: 1
  do_sample: true
base_model: facebook/opt-350m
model-index:
- name: opt-350m-magicprompt-SD
  results: []
---

# opt-350m-magicprompt-SD

Generate/augment your prompt, stable diffusion style.

This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the Gustavosta/Stable-Diffusion-Prompts dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2987
- eval_steps_per_second: 16.623
- perplexity: 3.6644 (i.e. the exponential of the eval loss)

## Example

![jakarta](https://i.imgur.com/TP3HQOA.png)

Output (_generated with DALL-E 2 here, but since the prompt is just text it works with any text-to-image model_):

![dalle2-jakarta](https://i.ibb.co/BKVxwmJ/DALL-E-2022-11-09-12-37-56-morning-sun-over-Jakarta-by-Simon-St-lenhag-and-Gaston-Bussiere-Matte-pai.png)

## Training and evaluation data

Refer to the [`Gustavosta/Stable-Diffusion-Prompts`](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts) dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 512 (8 per device × 2 devices × 32 accumulation steps)
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8568        | 0.95  | 16   | 2.5937          |
| 2.2487        | 1.95  | 32   | 2.1050          |
| 1.9011        | 2.95  | 48   | 1.8082          |
| 1.6837        | 3.95  | 64   | 1.6178          |
| 1.4887        | 4.95  | 80   | 1.4897          |
| 1.3812        | 5.95  | 96   | 1.4017          |
| 1.2944        | 6.95  | 112  | 1.3437          |
| 1.2574        | 7.95  | 128  | 1.3127          |
| 1.2325        | 8.95  | 144  | 1.3009          |
| 1.2223        | 9.95  | 160  | 1.2987          |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
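
## Basic usage

A minimal inference sketch with 🤗 Transformers is below. The repo id is a placeholder (substitute the actual Hub path of this checkpoint), and the generation settings simply mirror the widget `parameters` in the card metadata; they can be tuned freely.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
model_id = "<your-namespace>/opt-350m-magicprompt-SD"

generator = pipeline("text-generation", model=model_id)

# A short stub to be expanded into a full stable-diffusion-style prompt.
stub = "morning sun over Jakarta"

# Generation settings mirror the widget parameters above.
out = generator(
    stub,
    min_length=16,
    max_length=96,
    no_repeat_ngram_size=1,
    do_sample=True,
)
print(out[0]["generated_text"])
```

Sampling (`do_sample=True`) keeps the augmented prompts varied across calls, while `no_repeat_ngram_size=1` discourages the model from looping on the same modifier tokens; the generated text can then be passed to Stable Diffusion or any other text-to-image model.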