---
tags:
  - generated_from_trainer
  - retnet
model-index:
  - name: sdprompt-retnet-300m
    results: []
license: mit
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
  - FredZhang7/anime-prompts-180K
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# SDPrompt-RetNet-300M

This is a RetNet model trained from scratch using https://github.com/syncdoth/RetNet. It achieves the following results on the evaluation set:

- Loss: 0.3616
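Assuming this is the mean per-token cross-entropy reported by the `transformers` Trainer (the standard causal-LM loss), it corresponds to a perplexity of roughly exp(0.3616) ≈ 1.44, which can be checked in a line of Python:

```python
import math

# Perplexity is the exponential of the mean per-token cross-entropy loss
# (assumption: the reported eval loss is that standard causal-LM loss).
print(math.exp(0.3616))  # ~1.4356
```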

## Usage

```bash
pip install transformers safetensors timm
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

MODEL_NAME = "isek-ai/SDPrompt-RetNet-300M"
DEVICE = "cuda"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    trust_remote_code=True,  # RetNet ships as custom modeling code, not built into transformers
).to(DEVICE)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)

prompt = "<s>1girl"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

_ = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,
    top_k=20,
    temperature=0.9,
    streamer=streamer,
)
# <s> 1girl, absurdres, animal ear fluff, animal ears, bangs, bare shoulders, black hair, blue archive, blunt bangs, blush, closed mouth, collarbone, commentary request, eyes visible through hair, green eyes, hair between eyes, halo, hand on own face, hand up, highres, jacket, kisaki blue archive, long hair, long sleeves, looking at viewer, open clothes, open jacket, shinonome asu, simple background, solo, track jacket, upper body, white background, white jacket</s>
```
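For non-interactive use, the streamer can be dropped and the output decoded directly. A minimal sketch reusing the `model` and `tokenizer` loaded above; the natural-language seed prompt here is only an illustrative assumption:

```python
# Seed with a Stable-Diffusion-style natural-language prefix instead of tags
# (hypothetical seed; any English prompt start should work).
prompt = "<s>a photo of"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    top_k=20,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```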

## Model description

This model was trained on Stable Diffusion prompts and Danbooru tags so that it can generate prompts for image-generation models.

## Training data

The model was trained on the following datasets:

- [Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts)
- [FredZhang7/anime-prompts-180K](https://huggingface.co/datasets/FredZhang7/anime-prompts-180K)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
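The exact training script is not included here, but the values above map onto `transformers.TrainingArguments` roughly as follows. A minimal sketch in which `output_dir` is a placeholder assumption; the Adam betas/epsilon shown are set explicitly to match the list (they are also the library defaults):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="sdprompt-retnet-300m",  # placeholder
    learning_rate=6e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=5,
)
```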

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6714 | 0.03 | 1000 | 2.5787 |
| 2.1551 | 0.07 | 2000 | 2.3981 |
| 2.1439 | 0.1 | 3000 | 2.1160 |
| 1.8406 | 0.14 | 4000 | 1.9138 |
| 1.7485 | 0.17 | 5000 | 1.7847 |
| 1.6417 | 0.21 | 6000 | 1.7120 |
| 1.6084 | 0.24 | 7000 | 1.6055 |
| 1.4805 | 0.28 | 8000 | 1.5946 |
| 1.5524 | 0.31 | 9000 | 1.5027 |
| 1.4425 | 0.35 | 10000 | 1.4876 |
| 1.4007 | 0.38 | 11000 | 1.4364 |
| 1.4637 | 0.42 | 12000 | 1.3896 |
| 1.3211 | 0.45 | 13000 | 1.3968 |
| 1.3246 | 0.49 | 14000 | 1.3403 |
| 1.3461 | 0.52 | 15000 | 1.3156 |
| 1.2897 | 0.56 | 16000 | 1.2977 |
| 1.2748 | 0.59 | 17000 | 1.2823 |
| 1.2424 | 0.62 | 18000 | 1.2649 |
| 1.348 | 0.66 | 19000 | 1.2134 |
| 1.1797 | 0.69 | 20000 | 1.2030 |
| 1.2116 | 0.73 | 21000 | 1.2033 |
| 1.1702 | 0.76 | 22000 | 1.1453 |
| 1.1027 | 0.8 | 23000 | 1.1597 |
| 1.1932 | 0.83 | 24000 | 1.1506 |
| 1.3669 | 0.87 | 25000 | 1.1428 |
| 1.0705 | 0.9 | 26000 | 1.1239 |
| 1.1474 | 0.94 | 27000 | 1.1239 |
| 1.0879 | 0.97 | 28000 | 1.1168 |
| 0.9879 | 1.01 | 29000 | 1.0848 |
| 0.9928 | 1.04 | 30000 | 1.0953 |
| 0.9095 | 1.08 | 31000 | 1.1043 |
| 1.0423 | 1.11 | 32000 | 1.0823 |
| 0.9478 | 1.15 | 33000 | 1.0840 |
| 0.9979 | 1.18 | 34000 | 1.0387 |
| 1.0316 | 1.22 | 35000 | 1.0282 |
| 1.0531 | 1.25 | 36000 | 1.0369 |
| 0.919 | 1.28 | 37000 | 1.0398 |
| 1.0596 | 1.32 | 38000 | 1.0410 |
| 0.9076 | 1.35 | 39000 | 0.9889 |
| 0.9698 | 1.39 | 40000 | 1.0004 |
| 0.9633 | 1.42 | 41000 | 1.0038 |
| 0.9622 | 1.46 | 42000 | 0.9933 |
| 0.9809 | 1.49 | 43000 | 0.9805 |
| 0.9496 | 1.53 | 44000 | 0.9755 |
| 0.9435 | 1.56 | 45000 | 0.9759 |
| 0.9337 | 1.6 | 46000 | 0.9615 |
| 0.8844 | 1.63 | 47000 | 0.9524 |
| 0.9039 | 1.67 | 48000 | 0.9567 |
| 0.905 | 1.7 | 49000 | 0.9430 |
| 0.9491 | 1.74 | 50000 | 0.9205 |
| 0.8464 | 1.77 | 51000 | 0.9109 |
| 0.9384 | 1.81 | 52000 | 0.9056 |
| 0.8121 | 1.84 | 53000 | 0.8969 |
| 0.8381 | 1.88 | 54000 | 0.8869 |
| 0.8171 | 1.91 | 55000 | 0.8946 |
| 0.9024 | 1.94 | 56000 | 0.8993 |
| 0.84 | 1.98 | 57000 | 0.9011 |
| 0.6702 | 2.01 | 58000 | 0.8876 |
| 0.6278 | 2.05 | 59000 | 0.8716 |
| 0.6876 | 2.08 | 60000 | 0.8546 |
| 0.6754 | 2.12 | 61000 | 0.8639 |
| 0.6479 | 2.15 | 62000 | 0.8425 |
| 0.698 | 2.19 | 63000 | 0.8533 |
| 0.708 | 2.22 | 64000 | 0.8407 |
| 0.7021 | 2.26 | 65000 | 0.8160 |
| 0.5881 | 2.29 | 66000 | 0.8251 |
| 0.6181 | 2.33 | 67000 | 0.8205 |
| 0.6789 | 2.36 | 68000 | 0.8066 |
| 0.6452 | 2.4 | 69000 | 0.8037 |
| 0.6483 | 2.43 | 70000 | 0.7915 |
| 0.5868 | 2.47 | 71000 | 0.7864 |
| 0.6257 | 2.5 | 72000 | 0.7895 |
| 0.6593 | 2.53 | 73000 | 0.7718 |
| 0.5957 | 2.57 | 74000 | 0.7490 |
| 0.6351 | 2.6 | 75000 | 0.7481 |
| 0.699 | 2.64 | 76000 | 0.7628 |
| 0.566 | 2.67 | 77000 | 0.7590 |
| 0.5892 | 2.71 | 78000 | 0.7628 |
| 0.6052 | 2.74 | 79000 | 0.7633 |
| 0.6494 | 2.78 | 80000 | 0.7588 |
| 0.5917 | 2.81 | 81000 | 0.7118 |
| 0.508 | 2.85 | 82000 | 0.6857 |
| 0.523 | 2.88 | 83000 | 0.6738 |
| 0.4894 | 2.92 | 84000 | 0.6713 |
| 0.5096 | 2.95 | 85000 | 0.6625 |
| 0.352 | 2.99 | 86000 | 0.6802 |
| 0.3927 | 3.02 | 87000 | 0.6606 |
| 0.3468 | 3.06 | 88000 | 0.6546 |
| 0.3368 | 3.09 | 89000 | 0.6520 |
| 0.352 | 3.12 | 90000 | 0.6495 |
| 0.3613 | 3.16 | 91000 | 0.6324 |
| 0.3501 | 3.19 | 92000 | 0.6227 |
| 0.3269 | 3.23 | 93000 | 0.6091 |
| 0.3583 | 3.26 | 94000 | 0.6153 |
| 0.3278 | 3.3 | 95000 | 0.6178 |
| 0.3216 | 3.33 | 96000 | 0.6208 |
| 0.3383 | 3.37 | 97000 | 0.6195 |
| 0.3326 | 3.4 | 98000 | 0.6088 |
| 0.3081 | 3.44 | 99000 | 0.5956 |
| 0.3459 | 3.47 | 100000 | 0.5840 |
| 0.3139 | 3.51 | 101000 | 0.5712 |
| 0.3087 | 3.54 | 102000 | 0.5677 |
| 0.2798 | 3.58 | 103000 | 0.5566 |
| 0.3166 | 3.61 | 104000 | 0.5332 |
| 0.2981 | 3.65 | 105000 | 0.5333 |
| 0.3027 | 3.68 | 106000 | 0.5276 |
| 0.2815 | 3.72 | 107000 | 0.5024 |
| 0.2294 | 3.75 | 108000 | 0.5081 |
| 0.2452 | 3.78 | 109000 | 0.4824 |
| 0.2733 | 3.82 | 110000 | 0.4695 |
| 0.3001 | 3.85 | 111000 | 0.4627 |
| 0.2322 | 3.89 | 112000 | 0.4580 |
| 0.2362 | 3.92 | 113000 | 0.4402 |
| 0.2488 | 3.96 | 114000 | 0.4263 |
| 0.2449 | 3.99 | 115000 | 0.3999 |
| 0.1798 | 4.03 | 116000 | 0.4038 |
| 0.1956 | 4.06 | 117000 | 0.4037 |
| 0.1831 | 4.1 | 118000 | 0.4040 |
| 0.1802 | 4.13 | 119000 | 0.4039 |
| 0.1641 | 4.17 | 120000 | 0.4029 |
| 0.1769 | 4.2 | 121000 | 0.4016 |
| 0.1564 | 4.24 | 122000 | 0.4026 |
| 0.1552 | 4.27 | 123000 | 0.3988 |
| 0.1806 | 4.31 | 124000 | 0.3995 |
| 0.1783 | 4.34 | 125000 | 0.3995 |
| 0.1736 | 4.38 | 126000 | 0.3940 |
| 0.1657 | 4.41 | 127000 | 0.3913 |
| 0.1598 | 4.44 | 128000 | 0.3871 |
| 0.1599 | 4.48 | 129000 | 0.3831 |
| 0.1606 | 4.51 | 130000 | 0.3776 |
| 0.1639 | 4.55 | 131000 | 0.3754 |
| 0.1736 | 4.58 | 132000 | 0.3742 |
| 0.1653 | 4.62 | 133000 | 0.3703 |
| 0.1708 | 4.65 | 134000 | 0.3681 |
| 0.1729 | 4.69 | 135000 | 0.3674 |
| 0.1564 | 4.72 | 136000 | 0.3660 |
| 0.1734 | 4.76 | 137000 | 0.3641 |
| 0.163 | 4.79 | 138000 | 0.3632 |
| 0.1585 | 4.83 | 139000 | 0.3626 |
| 0.1603 | 4.86 | 140000 | 0.3619 |
| 0.1751 | 4.9 | 141000 | 0.3617 |
| 0.1622 | 4.93 | 142000 | 0.3617 |
| 0.161 | 4.97 | 143000 | 0.3617 |
| 0.1541 | 5.0 | 144000 | 0.3616 |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.0.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0