Guizmus committed
Commit 9edb1f0 (1 parent: 130e6da)

Upload 17 files (#2)


- Upload 17 files (d44318cd1e1ceac3f29c79d92d90f8b915b585c6)

.gitattributes CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ showcase.jpg filter=lfs diff=lfs merge=lfs -text
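Note (not part of the commit): files matching these `.gitattributes` patterns are stored as Git LFS pointers, and the Hugging Face Hub resolves them to the real content on download. A minimal sketch with `huggingface_hub`, assuming the repo id `Guizmus/SDArt_underwaterworlds` used elsewhere in this commit:

```python
from huggingface_hub import hf_hub_download

# Downloads the actual JPEG, not the LFS pointer text tracked above.
path = hf_hub_download(repo_id="Guizmus/SDArt_underwaterworlds", filename="showcase.jpg")
print(path)  # local cache path of showcase.jpg
```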
README.md CHANGED
@@ -2,7 +2,7 @@
  language:
  - en
  license: creativeml-openrail-m
- thumbnail: "https://huggingface.co/Guizmus/PoW_UNDERWATER-WORLDS/resolve/main/showcase.jpg"
+ thumbnail: "https://huggingface.co/Guizmus/SDArt_underwaterworlds/resolve/main/showcase.jpg"
  tags:
  - stable-diffusion
  - text-to-image
@@ -10,69 +10,85 @@ tags:
  ---
  # PoW : UNDERWATER WORLDS

- ![Showcase](https://huggingface.co/Guizmus/PoW_UNDERWATER-WORLDS/resolve/main/showcase.jpg)
+ ![Showcase](https://huggingface.co/Guizmus/SDArt_underwaterworlds/resolve/main/showcase.jpg)

  ## Theme

+ Deep in the heart of the ocean, there was a realm of magic and mystery, where the creatures were of an unearthly nature. Lyra, a young and courageous mermaid, was determined to explore every space of this ethereal world. With each stroke of her powerful tail, she descended deeper into the abyssal unknown, encountering peculiar and breathtaking creatures - towering sea serpents with magnificent scales, colossal jellyfish that radiated with a vibrant light, and shoals of resplendent fish that swam in mesmerizing patterns.
+
+ As Lyra delved deeper, an expansive gateway of coral and seaweed emerged before her. Suddenly, she was surrounded by a swarm of bioluminescent seahorses that darted around her, their neon lights illuminating the shadows. Lyra couldn't help but smile at the sight, marveling at their beauty. After a moment, she composed herself and looked around. She was in disbelief at what lay before her - an underwater city of pure enchantment and intrigue.
+
+ The buildings were made of elegant stonework, adorned with an array of carvings depicting creatures beyond her wildest dreams. The walls were draped in a soft, radiant moss that illuminated the entire city like a cloudless starry night sky. The interplay of light and shadow danced in a hypnotic rhythm that left Lyra in a state of awe. She felt as if she had discovered a world that was both ancient and new, an underwater world of unexplored possibilities.
+
+ ***Bring the magic of the underwater worlds to life***
+
+ How would you capture the wonder and fluidity of the ocean depths?
+
+ What kind of unique underwater creatures or plants would Lyra have seen?
+
+ If you could explore one specific area of Lyra's world, what would it be? Remember to stay hydrated!
+
  ## Model description

  This is a model related to the "Picture of the Week" contest on the Stable Diffusion Discord.

- I try to make a model out of all the submission for people to continue enjoy the theme after the even, and see a little of their designs in other people's creations. The token stays "PoW Style" and I balance the learning on the low side, so that it doesn't just replicate creations.
+ I try to make a model out of all the submissions so that people can continue enjoying the theme after the event, and see a little of their designs in other people's creations. The token stays "SDArt", and I balance the learning on the low side so that it doesn't just replicate the original creations.

- The total dataset is made of 38 pictures. It was trained on [Stable diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5). I used [EveryDream](https://github.com/victorchall/EveryDream2trainer) to do the training, 100 total repeat per picture over 50 epochs. The pictures were tagged using the token "PoW Style", and the username on discord followed by the userID. The dataset is provided below.
+ The total dataset is made of 38 pictures. It was trained on [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5). I used [EveryDream](https://github.com/victorchall/EveryDream2trainer) for the training, with 100 total repeats per picture. The pictures were tagged with the token "SDArt" plus an arbitrary token I chose. The dataset is provided below, as well as a list of usernames and their corresponding tokens.

  The recommended sampling is k_Euler_a or DPM++ 2M Karras on 20 steps, CFG Scale 7.5.

  ## Trained tokens

- * PoW Style
- * sometimes8916
- * 526christian2526
- * Akumetsu9719982
- * AndreTheApache9587
- * bitspirit31653
- * bWm_nubby6416
- * crazykazoo2431
- * Eppinette-Chi6220
- * espasmo9486
- * flowwolf4607
- * Guizmus7459
- * H20_Dancing8979
- * Horvallis7915
- * Jeremy6194
- * JRW19948464
- * Junglerally3955
- * Max Headroom6734
- * Meezy1963
- * millennium_gun6377
- * Munkyfoot7944
- * Nerfgun37508
- * NicoGJ2781
- * NoOfTheBeast8209
- * Omnia2931
- * owleye1290
- * Phaff1970
- * piscabo8649
- * Ras6722
- * ResidentChiefNZ6989
- * Rhapsody8685
- * Satyam_SSJ107387
- * tazi2574
- * Trash--Panda6213
- * valanya6869
- * vcm077281
- * Vil0404
- * wpatzz5836
- * xThIsIsBoToXx8765
+ * SDArt
+ * appt
+ * ohwx
+ * asr
+ * aten
+ * fcu
+ * chor
+ * cpec
+ * pfa
+ * kprc
+ * kuro
+ * asot
+ * elis
+ * sill
+ * exe
+ * bsp
+ * grl
+ * hap
+ * byes
+ * lpg
+ * yler
+ * avel
+ * vaw
+ * zaki
+ * ohn
+ * guin
+ * vini
+ * pz
+ * crit
+ * shma
+ * doa
+ * sks
+ * szn
+ * phol
+ * utm
+ * uy
+ * dds
+ * pte
+ * oxi
+ * ynna

  ## Download links

- [SafeTensors](https://huggingface.co/Guizmus/PoW_UNDERWATER-WORLDS/resolve/main/PoWStyle-UnderwaterWorlds.safetensors)
+ [SafeTensors](https://huggingface.co/Guizmus/SDArt_underwaterworlds/resolve/main/SDArt_underwaterworlds.safetensors)

- [CKPT](https://huggingface.co/Guizmus/PoW_UNDERWATER-WORLDS/resolve/main/PoWStyle-UnderwaterWorlds.ckpt)
+ [CKPT](https://huggingface.co/Guizmus/SDArt_underwaterworlds/resolve/main/SDArt_underwaterworlds.ckpt)

- [Dataset](https://huggingface.co/Guizmus/PoW_UNDERWATER-WORLDS/resolve/main/PoWStyle-UnderwaterWorlds.zip)
+ [Dataset](https://huggingface.co/Guizmus/SDArt_underwaterworlds/resolve/main/dataset.zip)

  ## 🧨 Diffusers

@@ -85,12 +101,12 @@ You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/op
  from diffusers import StableDiffusionPipeline
  import torch

- model_id = "Guizmus/PoW_UNDERWATER-WORLDS"
+ model_id = "Guizmus/SDArt_underwaterworlds"
  pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
  pipe = pipe.to("cuda")

- prompt = "PoW Style AndreTheApache9587"
+ prompt = "SDArt oxi"
  image = pipe(prompt).images[0]

- image.save("./PoWStyle.png")
+ image.save("./SDArt.png")
  ```
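For reference (an illustrative sketch, not part of the commit): the sampler settings recommended in the card above (k_Euler_a or DPM++ 2M Karras, 20 steps, CFG Scale 7.5) map onto 🧨 Diffusers roughly as follows; the DPM++ 2M Karras variant assumes a diffusers release recent enough to expose `use_karras_sigmas`.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

model_id = "Guizmus/SDArt_underwaterworlds"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# k_Euler_a equivalent:
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# DPM++ 2M Karras equivalent (newer diffusers versions only):
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(
#     pipe.scheduler.config, use_karras_sigmas=True
# )

prompt = "SDArt oxi"
image = pipe(prompt, num_inference_steps=20, guidance_scale=7.5).images[0]
image.save("./SDArt.png")
```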
SDArt_underwaterworlds.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9727c797d087ca58b984004b0757ff7d82efc525393b4ce7c556f72824e3388
+ size 2132856622
SDArt_underwaterworlds.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b29ea9149b1356b13e92ad3623d90c396c573e5f4e418eb69b10fbdcc99cd53
+ size 2132625431
dataset.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93d311ae15a67d41991032fa72ff55f0ee4201e81cde5e43725141a2ffa4e011
+ size 6134987
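The standalone `.safetensors` and `.ckpt` checkpoints added above sit alongside the Diffusers-format weights in this repo. As a sketch (not part of the commit), recent diffusers versions can load such a single-file checkpoint directly, assuming it has been downloaded locally first:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local path to the checkpoint downloaded from the links above.
ckpt_path = "./SDArt_underwaterworlds.safetensors"

pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("SDArt oxi", num_inference_steps=20, guidance_scale=7.5).images[0]
image.save("./SDArt_from_checkpoint.png")
```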
scheduler/scheduler_config.json CHANGED
@@ -9,6 +9,7 @@
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": false,
+ "skip_prk_steps": true,
  "steps_offset": 1,
  "trained_betas": null,
  "variance_type": "fixed_small"
showcase.jpg CHANGED

Git LFS Details

  • SHA256: af1404551410ae92ffc472c37bbb7174a0509001f69b88abc6e657961d26852c
  • Pointer size: 132 Bytes
  • Size of remote file: 1.96 MB
text_encoder/config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "ckpt_cache\\sd_v1-5_vae",
+ "_name_or_path": "F:\\AI\\Data\\Diffusers\\stable-diffusion-v1-5",
  "architectures": [
  "CLIPTextModel"
  ],
text_encoder/pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0bf3f02f79646a54224370f98abb2b2c1164dbc8e5095d3a5cd5db55acd042ba
+ oid sha256:a35d7b689be4334400a4c13e4ae03370deee825ba9e6829624a29920b8352b2a
  size 492308087
tokenizer/tokenizer_config.json CHANGED
@@ -19,7 +19,7 @@
  },
  "errors": "replace",
  "model_max_length": 77,
- "name_or_path": "ckpt_cache\\sd_v1-5_vae",
+ "name_or_path": "F:\\AI\\Data\\Diffusers\\stable-diffusion-v1-5",
  "pad_token": "<|endoftext|>",
  "special_tokens_map_file": "./special_tokens_map.json",
  "tokenizer_class": "CLIPTokenizer",
unet/config.json CHANGED
@@ -1,7 +1,7 @@
  {
  "_class_name": "UNet2DConditionModel",
  "_diffusers_version": "0.13.0",
- "_name_or_path": "ckpt_cache\\sd_v1-5_vae",
+ "_name_or_path": "F:\\AI\\Data\\Diffusers\\stable-diffusion-v1-5",
  "act_fn": "silu",
  "attention_head_dim": 8,
  "block_out_channels": [
unet/diffusion_pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:61392260bb17e441ab7c42ead4814697094c21d2019b033c13eb098382f834a1
+ oid sha256:ec71ff83af3888bd4ca0e1e85920da206458131ab0761dbbf061795e86a04868
  size 3438364325
vae/config.json CHANGED
@@ -1,7 +1,7 @@
  {
  "_class_name": "AutoencoderKL",
  "_diffusers_version": "0.13.0",
- "_name_or_path": "ckpt_cache\\sd_v1-5_vae",
+ "_name_or_path": "F:\\AI\\Data\\Diffusers\\stable-diffusion-v1-5",
  "act_fn": "silu",
  "block_out_channels": [
  128,
vae/diffusion_pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:52fd6d497c6cc26f44a22cd27d905391a2aaadcecf89007b5b911198c4a50a93
- size 334710673
+ oid sha256:eb128b1f37e0c381c440128b217d29613b3e08b9e4ea7f20466424145ba538b0
+ size 167402961
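The new VAE weights are roughly half the size of the old ones (about 167 MB vs 335 MB), which is consistent with half-precision weights being exported. A sketch (not part of the commit) of loading the VAE from its subfolder and passing it to the pipeline explicitly:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

model_id = "Guizmus/SDArt_underwaterworlds"

# Load just the autoencoder shipped in vae/ ...
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float16)

# ... and hand it to the pipeline instead of letting from_pretrained pick it up implicitly.
pipe = StableDiffusionPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```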