zuleo committed
Commit 12a6ebd
1 Parent(s): 02a0ea5

initial commit for model

.gitattributes CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,141 @@
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- final fantasy
---

# 🎁 effeffIX Concept Diffusion

A fine-tuned Stable Diffusion model, based on `F222` and trained on concept art from a high-quality role-playing game.

![Detailed Samples](https://huggingface.co/zuleo/effeffIX-concept-diffusion/resolve/main/booth5.png)

## Model Usage

This model was trained on multiple concepts. Use the tokens below:

| Token                | Description                                          |
|----------------------|------------------------------------------------------|
| effeff9 woman        | Uses concepts trained on female designs.             |
| effeff9 man          | Uses concepts trained on male designs.               |
| effeff9 creature     | Uses concepts trained on different creature designs. |
| effeff9 architecture | Uses concepts trained on architecture design.        |

---

### Examples: effeff9 woman

![Detailed Samples](https://huggingface.co/zuleo/effeffIX-concept-diffusion/resolve/main/booth1.png)

### Examples: effeff9 man

![Detailed Samples](https://huggingface.co/zuleo/effeffIX-concept-diffusion/resolve/main/booth2.png)

### Examples: effeff9 creature

![Detailed Samples](https://huggingface.co/zuleo/effeffIX-concept-diffusion/resolve/main/booth3.png)

### Examples: effeff9 architecture

![Detailed Samples](https://huggingface.co/zuleo/effeffIX-concept-diffusion/resolve/main/booth4.png)

---

☕ If you enjoy this model, buy us a coffee [![Buy a coffee](https://badgen.net/badge/icon/kofi?icon=kofi&label=buy%20us%20a%20coffee)](https://ko-fi.com/3eegames)

---

## 🧾 Prompt examples:

**The amazing Aubrey Plaza**

```Wide shot of a effeff9 woman warrior aubrey plaza with shining armor descending from heaven, lifelike, (highly detailed eyes), super highly detailed face, professional digital painting, artstation, concept art, Unreal Engine 5, HD quality, 8k resolution, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP```

[Negative prompt](#❎-negative-prompt-template)

_Steps: 82, Sampler: DPM++ 2M, CFG scale: 8.5, Seed: 695884347, Size: 512x512, Model hash: b7ba5b22_

---

**The Wise Giraffe**

```portrait of a effeff9 creature Giraffe, artstation, concept art, Unreal Engine 5, HD quality, 4k resolution, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP```

[Negative prompt](#❎-negative-prompt-template)

_Steps: 90, Sampler: DPM++ 2M Karras, CFG scale: 8.5, Seed: 2821955656, Size: 512x512, Model hash: b7ba5b22_

---

**The drag of the kingdom**

```Wide shot of a grand kingdom, lifelike, super highly detailed, professional digital painting, artstation, concept art, Unreal Engine 5, HD quality, 8k resolution, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP, (effeff9 architecture)```

[Negative prompt](#❎-negative-prompt-template)

_Steps: 90, Sampler: DDIM, CFG scale: 13.5, Seed: 2625868484, Size: 512x512, Model hash: b7ba5b22_

---

**The steamy momoa**

```Perfectly-centered portrait of a effeff9 MAN jason momoa with shining scales descending from heaven, concept art, ART STATION, BEAUTIFUL PERFECT detailed MANGA EYES, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP```

[Negative prompt](#❎-negative-prompt-template)

_Steps: 56, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 3257609354, Size: 512x512, Model hash: b7ba5b22_

---

## ❎ Negative Prompt Template

This model has a distinctive style in which characters typically have larger, exaggerated sleeves and hands. To suppress this style, add further negative prompt terms that target the hand style.

All images were rendered with the negative prompt below:

```Negative prompt: ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), ((extra limbs)), cloned face, (((disfigured))), out of frame, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck)))```

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

Export the model (a minimal ONNX sketch follows this list):
- [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx)
- [MPS](https://huggingface.co/docs/diffusers/optimization/mps)
- [FLAX/JAX](https://huggingface.co/blog/stable_diffusion_jax)

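One possible ONNX route is sketched below. This is an assumption about tooling rather than part of the original card: it goes through the `optimum` library's ONNX Runtime pipeline and presumes `optimum[onnxruntime]` is installed; the linked ONNX guide above remains the authoritative reference.

```python
# Hypothetical sketch: exporting this model to ONNX via optimum (assumed tooling,
# not prescribed by the model card). Requires: pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "zuleo/effeffIX-concept-diffusion"

# export=True converts the PyTorch weights to ONNX on the fly and returns an ONNX Runtime pipeline
onnx_pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
onnx_pipe.save_pretrained("./effeffIX-onnx")  # save the exported pipeline for reuse

image = onnx_pipe("effeff9 architecture, grand kingdom, concept art").images[0]
image.save("./onnx_sample.png")
```

The exported pipeline can then be reloaded from the saved directory with the same class for CPU-friendly inference.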

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "zuleo/effeffIX-concept-diffusion"

# Load the fine-tuned pipeline in half precision and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "effeff9 woman aubrey plaza"
image = pipe(prompt).images[0]

image.save("./i_luv_aubrey_plaza.png")
```
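The sampler, CFG scale, steps, and seeds in the prompt examples above come from a webui. A rough diffusers equivalent is sketched below as an assumption, not a recipe from the model card: scheduler names and random number generation differ between tools, so the listed seeds will not reproduce the sample images exactly. It also assumes a recent diffusers release in which `DPMSolverMultistepScheduler` supports Karras sigmas.

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
import torch

model_id = "zuleo/effeffIX-concept-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Roughly approximate the "DPM++ 2M Karras" sampler used in the prompt examples
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = "portrait of a effeff9 creature Giraffe, artstation, concept art"
# Any terms from the negative prompt template above can be passed here
negative_prompt = "((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), extra fingers, ((poorly drawn face))"

generator = torch.Generator("cuda").manual_seed(2821955656)  # fixed seed for repeatability

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=90,
    guidance_scale=8.5,
    generator=generator,
).images[0]
image.save("./wise_giraffe.png")
```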

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

- You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
booth1.png ADDED

Git LFS Details

  • SHA256: 3d60eeba1878cf7462324294c7f2f94eb1020f1a71c77fd3da156a17adf5d23f
  • Pointer size: 132 Bytes
  • Size of remote file: 2.95 MB
booth2.png ADDED

Git LFS Details

  • SHA256: ca255246c14b64b36fece8c238bf5bca69fbd4f80ecd6681d8834bb391a7eb46
  • Pointer size: 132 Bytes
  • Size of remote file: 2.94 MB
booth3.png ADDED

Git LFS Details

  • SHA256: 11b9ed2e5ec94c3302f585612ae8cdb05f358c94825d81acd2734926271006e4
  • Pointer size: 132 Bytes
  • Size of remote file: 2.75 MB
booth4.png ADDED

Git LFS Details

  • SHA256: 849794c41453b8b202017b9d9eee5d59e7835ae768d38629031252c69468a0f9
  • Pointer size: 132 Bytes
  • Size of remote file: 3.04 MB
booth5.png ADDED

Git LFS Details

  • SHA256: 94cde286fb5459879ace8a7c0039e18b1173f71248bada036e9e749a3983d830
  • Pointer size: 132 Bytes
  • Size of remote file: 2.83 MB
effeffIX.ckpt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:293f21fb7fccd1a96fbcc69e6557117b30a0323ca5bfebc45988f81f1111bfd7
size 4098020782
model_index.json ADDED
@@ -0,0 +1,33 @@
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.9.0",
  "feature_extractor": [
    null,
    null
  ],
  "requires_safety_checker": null,
  "safety_checker": [
    null,
    null
  ],
  "scheduler": [
    "diffusers",
    "DDIMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
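For context, `model_index.json` records which library and class diffusers should use for each pipeline component. The components can also be loaded individually from their subfolders; a minimal sketch assuming the layout added in this commit:

```python
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import UNet2DConditionModel, AutoencoderKL

model_id = "zuleo/effeffIX-concept-diffusion"

# Each subfolder corresponds to an entry in model_index.json
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
```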
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,13 @@
{
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.9.0",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": false,
  "steps_offset": 1,
  "trained_betas": null
}
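The pipeline's default scheduler is DDIM with the settings above. It can be loaded or swapped independently of the rest of the pipeline; a minimal sketch (any compatible diffusers scheduler could be substituted):

```python
from diffusers import DDIMScheduler, StableDiffusionPipeline

model_id = "zuleo/effeffIX-concept-diffusion"

# Load the DDIM scheduler defined in scheduler/scheduler_config.json
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Build the pipeline with that scheduler attached explicitly
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler)
```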
text_encoder/config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "C:\\Users\\ryguy\\Documents\\github\\stable-diffusion-webui\\models\\dreambooth\\ff93\\working",
  "architectures": [
    "CLIPTextModel"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "dropout": 0.0,
  "eos_token_id": 2,
  "hidden_act": "quick_gelu",
  "hidden_size": 768,
  "initializer_factor": 1.0,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 77,
  "model_type": "clip_text_model",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "projection_dim": 768,
  "torch_dtype": "float32",
  "transformers_version": "4.21.0",
  "vocab_size": 49408
}
text_encoder/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:332bda1a46f1236ef9e323e8fcf9d5d0191588e28a442a3cc03ffd6ee65d2323
size 492308087
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|endoftext|>",
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
{
  "add_prefix_space": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "do_lower_case": true,
  "eos_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "errors": "replace",
  "model_max_length": 77,
  "name_or_path": "C:\\Users\\ryguy\\Documents\\github\\stable-diffusion-webui\\models\\dreambooth\\ff93\\working\\tokenizer",
  "pad_token": "<|endoftext|>",
  "special_tokens_map_file": "./special_tokens_map.json",
  "tokenizer_class": "CLIPTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
unet/config.json ADDED
@@ -0,0 +1,41 @@
{
  "_class_name": "UNet2DConditionModel",
  "_diffusers_version": "0.9.0",
  "_name_or_path": "C:\\Users\\ryguy\\Documents\\github\\stable-diffusion-webui\\models\\dreambooth\\ff93\\working",
  "act_fn": "silu",
  "attention_head_dim": 8,
  "block_out_channels": [
    320,
    640,
    1280,
    1280
  ],
  "center_input_sample": false,
  "cross_attention_dim": 768,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "dual_cross_attention": false,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "in_channels": 4,
  "layers_per_block": 2,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "out_channels": 4,
  "sample_size": 64,
  "up_block_types": [
    "UpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D"
  ],
  "use_linear_projection": false
}
unet/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06893e409d2cfc79fa8084780decb09243f9c37e93c236478e11ae298209d82a
size 3438364325
vae/config.json ADDED
@@ -0,0 +1,30 @@
{
  "_class_name": "AutoencoderKL",
  "_diffusers_version": "0.9.0",
  "_name_or_path": "C:\\Users\\ryguy\\Documents\\github\\stable-diffusion-webui\\models\\dreambooth\\ff93\\working",
  "act_fn": "silu",
  "block_out_channels": [
    128,
    256,
    512,
    512
  ],
  "down_block_types": [
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D"
  ],
  "in_channels": 3,
  "latent_channels": 4,
  "layers_per_block": 2,
  "norm_num_groups": 32,
  "out_channels": 3,
  "sample_size": 256,
  "up_block_types": [
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D"
  ]
}
vae/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb8bfec1946af8d97bea496018e01cd1b1c290dcb72e2bdf41e044064709ebb9
size 167402961