fix repo (#1)
- Delete yom.yaml (3b5734ad2d02e0e39b71b6097ce2cfe6e22a227e)
- Delete model_index.json (c566cff7cdc249ed483ebcd9ea4523bec6d09d69)
- Rename yoms70.safetensors to classic/yoms70.safetensors (845f39e826d361e5375259e56c86f49589f7089d)
- Rename yompastel65.safetensors to classic/yompastel65.safetensors (d342bcf9f8ed62956fef8105d45866524c11c621)
- captcha'd (0af059e88ba81e7f1ad2560b1696fcc36cf248ba)
- Rename yomga70.safetensors to classic/yomga.safetensors (082cdf89980e192dd968a554806365dfdcd59b86)
- this model was a mistake (f07628aaa6d20620663719eb238b9b4f105c0b16)
- Rename anithing-v2.safetensors to classic/anithing2.safetensors (bd8dd5e6364aaa8797c46c284a33b96f0f3064f9)
- Rename ymm35x.safetensors to classic/ymm35x.safetensors (0ac40ea995ce62d7c44ac133e8c3960e67c0fbee)
- Rename yom.safetensors to classic/yom.safetensors (bb1b117196603d4ba20591046ea002bf5b5a2d11)
- Rename anithing-inpainting.inpainting.safetensors to inpaint/anithing.inpainting.safetensors (b30e630a97f112a8d499022ec828d631158f372a)
- Rename yomga-inpainting.inpainting.safetensors to inpaint/yomga.inpainting.safetensors (b7c01f38d93b809b522a28e6a8f0be52910ba8f6)
- Rename awoooooo.safetensors to awoo/awoooooo.safetensors (42208b0c298c70512fe30aae8e3b04908b24390e)
- Rename awooooooo.safetensors to awoo/awooooooo.safetensors (22d2c34f0d4edbbbb9bf827cf370f375ab0102fe)
- Create readme.md (f8f51556244867ae91c9c6ee4df9289eae582638)
- Rename classic/readme.md to classic/README.md (1d90983619858124b1a6cb93124a125f06a02a73)
- Create README.md (519ae7df918699acb88702fc3d27bac36763f1a1)
- Update README.md (8b22f4909d3fddc557cddade5f58ae853e5019ca)
- Create README.md (db7c52427f28c54e71cfed7ea02aa378aaa7baeb)
- README.md +4 -13
- anithing.safetensors +0 -3
- awoo/README.md +8 -0
- awoooooo.safetensors → awoo/awoooooo.safetensors +0 -0
- awooooooo.safetensors → awoo/awooooooo.safetensors +0 -0
- classic/README.md +12 -0
- anithing-v2.safetensors → classic/anithing2.safetensors +0 -0
- ymm35x.safetensors → classic/ymm35x.safetensors +0 -0
- yom.safetensors → classic/yom.safetensors +0 -0
- yomga70.safetensors → classic/yomga.safetensors +0 -0
- yompastel45.safetensors → classic/yompastel45.safetensors +0 -0
- yompastel65.safetensors → classic/yompastel65.safetensors +0 -0
- yoms70.safetensors → classic/yoms70.safetensors +0 -0
- inpaint/README.md +5 -0
- anithing-inpainting.inpainting.safetensors → inpaint/anithing.inpainting.safetensors +0 -0
- yomga-inpainting.inpainting.safetensors → inpaint/yomga.inpainting.safetensors +0 -0
- model_index.json +0 -33
- yom.yaml +0 -70
README.md
````diff
@@ -2,29 +2,20 @@
 license: other
 language:
 - en
-library_name: diffusers
 pipeline_tag: text-to-image
 tags:
 - art
 ---
-some
+some merges and/or ggml conversions


-booru tags
+img: booru tags; use the `/awoo/` models preferably, as they're the best

-
-- `yom.safetensors` | ehhh
-- `yompastel45.safetensors` | extra color
-- `yompastel65.safetensors` | more color
-- `yomga70.safetensors` | pretty good
-- `awooooooo.safetensors` | probably the best one
-
-licensed under yodayno v2:
+all non-ggml models are licensed under yodayno v2:
 ```
 This license allows you to use the model, but only for non-commercial purposes. You cannot use the model or any part of it in a paid service or sell it.
 If you use the model on any platform, you must provide a link or reference to the original model. You must give credit to the licensor whenever you use the model.
 The licensor does not provide any warranty and is not liable for any damages caused by the use of the model.
 If you break any of the terms, this license will be terminated.
 This license is governed by the laws of the jurisdiction in which the licensor is located.
-```
-take that yodayo
+```
````
anithing.safetensors
```diff
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:54c057fcf75b03e6847a2b29c3240097d1675690a205441ead3ec80487311a40
-size 4265096720
```
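The deleted `anithing.safetensors` entry is a Git LFS pointer, not the weights themselves: three text lines naming the spec version, the SHA-256 object id, and the blob size. A minimal sketch of reading such a pointer, assuming only the three-line format shown in the diff (`parse_lfs_pointer` is a hypothetical helper, not part of any tool here):

```python
def parse_lfs_pointer(text):
    # Each pointer line is "key value"; split on the first space only,
    # since the value (e.g. the spec URL) may itself contain no spaces
    # but the oid value contains a ":" separating algorithm and digest.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

# Pointer content exactly as it appears in the diff above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:54c057fcf75b03e6847a2b29c3240097d1675690a205441ead3ec80487311a40
size 4265096720
"""
info = parse_lfs_pointer(POINTER)
```

The `+0 -0` renames elsewhere in this commit are cheap for the same reason: only these small pointer files move, while the multi-gigabyte blobs stay untouched in LFS storage.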
awoo/README.md
```diff
@@ -0,0 +1,8 @@
+# awoo! models
+
+these models are actually good; some style merges will be found in `/awoostyles/` whenever i feel like merging them
+
+- awoooooo.sfts
+  - base model for the other awoo models
+- awooooooo.sfts
+  - more color, more better, no baked lora/ti
```
classic/README.md
```diff
@@ -0,0 +1,12 @@
+# classic models
+
+these are some old models, i don't recommend using them
+
+- yomga.sfts
+  - one of the few good ones
+  - more colorful
+- yom.sfts
+  - original one
+  - more dried out
+
+the rest are not that good, apart from anithing probably
```
inpaint/README.md
```diff
@@ -0,0 +1,5 @@
+# inpaint models
+
+they're used to inpaint
+
+yup
```
@@ -1,33 +0,0 @@
|
|
1 |
-
{
|
2 |
-
"_class_name": "StableDiffusionPipeline",
|
3 |
-
"_diffusers_version": "0.11.1",
|
4 |
-
"feature_extractor": [
|
5 |
-
"transformers",
|
6 |
-
"CLIPImageProcessor"
|
7 |
-
],
|
8 |
-
"requires_safety_checker": true,
|
9 |
-
"safety_checker": [
|
10 |
-
"stable_diffusion",
|
11 |
-
"StableDiffusionSafetyChecker"
|
12 |
-
],
|
13 |
-
"scheduler": [
|
14 |
-
"diffusers",
|
15 |
-
"PNDMScheduler"
|
16 |
-
],
|
17 |
-
"text_encoder": [
|
18 |
-
"transformers",
|
19 |
-
"CLIPTextModel"
|
20 |
-
],
|
21 |
-
"tokenizer": [
|
22 |
-
"transformers",
|
23 |
-
"CLIPTokenizer"
|
24 |
-
],
|
25 |
-
"unet": [
|
26 |
-
"diffusers",
|
27 |
-
"UNet2DConditionModel"
|
28 |
-
],
|
29 |
-
"vae": [
|
30 |
-
"diffusers",
|
31 |
-
"AutoencoderKL"
|
32 |
-
]
|
33 |
-
}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
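The deleted `model_index.json` was the diffusers pipeline manifest: keys starting with `_` are metadata, and every list value names the library and class implementing one pipeline component. A minimal stdlib sketch of reading such an index (the JSON is the deleted file's content reproduced from the diff, with the arrays reflowed onto single lines; no diffusers install is assumed):

```python
import json

# Content of the deleted model_index.json, reproduced from the diff above.
INDEX_TEXT = """
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.11.1",
  "feature_extractor": ["transformers", "CLIPImageProcessor"],
  "requires_safety_checker": true,
  "safety_checker": ["stable_diffusion", "StableDiffusionSafetyChecker"],
  "scheduler": ["diffusers", "PNDMScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
"""

index = json.loads(INDEX_TEXT)
# List-valued entries are (library, class) pairs for each component.
components = {k: tuple(v) for k, v in index.items() if isinstance(v, list)}
```

With the manifest gone and only single-file `.safetensors` checkpoints left, the repo no longer loads as a diffusers folder-layout pipeline, which is presumably why the file was deleted along with `yom.yaml`.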
yom.yaml
```diff
@@ -1,70 +0,0 @@
-model:
-  base_learning_rate: 1.0e-04
-  target: ldm.models.diffusion.ddpm.LatentDiffusion
-  params:
-    linear_start: 0.00085
-    linear_end: 0.0120
-    num_timesteps_cond: 1
-    log_every_t: 200
-    timesteps: 1000
-    first_stage_key: "image"
-    cond_stage_key: "caption"
-    image_size: 64
-    channels: 4
-    cond_stage_trainable: false # Note: different from the one we trained before
-    conditioning_key: crossattn
-    monitor: val/loss_simple_ema
-    scale_factor: 0.18215
-    use_ema: False
-
-    scheduler_config: # 10000 warmup steps
-      target: ldm.lr_scheduler.LambdaLinearScheduler
-      params:
-        warm_up_steps: [ 10000 ]
-        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
-        f_start: [ 1.e-6 ]
-        f_max: [ 1. ]
-        f_min: [ 1. ]
-
-    unet_config:
-      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
-      params:
-        image_size: 32 # unused
-        in_channels: 4
-        out_channels: 4
-        model_channels: 320
-        attention_resolutions: [ 4, 2, 1 ]
-        num_res_blocks: 2
-        channel_mult: [ 1, 2, 4, 4 ]
-        num_heads: 8
-        use_spatial_transformer: True
-        transformer_depth: 1
-        context_dim: 768
-        use_checkpoint: True
-        legacy: False
-
-    first_stage_config:
-      target: ldm.models.autoencoder.AutoencoderKL
-      params:
-        embed_dim: 4
-        monitor: val/rec_loss
-        ddconfig:
-          double_z: true
-          z_channels: 4
-          resolution: 256
-          in_channels: 3
-          out_ch: 3
-          ch: 128
-          ch_mult:
-          - 1
-          - 2
-          - 4
-          - 4
-          num_res_blocks: 2
-          attn_resolutions: [ ]
-          dropout: 0.0
-        lossconfig:
-          target: torch.nn.Identity
-
-    cond_stage_config:
-      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
```
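Every nested block in the deleted `yom.yaml` follows the same convention: `target` is a dotted import path and `params` are the constructor keyword arguments. A minimal sketch of how such a block resolves to a live object, assuming only that convention (the `instantiate_from_config` name mirrors the helper in the ldm codebase; the `fractions.Fraction` target is a stdlib stand-in for classes like `LatentDiffusion`, which are not importable here):

```python
import importlib

def instantiate_from_config(config):
    # Split "pkg.module.Class" into module path and class name, import the
    # module, and call the class with the params mapping as kwargs.
    module_path, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**config.get("params", {}))

# Stand-in target; in yom.yaml this would be e.g.
# {"target": "ldm.models.diffusion.ddpm.LatentDiffusion", "params": {...}}
obj = instantiate_from_config(
    {"target": "fractions.Fraction",
     "params": {"numerator": 3, "denominator": 4}}
)
```

This is why deleting the yaml matters for classic LDM loaders: without the config there is nothing telling them which classes to build around the checkpoint, so the remaining files are only usable by tools that infer the architecture themselves.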