---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- diffusers
- text-to-image
---

# SemiRealMix

The result of many merges aimed at making semi-realistic human images.

I use the following settings to get good results:

#### Prompt:

delicate, masterpiece, best shadow, (1 girl:1.3), (korean girl:1.2), (from side:1.2), (from below:0.5), (photorealistic:1.5), extremely detailed skin, studio, beige background, warm soft light, low contrast, head tilt

#### Negative prompt:

(worst quality, low quality:1.4), nsfw, nude, (loli, child, infant, baby:1.5), jewelry, (hard light:1.5), back light, spot light, high contrast, (eyelid:1.3), outdoor, monochrome


#### Settings:

- Sampler: DPM++ SDE Karras
- CFG scale: 7
- Steps: 20
- Size: 512x768
- Denoising strength: 0.5
- Hires upscale: 2
- Hires upscaler: R-ESRGAN 4x+ Anime6B
- Eta: 0.2
- Clip skip: 2
- Base model: SD 1.5
- VAE: vae-ft-mse-840000-ema-pruned
- xformers: enabled

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), run it on Apple Silicon via [MPS](https://huggingface.co/docs/diffusers/optimization/mps), or use it with FLAX/JAX.
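
For example, on Apple Silicon the pipeline only needs to be moved to the `mps` device, and an ONNX export can go through 🤗 Optimum. The following is a minimal sketch, assuming the `optimum[onnxruntime]` extra is installed:

```python
from diffusers import StableDiffusionPipeline

# MPS (Apple Silicon): load as usual, then move the pipeline to the "mps" device.
pipe = StableDiffusionPipeline.from_pretrained("robotjung/SemiRealMix")
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # recommended on machines with limited memory

# ONNX: export on the fly with Optimum's ONNX Runtime pipeline
# (assumes `pip install optimum[onnxruntime]`).
from optimum.onnxruntime import ORTStableDiffusionPipeline

onnx_pipe = ORTStableDiffusionPipeline.from_pretrained("robotjung/SemiRealMix", export=True)
image = onnx_pipe("1girl").images[0]
```

On a CUDA GPU, basic usage looks like this: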

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "robotjung/SemiRealMix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "1girl"
image = pipe(prompt).images[0]

image.save("./output.png")
```
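
The sampler, VAE, CFG scale, step count, resolution, and clip skip listed above can be approximated in diffusers roughly as follows. This is a sketch rather than an exact reproduction: it assumes a recent diffusers release (for `DPMSolverSDEScheduler` and the `clip_skip` call argument) plus the `torchsde` and `xformers` packages, and it leaves out the web-UI-only hires-fix and Eta settings.

```python
import torch
from diffusers import AutoencoderKL, DPMSolverSDEScheduler, StableDiffusionPipeline

model_id = "robotjung/SemiRealMix"

# vae-ft-mse-840000-ema-pruned is published on the Hub as stabilityai/sd-vae-ft-mse
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.float16)

# DPM++ SDE Karras (needs the `torchsde` package)
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # only if xformers is installed

prompt = (
    "delicate, masterpiece, best shadow, (1 girl:1.3), (korean girl:1.2), "
    "(from side:1.2), (from below:0.5), (photorealistic:1.5), extremely detailed skin, "
    "studio, beige background, warm soft light, low contrast, head tilt"
)
negative_prompt = (
    "(worst quality, low quality:1.4), nsfw, nude, (loli, child, infant, baby:1.5), "
    "jewelry, (hard light:1.5), back light, spot light, high contrast, (eyelid:1.3), "
    "outdoor, monochrome"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,  # Steps: 20
    guidance_scale=7,        # CFG Scale: 7
    width=512,
    height=768,              # Size: 512x768
    clip_skip=2,             # Clip skip: 2 (drop this argument on older diffusers versions)
).images[0]

image.save("./output.png")
```

Note that diffusers does not interpret the web UI's `(token:weight)` emphasis syntax on its own; the weights above are passed to CLIP as plain text, so results will not match the web UI exactly unless you use a prompt-weighting helper such as `compel`.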

## Examples

Here are some examples of images generated using this model:

![Example-1](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-1.png)

![Example-2](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-2.png)

![Example-3](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-3.png)

![Example-4](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-4.png)

![Example-5](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-5.png)

![Example-6](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-6.png)

![Example-7](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-7.png)

![Example-8](https://huggingface.co/robotjung/SemiRealMix/resolve/main/example-8.png)