JCTN committed
Commit 3e389a1
Parent: bce54be

Upload 2 files

Files changed (2)
  1. README.md +66 -0
  2. model_index.json +41 -0
README.md ADDED
---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- safetensors
- stable-diffusion
- sdxl
- fluetnly-xl
- fluently
- trained
inference:
  parameters:
    num_inference_steps: 25
    guidance_scale: 5
    negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** V3 - the best XL-model

![preview](images/preview.png)

[>>> Run in **RunDiffusion** <<<](https://civitai.com/api/run/401769?partnerId=1&strategyId=1067841896)

Introducing Fluently XL. You may want to argue with the claim in the name, "the best XL-model", but below we will show why we believe it is true.

## About this model

The model was obtained through training on *expensive graphics accelerators*; a great deal of work went into it, and the sections below show why this XL model is better than others.

### Features

- Correct anatomy

- Art and realism in one

- Controlling contrast

- Great nature

- Great faces without AfterDetailer

### More info

Our model is better than others because we do not merge checkpoints but **train**. At first the model may not look impressive, but a real professional will appreciate it.

## Using

Optimal parameters in Automatic1111/ComfyUI:

- Sampling steps: 20-35

- Sampler method: Euler a/Euler

- CFG Scale: 4-6.5

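The settings above map directly onto a diffusers call. A minimal sketch, assuming a hypothetical repo id `fluently/Fluently-XL-v3` (the card does not state the exact id; check the model page) and a CUDA GPU:

```python
# Recommended settings from the "Using" section of this card.
MODEL_ID = "fluently/Fluently-XL-v3"  # hypothetical repo id; check the model page
NUM_INFERENCE_STEPS = 25  # card suggests 20-35
GUIDANCE_SCALE = 5.0      # card suggests CFG 4-6.5


def generate(prompt: str, negative_prompt: str = "", out_path: str = "out.png"):
    """Run one text-to-image generation with the card's suggested settings.

    Imports are deferred so this module loads even without torch/diffusers.
    """
    import torch
    from diffusers import StableDiffusionXLPipeline

    # model_index.json selects EulerAncestralDiscreteScheduler ("Euler a"),
    # matching the recommended sampler, so no scheduler swap is needed.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        guidance_scale=GUIDANCE_SCALE,
    ).images[0]
    image.save(out_path)
    return image
```

Loading in `float16` halves memory use on the GPU; drop `torch_dtype` to run in full precision.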
## End

Let's push out of the top the models that merely copy each other and make room for one that is actually developing. Thank you!
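For readers unsure what the CFG Scale recommended above actually does: classifier-free guidance combines the conditional and unconditional noise predictions at every denoising step. A toy sketch of just that combination rule in plain NumPy (not the real pipeline internals):

```python
import numpy as np


def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray, scale: float) -> np.ndarray:
    """Classifier-free guidance: push the prediction away from the
    unconditional direction by `scale` times the conditional offset."""
    return eps_uncond + scale * (eps_cond - eps_uncond)


# With scale=1 the result is just the conditional prediction; larger scales
# (the card suggests 4-6.5) push the image to follow the prompt more strongly.
u = np.zeros(4)  # stand-in for the unconditional noise prediction
c = np.ones(4)   # stand-in for the conditional noise prediction
print(cfg_combine(u, c, 1.0))  # [1. 1. 1. 1.]
print(cfg_combine(u, c, 5.0))  # [5. 5. 5. 5.]
```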
model_index.json ADDED
{
  "_class_name": "StableDiffusionXLPipeline",
  "_diffusers_version": "0.22.0.dev0",
  "feature_extractor": [
    null,
    null
  ],
  "force_zeros_for_empty_prompt": true,
  "image_encoder": [
    null,
    null
  ],
  "scheduler": [
    "diffusers",
    "EulerAncestralDiscreteScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "text_encoder_2": [
    "transformers",
    "CLIPTextModelWithProjection"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "tokenizer_2": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
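The model_index.json above is what diffusers reads to assemble the pipeline: each entry maps a component name to the library and class used to load it, with `[null, null]` marking components this model does not ship. A small stdlib-only sketch that extracts the loadable components (the embedded JSON is abridged to a few entries from the file above):

```python
import json

# Abridged copy of the model_index.json shown above.
MODEL_INDEX = """
{
  "_class_name": "StableDiffusionXLPipeline",
  "_diffusers_version": "0.22.0.dev0",
  "scheduler": ["diffusers", "EulerAncestralDiscreteScheduler"],
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"],
  "feature_extractor": [null, null]
}
"""


def component_classes(index_json: str) -> dict:
    """Return {component: (library, class)} for loadable components,
    skipping private keys (leading underscore) and null entries."""
    index = json.loads(index_json)
    return {
        name: tuple(spec)
        for name, spec in index.items()
        if not name.startswith("_") and spec[0] is not None
    }


print(component_classes(MODEL_INDEX))
```

This mirrors how `DiffusionPipeline.from_pretrained` decides which subfolder to hand to which loader class.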