promeai committed on
Commit 0a54870
1 Parent(s): ca93a47

update readme

Files changed (1):
  1. README.md +81 -5
README.md CHANGED
@@ -1,5 +1,81 @@
- ---
- license: other
- license_name: flux.1-dev-non-commercial-license
- license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
- ---
+ ---
+ base_model: black-forest-labs/FLUX.1-dev
+ library_name: diffusers
+ tags:
+ - flux
+ - flux-diffusers
+ - text-to-image
+ - diffusers
+ - controlnet
+ - diffusers-training
+ inference: true
+ ---
+
+ # promeai/FLUX.1-controlnet-lineart-promeai
+
+ `promeai/FLUX.1-controlnet-lineart-promeai` holds ControlNet weights trained on `black-forest-labs/FLUX.1-dev` with a lineart condition.
+
+ Here are some example images.
+
+ ```
+ prompt: cute anime girl with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress and a white apron mouth open holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere
+ ```
+ ![input-control](./images/example-control.jpg)
+ ![output](./images/example-output.jpg)
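The control input is a lineart image like the one above. The card does not say which annotator produced it; dedicated lineart detectors (e.g. from the `controlnet_aux` package) give the best results, but for quick experiments a plain edge filter can produce a rough lineart-style map. The helper below is only an illustrative stand-in, not the annotator used in training:

```python
from PIL import Image, ImageFilter, ImageOps

def lineart_like(image: Image.Image) -> Image.Image:
    """Crude lineart-style control image: dark lines on a white background.

    A stand-in for a real lineart annotator -- the edges it finds are much
    noisier than a dedicated model's output.
    """
    gray = image.convert("L")                    # drop color
    edges = gray.filter(ImageFilter.FIND_EDGES)  # bright edges on black
    lineart = ImageOps.invert(edges)             # dark lines on white
    return lineart.convert("RGB")                # pipelines expect RGB inputs
```

For example, `control_image = lineart_like(load_image("photo.jpg"))` yields an image you can pass to the pipeline in place of a hand-drawn lineart.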
+
+ ## Intended uses & limitations
+
+ ## How to use
+
+ ### With diffusers
+
+ ```python
+ import torch
+ from diffusers.utils import load_image
+ from diffusers import FluxControlNetModel, FluxControlNetPipeline
+
+ base_model = "black-forest-labs/FLUX.1-dev"
+ controlnet_model = "promeai/FLUX.1-controlnet-lineart-promeai"
+
+ # Load the ControlNet and attach it to the FLUX.1-dev base pipeline.
+ controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
+ pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
+ pipe.to("cuda")
+
+ control_image = load_image("./images/example-control.jpg")
+ prompt = "cute anime girl with massive fluffy fennec ears and a big fluffy tail blonde messy long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress and a white apron mouth open holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest and very expensive stuff everywhere"
+
+ image = pipe(
+     prompt,
+     control_image=control_image,
+     controlnet_conditioning_scale=0.6,
+     num_inference_steps=28,
+     guidance_scale=3.5,
+ ).images[0]
+ image.save("./image.jpg")
+ ```
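FLUX pipelines operate on latents that the VAE downsamples 8x and the transformer then packs 2x2, so generation dimensions are normally multiples of 16 (diffusers rounds and warns otherwise). If your control image has odd dimensions, a small helper can snap it to a compatible size first; this is an illustrative sketch, not part of this repo:

```python
from PIL import Image

def snap_to_multiple(image: Image.Image, multiple: int = 16) -> Image.Image:
    """Resize so width and height are multiples of `multiple` (rounded down)."""
    w, h = image.size
    new_w = max(multiple, (w // multiple) * multiple)
    new_h = max(multiple, (h // multiple) * multiple)
    if (new_w, new_h) == (w, h):
        return image  # already compatible, avoid a needless resample
    return image.resize((new_w, new_h), Image.LANCZOS)
```

Pass the result as `control_image` and use its `.size` for the pipeline's `width`/`height` if you set them explicitly.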
+
+ ### With ComfyUI
+
+ An [example ComfyUI workflow](./example_workflow.json) is also provided.
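To sanity-check the workflow file before loading it in ComfyUI, you can list the node types it references. The sketch below assumes the UI export format (a top-level `nodes` list whose entries carry a `type` field) and falls back to the API "prompt" format (`class_type` keys); the exact shape of this repo's file is an assumption:

```python
import json
from collections import Counter

def workflow_node_types(path: str) -> Counter:
    """Count node types used in a ComfyUI workflow JSON file."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict) and "nodes" in data:
        # UI export format: {"nodes": [{"type": ...}, ...], ...}
        return Counter(node.get("type", "?") for node in data["nodes"])
    # API ("prompt") format: {"<id>": {"class_type": ...}, ...}
    return Counter(entry.get("class_type", "?") for entry in data.values())
```

For example, `workflow_node_types("example_workflow.json")` tells you at a glance whether the expected ControlNet loader and sampler nodes are present before you open the graph.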
+
+ ## Limitations and bias
+
+ [TODO: provide examples of latent issues and potential remediations]
+
+ ## Training details
+
+ This ControlNet was trained on a single A100-80G GPU with a fine-grained, real-world image dataset: first at image size 512 with batch size 3, then at image size 1024 with batch size 1. With this configuration, GPU memory usage was about 70 GB, and reaching this 14,000-step checkpoint took around 3 days. Training is ongoing, and more checkpoints will be released.
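From the reported numbers a rough throughput can be derived, assuming the ~3 days of wall-clock time covers all 14,000 steps across both resolution phases:

```python
steps = 14_000                     # checkpoint reported in this card
days = 3                           # approximate wall-clock time reported
seconds = days * 24 * 60 * 60      # 259,200 s
per_step = seconds / steps
print(f"~{per_step:.1f} s per optimizer step")  # ~18.5 s/step
```

This is only a back-of-the-envelope average; the 512-resolution phase would have run faster per step than the 1024-resolution phase.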