Question about process
Hi @CiaraRowles, thanks for publishing this cool model!
I have a question about your process, since we're writing a demo using the converted Diffusers model on PR #13 by @patrickvonplaten.
Looking at the configuration in the script, could you clarify the steps here?
1. Initial image + prompt using ControlNet HED Boundary.
2. Generate image + prompt (no condition), just img2img using TemporalNet.
3. Last frame as init image + prompt using TemporalNet.
4. Back to step 3 until the video ends.
Are these the steps?
Thanks
# img2img payload for the webui API, with two ControlNet units (HED + TemporalNet)
data = {
    "init_images": [current_image],
    "inpainting_fill": 0,
    "inpaint_full_res": True,
    "inpaint_full_res_padding": 1,
    "inpainting_mask_invert": 1,
    "resize_mode": 0,
    "denoising_strength": 0.45,
    "prompt": "pop art, painting, highly detailed,",
    "negative_prompt": "(ugly:1.3), (fused fingers), (too many fingers), (bad anatomy:1.5), (watermark:1.5), (words), letters, untracked eyes, asymmetric eyes, floating head, (logo:1.5), (bad hands:1.3), (mangled hands:1.2), (missing hands), (missing arms), backward hands, floating jewelry, unattached jewelry, floating head, doubled head, unattached head, doubled head, head in body, (misshapen body:1.1), (badly fitted headwear:1.2), floating arms, (too many arms:1.5), limbs fused with body, (facial blemish:1.5), badly fitted clothes, imperfect eyes, untracked eyes, crossed eyes, hair growing from clothes, partial faces, hair not attached to head",
    "alwayson_scripts": {
        "ControlNet": {
            "args": [
                {
                    # Unit 1: HED edges extracted from the current unaltered frame
                    "input_image": current_image,
                    "module": "hed",
                    "model": "control_hed-fp16 [13fee50b]",
                    "weight": 1.5,
                    "guidance": 1,
                },
                {
                    # Unit 2: TemporalNet conditioned on the previous styled frame
                    "input_image": last_image,
                    "model": "diff_control_sd15_temporalnet_fp16 [adc6bd97]",
                    "module": "none",
                    "weight": 0.7,
                    "guidance": 1,
                }
            ]
        }
    },
    "seed": 3189343382,
    "subseed": -1,
    "subseed_strength": -1,
    "sampler_index": "Euler a",
    "batch_size": 1,
    "n_iter": 1,
    "steps": 20,
    "cfg_scale": 6,
    "width": 512,
    "height": 512,
    "restore_faces": True,
    "include_init_images": True,
    "override_settings": {},
    "override_settings_restore_afterwards": True
}
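For anyone less familiar with the webui API: a payload like the dict above is just POSTed to the img2img endpoint. A minimal sketch, assuming a local webui instance launched with --api and that current_image / last_image are base64-encoded PNG strings:

import base64
import requests

# Send the payload above to a local webui instance (assumed default address/port).
response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=data)
response.raise_for_status()

# The webui API returns generated images as base64 strings; save the first one.
with open("styled_frame.png", "wb") as f:
    f.write(base64.b64decode(response.json()["images"][0]))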
Hi!
To clarify, it's:
1. Initial image + prompt using ControlNet HED Boundary on the first frame of the video.
2. img2img with the next unaltered frame as the img2img input, and two ControlNet modules together: HED with the previously mentioned unaltered frame, and the result of step 1 fed into the TemporalNet module.
3. Put the result of that into the next frame's TemporalNet settings and repeat for the rest of the frames.
Does that help?
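For anyone following along, here's a rough sketch of that loop. The helper names, frame paths, and local URL are hypothetical, and the payload is trimmed down to the relevant parts of the full dict posted earlier:

import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # assumed local webui started with --api

def encode(path):
    # Read a frame from disk and base64-encode it for the API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def stylize(frame_b64, prev_result_b64):
    # HED always conditions on the current unaltered frame; TemporalNet (once a
    # previous styled result exists) conditions on that last output.
    units = [{
        "input_image": frame_b64,
        "module": "hed",
        "model": "control_hed-fp16 [13fee50b]",
        "weight": 1.5,
    }]
    if prev_result_b64 is not None:
        units.append({
            "input_image": prev_result_b64,
            "module": "none",
            "model": "diff_control_sd15_temporalnet_fp16 [adc6bd97]",
            "weight": 0.7,
        })
    payload = {
        "init_images": [frame_b64],
        "denoising_strength": 0.45,
        "prompt": "pop art, painting, highly detailed,",
        "alwayson_scripts": {"ControlNet": {"args": units}},
    }
    r = requests.post(URL, json=payload)
    r.raise_for_status()
    return r.json()["images"][0]  # base64-encoded styled frame

frames = ["frame_0001.png", "frame_0002.png", "frame_0003.png"]  # extracted video frames

styled = stylize(encode(frames[0]), None)   # step 1: first frame, HED only
for frame in frames[1:]:                    # steps 2-3: propagate the last result forward
    styled = stylize(encode(frame), styled)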
Thanks @CiaraRowles, I think my issue is that I'm trying to do it with a live webcam, so I'm not sure it will work. I'll try with frames extracted from a video.
To clarify on step 2 ("img2img with the next unaltered frame as the img2img input"):
- Is the next unaltered frame run through img2img with the first styled frame from step 1?
- Then Multi-ControlNet (HED + TemporalNet), with HED from the unaltered frame + the frame from step 2?
Sorry, I'm not familiar enough with the webui API to decipher the JSON above 🙈
@CiaraRowles, hi. When I debug the webui code, I can't find where diff_control_sd15_temporalnet_fp16 is used, only where control_hed-fp16 is used. Is it merged into control_hed-fp16 during the forward pass? Should I install the multi-ControlNet extension, or just the ControlNet extension?
Same here. Did you find the reason?