No annotator result

#21
by polax - opened

I installed the following model into the proper folder but can't make it work: control_v2p_sd15_mediapipe_face.safetensors + .yaml (for Stable Diffusion 1.5).
It doesn't generate an annotator result.
I am using Google Colab (TheLastBen's notebook), SD 1.5.
controlnetface.jpg

-> error: No module named 'mediapipe'
Loading model from cache: control_v2p_sd15_mediapipe_face [9c7784a9]
Loading preprocessor: mediapipe_face
preprocessor resolution = 512
Error running process: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 417, in process
    script.process(p, *script_args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1134, in process
    detected_map, is_image = preprocessor(input_image, res=preprocessor_resolution, thr_a=unit.threshold_a, thr_b=unit.threshold_b)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/processor.py", line 111, in mediapipe_face
    from annotator.mediapipe_face import apply_mediapipe_face
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/__init__.py", line 1, in <module>
    from .mediapipe_face_common import generate_annotation
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/mediapipe_face_common.py", line 3, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
100% 30/30 [00:06<00:00, 4.50it/s]
100% 30/30 [00:06<00:00, 4.50it/s]
100% 30/30 [00:06<00:00, 4.46it/s]
Loading weights [92970aa785] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.safetensors
Applying xformers cross attention optimization.
Weights loaded in 11.2s (load weights from disk: 9.8s, apply weights to model: 0.7s, move model to device: 0.7s).
Loading model: control_v2p_sd15_mediapipe_face [9c7784a9]
Loaded state_dict from [/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v2p_sd15_mediapipe_face.safetensors]
Loading config: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_v2p_sd15_mediapipe_face.yaml
ControlNet model control_v2p_sd15_mediapipe_face [9c7784a9] loaded.
Loading preprocessor: mediapipe_face
preprocessor resolution = 704
Error running process: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 417, in process
    script.process(p, *script_args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1134, in process
    detected_map, is_image = preprocessor(input_image, res=preprocessor_resolution, thr_a=unit.threshold_a, thr_b=unit.threshold_b)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/processor.py", line 111, in mediapipe_face
    from annotator.mediapipe_face import apply_mediapipe_face
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/__init__.py", line 1, in <module>
    from .mediapipe_face_common import generate_annotation
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/mediapipe_face_common.py", line 3, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
100% 30/30 [00:07<00:00, 3.95it/s]
100% 30/30 [00:06<00:00, 4.67it/s]
100% 30/30 [00:06<00:00, 4.62it/s]
Loading model from cache: control_v2p_sd15_mediapipe_face [9c7784a9]
Loading preprocessor: mediapipe_face
preprocessor resolution = 704
Error running process: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 417, in process
    script.process(p, *script_args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1134, in process
    detected_map, is_image = preprocessor(input_image, res=preprocessor_resolution, thr_a=unit.threshold_a, thr_b=unit.threshold_b)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/processor.py", line 111, in mediapipe_face
    from annotator.mediapipe_face import apply_mediapipe_face
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/__init__.py", line 1, in <module>
    from .mediapipe_face_common import generate_annotation
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/mediapipe_face/mediapipe_face_common.py", line 3, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'

The error "No module named 'mediapipe'" means that the mediapipe package is not installed (or not visible) in the Python environment the WebUI runs in. Running 'pip install mediapipe' in that environment should resolve the issue.
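If it isn't clear whether mediapipe is actually present, a minimal check-and-install sketch in Python (assuming the notebook and the WebUI share the same interpreter, which may not hold for every Colab setup) would be something like:

```python
# Check whether mediapipe is importable and, if not, install it into the
# current interpreter's environment. Restart the WebUI afterwards.
import importlib.util
import subprocess
import sys

if importlib.util.find_spec("mediapipe") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "mediapipe"])

import mediapipe as mp
print("mediapipe version:", mp.__version__)
```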

I have the same issue. I use ControlNet v1.1.125 and I have installed mediapipe.

Loading model: control_v11f1e_sd15_tile_fp16 [3b860298]
Loaded state_dict from [E:\AI\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-controlnet\models\control_v11f1e_sd15_tile_fp16.safetensors]
Loading config: E:\AI\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-controlnet\models\control_v11f1e_sd15_tile.yaml
ControlNet model control_v11f1e_sd15_tile_fp16 [3b860298] loaded.
Loading preprocessor: tile_resample
preprocessor resolution = 64

![P9~QU53GDFQU\]U5GMM@35SX.png](https://cdn-uploads.huggingface.co/production/uploads/63102c84236215d0b711f83f/oplW_UVAIBdmy9SZVqQJp.png)

If mediapipe is installed, it is possible that a face isn't being recognized by mediapipe. (In polax's case above, mediapipe is almost certainly not installed.) Mediapipe is built to detect faces in real photos, not in anime or drawings. :( Sadly, it does not work well on drawn faces. Try a real photo as the driver instead of an illustration.

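If you want to confirm whether mediapipe itself can find a face in your driver image, independently of the WebUI, a small standalone check along these lines should work (the file name is a placeholder; max_num_faces and min_detection_confidence only roughly mirror the extension's "max faces" and "min face confidence" settings):

```python
# Standalone sanity check: does MediaPipe Face Mesh detect any faces in this image?
import cv2
import mediapipe as mp

image = cv2.imread("driver.jpg")  # placeholder path; OpenCV loads images as BGR
with mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,
    max_num_faces=3,
    min_detection_confidence=0.5,
) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

faces = results.multi_face_landmarks or []
print(f"Faces detected: {len(faces)}")
```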

Thanks, that solved the problem.

For large pictures where the people's faces are small, it doesn't seem to work. Any tips?

I think there's a limit to how small the faces can be in an image. When we were training, we didn't get much detail for faces that had fewer than 64 pixels; there just wasn't enough information in them. If you can crop the image to 512 by 512, that would be ideal. Perhaps it's worth trying a content-aware resize to preserve the faces while reducing the image? Or you can try to produce the face annotation directly and disable the annotator.
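To illustrate the cropping route, here is a minimal sketch (the coordinates and file names are placeholders; pick a 512x512 box around the face in your own image):

```python
# Crop a 512x512 region around a face so it carries enough pixels for the annotator.
from PIL import Image

image = Image.open("input.jpg")   # placeholder path
left, top = 800, 300              # example top-left corner of the face region
face_crop = image.crop((left, top, left + 512, top + 512))
face_crop.save("face_crop_512.png")
```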

Here is an example, the one where the faces are least small. It is still not working for those faces.
bath.jpg

What do you mean by "trying a content-aware resize to preserve the faces while reducing the image"?
How can I produce the face annotation directly?

It is possible to use an image editing program to copy/paste recognized faces into any configuration that you desire. For example: toyxyz has done something like this: https://twitter.com/toyxyz3/status/1644356993342386176

Before that, though, I would recommend increasing "max faces" from 1 to 3, as there are three faces in that picture. I would also decrease "min face confidence" to make it more likely that a face will be detected. (Note: this might cause things that are not faces to be detected.)

Thanks, but I can't go higher than 2 for max faces. If you create your own annotation, with Photoshop for example, does it work?

It should be possible to go up to ten for the max faces.

I've been able to create annotations manually in an image editor, yes.
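For reference, one rough programmatic route to a face-mesh annotation image is to draw MediaPipe's detected landmarks onto a black canvas. This sketch uses MediaPipe's default drawing styles, which will not exactly match the color scheme the ControlNet annotator produces, so treat it as a starting point for manual editing rather than a drop-in replacement:

```python
# Draw detected face meshes onto a black canvas as a rough, hand-editable annotation.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils
mp_styles = mp.solutions.drawing_styles

image = cv2.imread("photo.jpg")  # placeholder path
with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=3) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

canvas = np.zeros_like(image)  # black background, same size as the input
for landmarks in results.multi_face_landmarks or []:
    mp_drawing.draw_landmarks(
        image=canvas,
        landmark_list=landmarks,
        connections=mp_face_mesh.FACEMESH_TESSELATION,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp_styles.get_default_face_mesh_tesselation_style(),
    )
cv2.imwrite("face_annotation.png", canvas)
```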
