Commit d29589b (parent 1cc889d) by vumichien: Update README.md

Files changed (1): README.md (+48, -1)
README.md CHANGED
@@ -11,7 +11,54 @@ tags:
inference: true
---

- # controlnet- MakiPan/controlnet-encoded-hands-20230504_125403
+ ## Model Description
+
+ Stable Diffusion and other diffusion models are notoriously poor at generating realistic hands, so for our project we decided to train a ControlNet model on MediaPipe hand landmarks to generate more realistic hands and avoid common issues such as unrealistic positions and irregular digits.
+
+ We opted to use the [HAnd Gesture Recognition Image Dataset](https://github.com/hukenovs/hagrid) and [MediaPipe's Hand Landmarker](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker) to train a ControlNet that could potentially be used independently or as an inpainting tool.
+
+ ## Preprocessing
+
+ We considered three options for preprocessing the data:
+
+ - The first was to use MediaPipe's built-in draw-landmarks function. This was an obvious first choice; however, at low training-step counts we noticed that the model couldn't easily distinguish handedness and would often generate the wrong hand for the conditioning image. (A minimal drawing sketch follows this list.)
+
+ <figure>
+   <img src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid250k-blip2/--/MakiPan--hagrid250k-blip2/train/29/image/image.jpg" alt="Original image">
+   <figcaption>Original Image</figcaption>
+ </figure>
+
+ <figure>
+   <img src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid250k-blip2/--/MakiPan--hagrid250k-blip2/train/29/conditioning_image/image.jpg" alt="Conditioning image">
+   <figcaption>Conditioning Image</figcaption>
+ </figure>
+
+ - To counter this issue, we changed the palm landmark colors for each hand: similar enough that the model learns they carry the same kind of information, but distinct enough that it can tell left hands from right. (See the handedness-encoding sketch after this list.)
+
+ <figure>
+   <img src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid-hand-enc-250k/--/MakiPan--hagrid-hand-enc-250k/train/96/image/image.jpg" alt="Original image">
+   <figcaption>Original Image</figcaption>
+ </figure>
+
+ <figure>
+   <img src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid-hand-enc-250k/--/MakiPan--hagrid-hand-enc-250k/train/96/conditioning_image/image.jpg" alt="Conditioning image">
+   <figcaption>Conditioning Image</figcaption>
+ </figure>
+
+ - The last option was to use <a href="https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html">MediaPipe Holistic</a> to provide pose, face, and hand landmarks to the ControlNet. This method was promising in theory, but the HaGRID dataset was not suitable for it, as the Holistic model performs poorly on partial-body and oddly cropped images.
+
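+ A minimal sketch of the first option, assuming the legacy `mp.solutions` Hands API (the linked Hand Landmarker task would work similarly) and placeholder file names:
+
+ ```python
+ import cv2
+ import mediapipe as mp
+ import numpy as np
+
+ mp_hands = mp.solutions.hands
+ mp_drawing = mp.solutions.drawing_utils
+ mp_styles = mp.solutions.drawing_styles
+
+ image = cv2.imread("hand.jpg")   # placeholder input image
+ canvas = np.zeros_like(image)    # landmarks go on a plain black canvas
+
+ with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
+     results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
+     for landmarks in results.multi_hand_landmarks or []:
+         mp_drawing.draw_landmarks(
+             canvas, landmarks, mp_hands.HAND_CONNECTIONS,
+             mp_styles.get_default_hand_landmarks_style(),
+             mp_styles.get_default_hand_connections_style())
+
+ cv2.imwrite("conditioning.png", canvas)
+ ```
+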
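+ A sketch of the second option's handedness encoding. The exact palm landmark indices and colors used for training are assumptions here; the per-landmark `DrawingSpec` mapping is how `draw_landmarks` lets you override individual landmark colors:
+
+ ```python
+ import cv2
+ import mediapipe as mp
+ import numpy as np
+
+ mp_hands = mp.solutions.hands
+ mp_drawing = mp.solutions.drawing_utils
+ DrawingSpec = mp_drawing.DrawingSpec
+
+ PALM = (0, 1, 5, 9, 13, 17)  # wrist + knuckles; the "palm" set is assumed
+ # Similar hues so both read as palms, yet distinct per hand (BGR).
+ PALM_COLOR = {"Left": (0, 255, 0), "Right": (0, 200, 255)}
+
+ def encode_hands(image_bgr: np.ndarray) -> np.ndarray:
+     """Draw hand landmarks, recoloring palm points by handedness."""
+     canvas = np.zeros_like(image_bgr)
+     with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
+         results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
+         if not results.multi_hand_landmarks:
+             return canvas
+         for landmarks, handed in zip(results.multi_hand_landmarks,
+                                      results.multi_handedness):
+             label = handed.classification[0].label  # "Left" or "Right"
+             specs = {i: DrawingSpec(color=(255, 0, 0)) for i in range(21)}
+             for i in PALM:
+                 specs[i] = DrawingSpec(color=PALM_COLOR[label])
+             mp_drawing.draw_landmarks(canvas, landmarks,
+                                       mp_hands.HAND_CONNECTIONS,
+                                       landmark_drawing_spec=specs)
+     return canvas
+ ```
+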
+ We found anecdotally that, at lower training-step counts, the hand-encoded model outperformed the standard MediaPipe model because of the implied handedness. We theorize that, given a larger dataset with more full-body hand and pose annotations, Holistic landmarks will eventually provide the best images; for the moment, the hand-encoded model performs best.
+
+ This repository contains the weights of the hand-encoded ControlNet model.
+
+ ## Dataset
+
+ [Dataset for Hand Encoding Mode](https://huggingface.co/datasets/MakiPan/hagrid250k-blip2)
+
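+ To inspect training pairs without downloading everything, something like the following should work (the `image` and `conditioning_image` column names match the dataset viewer URLs above; streaming is optional):
+
+ ```python
+ from datasets import load_dataset
+
+ # Streaming avoids downloading the full ~250k-pair dataset up front.
+ ds = load_dataset("MakiPan/hagrid250k-blip2", split="train", streaming=True)
+ sample = next(iter(ds))
+ sample["image"].save("original.png")
+ sample["conditioning_image"].save("conditioning.png")
+ ```
+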
+ ## Examples

These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.
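+ To generate with these weights, a standard `diffusers` ControlNet pipeline should work; a minimal sketch, assuming the model id from the removed title line above (swap in the correct id and your own conditioning image):
+
+ ```python
+ import torch
+ from diffusers import (ControlNetModel, StableDiffusionControlNetPipeline,
+                        UniPCMultistepScheduler)
+ from diffusers.utils import load_image
+
+ controlnet = ControlNetModel.from_pretrained(
+     "MakiPan/controlnet-encoded-hands-20230504_125403",  # assumed repo id
+     torch_dtype=torch.float16)
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(
+     "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
+     torch_dtype=torch.float16)
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
+ pipe.enable_model_cpu_offload()
+
+ cond = load_image("conditioning.png")  # hand-landmark image, e.g. from the sketches above
+ out = pipe("a person waving, best quality", image=cond,
+            num_inference_steps=20).images[0]
+ out.save("output.png")
+ ```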