Vincent-luo committed
Commit d4a37fd
1 Parent(s): 7459625

Update README.md

Files changed (1):
1. README.md +57 -1
README.md CHANGED
@@ -17,4 +17,60 @@ dataset_info:
# Dataset Card for "hagrid-mediapipe-hands"

This dataset is designed to train a ControlNet with human hands. It includes hand landmarks detected by MediaPipe (for more information, refer to https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
- The source image data is from [HaGRID dataset](https://github.com/hukenovs/hagrid) and we use a modified version from Kaggle(https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. There are 507050 data samples in total and the image resolution is 512x512.
+ The source image data is from the [HaGRID dataset](https://github.com/hukenovs/hagrid), and we use a modified version from Kaggle (https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. There are 507,050 samples in total, each at a resolution of 512x512.
+
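+ As a quick check, the data can be loaded with the `datasets` library. This is a minimal sketch, assuming the repository id `Vincent-luo/hagrid-mediapipe-hands`; inspect `ds.features` to confirm the actual column names:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the dataset so all 507,050 samples are not downloaded up front.
+ ds = load_dataset("Vincent-luo/hagrid-mediapipe-hands", split="train", streaming=True)
+
+ # Inspect the first sample to see the available columns.
+ sample = next(iter(ds))
+ print(sample.keys())
+ ```
+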
+ ### Generating MediaPipe annotations
+ We use the script below to generate the hand landmark annotations. It requires the `mediapipe` package (`pip install mediapipe`), and you should download the `hand_landmarker.task` model file first. For more information, please refer to [the MediaPipe hand landmarker documentation](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
+ ```python
+ import mediapipe as mp
+ from mediapipe import solutions
+ from mediapipe.framework.formats import landmark_pb2
+ from mediapipe.tasks import python
+ from mediapipe.tasks.python import vision
+ from PIL import Image
+ import cv2
+ import numpy as np
+
+ def draw_landmarks_on_image(rgb_image, detection_result):
+     hand_landmarks_list = detection_result.hand_landmarks
+     annotated_image = np.zeros_like(rgb_image)
+
+     # Loop through the detected hands to visualize.
+     for hand_landmarks in hand_landmarks_list:
+         # Draw the hand landmarks on a black background.
+         hand_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
+         hand_landmarks_proto.landmark.extend([
+             landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z)
+             for landmark in hand_landmarks
+         ])
+         solutions.drawing_utils.draw_landmarks(
+             annotated_image,
+             hand_landmarks_proto,
+             solutions.hands.HAND_CONNECTIONS,
+             solutions.drawing_styles.get_default_hand_landmarks_style(),
+             solutions.drawing_styles.get_default_hand_connections_style())
+
+     return annotated_image
+
+ # Create a HandLandmarker object.
+ base_options = python.BaseOptions(model_asset_path='hand_landmarker.task')
+ options = vision.HandLandmarkerOptions(base_options=base_options, num_hands=2)
+ detector = vision.HandLandmarker.create_from_options(options)
+
+ # Load the input image (converted to RGB so it matches the SRGB format below).
+ image = np.asarray(Image.open("./test.png").convert("RGB"))
+ image = mp.Image(image_format=mp.ImageFormat.SRGB, data=image)
+
+ # Detect hand landmarks from the input image.
+ detection_result = detector.detect(image)
+
+ # Draw the detected landmarks and save the condition image.
+ annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
+ cv2.imwrite("ann.png", cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
+ ```
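+
+ To build the full dataset, the same detector can be run over a directory of source images. The sketch below continues the script above (it reuses `mp`, `np`, `Image`, `cv2`, `detector`, and `draw_landmarks_on_image`) and is only an illustration, not the exact pipeline used for this dataset; the `images/` and `conditions/` folder names and the file extension are hypothetical placeholders:
+
+ ```python
+ from pathlib import Path
+
+ # Hypothetical folders; point src_dir at your local copy of the HaGRID images.
+ src_dir = Path("images")
+ dst_dir = Path("conditions")
+ dst_dir.mkdir(exist_ok=True)
+
+ for path in sorted(src_dir.glob("*.jpg")):  # adjust the extension to your data
+     rgb = np.asarray(Image.open(path).convert("RGB"))
+     mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
+     result = detector.detect(mp_image)
+     condition = draw_landmarks_on_image(mp_image.numpy_view(), result)
+     cv2.imwrite(str(dst_dir / path.name), cv2.cvtColor(condition, cv2.COLOR_RGB2BGR))
+ ```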