---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: conditioning_image
      dtype: image
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 111989279184.95
      num_examples: 507050
  download_size: 112032639870
  dataset_size: 111989279184.95
---

# Dataset Card for "hagrid-mediapipe-hands"

This dataset is designed for training a ControlNet on human hands. The conditioning images are hand landmarks detected by MediaPipe (for more information, see the [hand landmarker documentation](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker)). The source images come from the HaGRID dataset; we used a [modified version from Kaggle](https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. There are 507,050 samples in total, each at 512x512 resolution.
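To take a quick look at the samples, the dataset can be loaded with the `datasets` library. Below is a minimal sketch, assuming the repository id of this card (`Vincent-luo/hagrid-mediapipe-hands`); adjust the id if you host a copy elsewhere:

```python
from datasets import load_dataset

# Repository id assumed from this card; change it if the dataset lives elsewhere.
ds = load_dataset("Vincent-luo/hagrid-mediapipe-hands", split="train")

example = ds[0]
example["image"]               # 512x512 source photo (PIL.Image)
example["conditioning_image"]  # MediaPipe landmarks drawn on a black canvas
print(example["text"])         # caption used for ControlNet training
```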

## Generate MediaPipe annotations

We use the script below to generate the hand landmarks. Download the `hand_landmarker.task` model file first; see the MediaPipe hand landmarker documentation linked above for details.

```python
import mediapipe as mp
from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from PIL import Image
import cv2
import numpy as np

def draw_landmarks_on_image(rgb_image, detection_result):
  hand_landmarks_list = detection_result.hand_landmarks
  # Draw on a black canvas so the conditioning image contains only the landmarks.
  annotated_image = np.zeros_like(rgb_image)

  # Loop through the detected hands to visualize.
  for hand_landmarks in hand_landmarks_list:

    # Draw the hand landmarks.
    hand_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
    hand_landmarks_proto.landmark.extend([
      landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z) for landmark in hand_landmarks
    ])
    solutions.drawing_utils.draw_landmarks(
      annotated_image,
      hand_landmarks_proto,
      solutions.hands.HAND_CONNECTIONS,
      solutions.drawing_styles.get_default_hand_landmarks_style(),
      solutions.drawing_styles.get_default_hand_connections_style())

  return annotated_image

# Create a HandLandmarker object.
base_options = python.BaseOptions(model_asset_path='hand_landmarker.task')
options = vision.HandLandmarkerOptions(base_options=base_options,
                                       num_hands=2)
detector = vision.HandLandmarker.create_from_options(options)

# Load the input image (convert to RGB in case the PNG has an alpha channel).
image = np.asarray(Image.open("./test.png").convert("RGB"))
image = mp.Image(image_format=mp.ImageFormat.SRGB, data=image)

# Detect hand landmarks from the input image.
detection_result = detector.detect(image)

# Process the detection result and save the annotated image.
annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
cv2.imwrite("ann.png", cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
```
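To build the full dataset, the same detector can be run over every HaGRID image. The loop below is a minimal sketch under assumed paths (`hagrid/` for the source images, `conditioning/` for the outputs, `*.png` as the extension); it reuses `detector` and `draw_landmarks_on_image` from the script above:

```python
from pathlib import Path

# Hypothetical input/output directories; adjust to your local copy of HaGRID.
src_dir = Path("hagrid")
out_dir = Path("conditioning")
out_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.png"):
    rgb = np.asarray(Image.open(path).convert("RGB"))
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
    result = detector.detect(mp_image)
    annotated = draw_landmarks_on_image(mp_image.numpy_view(), result)
    cv2.imwrite(str(out_dir / path.name), cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
```

Because the landmarks are rendered on a black canvas rather than over the photo, the resulting conditioning image carries only the hand-pose signal, which is what the ControlNet conditions on.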