---
license: cc-by-4.0
language:
  - en
size_categories:
  - 10K<n<100K
---

# Enhanced GRID Corpus with Lip Landmark Coordinates

## Introduction

This is an enhanced version of the GRID audiovisual sentence corpus, originally available at Zenodo, extended for audio-visual speech recognition research. Building upon the preprocessed data from LipNet-PyTorch, we have added lip landmark coordinates to the dataset, providing detailed positional information for key points around the lips. This addition greatly increases its utility for visual speech recognition and related fields. To make the dataset easy to access and integrate into existing machine learning workflows, we have published it on the Hugging Face platform, where it is readily available to the research community.

## Dataset Structure

This dataset is split into three directories:

- `lip_images`: images of the lips
  - `speaker_id`: lip images for a particular speaker
    - `video_id`: frames of a particular video
      - `frame_no.jpg`: JPEG image of the lips in a particular frame
- `lip_coordinates`: lip landmark coordinates
  - `speaker_id`: lip landmarks for a particular speaker
    - `video_id.json`: a JSON file containing the lip landmark coordinates for a particular video, where the keys are frame numbers and the values are the x, y lip landmark coordinates
- `GRID_alignments`: word alignments for all videos in the dataset
  - `speaker_id`: alignments for a particular speaker
    - `video_id.align`: alignments for a particular video, where each line gives a word together with its start and end time in the video

## Details

The lip landmark coordinates were extracted from the original videos in the GRID corpus with the dlib library, using the shape_predictor_68_face_landmarks_GTX.dat pretrained model. For each video, the coordinates are saved in a JSON file whose keys are the frame numbers and whose values are the x, y lip landmark coordinates, stored in the same order as the frames in the video.
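As a rough illustration of this pipeline, the sketch below extracts the mouth landmarks (points 48–67 of the 68-point model) frame by frame with dlib and OpenCV. The video path and output file name are placeholders, and this is not necessarily the exact script used to build the dataset:

```python
# Illustrative landmark-extraction sketch (not the dataset's build script).
import json

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks_GTX.dat")

LIP_POINTS = range(48, 68)  # mouth landmark indices in the 68-point model

coords = {}
cap = cv2.VideoCapture("video.mpg")  # placeholder path to a GRID video
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if faces:
        shape = predictor(gray, faces[0])
        coords[frame_no] = [[shape.part(i).x, shape.part(i).y] for i in LIP_POINTS]
    frame_no += 1
cap.release()

with open("video_id.json", "w") as f:  # placeholder output name
    json.dump(coords, f)
```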

## Usage

The dataset can be downloaded by cloning this repository.

### Cloning the repository

```bash
git clone https://huggingface.co/datasets/SilentSpeak/EGCLLC
```

### Loading the dataset

After cloning the repository, you can load the dataset by unpacking the tar files with the dataset_tar.py script.

Alternatively, and usually faster, you can extract the tar files directly with the following commands:

```bash
tar -xvf lip_images.tar
tar -xvf lip_coordinates.tar
tar -xvf GRID_alignments.tar
```
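Once extracted, the alignment files are plain text. The sketch below parses one, assuming each line follows the common GRID layout `start end word`; the path and IDs are placeholders, and time units follow the original GRID corpus convention:

```python
# Minimal sketch for reading a GRID alignment file (assumed "start end word" lines).
def read_align(path):
    """Return a list of (start, end, word) tuples from a .align file."""
    entries = []
    with open(path) as f:
        for line in f:
            start, end, word = line.split()
            entries.append((int(start), int(end), word))
    return entries

alignment = read_align("GRID_alignments/s1/bbaf2n.align")  # placeholder IDs
for start, end, word in alignment:
    print(start, end, word)
```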

## Acknowledgements

Alvarez Casado, C., & Bordallo Lopez, M. (2021). Real-time face alignment: Evaluation methods, training strategies and implementation optimization. Journal of Real-Time Image Processing, Springer.

Assael, Y., Shillingford, B., Whiteson, S., & de Freitas, N. (2017). LipNet: End-to-End Sentence-level Lipreading. GPU Technology Conference.

Cooke, M., Barker, J., Cunningham, S., & Shao, X. (2006). The Grid Audio-Visual Speech Corpus (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3625687