Bundestag Barrierefrei Dataset

Overview

The Bundestag Barrierefrei dataset is a large-scale collection of German Sign Language (DGS) interpretations of parliamentary sessions of the German Bundestag. It aims to support research and development in sign language recognition, particularly in the context of transformer-based architectures. By leveraging this dataset, researchers can advance the field of sign language recognition and develop robust, inclusive communication technologies for the deaf and hearing-impaired community.

Dataset Details

  • Language: German Sign Language (DGS)
  • Source: German Bundestag sessions

Objectives

The primary objectives of providing the Bundestag Barrierefrei dataset are:

  1. To support the development of advanced sign language recognition models.
  2. To promote transparency, reproducibility, and collaboration within the research community.
  3. To improve the performance of transformer-based models in data-sparse domains like sign language recognition.

Structure

The dataset consists of video recordings of Bundestag sessions interpreted in DGS by professional sign language interpreters. Each video is accompanied by gloss-level annotations and corresponding German transcriptions.

Usage

Loading the Dataset

Due to its size, I suggest cloning the dataset locally and then loading it in streaming mode.

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

git clone https://huggingface.co/datasets/lukasbraach/bundestag_slr /path/to/bundestag_slr

To then load the dataset, you can use the following code snippet:

from datasets import load_dataset

dataset = load_dataset("/path/to/bundestag_slr", streaming=True)
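
In streaming mode, examples are yielded lazily instead of being downloaded up front. The following sketch shows how you can peek at a few examples; the split name "train" is an assumption here, so check the keys of the loaded dataset for the actual splits:

from datasets import load_dataset

dataset = load_dataset("/path/to/bundestag_slr", streaming=True)

# "train" is an assumed split name -- inspect dataset.keys() for the actual splits.
for i, example in enumerate(dataset["train"]):
    print(example.keys())  # inspect the available fields
    if i >= 2:
        break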

Applied pre-processing

We begin by extracting individual frames from the input videos, processing them one by one to establish a foundational structure for further analysis. This initial step lays the groundwork for the subsequent detection and cropping operations. Using MediaPipe’s face detection system, we identify faces in each extracted frame. This approach provides a bounding box around each detected face, allowing us to determine a square region that encompasses the upper body, the primary area where sign language gestures are expected. To avoid false positives on the depicted members of parliament, we restrict the face detection to the right 2/5ths of the video frame. This allows us to extract the sign interpreter in a format very close to the original pre-processing of the RWTH Phoenix Weather 2014 dataset.
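
The following is a rough sketch of this detection step, using MediaPipe’s legacy face detection solution and OpenCV. The confidence threshold and the exact search ratio are illustrative assumptions, not necessarily the values used in the actual pipeline:

import cv2
import mediapipe as mp

def detect_interpreter_face(frame_bgr, search_ratio=0.4):
    """Detect a face only in the rightmost part of the frame to avoid
    picking up members of parliament shown further left."""
    h, w, _ = frame_bgr.shape
    x_offset = int(w * (1.0 - search_ratio))
    search_region = frame_bgr[:, x_offset:]

    with mp.solutions.face_detection.FaceDetection(
        model_selection=1, min_detection_confidence=0.5
    ) as detector:
        result = detector.process(cv2.cvtColor(search_region, cv2.COLOR_BGR2RGB))

    if not result.detections:
        return None

    # Relative bounding box of the first detection, mapped back to
    # absolute pixel coordinates in the full frame.
    box = result.detections[0].location_data.relative_bounding_box
    region_w = w - x_offset
    return (
        x_offset + int(box.xmin * region_w),
        int(box.ymin * h),
        int(box.width * region_w),
        int(box.height * h),
    )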

To ensure smooth transitions between detected bounding boxes, we apply a moving average technique using a buffer. This buffer stores the most recent bounding box coordinates and calculates a moving average to smooth out any jitter in the signer’s movements. This step helps to maintain a stable perspective on the detected face and upper body, leading to more consistent cropping and frame alignment. With the smoothed bounding box in place, we calculate the coordinates for a square region to crop from each frame. This square region is then sized to ensure it encompasses the upper body without exceeding the original frame boundaries. By focusing on this region, we reduce unnecessary background noise in the model’s input, which should improve the model’s convergence speed.
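
A minimal sketch of such a smoothing buffer; the buffer length of 10 frames is an illustrative assumption:

from collections import deque

import numpy as np

class BoundingBoxSmoother:
    """Smooths (x, y, w, h) boxes with a moving average over recent frames."""

    def __init__(self, buffer_size=10):
        self.buffer = deque(maxlen=buffer_size)

    def update(self, box):
        # box is an (x, y, w, h) tuple; frames without a detection are
        # skipped so the previous average simply carries over.
        if box is not None:
            self.buffer.append(box)
        if not self.buffer:
            return None
        return tuple(np.mean(self.buffer, axis=0).astype(int))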

We resize the cropped region to a standard size of 224x224 pixels to meet the input requirements of our machine learning model. This resizing step provides a consistent input format across the dataset. The goal was to create a uniform dataset that can be used effectively for model pre-training. By implementing this preprocessing pipeline, we ensure that the dataset meets the necessary quality standards, providing a solid foundation for training machine learning models and evaluating their performance. Fully pre-processed and encoded in the MP4 format, the dataset has a size of 84 gigabytes.
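
A minimal sketch of the square crop and resize step, assuming the smoothed face box from above; the expansion factor is an illustrative assumption, not the exact value used:

import cv2

def crop_upper_body(frame, face_box, expand=3.0, out_size=224):
    """Crop a square around the face that covers the upper body,
    clamped to the frame, then resize to the model input size."""
    h, w, _ = frame.shape
    x, y, bw, bh = face_box

    # Square side proportional to the face size, centred on the face.
    side = int(max(bw, bh) * expand)
    side = min(side, h, w)  # never exceed the frame boundaries
    cx, cy = x + bw // 2, y + bh // 2

    x0 = min(max(cx - side // 2, 0), w - side)
    y0 = min(max(cy - side // 2, 0), h - side)

    crop = frame[y0:y0 + side, x0:x0 + side]
    return cv2.resize(crop, (out_size, out_size))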

Remarks

The dataset is annotated with high-quality subtitles of the original spoken words in the plenary sessions. For completeness, they are included in the dataset shared on the HuggingFace Hub, but with some caveats. On qualitative inspection, it is obvious that the timestamps of the subtitle text do not line up with the sign language utterances: there are multiple occurrences of subtitle text without a corresponding sign utterance. It often appears as if the sign language interpreters take between one and two seconds to translate the spoken words.

As a pragmatic solution, and to increase the chances of the subtitle text being represented in the associated sign utterances, additional frames corresponding to approximately 1.5 seconds of source footage have been appended to every generated utterance. This delay is based on intuition rather than empirical evidence. Further researchers are invited to follow up on this limitation.
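
In practice this amounts to shifting each utterance's end point by roughly 1.5 seconds worth of frames. The helper below is only an illustration of the idea; the frame-based boundaries and parameter names are assumptions:

def extend_utterance(start_frame, end_frame, total_frames, fps, delay_s=1.5):
    """Shift the utterance end by ~1.5 s of footage to compensate for
    the interpreter lagging behind the spoken subtitles."""
    extra = int(round(delay_s * fps))
    return start_frame, min(end_frame + extra, total_frames - 1)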

License

This dataset is made available under the License of Bundestag Barrierefrei Sign Language Interpretations: English Version, German Version (legally binding).

Please note that I am not affiliated with the Deutscher Bundestag and provide this dataset as-is, with no guarantees. My sole purpose is to accelerate sign language recognition research and to share what I developed as part of my Master's thesis.
