---
license: mit
task_categories:
  - text-generation
language:
  - de
size_categories:
  - 1K<n<10K
---

# Intensified PHOENIX 14-T German Sign Language Dataset

This is a parallel German-to-German Sign Language (DGS) dataset of weather forecasts. It is a prosodically enhanced version of the RWTH-PHOENIX-Weather-2014T dataset, in which gloss annotations are augmented with intensification levels.

## Dataset Details

### Dataset Description

- **Curated by:** Mert Inan
- **Language(s) (NLP):** German, DGS (German Sign Language)

### Dataset Sources [optional]

- **Paper:** [Modeling Intensification for Sign Language Generation: A Computational Approach](https://aclanthology.org/2022.findings-acl.228) (Findings of ACL 2022)

## Uses

The dataset was used for sign language generation in the original paper. It contains parallel samples of German text, German Sign Language (DGS) glosses, and DGS skeletal coordinates in the OpenPose format, without face keypoints.
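The parallel structure described above can be sketched with a toy sample. Note that the field names used here (`text`, `gloss`, `pose_keypoints_2d`, etc.) are illustrative assumptions following OpenPose's output conventions, not the dataset's actual schema:

```python
# Hypothetical sample illustrating the parallel German text / DGS gloss /
# skeletal-keypoint structure. Keypoint lists follow the OpenPose flat
# layout [x1, y1, c1, x2, y2, c2, ...] where c is a confidence score.
sample = {
    "text": "am tag zwölf bis achtzehn grad",
    "gloss": "TAG ZWOELF BIS ACHTZEHN GRAD",
    "frames": [
        {
            "pose_keypoints_2d": [0.51, 0.20, 0.93, 0.50, 0.35, 0.88],
            "hand_left_keypoints_2d": [0.42, 0.60, 0.75],
            "hand_right_keypoints_2d": [0.58, 0.61, 0.80],
            # no face keypoints in this dataset
        }
    ],
}

def to_xyc(flat):
    """Group a flat OpenPose keypoint list into (x, y, confidence) triples."""
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

frame = sample["frames"][0]
pose = to_xyc(frame["pose_keypoints_2d"])
print(len(pose))   # 2 keypoints in this toy frame
print(pose[0])     # (0.51, 0.2, 0.93)
```

Grouping the flat lists into `(x, y, confidence)` triples is a common first step before feeding the poses to a generation model.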

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

```bibtex
@inproceedings{inan-etal-2022-modeling,
    title = "Modeling Intensification for Sign Language Generation: A Computational Approach",
    author = "Inan, Mert  and
      Zhong, Yang  and
      Hassan, Sabit  and
      Quandt, Lorna  and
      Alikhani, Malihe",
    editor = "Muresan, Smaranda  and
      Nakov, Preslav  and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.228",
    doi = "10.18653/v1/2022.findings-acl.228",
    pages = "2897--2911",
    abstract = "End-to-end sign language generation models do not accurately represent the prosody in sign language. A lack of temporal and spatial variations leads to poor-quality generated presentations that confuse human interpreters. In this paper, we aim to improve the prosody in generated sign languages by modeling intensification in a data-driven manner. We present different strategies grounded in linguistics of sign language that inform how intensity modifiers can be represented in gloss annotations. To employ our strategies, we first annotate a subset of the benchmark PHOENIX-14T, a German Sign Language dataset, with different levels of intensification. We then use a supervised intensity tagger to extend the annotated dataset and obtain labels for the remaining portion of it. This enhanced dataset is then used to train state-of-the-art transformer models for sign language generation. We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics. Human evaluation also indicates a higher preference of the videos generated using our model.",
}
```

**APA:**

Inan, M., Zhong, Y., Hassan, S., Quandt, L., & Alikhani, M. (2022). Modeling intensification for sign language generation: A computational approach. In *Findings of the Association for Computational Linguistics: ACL 2022* (pp. 2897–2911). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.228

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]