---
language:
  - ru
tags:
  - anaphora
  - eye-tracking
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: test
        path: winograd.csv
license: mit
---

# EyeWino

EyeWino is a new dataset of human eye-tracking data collected during an anaphora resolution task.

## Dataset Description

The Russian Winograd Schema Challenge dataset from TAPE (Taktasheva et al., 2022) was used as the stimulus material for the anaphora resolution task, during which participants' eye movements were recorded.

The final dataset consists of 296 sentence-question pairs, containing 9,319 words and 148 unique sentences. On average, 48 participants read each word, yielding 448,047 observations per variable in total.
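Per the `configs` block above, the single `test` split is read from `winograd.csv`. Below is a minimal loading sketch; the hub id `ai-forever/EyeWino` is an assumption based on this card's location, and the toy CSV stands in for the real file so the snippet runs offline:

```python
# Loading sketch. The hub id "ai-forever/EyeWino" is an assumption taken from
# this card's location; uncomment to load the real test split:
#
# from datasets import load_dataset
# ds = load_dataset("ai-forever/EyeWino", split="test")
# df = ds.to_pandas()

# Offline stand-in: a toy CSV with a few of the documented columns,
# so the snippet runs without network access.
import io
import pandas as pd

toy_csv = io.StringIO(
    "word,example_id,annotator_id,reading_time,is_pronoun\n"
    "kot,0,1,210,0\n"
    "on,0,1,180,1\n"
)
df = pd.read_csv(toy_csv)
print(len(df))                   # one row per (word, annotator) observation
print(df["reading_time"].sum())  # total fixation time in this toy sample, ms
```

Each row is one word read by one participant, so per-word gaze measures are recovered by aggregating over `annotator_id`.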

## Data Fields

- `word`: a word in the sentence;
- `example_id`: id of the example in the dataset;
- `text_id`: id of the unique text in the dataset;
- `position_id`: position of the word in the sentence;
- `annotator_id`: experiment participant id;
- `is_answer_correct`: whether the participant's answer was correct;
- `reading_time`: the sum of all fixation durations on the current word, in ms;
- `gaze_duration`: the sum of all fixation durations on the current word during first-pass reading, in ms;
- `fixations`: the number of fixations on the current word;
- `first_fixation_duration`: the duration of the first fixation on the word, in ms;
- `x_coordinate_first_fixation`: the x coordinate of the first fixation on the word, with the screen as the coordinate plane;
- `y_coordinate_first_fixation`: the y coordinate of the first fixation on the word, with the screen as the coordinate plane;
- `amplitude_first_saccade`: the amplitude of the first saccade, in degrees;
- `correct_antecedent`: the correct antecedent for the example;
- `incorrect_antecedent`: the incorrect antecedent for the example;
- `pronoun`: the anaphoric pronoun for the example;
- `is_pronoun`: whether the word is the anaphoric pronoun;
- `label`: whether the question asks about the correct antecedent.
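As a sketch of working with these fields, the snippet below builds a few synthetic rows shaped like the schema above (not real EyeWino measurements) and aggregates the gaze measures per word and for pronoun vs. non-pronoun tokens:

```python
# Illustrative only: synthetic rows shaped like the documented fields,
# not real EyeWino data.
import pandas as pd

rows = [
    {"word": "собака", "position_id": 0, "annotator_id": 1,
     "reading_time": 250, "fixations": 2, "is_pronoun": 0},
    {"word": "она", "position_id": 1, "annotator_id": 1,
     "reading_time": 300, "fixations": 3, "is_pronoun": 1},
    {"word": "собака", "position_id": 0, "annotator_id": 2,
     "reading_time": 150, "fixations": 1, "is_pronoun": 0},
]
df = pd.DataFrame(rows)

# Mean reading time per token, aggregated over annotators
per_word = df.groupby(["position_id", "word"], as_index=False)["reading_time"].mean()

# Mean fixation count for pronoun vs. non-pronoun tokens
by_pronoun = df.groupby("is_pronoun")["fixations"].mean()
print(per_word)
print(by_pronoun)
```

The same groupby pattern extends to the other measures (`gaze_duration`, `first_fixation_duration`, saccade amplitude) once the real split is loaded.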

## Citation

If you use EyeWino, please cite our ACL 2024 CMCL workshop paper (https://aclanthology.org/2024.cmcl-1.10/):

```bibtex
@inproceedings{kozlova-etal-2024-transformer,
    title = "Transformer Attention vs Human Attention in Anaphora Resolution",
    author = "Kozlova, Anastasia  and
      Akhmetgareeva, Albina  and
      Khanova, Aigul  and
      Kudriavtsev, Semen  and
      Fenogenova, Alena",
    editor = "Kuribayashi, Tatsuki  and
      Rambelli, Giulia  and
      Takmaz, Ece  and
      Wicke, Philipp  and
      Oseki, Yohei",
    booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.cmcl-1.10",
    pages = "109--122",
    abstract = "Motivated by human cognitive processes, attention mechanism within transformer architecture has been developed to assist neural networks in allocating focus to specific aspects within input data. Despite claims regarding the interpretability achieved by attention mechanisms, the extent of correlation and similarity between machine and human attention remains a subject requiring further investigation. In this paper, we conduct a quantitative analysis of human attention compared to neural attention mechanisms in the context of the anaphora resolution task. We collect an eye-tracking dataset based on the Winograd schema challenge task for the Russian language. Leveraging this dataset, we conduct an extensive analysis of the correlations between human and machine attention maps across various transformer architectures, network layers of pre-trained and fine-tuned models. Our aim is to investigate whether insights from human attention mechanisms can be used to enhance the performance of neural networks in tasks such as anaphora resolution. The results reveal distinctions in anaphora resolution processing, offering promising prospects for improving the performance of neural networks and understanding the cognitive nuances of human perception.",
}
```