ai-forever
committed
Update README.md
README.md CHANGED
@@ -42,4 +42,30 @@ The final dataset consists of 296 sentence-question pairs, which contain 9319 words
 - `incorrect_antecedent`, the incorrect antecedent for `example_id`;
 - `pronoun`, an anaphoric pronoun for `example_id`;
 - `is_pronoun`, an indicator of whether the word is the anaphoric pronoun;
-- `label`, an indicator of whether the question is about the correct antecedent.
+- `label`, an indicator of whether the question is about the correct antecedent.
+
+
+Cite our ACL workshop paper https://aclanthology.org/2024.cmcl-1.10/:
+```
+@inproceedings{kozlova-etal-2024-transformer,
+    title = "Transformer Attention vs Human Attention in Anaphora Resolution",
+    author = "Kozlova, Anastasia and
+      Akhmetgareeva, Albina and
+      Khanova, Aigul and
+      Kudriavtsev, Semen and
+      Fenogenova, Alena",
+    editor = "Kuribayashi, Tatsuki and
+      Rambelli, Giulia and
+      Takmaz, Ece and
+      Wicke, Philipp and
+      Oseki, Yohei",
+    booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.cmcl-1.10",
+    pages = "109--122",
+    abstract = "Motivated by human cognitive processes, attention mechanism within transformer architecture has been developed to assist neural networks in allocating focus to specific aspects within input data. Despite claims regarding the interpretability achieved by attention mechanisms, the extent of correlation and similarity between machine and human attention remains a subject requiring further investigation. In this paper, we conduct a quantitative analysis of human attention compared to neural attention mechanisms in the context of the anaphora resolution task. We collect an eye-tracking dataset based on the Winograd schema challenge task for the Russian language. Leveraging this dataset, we conduct an extensive analysis of the correlations between human and machine attention maps across various transformer architectures, network layers of pre-trained and fine-tuned models. Our aim is to investigate whether insights from human attention mechanisms can be used to enhance the performance of neural networks in tasks such as anaphora resolution. The results reveal distinctions in anaphora resolution processing, offering promising prospects for improving the performance of neural networks and understanding the cognitive nuances of human perception.",
+}
+```
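The bullet list in this hunk documents the dataset's per-word fields, so a short sketch of reading them may help. This is a minimal illustration, not part of the commit: it assumes the data is published on the Hugging Face Hub and loadable with the `datasets` library, and the dataset id below is a hypothetical placeholder, since the diff does not show the repository name.

```
# Minimal sketch (assumptions: dataset is on the HF Hub; the id below is a
# placeholder -- the real repository name is not shown in this diff).
from datasets import load_dataset

DATASET_ID = "ai-forever/<dataset-name>"  # hypothetical placeholder

ds = load_dataset(DATASET_ID, split="train")

# Inspect the fields documented in the README: the incorrect antecedent and
# anaphoric pronoun for an example, the per-word pronoun indicator, and the
# label saying whether the question asks about the correct antecedent.
row = ds[0]
for field in ("incorrect_antecedent", "pronoun", "is_pronoun", "label"):
    print(field, "->", row[field])
```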