---
language:
- fa
license: cc-by-sa-4.0
tags:
- evaluation
- multilingual
pretty_name: Multi-LMentry
task_categories:
- question-answering
configs:
- config_name: fa
  data_files: fa/*.jsonl
dataset_info:
  features:
  - name: id
    dtype: string
  - name: input
    dtype: string
  - name: metadata
    dtype: string
  - name: canary
    dtype: string
  splits:
  - name: test
---

# Multi-LMentry

This dataset card provides documentation for **Multi-LMentry**, a multilingual benchmark for evaluating large language models (LLMs) on fundamental, elementary-level tasks across nine languages. It is the official dataset release accompanying the EMNLP 2025 paper "Multi-LMentry: Can Multilingual LLMs Solve Elementary Tasks Across Languages?".

## Dataset Details

### Dataset Description

Multi-LMentry is a multilingual extension of [LMentry (Efrat et al., 2023)](https://aclanthology.org/2023.findings-acl.666/), which evaluates LLMs on tasks that are trivial for humans but often challenging for models. It covers **nine languages**; this repository contains the data for:
- Farsi

The dataset enables systematic evaluation of core model abilities across low-, mid-, and high-resource languages. Tasks were recreated manually with the help of native speakers, ensuring linguistic and cultural appropriateness rather than relying on direct translation.

### Dataset Sources

- **Paper:** [Multi-LMentry: Can Multilingual LLMs Solve Elementary Tasks Across Languages?](https://aclanthology.org/2025.emnlp-main.1731/) (EMNLP 2025 main conference)
- **GitHub repository:** [langtech-bsc/multi_lmentry](https://github.com/langtech-bsc/multi_lmentry), code to run the evaluation on Multi-LMentry

## Uses

The dataset is intended for:
- **Evaluation of LLMs** on elementary reasoning and understanding tasks.
- **Cross-lingual comparisons**, especially between high-resource and low-resource languages.
- **Diagnostics / unit tests** of fundamental model abilities.

It is **not intended** for training language models directly.

## Dataset Structure

- The dataset is organized into **language folders**.
- Inside each folder, there is **one JSONL file per task** (the snippet below shows how to list them).
- Each file contains the input prompts and expected outputs for that task.
- Tasks include simple sentence construction, contextual word choice, alphabetic reasoning, etc.
- Some tasks are language-specific (e.g., rhyming-word tasks are excluded where not applicable).
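
A minimal sketch for listing the available Farsi task files, assuming the repository id `BSC-LT/multi_lmentry` used in the How to Use example below (the exact filenames depend on the tasks released for each language):

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repository and keep the Farsi task files
files = list_repo_files("BSC-LT/multi_lmentry", repo_type="dataset")
fa_tasks = sorted(f for f in files if f.startswith("fa/") and f.endswith(".jsonl"))
print(fa_tasks)  # e.g. ['fa/bigger_number.jsonl', ...]
```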

## How to Use

```python
from datasets import load_dataset
import json

# Load the Farsi "bigger_number" task
ds = load_dataset(
    "BSC-LT/multi_lmentry",
    "fa",
    data_files="fa/bigger_number.jsonl"
)["train"]

# Access the first example
example = ds[0]
print("Input:", example["input"])

# The metadata field is stored as a JSON string; convert it to a dictionary
metadata = json.loads(example["metadata"])
print("Metadata:", metadata)

# Access the answer from the metadata (not every task defines one)
answer = metadata.get("answer")
print("Answer:", answer)
```

**Notes**:

- The `metadata` field contains task-specific information, including the answer. Its structure varies depending on the task, for example:
  - Multiple-choice tasks may include a list of distractors and the correct answer index.
  - Open-ended tasks, like "ends_with_letter", may only include task-specific metadata such as the target letter, without a predefined answer.
  - Other fields (e.g., `num_digits`, `n1`, `n2`, `template_id`) may differ depending on the task type.
- Each JSONL file corresponds to a specific task; you can load multiple tasks by specifying multiple `data_files` (see the first sketch below).
- Evaluation: Multi-LMentry includes manually crafted regexes for each task to automatically check answers. These evaluation scripts are available in the [GitHub repository](https://github.com/langtech-bsc/multi_lmentry) and are ready to use for systematic assessment of model outputs (see the second sketch below).
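
A minimal sketch of loading several tasks at once, following the same pattern as the example above; `fa/ends_with_letter.jsonl` is an assumed filename derived from the task name mentioned in the notes, so check the `fa/` folder for the actual files.

```python
from datasets import load_dataset

# Load several Farsi tasks in one call by passing a list of data files.
# "fa/ends_with_letter.jsonl" is an assumed filename; verify it exists in the repo.
ds = load_dataset(
    "BSC-LT/multi_lmentry",
    "fa",
    data_files=["fa/bigger_number.jsonl", "fa/ends_with_letter.jsonl"],
)["train"]

print(len(ds), "examples loaded across both tasks")
```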
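
The per-task regexes themselves live in the GitHub repository; the sketch below only illustrates the general idea of regex-based answer checking, using a hypothetical helper for a numeric answer rather than the actual evaluation code.

```python
import re

# Illustrative only: the real per-task patterns are in the GitHub repository.
# Check whether a model's free-form output contains the expected numeric answer
# as a standalone number (i.e., not embedded in a longer digit sequence).
def matches_expected(model_output: str, expected: str) -> bool:
    pattern = rf"(?<!\d){re.escape(expected)}(?!\d)"
    return re.search(pattern, model_output.strip()) is not None

print(matches_expected("The bigger number is 42.", "42"))  # True
print(matches_expected("It is 142.", "42"))                # False
```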

## Dataset Creation

### Curation Rationale

The motivation is to provide a **systematic, multilingual benchmark** for assessing whether LLMs can perform **basic reasoning tasks** that humans, even with only elementary proficiency, find trivial. This is crucial since many evaluations today focus on high-level reasoning while overlooking core capabilities.

### Source Data

#### Data Collection and Processing

- Data was **manually created** in each language, rather than translated from English.
- Native speakers were involved to ensure correctness, cultural relevance, and avoidance of ambiguity or bias.
- Tasks were adapted to respect **linguistic characteristics**, such as orthography, morphology, or alphabet differences.

#### Who are the source data producers?

- **Native speakers** of the target languages, who carefully designed and validated the tasks.
- Task designs follow the original LMentry methodology but were recreated independently for each language.

## Acknowledgements

We gratefully acknowledge the support of Future AI Research ([PNRR MUR project PE0000013-FAIR](https://fondazione-fair.it/en/)).

The authors gratefully acknowledge the support of the AI Factory IT4LIA project and the CINECA award FAIR_NLP under the ISCRA initiative for granting access to high-performance computing resources.

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and Plan de Recuperación, Transformación y Resiliencia, funded by the EU (NextGenerationEU), within the framework of the project ILENIA with references 2022/TL22/00215337, 2022/TL22/00215336 and 2022/TL22/00215335, and within the framework of the project Desarrollo Modelos ALIA.

This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.

## License Information

[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ca)

## Citation

### BibTeX

```bibtex
@inproceedings{moroni-etal-2025-multi,
    title = "Multi-{LM}entry: Can Multilingual {LLM}s Solve Elementary Tasks Across Languages?",
    author = "Moroni, Luca  and
      Aula-Blasco, Javier  and
      Conia, Simone  and
      Baucells, Irene  and
      Perez, Naiara  and
      Su{\'a}rez, Silvia Paniagua  and
      Sall{\'e}s, Anna  and
      Ostendorff, Malte  and
      Falc{\~a}o, J{\'u}lia  and
      Son, Guijin  and
      Gonzalez-Agirre, Aitor  and
      Navigli, Roberto  and
      Villegas, Marta",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1731/",
    doi = "10.18653/v1/2025.emnlp-main.1731",
    pages = "34114--34145",
    ISBN = "979-8-89176-332-6"
}
```