MERA-evaluation committed on
Commit 05ae2bc · verified · 1 Parent(s): 92bf35b

Update README.md

Files changed (1)
  1. README.md +145 -126
README.md CHANGED
@@ -1086,7 +1086,7 @@ The human benchmark is measured on a subset of size 100 (sampled with the same o
1086
 
1087
  ### *Task Description*
1088
 
1089
- **Massive Multitask Russian AMplified Understudy (MaMuRAMu)** is a dataset designed to measure the professional knowledge models acquire during pretraining in various fields. The task covers 57 subjects (subdomains) grouped into topics (domains): HUMANITIES; SOCIAL SCIENCE; SCIENCE, TECHNOLOGY, ENGINEERING, AND MATHEMATICS (STEM); OTHER. The dataset was created based on the English MMLU proposed in [1] and follows its methodology in the instruction format. Each example contains a question from one of the categories with four possible answers, only one of which is correct.
1090
 
1091
 **Warning:** to avoid data leakage for MaMuRAMu, we created a new closed dataset that follows the original MMLU design. Thus, **results on the MMLU and MaMuRAMu datasets cannot be directly compared with each other.**
1092
 
@@ -1094,7 +1094,7 @@ The human benchmark is measured on a subset of size 100 (sampled with the same o
1094
 
1095
  #### Motivation
1096
 
1097
- This set continues the idea of the GLUE [2] and SuperGLUE [3] benchmarks, which focus on the general assessment of natural language understanding (NLU). Unlike sets such as ruWorldTree and ruOpenBookQA (whose questions are similar in format to MMLU), which cover the school curriculum and elementary knowledge, MaMuRAMu is designed to test professional knowledge in various fields.
1098
 
1099
  ### Dataset Description
1100
 
@@ -1168,7 +1168,7 @@ Accuracy of the annotation on the test set is `84.4%`.
1168
 
1169
  ## **MathLogicQA**
1170
 
1171
- ### *Task Description*
1172
 
1173
  The task is to solve mathematical problems formulated in natural language.
1174
 
@@ -1179,236 +1179,255 @@ Mathematical problems can be divided into several types:
1179
  - solving problems on proportions and comparison,
1180
  - comparing the objects described in the problem with the variables in the equation.
1181
 
 
 
1182
 The goal of the task is to analyze the ability of the model to solve mathematical tasks using simple operations such as addition, subtraction, multiplication, division, and comparison.
1183
 
1184
- ### *Dataset Description*
1185
 
1186
- Each example from the dataset consists of the problem text and 4 answer options, of which only one is correct.
1187
 
1188
- #### *Data Fields*
1189
 
1190
- - `instruction` a string containing instructions for the task and information about the requirements for the model output format. All used prompts are presented in the project repository;
1191
- - `inputs` a dictionary containing input data for the model:
1192
- - `id` an integer indicating the index of the example;
1193
- - `option_a` a string containing answer option A;
1194
- - `option_b` a string containing answer option B;
1195
- - `option_c` a string containing answer option C;
1196
- - `option_d` a string containing answer option D;
1197
- - `outputs` a string containing the letter of the correct answer;
1198
- - `meta` a dictionary containing meta information:
1199
- - `id` an integer indicating the index of the example;
1200
- - `task` a string containing information about the task type: `math` includes solving systems of equations and comparing quantities, `logimath` includes matching the objects described in the problem with the variables in the equation and solving it.
1201
 
1202
- #### *Data Instances*
1203
 
1204
  Below is an example from the dataset:
1205
 
1206
  ```json
1207
  {
1208
- "instruction": "Задача: {text}\nВарианты ответа:\nA) {option_a}\nB) {option_b}\nC) {option_c}\nD) {option_d}\nКакой ответ является правильным? Запишите только букву верного варианта: A, B, C или D.\nОтвет: ",
1209
- "inputs": {
1210
- "text": "Если из 839 вычесть 924, то получится -17, умноженное на w. Каково значение переменной w?",
1211
- "option_a": "0",
1212
- "option_b": "1",
1213
- "option_c": "-5",
1214
- "option_d": "5"
1215
- },
1216
- "outputs": "D",
1217
- "meta": {
1218
- "id": 4,
1219
- "task": "math"
1220
- }
1221
  }
1222
  ```
1223
 
1224
- #### *Data Splits*
1225
 
1226
- The train set consists of 681 examples. The test set consists of 1143 examples.
1227
- Train and test sets are balanced in class labels.
1228
 
1229
- #### *Dataset Creation*
1230
 
1231
- The dataset includes two types of problems: logic and math.
1232
 
1233
- **logic**
1234
 
1235
  Logic problems are mathematical problems formulated in natural language. To solve this type of problem, it is necessary to construct a system of equations (or one equation) and solve it by comparing the objects described in the problem with the variables in the equation. Problems of this type were formed using open sources containing databases of mathematical problems.
1236
 
1237
- **math**
1238
 
1239
 Math problems consist of a mathematical expression (a linear equation or a system of linear equations) and a question about that expression. One must solve a linear equation or system of linear equations to answer the question. For some tasks, it is also necessary to perform a comparison operation. Mathematical expressions are synthetic data generated with an open-source library, using its linear_1d and linear_2d modules. The generated expressions were manually rewritten by experts from mathematical notation into natural Russian. Next, the experts formulated a question in natural language and the correct answer for each expression.
1240
 
1241
  When creating the dataset, experts added instructions in natural language to some tasks. The experts also formulated 3 incorrect answer options for each task from the dataset.
1242
 
1243
- **Validation**
1244
 
1245
  All examples from the dataset have been validated on the Yandex.Toloka platform. Tolokers checked the correctness of the problem conditions and the answer. The dataset included 2000 examples of type `math` and 570 examples of type `logic`. Each example had a 3-person overlap, which could increase to 5 if the agreement on the task answer was below 70%. The responses of the Toloka annotators who showed labeling accuracy of less than 50% on control tasks were excluded.
1246
 
1247
 As a result of validation, the final test set included only examples on which the annotators agreed completely. The training set included the remaining examples with agreement above 60%.
1248
 
1249
- ### *Evaluation*
1250
 
1251
- #### *Metrics*
1252
 
1253
 Models’ performance is evaluated using the Accuracy score. This metric was chosen because the classes are balanced.
1254
 
1255
- #### *Human Benchmark*
1256
 
1257
- Human-level score is measured on the test set via a Yandex.Toloka project with an overlap of 5 reviewers per task. The human accuracy score is `0.995`.
1258
 
1259
 
1260
  ## **MultiQ**
1261
 
1262
- ### *Task Description*
1263
 
1264
- MultiQ is a multi-hop question-answering dataset for the Russian language. The dataset is based on the [dataset](https://tape-benchmark.com/datasets.html#multiq) of the same name from the TAPE benchmark.
1265
 
1266
- Question-answering systems have always played an essential role in natural language processing tasks. However, some question-answering settings remain quite challenging for modern models, among them multi-hop tasks such as MultiQ.
1267
 
1268
- ### *Dataset Description*
1269
 
1270
- #### *Data Fields*
1271
 
1272
- - `meta` a dictionary containing meta-information about the example:
1273
- - `id` — the task ID;
1274
- - `bridge answer` — a list of entities necessary to answer the question contained in the `outputs` field using two available texts;
1275
- - `instruction` — an instructional prompt specified for the current task;
1276
- - `inputs` — a dictionary containing the following information:
1277
- - `text` — the main text line;
1278
- - `support text` — a line with additional text;
1279
- - `question` — the question, the answer to which is contained in these texts;
1280
- - `outputs` — the answer information:
1281
- - `label` — the answer label;
1282
- - `length` — the answer length;
1283
- - `offset` — the answer start index;
1284
- - `segment` — a string containing the answer.
1285
 
1286
- #### *Data Instances*
1287
 
1288
- Below is an example from the dataset:
1289
 
1290
  ```json
1291
  {
1292
- "instruction": "Прочитайте два текста и ответьте на вопрос.\nТекст 1: {support_text}\nТекст 2: {text}\nВопрос: {question}\nОтвет:",
1293
  "inputs": {
1294
- "question": "В какую реку впадает река, притоком которой является Висвож?",
1295
- "support_text": "Висвож река в России, протекает по Республике Коми. Устье реки находится в 6 км по левому берегу реки Кыбантывис. Длина реки составляет 24 км.",
1296
- "text": "Кыбантывис (Кабан-Тывис) река в России, протекает по Республике Коми. Левый приток Айювы. Длина реки составляет 31 км. Система водного объекта: Айюва → Ижма → Печора → Баренцево море."
1297
  },
1298
- "outputs": [{
1299
- "label": "answer",
1300
- "length": 5,
1301
- "offset": 85,
1302
- "segment": "Айювы"
1303
- }],
1304
  "meta": {
1305
- "id": 9,
1306
- "bridge_answers": [{
1307
- "label": "passage",
1308
- "length": 10,
1309
- "offset": 104,
1310
- "segment": "Кыбантывис"
1311
- }]
1312
  }
1313
  }
1314
  ```
1315
 
1316
- #### *Data Splits*
1317
 
1318
- The dataset consists of 1056 training examples (train set) and 900 test examples (test set).
1319
 
1320
- #### *Prompts*
1321
 
1322
- We prepared 5 different prompts of various difficulties for this task.
1323
  An example of the prompt is given below:
1324
 
1325
- `"Прочитайте два текста и ответьте на вопрос.\nТекст 1: {support_text}\nТекст 2: {text}\nВопрос: {question}\nОтвет:"`.
1326
 
1327
- #### *Dataset Creation*
1328
 
1329
- The dataset is based on the corresponding dataset from the TAPE benchmark and was composed of texts from Wikipedia and WikiData.
1330
 
1331
- ### *Evaluation*
1332
 
1333
- #### *Metrics*
1334
 
1335
- To evaluate models on this dataset, two metrics are used: F1 score and complete match (Exact Match — EM).
1336
 
1337
- #### *Human Benchmark*
1338
 
1339
- The F1 score/EM results are `0.928` / `0.91`, respectively.
1340
 
1341
 
1342
  ## **PARus**
1343
 
1344
- ### *Task Description*
1345
 
1346
  The choice of Plausible Alternatives for the Russian language (PARus) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning.
1347
 
1348
 Each question in PARus is composed of a premise and two alternatives, where the task is to select the alternative that more plausibly has a causal relation with the premise. The correct alternative is randomized, so the expected random-guessing performance is 50%. The dataset was first proposed in [Russian SuperGLUE](https://russiansuperglue.com/tasks/task_info/PARus) and is an analog of the English [COPA](https://people.ict.usc.edu/~gordon/copa.html) dataset from [SuperGLUE](https://super.gluebenchmark.com/tasks); it was constructed as a translation of COPA and edited by professional editors. The data split from COPA is retained.
1349
 
1350
- The dataset allows one to evaluate how well models handle logical text entailment. It is constructed so as to take discourse characteristics into account. In the Russian SuperGLUE benchmark, this dataset is one of the few for which a significant gap between human and model scores still remains.
1351
 
1352
- ### *Dataset Description*
1353
 
1354
- #### *Data Fields*
1355
 
1356
 Each dataset sample consists of a `premise` and two options for continuing the situation, depending on the task tag: cause or effect.
1357
 
1358
- - `instruction` a prompt specified for the task, selected from different pools for cause and effect;
1359
- - `inputs` a dictionary containing the following input information:
1360
- - `premise` a text situation;
1361
- - `choice1` the first option;
1362
- - `choice2` the second option;
1363
- - `outputs` string values `1` or `2`;
1364
- - `meta` meta-information about the task:
1365
- - `task` a task class: cause or effect;
1366
- - `id` an id of the example from the dataset.
1367
 
1368
- #### *Data Instances*
1369
 
1370
  Below is an example from the dataset:
1371
 
1372
  ```json
1373
  {
1374
- "instruction": "Дано описание ситуации:\n'{premise}'\nи два фрагмента текста:\n1. {choice1}\n2. {choice2}\nОпредели, какой из двух фрагментов является следствием описанной ситуации? Ответь одной цифрой 1 или 2, ничего не добавляя.",
1375
  "inputs": {
1376
- "premise": "Власти пообещали сохранить в тайне личность жертвы преступления.",
1377
- "choice1": "Жертва изо всех сил пыталась вспомнить подробности преступления.",
1378
- "choice2": "Они скрывали имя жертвы от общественности."
1379
  },
1380
- "outputs": "2",
1381
  "meta": {
1382
- "task": "effect",
1383
- "id": 72
1384
  }
1385
  }
1386
  ```
1387
 
1388
- #### *Data Splits*
1389
 
1390
- The dataset consists of 500 train samples, 100 dev samples, and 400 private test samples.
1391
- The number of sentences in the whole set is 1000. The number of tokens is 5.4 · 10^3.
1392
 
1393
- #### *Prompts*
1394
 
1395
- Prompts are presented separately for the `cause` and for the `effect`, e.g.:
1396
 
1397
- For cause: `"Дано описание ситуации:\n'{premise}'\nи два фрагмента текста:\n1. {choice1}\n2. {choice2}\nОпредели, какой из двух фрагментов является причиной описанной ситуации? Ответь одной цифрой 1 или 2, ничего не добавляя."`.
1398
 
1399
- For effect: `"Дано описание ситуации:\n'{premise}'\nи два фрагмента текста:\n1. {choice1}\n2. {choice2}\nОпредели, какой из двух фрагментов является следствием описанной ситуации? Ответь одной цифрой 1 или 2, ничего не добавляя."`.
1400
 
1401
- ### *Evaluation*
1402
 
1403
- #### *Metrics*
1404
 
1405
- The metric for this task is Accuracy.
1406
 
1407
- #### *Human Benchmark*
1408
 
1409
- Human-level score is measured on a test set with Yandex.Toloka project with the overlap of 3 reviewers per task.
1410
 
1411
- The Accuracy is `0.982`.
1412
 
1413
 
1414
  ## **RCB**
 
1086
 
1087
  ### *Task Description*
1088
 
1089
+ **Massive Multitask Russian AMplified Understudy (MaMuRAMu)** is a dataset designed to measure the professional knowledge models acquire during pretraining in various fields. The task covers 57 subjects (subdomains) grouped into topics (domains): HUMANITIES; SOCIAL SCIENCE; SCIENCE, TECHNOLOGY, ENGINEERING, AND MATHEMATICS (STEM); OTHER. The dataset was created based on the English MMLU and follows its methodology in the instruction format. Each example contains a question from one of the categories with four possible answers, only one of which is correct.
1090
 
1091
 **Warning:** to avoid data leakage for MaMuRAMu, we created a new closed dataset that follows the original MMLU design. Thus, **results on the MMLU and MaMuRAMu datasets cannot be directly compared with each other.**
1092
 
 
1094
 
1095
  #### Motivation
1096
 
1097
+ This set continues the idea of the GLUE and SuperGLUE benchmarks, which focus on the general assessment of natural language understanding (NLU). Unlike sets such as ruWorldTree and ruOpenBookQA (whose questions are similar in format to MMLU), which cover the school curriculum and elementary knowledge, MaMuRAMu is designed to test professional knowledge in various fields.
1098
 
1099
  ### Dataset Description
1100
 
 
1168
 
1169
  ## **MathLogicQA**
1170
 
1171
+ ### Task Description
1172
 
1173
  The task is to solve mathematical problems formulated in natural language.
1174
 
 
1179
  - solving problems on proportions and comparison,
1180
  - comparing the objects described in the problem with the variables in the equation.
1181
 
1182
+ #### Motivation
1183
+
1184
 The goal of the task is to analyze the ability of the model to solve mathematical tasks using simple operations such as addition, subtraction, multiplication, division, and comparison.
1185
 
1186
+ ### Dataset Description
1187
 
1188
+ Each dataset sample consists of the problem text and 4 answer options, only one of which is correct.
1189
 
1190
+ #### Data Fields
1191
 
1192
+ - `instruction` is a string containing instructions for the task and information about the requirements for the model output format. All used prompts are presented in the project repository;
1193
+ - `inputs` is a dictionary containing input data for the model:
1194
+ - `text` is a string containing the text of the problem;
1195
+ - `option_a` is a string containing answer option A;
1196
+ - `option_b` is a string containing answer option B;
1197
+ - `option_c` is a string containing answer option C;
1198
+ - `option_d` is a string containing answer option D;
1199
+ - `outputs` is a string containing the letter of the correct answer;
1200
+ - `meta` is a dictionary containing meta information:
1201
+ - `id` is an integer indicating the index of the example;
1202
+ - `task` is a string containing information about the task type: `math` includes solving systems of equations and comparing quantities, `logimath` includes matching the objects described in the problem with the variables in the equation and solving it.
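
For readers who consume the data programmatically, the field layout above can be captured as a typed schema. The sketch below is illustrative only (the class names are ours; the field names and types come from the list above):

```python
from typing import TypedDict

class Inputs(TypedDict):
    text: str       # problem statement in natural language
    option_a: str   # answer option A
    option_b: str   # answer option B
    option_c: str   # answer option C
    option_d: str   # answer option D

class Meta(TypedDict):
    id: int         # index of the example
    task: str       # "math" or "logimath"

class Sample(TypedDict):
    instruction: str  # prompt template with {text} / {option_*} placeholders
    inputs: Inputs
    outputs: str      # letter of the correct answer: "A", "B", "C", or "D"
    meta: Meta
```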
1203
 
1204
+ #### Data Instances
1205
 
1206
  Below is an example from the dataset:
1207
 
1208
  ```json
1209
  {
1210
+ "instruction": "{text}\nA. {option_a}\nB. {option_b}\nC. {option_c}\nD. {option_d}\nУкажите только букву правильного ответа.\nОтвет:",
1211
+ "inputs": {
1212
+ "text": "Если из 17 вычесть 26, то получится 3, умноженное на q. Рассчитайте значение переменной q.",
1213
+ "option_a": "-3",
1214
+ "option_b": "3",
1215
+ "option_c": "14",
1216
+ "option_d": "14.3"
1217
+ },
1218
+ "outputs": "A",
1219
+ "meta": {
1220
+ "id": 1,
1221
+ "task": "math"
1222
+ }
1223
  }
1224
  ```
1225
 
1226
+ #### Data Splits
1227
 
1228
+ The train set consists of `680` examples. The test set consists of `1143` examples. Train and test sets are balanced in class labels.
 
1229
 
1230
+ #### Prompts
1231
+ 10 prompts of varying difficulty were created for this task. Example:
1232
+
1233
+ ```json
1234
+ "Решите математичеcкую задачу: {text}\nA) {option_a}\nB) {option_b}\nC) {option_c}\nD) {option_d}\nВыберите один правильный ответ. В ответе укажите только букву правильного ответа.\nОтвет:"
1235
+ ```
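
Prompt templates are plain strings with `{placeholder}` slots filled from the sample's `inputs`. A minimal sketch, using the template above and the earlier data instance:

```python
# Template from the example above; inputs from the earlier data instance.
template = (
    "Решите математическую задачу: {text}\nA) {option_a}\nB) {option_b}\n"
    "C) {option_c}\nD) {option_d}\nВыберите один правильный ответ. "
    "В ответе укажите только букву правильного ответа.\nОтвет:"
)
inputs = {
    "text": "Если из 17 вычесть 26, то получится 3, умноженное на q. "
            "Рассчитайте значение переменной q.",
    "option_a": "-3", "option_b": "3", "option_c": "14", "option_d": "14.3",
}
print(template.format(**inputs))  # the exact string fed to the model
```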
1236
+
1237
+ #### Dataset Creation
1238
 
1239
+ The dataset includes two types of problems: `logic` and `math`.
1240
 
1241
+ ##### logic
1242
 
1243
  Logic problems are mathematical problems formulated in natural language. To solve this type of problem, it is necessary to construct a system of equations (or one equation) and solve it by comparing the objects described in the problem with the variables in the equation. Problems of this type were formed using open sources containing databases of mathematical problems.
1244
 
1245
+ ##### math
1246
 
1247
 Math problems consist of a mathematical expression (a linear equation or a system of linear equations) and a question about that expression. One must solve a linear equation or system of linear equations to answer the question. For some tasks, it is also necessary to perform a comparison operation. Mathematical expressions are synthetic data generated with an open-source library, using its linear_1d and linear_2d modules. The generated expressions were manually rewritten by experts from mathematical notation into natural Russian. Next, the experts formulated a question in natural language and the correct answer for each expression.
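
The module names `linear_1d` and `linear_2d` match those in DeepMind's open-source `mathematics_dataset` generator, though the README does not name the library explicitly. The sketch below is our own minimal stand-in showing the kind of raw expression such a generator produces; it is not the actual pipeline:

```python
import random

def linear_1d(rng: random.Random) -> tuple[str, int]:
    """Generate a one-variable linear equation a*x + b = c with an integer root."""
    x = rng.randint(-10, 10)                              # pick the root first
    a = rng.choice([i for i in range(-9, 10) if i != 0])  # non-zero coefficient
    b = rng.randint(-20, 20)
    return f"{a}*x + {b} = {a * x + b}", x                # c is derived, so x is exact

rng = random.Random(0)
expression, root = linear_1d(rng)
print(expression, "-> x =", root)
# Experts then rewrote such expressions into natural Russian questions
# and added three incorrect answer options per task.
```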
1248
 
1249
  When creating the dataset, experts added instructions in natural language to some tasks. The experts also formulated 3 incorrect answer options for each task from the dataset.
1250
 
1251
+ #### Validation
1252
 
1253
  All examples from the dataset have been validated on the Yandex.Toloka platform. Tolokers checked the correctness of the problem conditions and the answer. The dataset included 2000 examples of type `math` and 570 examples of type `logic`. Each example had a 3-person overlap, which could increase to 5 if the agreement on the task answer was below 70%. The responses of the Toloka annotators who showed labeling accuracy of less than 50% on control tasks were excluded.
1254
 
1255
 As a result of validation, the final test set included only examples on which the annotators agreed completely. The training set included the remaining examples with agreement above 60%.
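
A compact way to read the routing rules above is as a function of the collected votes. This is a hypothetical helper, not the actual Toloka pipeline; the thresholds are the ones stated in the text:

```python
from collections import Counter

def route_example(votes: list[str]) -> str:
    """Route an annotated example according to the scheme described above."""
    _, top_count = Counter(votes).most_common(1)[0]
    agreement = top_count / len(votes)
    if len(votes) == 3 and agreement < 0.7:
        return "escalate"   # re-annotate with a 5-person overlap
    if agreement == 1.0:
        return "test"       # complete agreement -> test set
    if agreement > 0.6:
        return "train"      # partial agreement -> train set
    return "discard"

print(route_example(["A", "A", "B"]))             # escalate (2/3 < 0.7)
print(route_example(["A", "A", "A"]))             # test (full agreement)
print(route_example(["A", "A", "A", "A", "B"]))   # train (4/5 = 0.8)
```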
1256
 
1257
+ ### Evaluation
1258
 
1259
+ #### Metrics
1260
 
1261
 Models’ performance is evaluated using the Accuracy score. This metric was chosen because the classes are balanced.
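
Concretely, the score is the share of predicted answer letters that exactly match the gold letters; a minimal sketch:

```python
def accuracy(predictions: list[str], references: list[str]) -> float:
    """Share of examples where the predicted letter equals the gold one."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

print(accuracy(["A", "C", "D", "B"], ["A", "B", "D", "B"]))  # 0.75
```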
1262
 
1263
+ #### Human Benchmark
1264
 
1265
+ Human-level score is measured on the test set via a Yandex.Toloka project with an overlap of 5 reviewers per task. The human accuracy score is `0.99`.
1266
 
1267
 
1268
  ## **MultiQ**
1269
 
1270
+ ### Task Description
1271
 
1272
+ MultiQ is a multi-hop QA dataset for Russian, suitable for general open-domain question answering, information retrieval, and reading comprehension tasks. The dataset is based on the [dataset](https://tape-benchmark.com/datasets.html#multiq) of the same name from the TAPE benchmark [1].
1273
 
1274
+ **Keywords:** Multi-hop QA, World Knowledge, Logic, Question-Answering
1275
 
1276
+ **Authors:** Ekaterina Taktasheva, Tatiana Shavrina, Alena Fenogenova, Denis Shevelev, Nadezhda Katricheva, Maria Tikhonova, Albina Akhmetgareeva, Oleg Zinkevich, Anastasiia Bashmakova, Svetlana Iordanskaia, Alena Spiridonova, Valentina Kurenshchikova, Ekaterina Artemova, Vladislav Mikhailov
1277
 
1278
+ #### Motivation
1279
 
1280
+ Question-answering has been an essential task in natural language processing and information retrieval. However, certain areas in QA remain quite challenging for modern approaches, including multi-hop QA, which is traditionally considered an intersection of graph methods, knowledge representation, and state-of-the-art language modeling.
1281
 
1282
+ ### Dataset Description
1283
 
1284
+ #### Data Fields
1285
+
1286
+ - `meta` is a dictionary containing meta-information about the example:
1287
+ - `id` is the task ID;
1288
+ - `bridge_answers` is a list of entities necessary to answer the question contained in the `outputs` field using the two available texts;
1289
+ - `instruction` is an instructional prompt specified for the current task;
1290
+ - `inputs` is a dictionary containing the following information:
1291
+ - `text` is the main text line;
1292
+ - `support_text` is a line with additional text;
1293
+ - `question` is the question whose answer is contained in these texts;
1294
+ - `outputs` is a string containing the answer.
1295
+
1296
+ #### Data Instances
1297
+
1298
+ Each dataset sample consists of two texts (the main one and the supporting one) and a question based on both. Below is an example from the dataset:
1299
 
1300
  ```json
1301
  {
1302
+ "instruction": "Даны два текста:\nТекст 1: {support_text}\nТекст 2: {text}\nОпираясь на данные тексты, ответьте на вопрос: {question}\nВаш ответ не должен содержать дополнительные объяснения.\nОтвет:",
1303
  "inputs": {
1304
+ "text": "Нижний Новгород разговорной речи часто \"Нижний\", c XIII по XVII век — Новгород Низовской земли, с 7 октября 1932 по 22 октября 1990 года — Горький) — город в центральной России, административный центр Приволжского федерального округа и Нижегородской области. Второй по численности населения город в Приволжском федеральном округе и на рек�� Волге.\\n\\nКультура.\\nИсторический центр Нижнего Новгорода, расположенный в Нагорной части города, несмотря на значительные перестройки, сохранил значительное число исторических гражданских строений XVIII — начала XX веков, включая многочисленные памятники деревянного зодчества. Дмитриевская башня Кремля выходит на историческую площадь Минина и Пожарского. Нижегородский кремль является официальной резиденцией Городской думы Нижнего Новгорода и правительства Нижегородской области. Зоопарк \"Лимпопо\". Зоопарк \"Лимпопо\" — первый частный зоопарк в России, расположенный в Московском районе.",
1305
+ "support_text": "Евгений Владимирович Крестьянинов (род. 12 июля 1948, Горький) российский государственный деятель.",
1306
+ "question": "Как называется законодательный орган города, где родился Евгений Владимирович Крестьянинов?"
1307
  },
1308
+ "outputs": "Городской думы",
1309
  "meta": {
1310
+ "id": 0,
1311
+ "bridge_answers": "Горький"
1312
  }
1313
  }
1314
  ```
1315
 
1316
+ #### Data Splits
1317
 
1318
+ The dataset consists of `1056` training examples (train set) and `900` test examples (test set).
1319
 
1320
+ #### Prompts
1321
 
1322
+ We prepared 10 different prompts of varying difficulty for this task.
1323
  An example of the prompt is given below:
1324
 
1325
+ ```json
1326
+ "Текст 1: {support_text}\nТекст 2: {text}\nОпираясь на данные тексты, ответьте на вопрос: {question}\nЗапишите только ответ без дополнительных объяснений.\nОтвет:"
1327
+ ```
1328
 
1329
+ #### Dataset Creation
1330
 
1331
+ The dataset was created from the corresponding TAPE benchmark dataset, whose texts were initially sampled from Wikipedia and Wikidata. The whole data collection pipeline is described [here](https://tape-benchmark.com/datasets.html#multiq).
1332
 
1333
+ ### Evaluation
1334
 
1335
+ #### Metrics
1336
 
1337
+ To evaluate models on this dataset, two metrics are used: F1-score and Exact Match (EM).
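
Both metrics compare the generated answer string with the gold answer: EM is binary, while F1 gives partial credit for token overlap. A sketch of the usual formulation (normalization details may differ from MERA's scorer):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between prediction and gold answer."""
    pred_tokens, gold_tokens = pred.lower().split(), gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Айювы", "айювы"))      # 1.0
print(f1_score("в реку Айюва", "Айюва"))  # 0.5 (partial credit)
```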
1338
 
1339
+ #### Human Benchmark
1340
 
1341
+ The F1-score / EM results are `0.928` / `0.91`, respectively.
1342
 
1343
 
1344
  ## **PARus**
1345
 
1346
+ ### Task Description
1347
 
1348
  The choice of Plausible Alternatives for the Russian language (PARus) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning.
1349
 
1350
 Each question in PARus is composed of a premise and two alternatives, where the task is to select the alternative that more plausibly has a causal relation with the premise. The correct alternative is randomized, so the expected random-guessing performance is 50%. The dataset was first proposed in [Russian SuperGLUE](https://russiansuperglue.com/tasks/task_info/PARus) and is an analog of the English [COPA](https://people.ict.usc.edu/~gordon/copa.html) dataset from [SuperGLUE](https://super.gluebenchmark.com/tasks); it was constructed as a translation of COPA and edited by professional editors. The data split from COPA is retained.
1351
 
1352
+ **Keywords:** reasoning, commonsense, causality, commonsense causal reasoning
1353
 
1354
+ **Authors:** Shavrina Tatiana, Fenogenova Alena, Emelyanov Anton, Shevelev Denis, Artemova Ekaterina, Malykh Valentin, Mikhailov Vladislav, Tikhonova Maria, Evlampiev Andrey
1355
 
1356
+ #### Motivation
1357
+
1358
+ The dataset tests the models’ ability to identify cause-and-effect relationships in text and to draw conclusions based on them. It was first presented in the [RussianSuperGLUE](https://russiansuperglue.com/tasks/task_info/PARus) benchmark, and it is one of the sets for which there is still a significant gap between model and human scores.
1359
+
1360
+ ### Dataset Description
1361
+
1362
+ #### Data Fields
1363
 
1364
 Each dataset sample consists of a `premise` and two options for continuing the situation, depending on the task tag: cause or effect.
1365
 
1366
+ - `instruction` is a prompt specified for the task, selected from different pools for cause and effect;
1367
+ - `inputs` is a dictionary containing the following input information:
1368
+ - `premise` is a text situation;
1369
+ - `choice1` is the first option;
1370
+ - `choice2` is the second option;
1371
+ - `outputs` is a string value, either "1" or "2";
1372
+ - `meta` is meta-information about the task:
1373
+ - `task` is a task class: cause or effect;
1374
+ - `id` is the id of the example from the dataset.
1375
 
1376
+ #### Data Instances
1377
 
1378
  Below is an example from the dataset:
1379
 
1380
  ```json
1381
  {
1382
+ "instruction": "Дано описание ситуации: \"{premise}\" и два возможных продолжения текста: 1. {choice1} 2. {choice2} Определи, какой из двух фрагментов является причиной описанной ситуации? Выведи одну цифру правильного ответа.",
1383
  "inputs": {
1384
+ "premise": "Моё тело отбрасывает тень на траву.",
1385
+ "choice1": "Солнце уже поднялось.",
1386
+ "choice2": "Трава уже подстрижена."
1387
  },
1388
+ "outputs": "1",
1389
  "meta": {
1390
+ "task": "cause",
1391
+ "id": 0
1392
  }
1393
  }
1394
  ```
1395
 
1396
+ #### Data Splits
1397
 
1398
+ The dataset consists of `400` train samples, `100` dev samples, and `500` private test samples. The number of sentences in the whole set is `1000`. The number of tokens is 5.4 · 10^3.
 
1399
 
1400
+ #### Prompts
1401
 
1402
+ We prepared 10 different prompts of varying difficulty for the `cause` and `effect` parts of this task:
1403
 
1404
+ For cause:
1405
 
1406
+ ```json
1407
+ "Дана текстовая ситуация: \"{premise}\" и два текста продолжения: 1) {choice1} 2) {choice2} Определи, какой из двух фрагментов является причиной описанной ситуации? В качестве ответа выведи о��ну цифру 1 или 2."
1408
+ ```
1409
 
1410
+ For effect:
1411
 
1412
+ ```json
1413
+ "Дано описание ситуации: \"{premise}\" и два фрагмента текста: 1) {choice1} 2) {choice2} Определи, какой из двух фрагментов является следствием описанной ситуации? В качестве ответа выведи цифру 1 (первый текст) или 2 (второй текст)."
1414
+ ```
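
Because cause and effect questions draw on separate prompt pools, prompt construction can key off `meta.task`. A minimal sketch, with the pools truncated to the single examples above (MERA uses 10 prompts per part):

```python
import random

PROMPT_POOLS = {  # truncated illustrative pools
    "cause": [
        "Дана текстовая ситуация: \"{premise}\" и два текста продолжения: "
        "1) {choice1} 2) {choice2} Определи, какой из двух фрагментов является "
        "причиной описанной ситуации? В качестве ответа выведи одну цифру 1 или 2."
    ],
    "effect": [
        "Дано описание ситуации: \"{premise}\" и два фрагмента текста: "
        "1) {choice1} 2) {choice2} Определи, какой из двух фрагментов является "
        "следствием описанной ситуации? В качестве ответа выведи цифру 1 "
        "(первый текст) или 2 (второй текст)."
    ],
}

def build_prompt(sample: dict, rng: random.Random) -> str:
    """Pick a template from the pool matching the task tag and fill it."""
    template = rng.choice(PROMPT_POOLS[sample["meta"]["task"]])
    return template.format(**sample["inputs"])
```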
1415
 
1416
+ #### Dataset Creation
1417
 
1418
+ The dataset was originally taken from the RussianSuperGLUE set and converted into the instruction format. All examples for the original RussianSuperGLUE set were collected from open news sources and literary magazines, then manually cross-checked and supplemented by human evaluation on Yandex.Toloka.
1419
+
1420
+ Please note: [PArsed RUssian Sentences](https://parus-proj.github.io/PaRuS/parus_pipe.html) is a different dataset and is not part of Russian SuperGLUE.
1421
 
1422
+ ### Evaluation
1423
+
1424
+ #### Metrics
1425
+
1426
+ The metric for this task is Accuracy.
1427
+
1428
+ #### Human Benchmark
1429
 
1430
+ Human-level score is measured on the test set via a Yandex.Toloka project with an overlap of 3 reviewers per task. The Accuracy score is `0.982`.
1431
 
1432
 
1433
  ## **RCB**