Commit e186921 by MERA-evaluation (parent: d81cf27)

Update README.md

Files changed (1): README.md (+175, -0)

The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `0.704`.

## **MaMuRAMu**

### *Task Description*

**Massive Multitask Russian AMplified Understudy (MaMuRAMu)** is a dataset designed to measure the professional knowledge a model acquires during pretraining in various fields. The task covers 57 subjects (subdomains) across different topics (domains): HUMANITIES; SOCIAL SCIENCE; SCIENCE, TECHNOLOGY, ENGINEERING, AND MATHEMATICS (STEM); OTHER. The dataset was created based on the English MMLU proposed in [1] and follows its methodology in the instruction format. Each example contains a question from one of the categories with four possible answers, only one of which is correct.

**Warning:** to avoid data leakage for MaMuRAMu, we created a NEW closed dataset that follows the original MMLU design. Thus, **results on the MMLU and MaMuRAMu datasets cannot be directly compared with each other.**

**Keywords**: logic, world knowledge, factual, expert knowledge

#### Motivation

This set continues the idea of the GLUE [2] and SuperGLUE [3] benchmarks, which focus on the generalized assessment of natural language understanding (NLU) tasks. Unlike sets such as ruWorldTree and ruOpenBookQA (whose questions are similar to the MMLU format), which cover school-curriculum tests and elementary knowledge, MaMuRAMu is designed to test professional knowledge in various fields.

### Dataset Description

#### Data Fields

- `instruction` is a string containing instructions for the task and information about the requirements for the model output format;
- `inputs` is a dictionary that contains the following information:
    - `text` is the test question;
    - `option_a` is option A;
    - `option_b` is option B;
    - `option_c` is option C;
    - `option_d` is option D;
    - `subject` is the topic of the question (a generalization of a group of subdomains by meaning);
- `outputs` is the result; it can be one of the following string values: "A", "B", "C", "D";
- `meta` is a dictionary containing meta information:
    - `id` is an integer indicating the index of the example;
    - `domain` is the question subdomain.

#### Data Instances

Below is an example from the dataset:

```json
{
    "instruction": "Задание содержит вопрос по теме {subject} и 4 варианта ответа A, B, C, D, из которых только один правильный.\n{text}\nA {option_a}\nB {option_b}\nC {option_c}\nD {option_d}\nЗапишите букву правильного ответа\nОтвет:",
    "inputs": {
        "text": "Какое число больше остальных: 73; 52,5; -5; 75; 32,83?",
        "option_a": "73",
        "option_b": "52,5",
        "option_c": "-5",
        "option_d": "75",
        "subject": "Математика"
    },
    "outputs": "D",
    "meta": {
        "id": 0,
        "domain": "elementary_mathematics"
    }
}
```

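For illustration only, the sketch below (plain Python, not part of the benchmark tooling) renders the instruction template of the example above with its `inputs`; the expected completion for this sample is the single letter "D".

```python
# Illustrative only: fill the instruction template from the example above.
sample = {
    "instruction": "Задание содержит вопрос по теме {subject} и 4 варианта ответа A, B, C, D, из которых только один правильный.\n{text}\nA {option_a}\nB {option_b}\nC {option_c}\nD {option_d}\nЗапишите букву правильного ответа\nОтвет:",
    "inputs": {
        "text": "Какое число больше остальных: 73; 52,5; -5; 75; 32,83?",
        "option_a": "73",
        "option_b": "52,5",
        "option_c": "-5",
        "option_d": "75",
        "subject": "Математика",
    },
    "outputs": "D",
}

# The prompt the model sees; the reference answer is sample["outputs"] == "D".
print(sample["instruction"].format(**sample["inputs"]))
```
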
#### Data Splits

The private test set (test split) contains `4248` examples. The few-shot set (train split) contains `285` hand-written examples.

#### Prompts

For this task, 10 prompts of varying difficulty were created. Example:

```json
"Вопрос:\n{text}. Варианты ответа:\nA {option_a}\nB {option_b}\nC {option_c}\nD {option_d}\nИспользуй знания по теме {subject} и выбери правильный ответ. Выведи только одну букву. Ответ:"
```

### Dataset Creation

The test set is based on the methodology of [the original MMLU dataset](https://github.com/hendrycks/test). The set was assembled manually according to the original format, with domains as close as possible to the original set. The set is adapted for the Russian language and culture. The distribution of tasks across individual specific domains and subjects is balanced and corresponds to the distribution of the original MMLU.

### Evaluation

#### Metrics

The dataset is evaluated using Accuracy and, following the original methodology, in the few-shot format with five shots.

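As a rough sketch of that protocol (hypothetical helper names, not the MERA evaluation code): five solved demonstrations from the train split are prepended to the target question, and predictions are scored by exact match of the answer letter.

```python
# Hypothetical sketch of five-shot prompting and accuracy scoring; not the MERA harness.
def render(sample: dict) -> str:
    return sample["instruction"].format(**sample["inputs"])

def five_shot_prompt(shots: list[dict], sample: dict) -> str:
    # Prepend five solved demonstrations, then the unanswered target question.
    demos = "\n\n".join(render(s) + " " + s["outputs"] for s in shots[:5])
    return demos + "\n\n" + render(sample)

def accuracy(predictions: list[str], references: list[str]) -> float:
    # Exact match on the predicted answer letter ("A", "B", "C" or "D").
    return sum(p.strip() == r for p, r in zip(predictions, references)) / len(references)
```
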
#### Human benchmark

According to the original article, human-level accuracy on the English test varies:
"Unspecialized humans from Amazon Mechanical Turk obtain 34.5% accuracy on English test. Meanwhile, expert-level performance can be far higher. For example, real-world test-taker human accuracy at the 95th percentile is around 87% for US Medical Licensing Examinations, and these questions make up our “Professional Medicine” task. If we take the 95th percentile human test-taker accuracy for exams that build up our test, and if we make an educated guess when such information is unavailable, we then estimate that expert-level accuracy is approximately 89.8%."

The accuracy of the annotation on the test set is `84.4%`.

## **MathLogicQA**

### *Task Description*

[...]

Average Macro F1 and Accuracy results are `0.68` / `0.702`, respectively.

## **ruCodeEval**

### *Task Description*

Russian Code Evaluation (ruCodeEval) is the Russian analog of the original HumanEval dataset, created to evaluate the ability of language models to generate code in the Python programming language to solve simple problems.
The dataset aims to measure the functional correctness of code generation based on the information in the function's docstring: a text description of the function's operation and several examples of results for different inputs.

**Keywords:** PLP, programming, Python

#### Motivation

This task tests the ability of models to generate simple Python programs based on a description (condition) in natural language. Since the training corpora of large models contain a share of texts (programs) written in various programming languages, the models are assumed to be able to understand and write code for simple tasks.

### Dataset Description

#### Data Fields

- `instruction` is a string containing instructions for the task;
- `inputs` is a dictionary that contains the following information:
    - `function` is a string containing the function signature and its docstring; the function body is left unwritten;
    - `tests` is a list of dictionaries that contain the input data of the test cases for the given task (variants of input data on which the final function code is tested);
- `outputs` is a two-dimensional array of size (n_samples, n_tests), where n_samples is the number of samples required to calculate the pass@k metric and n_tests is the number of test cases in `tests`; each list in `outputs` is the same and contains the correct answers to all test cases as strings;
- `meta` is a dictionary containing meta information:
    - `id` is an integer indicating the index of the example;
    - `canonical_solution` is the canonical solution;
    - `entry_point` is the function name.

#### Data Instances

Below is an example from the dataset:

```json
{
    "instruction": "Необходимо реализовать логику на языке Python для следующей программы\n{function}",
    "inputs": {
        "function": "\n\ndef greatest_common_divisor(a: int, b: int) -> int:\n \"\"\"Верните наибольший общий делитель двух целых чисел a и b.\n Примеры: \n greatest_common_divisor(3, 5) \n 1 \n greatest_common_divisor(25, 15) \n 5\n \"\"\"",
        "tests": "[{'a': 100, 'b': 50}, {'a': 98, 'b': 56}, {'a': 540, 'b': 288}, {'a': 81, 'b': 27}, {'a': 33, 'b': 55}, {'a': 7, 'b': 13}, {'a': 14, 'b': 28}, {'a': 10, 'b': 25}, {'a': 12, 'b': 54}, {'a': 21, 'b': 35}]"
    },
    "outputs": [
        "50",
        "14",
        "36",
        "27",
        "11",
        "1",
        "14",
        "5",
        "6",
        "7"
    ],
    "meta": {
        "id": 13,
        "canonical_solution": "\n\n def query_gcd(a: int, b: int) -> int:\n return a if b == 0 else query_gcd(b, a % b)\n return query_gcd(a, b) \n\n",
        "entry_point": "greatest_common_divisor"
    }
}
```

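The fields above fit together roughly as follows: a generated completion is appended to `function`, the resulting program is executed, the `entry_point` is called on every test case, and the stringified results are compared with `outputs`. The sketch below is a hypothetical illustration (function and variable names are ours, not the benchmark's actual harness; real evaluation must sandbox untrusted generated code):

```python
import ast

def check_candidate(program: str, entry_point: str, tests: str, expected: list[str]) -> bool:
    """Run one candidate program against its test cases and compare with expected outputs."""
    namespace: dict = {}
    exec(program, namespace)              # WARNING: never run untrusted code without a sandbox
    func = namespace[entry_point]
    test_cases = ast.literal_eval(tests)  # `tests` is stored as a string in the dataset
    return [str(func(**case)) for case in test_cases] == expected

# Usage with a hand-written solution (not a model generation):
program = (
    "def greatest_common_divisor(a: int, b: int) -> int:\n"
    "    return a if b == 0 else greatest_common_divisor(b, a % b)\n"
)
print(check_candidate(program, "greatest_common_divisor",
                      "[{'a': 100, 'b': 50}, {'a': 98, 'b': 56}]", ["50", "14"]))  # True
```
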
#### Data Splits

The closed test set contains `164` tasks with closed answers specially collected by the authors for this benchmark. For the test set, we provide only test cases without outputs and solutions.

#### Prompts

For this task, 10 prompts of varying difficulty were created. Example:

```json
"Допишите код на языке Python в соответствии с условием, приведенным в описании\n{function}"
```

### Dataset Creation

The test set was manually collected from open sources following the format of the original open set [openai_humaneval](https://huggingface.co/datasets/openai_humaneval); the dataset was adjusted to avoid data leakage during training and takes into account the corrections described in [2].

### Evaluation

#### Metrics

The model is evaluated using the `pass@k` metric, which is computed as follows:

$$ pass@k:=\mathbb{E}_{problems}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right] $$

Notation: *n* is the total number of generated solution options, *c* is the number of correct solutions, and *k* is the chosen cutoff: how many options are taken into account.

To calculate `pass@k`, `n ≥ k` solutions are generated for each problem and run through the test cases (we use n = 10, k ≤ 10, and an average of 10 test cases per problem). Then the number of correct solutions is calculated (`c ≤ n`). A solution is considered correct if it passes all test cases, i.e., the results of running the solution on the test cases equal the correct answers (outputs) for that problem. Such an evaluation process yields an unbiased score.

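For reference, the per-problem estimator in the formula above can be computed with the numerically stable product form used in the original HumanEval paper; the snippet below is a minimal sketch.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: any k-subset contains a correct one
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

print(pass_at_k(n=10, c=3, k=1))  # 0.3 (equals c/n when k == 1)
```
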
#### Human evaluation

The dataset includes algorithmic problems that require knowledge of the Python programming language, which is too complex for an average annotator. All problems have strict solutions, so all human evaluation metrics are taken as `1.0`.


## **ruDetox**

### *Task Description*