Datasets: MERA-evaluation
Commit 73cf223 • 1 Parent(s): ff4c325
Update README.md
README.md CHANGED
@@ -830,7 +830,9 @@ MERA (Multimodal Evaluation for Russian-language Architectures) is a new open in
 
 *The MERA benchmark unites industry and academic partners in one place to research the capabilities of fundamental models, draw attention to AI-related issues, foster collaboration within the Russian Federation and in the international arena, and create an independent, unified system for measuring all current models.*
 
-The benchmark covers 23 evaluation tasks comprising knowledge about the world, logic, reasoning, AI ethics, and other domains. Each task is supplied with a dataset and a human-level score on this task.
+The benchmark covers 23 evaluation tasks comprising knowledge about the world, logic, reasoning, AI ethics, and other domains. Each task is supplied with a dataset and a human-level score on this task.
+
+NB that 8 datasets are diagnostic and not used in the overall model evaluation.
 
 ## MERA tasks & datasets
 
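The added note implies that only the remaining 15 of the 23 task scores would feed into a headline result. Below is a minimal sketch of that aggregation idea, assuming a plain average over non-diagnostic tasks; the task names, the averaging rule, and the score values are illustrative placeholders, not taken from the MERA codebase.

```python
# Minimal sketch (not the official MERA scoring code) of what the new
# README note implies: diagnostic tasks are excluded when an overall
# score is aggregated. Task names and the averaging rule are assumptions
# chosen for illustration only.

# Placeholder names standing in for the 8 diagnostic datasets.
DIAGNOSTIC_TASKS = {"diagnostic_task_1", "diagnostic_task_2"}


def overall_score(per_task_scores: dict[str, float]) -> float:
    """Average per-task scores over non-diagnostic tasks only."""
    scored = {name: score for name, score in per_task_scores.items()
              if name not in DIAGNOSTIC_TASKS}
    if not scored:
        raise ValueError("no non-diagnostic task scores provided")
    return sum(scored.values()) / len(scored)


if __name__ == "__main__":
    # Toy example: the diagnostic task's score does not affect the result.
    scores = {"task_a": 0.71, "task_b": 0.65, "diagnostic_task_1": 0.40}
    print(f"overall: {overall_score(scores):.3f}")  # -> overall: 0.680
```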