---
task_categories:
- text-classification
- token-classification
- question-answering
- multiple-choice
language:
- bg
pretty_name: Bulgarian GLUE
size_categories:
- n<1K
- 1K<n<10K
---
| Name | Task type | Identifier | Download | More Info | Metrics | Train / Val / Test |
|---|---|---|---|---|---|---|
| Balto-Slavic NLP Shared Task | Named Entity Recognition | BSNLP | URL | Info | F1 | 724 / 182 / 301 |
| CheckThat! (2021), Task 1A | Check-Worthiness Estimation | CT21.T1 | URL | Info | Average Precision | 2,995 / 350 / 357 |
| Cinexio Movie Reviews | Sentiment Analysis | Cinexio | URL | Info | Pearson-Spearman Corr | 8,155 / 811 / 861 |
| Hack the News Datathon (2019) | Fake News Detection | Fake-N | URL | Info | Binary F1 | 1,990 / 221 / 701 |
| In Search of Credible News | Humor Detection | Cred.-N | URL | Info | Binary F1 | 19,227 / 5,949 / 17,887 |
| Multi-Subject High School Examinations Dataset | Multiple-choice QA | EXAMS | URL | Info | Accuracy | 1,512 / 365 / 1,472 |
| Universal Dependencies | Part-of-Speech Tagging | U.Dep | URL | Info | F1 | 8,907 / 1,115 / 1,116 |
| Cross-lingual Natural Language Inference | Natural Language Inference | XNLI | URL | Info | Accuracy | 392,702 / 5,010 / 2,490 |
| Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn) | Named Entity Recognition | PAN-X | URL | Info | F1 | 16,237 / 7,029 / 7,263 |
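The splits above can typically be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example rather than an official loading script: the Hub repository ID (`bgglue/bgglue`) and the use of the lowercased task identifier as the configuration name are assumptions made for illustration.

```python
# Minimal sketch: loading one bgGLUE task with the `datasets` library.
# The repository ID ("bgglue/bgglue") and the configuration name ("exams")
# are assumptions for illustration; see the "Tasks" page for the actual sources.
from datasets import load_dataset

exams = load_dataset("bgglue/bgglue", "exams")  # hypothetical repo/config names
print(exams)               # expected splits: train / validation / test
print(exams["train"][0])   # inspect a single example
```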
## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluates the language understanding abilities of the models in a principled way and on a diverse set of domains. Since all of the datasets were publicly available, we preserved the original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of the splits came from the same domain, which may lead to overestimating the model's performance. Therefore, *we removed data leaks* and proposed new topic-based or temporal (i.e., timestamp-based) data splits where needed. We deduplicated the examples based on complete word overlap between pairs of normalized texts, i.e., lowercased and with all stop words removed (see the illustrative sketch at the end of this card).

## Considerations for Using the Data

### Discussion of Biases

The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.

### Other Known Limitations

#### Tasks in bgGLUE

The bgGLUE benchmark comprises nine challenging NLU tasks: three token classification tasks, one ranking task, and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the resources available for Bulgarian, and thus we could not include some other NLP tasks, such as language generation. We also consider only NLP tasks and do not include tasks with other or multiple modalities. Finally, some of the tasks are of a similar nature, e.g., we include two datasets for NER and two for credibility/fake news classification.

#### Domains in bgGLUE

The tasks included in bgGLUE span multiple domains, such as social media posts, Wikipedia, and news articles, and can test both short- and long-document understanding. However, each task is limited to one domain, and the topics within a domain do not necessarily cover all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.

## Additional Information

### Licensing Information

The primary bgGLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset. For each dataset, the license is listed on the ["Tasks" page](https://bgglue.github.io/tasks/) of the bgGLUE website.
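As a supplement to the preprocessing description above, the following is a minimal, illustrative sketch of the split-aware deduplication (lowercasing, stop-word removal, and complete word overlap). It is not the official preprocessing script; the abbreviated Bulgarian stop-word list and the plain-string record format are assumptions.

```python
# Illustrative sketch (not the official preprocessing script) of the deduplication
# described above: two examples are treated as duplicates when their normalized
# texts (lowercased, stop words removed) contain exactly the same words.
# The Bulgarian stop-word list and the record format are assumptions.
from typing import Iterable

BG_STOP_WORDS = {"и", "е", "в", "на", "за", "се", "от", "да", "че"}  # assumed, abbreviated list

def normalize(text: str) -> frozenset:
    """Lowercase the text and keep the set of non-stop-word tokens."""
    return frozenset(w for w in text.lower().split() if w not in BG_STOP_WORDS)

def remove_leaks(train: Iterable[str], test: Iterable[str]) -> list:
    """Drop test examples whose normalized word set fully matches a training example."""
    train_keys = {normalize(t) for t in train}
    return [t for t in test if normalize(t) not in train_keys]

# Example: the second test sentence duplicates the training sentence up to casing and stop words.
train_texts = ["Новината е проверена и потвърдена"]
test_texts = ["Съвсем различна новина", "новината е проверена потвърдена"]
print(remove_leaks(train_texts, test_texts))  # -> ["Съвсем различна новина"]
```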
### Citation Information

```
@inproceedings{hardalov-etal-2023-bgglue,
    title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
    author = "Hardalov, Momchil and Atanasova, Pepa and Mihaylov, Todor and Angelova, Galia and Simov, Kiril and Osenova, Petya and Stoyanov, Veselin and Koychev, Ivan and Nakov, Preslav and Radev, Dragomir",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.487",
    pages = "8733--8759",
}
```

### Contributions

[List of bgGLUE contributors](https://bgglue.github.io/people/)