---
task_categories:
  - text-classification
  - token-classification
  - question-answering
  - multiple-choice
language:
  - bg
pretty_name: Bulgarian GLUE
size_categories:
  - n<1K
  - 1K<n<10K
  - 10K<n<100K
  - 100K<n<1M
license:
  - mit
  - cc-by-3.0
  - cc-by-sa-4.0
  - other
  - cc-by-nc-4.0
  - cc-by-nc-3.0
task_ids:
  - multiple-choice-qa
  - named-entity-recognition
  - natural-language-inference
  - part-of-speech
  - sentiment-analysis
source_datasets:
  - bsnlp
  - wikiann
  - exams
  - ct21.t1
  - fakenews
  - crediblenews
  - universal_dependencies
tags:
  - check-worthiness-estimation
  - fake-news-detection
  - humor-detection
  - regression
  - ranking
---

Dataset Card for "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark"

Dataset Description

Dataset Summary

bgGLUE (Bulgarian General Language Understanding Evaluation) is a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).

Supported Tasks and Leaderboards

List of supported tasks: Tasks.

Leaderboard: bgGLUE Leaderboard.

Languages

Bulgarian

Dataset Structure

Data Instances

| Name | Task type | Identifier | Download | More Info | Metrics | Train / Val / Test |
|---|---|---|---|---|---|---|
| Balto-Slavic NLP Shared Task | Named Entity Recognition | BSNLP | URL | Info | F1 | 724 / 182 / 301 |
| CheckThat! (2021), Task 1A | Check-Worthiness Estimation | CT21.T1 | URL | Info | Average Precision | 2,995 / 350 / 357 |
| Cinexio Movie Reviews | Sentiment Analysis | Cinexio | URL | Info | Pearson-Spearman Corr | 8,155 / 811 / 861 |
| Hack the News Datathon (2019) | Fake News Detection | Fake-N | URL | Info | Binary F1 | 1,990 / 221 / 701 |
| In Search of Credible News | Humor Detection | Cred.-N | URL | Info | Binary F1 | 19,227 / 5,949 / 17,887 |
| Multi-Subject High School Examinations Dataset | Multiple-choice QA | EXAMS | URL | Info | Accuracy | 1,512 / 365 / 1,472 |
| Universal Dependencies | Part-of-Speech Tagging | U.Dep | URL | Info | F1 | 8,907 / 1,115 / 1,116 |
| Cross-lingual Natural Language Inference | Natural Language Inference | XNLI | URL | Info | Accuracy | 392,702 / 5,010 / 2,490 |
| Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn) | Named Entity Recognition | PAN-X | URL | Info | F1 | 16,237 / 7,029 / 7,263 |
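
Each task above can be loaded individually. The sketch below shows how this might look with the Hugging Face `datasets` library; the repository id (`bgglue/bgglue`) and the per-task configuration name (`exams`) are assumptions for illustration, so please check the dataset page for the exact identifiers.

```python
# Minimal sketch of loading one bgGLUE task with the `datasets` library.
# The repo id and config name below are assumed, not confirmed.
from datasets import load_dataset

dataset = load_dataset("bgglue/bgglue", "exams")  # hypothetical config name

print(dataset)              # DatasetDict with the task's train/validation/test splits
print(dataset["train"][0])  # inspect a single training example
```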

Dataset Creation

Source Data

Initial Data Collection and Normalization

Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluates the language understanding abilities of the models in a principled way and across a diverse set of domains. Since all of the datasets were publicly available, we preserved their original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of their splits came from the same domain, both of which can lead to overestimating a model's performance. We therefore *removed data leaks* and proposed new topic-based or temporal (i.e., timestamp-based) data splits where needed. We deduplicated examples based on complete word overlap between pairs of normalized texts, i.e., lowercased and with all stop words removed.
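
To make the deduplication criterion concrete, here is a minimal, illustrative sketch (not the exact bgGLUE pipeline): two texts are treated as duplicates when their normalized word sets fully overlap after lowercasing and stop-word removal. The stop-word list and helper names below are assumptions.

```python
# Illustrative sketch of the deduplication criterion described above -- NOT the
# exact bgGLUE pipeline. Two texts count as duplicates when their normalized
# word sets fully overlap. The stop-word list is a small assumed sample.

BG_STOP_WORDS = {"и", "в", "на", "с", "за", "да", "се", "от", "по", "не"}  # assumed, abridged


def normalize(text: str) -> frozenset:
    """Lowercase, split on whitespace, and drop stop words."""
    return frozenset(tok for tok in text.lower().split() if tok not in BG_STOP_WORDS)


def find_leaked_indices(train_texts, test_texts):
    """Return indices of test texts whose normalized word set also occurs in train."""
    train_keys = {normalize(t) for t in train_texts}
    return [i for i, t in enumerate(test_texts) if normalize(t) in train_keys]


# Toy usage: the first test text duplicates the first training text up to casing.
train = ["Това е новина за икономиката", "Втора новина"]
test = ["това е Новина за икономиката", "съвсем различен текст"]
print(find_leaked_indices(train, test))  # -> [0]
```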

Considerations for Using the Data

Discussion of Biases

The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.

Other Known Limitations

Tasks in bgGLUE

The bgGLUE benchmark comprises nine challenging NLU tasks: three token classification tasks, one ranking task, and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the resources available for Bulgarian, and thus could not include some other NLP tasks, such as language generation. We also consider only NLP tasks and do not include tasks with other/multiple modalities. Finally, some of the tasks are similar in nature, e.g., we include two datasets for NER and two for credibility/fake news classification.
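
Because the benchmark mixes classification, ranking, and regression-style tasks, a different metric applies to each task (see the table above). As one concrete example, the sketch below computes the Pearson-Spearman correlation reported for Cinexio, assuming (following the common GLUE convention for STS-B) that it is the average of the Pearson and Spearman coefficients; the official bgGLUE evaluation script may define it differently.

```python
# Sketch of the "Pearson-Spearman Corr" metric, assumed here to be the average
# of the Pearson and Spearman correlation coefficients between model scores
# and gold ratings (following the GLUE convention; not confirmed for bgGLUE).
from scipy.stats import pearsonr, spearmanr


def pearson_spearman_corr(predictions, references):
    pearson, _ = pearsonr(predictions, references)
    spearman, _ = spearmanr(predictions, references)
    return (pearson + spearman) / 2.0


# Toy usage with made-up scores:
print(pearson_spearman_corr([0.1, 0.4, 0.9], [0.0, 0.5, 1.0]))
```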

Domains in bgGLUE

The tasks included in bgGLUE span multiple domains, such as social media posts, Wikipedia, and news articles, and can test for both short and long document understanding. However, each task is limited to a single domain, and the topics within that domain do not necessarily cover all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.

Additional Information

Licensing Information

The primary bgGLUE tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset. For each dataset the license is listed on its "Tasks" page on the bgGLUE website.

Citation Information

@inproceedings{hardalov-etal-2023-bgglue,
    title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
    author = "Hardalov, Momchil  and
      Atanasova, Pepa  and
      Mihaylov, Todor  and
      Angelova, Galia  and
      Simov, Kiril  and
      Osenova, Petya  and
      Stoyanov, Veselin  and
      Koychev, Ivan  and
      Nakov, Preslav  and
      Radev, Dragomir",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.487",
    pages = "8733--8759",
}

Contributions

List of bgGLUE contributors