---
task_categories:
- text-classification
- token-classification
- question-answering
- multiple-choice
language:
- bg
pretty_name: Bulgarian GLUE
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
license:
- mit
- cc-by-3.0
- cc-by-sa-4.0
- other
- cc-by-nc-4.0
- cc-by-nc-3.0
task_ids:
- multiple-choice-qa
- named-entity-recognition
- natural-language-inference
- part-of-speech
- sentiment-analysis
source_datasets:
- bsnlp
- wikiann
- exams
- ct21.t1
- fakenews
- crediblenews
- universal_dependencies
tags:
- check-worthiness-estimation
- fake-news-detection
- humor-detection
- regression
- ranking
---
# Dataset Card for "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark"
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://bgglue.github.io/](https://bgglue.github.io/)
- **Repository:** [https://github.com/bgGLUE](https://github.com/bgGLUE)
- **Paper:** [bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark](https://arxiv.org/abs/2306.02349)
- **Point of Contact:** [bulgarianglue [at] gmail [dot] com](mailto:bulgarianglue@gmail.com)
![bgGLUE logo](https://github.com/bgGLUE/bgglue/raw/main/logo.png "bgGLUE logo")
### Dataset Summary
bgGLUE (Bulgarian General Language Understanding Evaluation) is a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, and question answering) and machine learning tasks (sequence labeling, document-level classification, and regression).
### Supported Tasks and Leaderboards
List of supported tasks: [Tasks](https://bgglue.github.io/tasks/).
Leaderboard: [bgGLUE Leaderboard](https://bgglue.github.io/leaderboard/).
### Languages
Bulgarian
## Dataset Structure
### Data Instances
| Name | Task type | Identifier | Download | More Info | Metrics | Train / Val / Test |
|---|---|---|---|---|---|---|
| Balto-Slavic NLP Shared Task | Named Entity Recognition | BSNLP | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/bsnlp.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/bsnlp/) | F1 | 724 / 182 / 301 |
| CheckThat! (2021), Task 1A | Check-Worthiness Estimation | CT21.T1 | [URL](https://gitlab.com/checkthat_lab/clef2021-checkthat-lab/-/tree/master/task1) | [Info](https://bgglue.github.io/tasks/task_info/ct21-t1/) | Average Precision | 2,995 / 350 / 357 |
| Cinexio Movie Reviews | Sentiment Analysis | Cinexio | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/cinexio.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/cinexio/) | Pearson-Spearman Corr | 8,155 / 811 / 861 |
| Hack the News Datathon (2019) | Fake News Detection | Fake-N | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/fakenews.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/fakenews/) | Binary F1 | 1,990 / 221 / 701 |
| In Search of Credible News | Humor Detection | Cred.-N | [URL](https://forms.gle/Z7PYHMAvFvFusWT37) | [Info](https://bgglue.github.io/tasks/task_info/crediblenews/) | Binary F1 | 19,227 / 5,949 / 17,887 |
| Multi-Subject High School Examinations Dataset | Multiple-choice QA | EXAMS | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/exams.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/exams/) | Accuracy | 1,512 / 365 / 1,472 |
| Universal Dependencies | Part-of-Speech Tagging | U.Dep | [URL](https://universaldependencies.org/#bulgarian-treebanks) | [Info](https://bgglue.github.io/tasks/task_info/udep/) | F1 | 8,907 / 1,115 / 1,116 |
| Cross-lingual Natural Language Inference | Natural Language Inference | XNLI | [URL](https://github.com/facebookresearch/XNLI#download) | [Info](https://bgglue.github.io/tasks/task_info/xnli/) | Accuracy | 392,702 / 5,010 / 2,490 |
| Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn) | Named Entity Recognition | PAN-X | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/wikiann_bg.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/wikiann/) | F1 | 16,237 / 7,029 / 7,263 |
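The individual tasks can also be loaded programmatically. Below is a minimal sketch using the 🤗 `datasets` library; the repository id `bgglue/bgglue` and the configuration name `exams` are assumptions for illustration, so check the dataset page for the exact identifiers:

```python
from datasets import load_dataset

# Hypothetical repository and configuration names -- verify them on the dataset page.
exams = load_dataset("bgglue/bgglue", "exams")

print(exams)              # DatasetDict with the train/validation/test splits
print(exams["train"][0])  # a single multiple-choice QA example
```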
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluates the language understanding abilities of the models in a principled way and across a diverse set of domains. Since all of the datasets were already publicly available, we preserved the original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of their splits came from the same domain, both of which can lead to overestimating a model's performance. We therefore *removed data leaks* and proposed new topic-based or temporal (i.e., timestamp-based) data splits where needed. We considered two examples duplicates if their normalized texts, i.e., lowercased and with all stop words removed, had a complete word overlap.
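A minimal sketch of that deduplication criterion follows; the whitespace tokenization and the stop-word list below are illustrative assumptions, not the benchmark's actual preprocessing code:

```python
# Two texts count as duplicates when their normalized word sets coincide exactly.
STOP_WORDS = {"и", "в", "на", "с", "за", "от", "до"}  # placeholder Bulgarian stop words

def normalize(text: str) -> frozenset:
    """Lowercase, split on whitespace, and drop stop words."""
    return frozenset(w for w in text.lower().split() if w not in STOP_WORDS)

def find_leaks(train, test):
    """Return (train_idx, test_idx) pairs of examples with a complete word overlap."""
    index = {}
    for i, text in enumerate(train):
        index.setdefault(normalize(text), []).append(i)
    return [(i, j) for j, text in enumerate(test) for i in index.get(normalize(text), [])]
```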
## Considerations for Using the Data
### Discussion of Biases
The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.
### Other Known Limitations
#### Tasks in bgGLUE
The bgGLUE benchmark comprises nine challenging NLU tasks: three token classification tasks, one ranking task, and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the resources available for Bulgarian, and thus could not include some other NLP tasks, such as language generation. We also consider only text-based NLP tasks and do not include tasks with other or multiple modalities. Finally, some of the tasks are of a similar nature, e.g., we include two datasets for NER and two for credibility/fake news classification; the corresponding metric families are illustrated in the sketch below.
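The per-task metrics listed in the table above are standard and can be computed with off-the-shelf libraries. The toy inputs below are invented, and averaging Pearson and Spearman for the Cinexio score is an assumption borrowed from GLUE-style setups; consult each task's page for the official evaluation:

```python
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import average_precision_score, f1_score

# Binary F1 over the positive class (Fake-N, Cred.-N)
y_true, y_pred = [1, 0, 1, 1], [1, 0, 0, 1]
print(f1_score(y_true, y_pred))

# Average Precision for ranking check-worthy claims (CT21.T1)
labels, scores = [1, 0, 1, 0], [0.9, 0.4, 0.35, 0.1]
print(average_precision_score(labels, scores))

# Pearson-Spearman correlation for Cinexio (here: the mean of the two coefficients)
gold, pred = [0.5, 1.0, 3.5, 4.0], [0.7, 1.2, 3.0, 4.1]
print((pearsonr(gold, pred)[0] + spearmanr(gold, pred)[0]) / 2)
```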
#### Domains in bgGLUE
The tasks included in bgGLUE span multiple domains, such as social media posts, Wikipedia, and news articles, and can test for both short and long document understanding. However, each task is limited to one domain, and the topics within each domain do not necessarily cover all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.
## Additional Information
### Licensing Information
The primary bgGLUE tasks are built on and derived from existing datasets.
We refer users to the original licenses accompanying each dataset.
For each dataset, the license is listed on its ["Tasks" page](https://bgglue.github.io/tasks/) on the bgGLUE website.
### Citation Information
```
@inproceedings{hardalov-etal-2023-bgglue,
title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
author = "Hardalov, Momchil and
Atanasova, Pepa and
Mihaylov, Todor and
Angelova, Galia and
Simov, Kiril and
Osenova, Petya and
Stoyanov, Veselin and
Koychev, Ivan and
Nakov, Preslav and
Radev, Dragomir",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.487",
pages = "8733--8759",
}
```
### Contributions
[List of bgGLUE contributors](https://bgglue.github.io/people/)