---
task_categories:
- text-classification
- token-classification
- question-answering
- multiple-choice
language:
- bg
pretty_name: Bulgarian GLUE
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
license:
  - mit
  - cc-by-3.0
  - cc-by-sa-4.0
  - other
  - cc-by-nc-4.0
  - cc-by-nc-3.0
task_ids:
  - multiple-choice-qa
  - named-entity-recognition
  - natural-language-inference
  - part-of-speech
  - sentiment-analysis
source_datasets:
  - bsnlp
  - wikiann
  - exams
  - ct21.t1
  - fakenews
  - crediblenews
  - universal_dependencies

tags:
  - check-worthiness-estimation
  - fake-news-detection
  - humor-detection
  - regression
  - ranking
---

# Dataset Card for "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://bgglue.github.io/](https://bgglue.github.io/)
- **Repository:** [https://github.com/bgGLUE](https://github.com/bgGLUE)
- **Paper:** [bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark](https://arxiv.org/abs/2306.02349)
- **Point of Contact:** [bulgarianglue [at] gmail [dot] com](mailto:bulgarianglue@gmail.com)

![bgGLUE logo](https://github.com/bgGLUE/bgglue/raw/main/logo.png "bgGLUE")


### Dataset Summary

bgGLUE (Bulgarian General Language Understanding Evaluation) is a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).

### Supported Tasks and Leaderboards

List of supported tasks: [Tasks](https://bgglue.github.io/tasks/). 

Leaderboard: [bgGLUE Leaderboard](https://bgglue.github.io/leaderboard/).

### Languages

Bulgarian

## Dataset Structure

### Data Instances

| Name | Task type | Identifier | Download | More Info | Metrics | Train / Val / Test |
|------|-----------|------------|----------|-----------|---------|---------------------|
| Balto-Slavic NLP Shared Task | Named Entity Recognition | BSNLP | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/bsnlp.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/bsnlp/) | F1 | 724 / 182 / 301 |
| CheckThat! (2021), Task 1A | Check-Worthiness Estimation | CT21.T1 | [URL](https://gitlab.com/checkthat_lab/clef2021-checkthat-lab/-/tree/master/task1) | [Info](https://bgglue.github.io/tasks/task_info/ct21-t1/) | Average Precision | 2,995 / 350 / 357 |
| Cinexio Movie Reviews | Sentiment Analysis | Cinexio | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/cinexio.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/cinexio/) | Pearson-Spearman Corr | 8,155 / 811 / 861 |
| Hack the News Datathon (2019) | Fake News Detection | Fake-N | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/fakenews.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/fakenews/) | Binary F1 | 1,990 / 221 / 701 |
| In Search of Credible News | Humor Detection | Cred.-N | [URL](https://forms.gle/Z7PYHMAvFvFusWT37) | [Info](https://bgglue.github.io/tasks/task_info/crediblenews/) | Binary F1 | 19,227 / 5,949 / 17,887 |
| Multi-Subject High School Examinations Dataset | Multiple-choice QA | EXAMS | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/exams.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/exams/) | Accuracy | 1,512 / 365 / 1,472 |
| Universal Dependencies | Part-of-Speech Tagging | U.Dep | [URL](https://universaldependencies.org/#bulgarian-treebanks) | [Info](https://bgglue.github.io/tasks/task_info/udep/) | F1 | 8,907 / 1,115 / 1,116 |
| Cross-lingual Natural Language Inference | Natural Language Inference | XNLI | [URL](https://github.com/facebookresearch/XNLI#download) | [Info](https://bgglue.github.io/tasks/task_info/xnli/) | Accuracy | 392,702 / 5,010 / 2,490 |
| Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn) | Named Entity Recognition | PAN-X | [URL](https://github.com/bgGLUE/bgglue/raw/main/data/wikiann_bg.tar.gz) | [Info](https://bgglue.github.io/tasks/task_info/wikiann/) | F1 | 16,237 / 7,029 / 7,263 |
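
Most of the archives in the "Download" column above are plain `tar.gz` files hosted in the bgGLUE repository (Cred.-N, U.Dep, XNLI, and CT21.T1 are obtained from the external URLs listed). As a minimal sketch, assuming only that each archive unpacks into per-split files whose names should be inspected after extraction, the BSNLP data could be fetched like this:

```python
import tarfile
import urllib.request
from pathlib import Path

# Archive URL copied from the "Download" column above (BSNLP task).
URL = "https://github.com/bgGLUE/bgglue/raw/main/data/bsnlp.tar.gz"

out_dir = Path("bgglue_data/bsnlp")
out_dir.mkdir(parents=True, exist_ok=True)
archive = out_dir / "bsnlp.tar.gz"

# Download the archive (urlretrieve follows GitHub's redirect).
urllib.request.urlretrieve(URL, str(archive))

# Unpack and list the members; the split file layout inside the
# archive is not documented here, so inspect it before loading.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)
    print(tar.getnames())
```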

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluates the language understanding abilities of the models in a principled way and across a diverse set of domains. Since all of the datasets were publicly available, we preserved the original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of the splits came from the same domain, which can lead to overestimating a model's performance. To this end, *we removed data leaks* and proposed new topic-based or temporal (i.e., timestamp-based) data splits where needed. We deduplicated examples based on complete word overlap between pairs of normalized texts, i.e., lowercased and with all stop words removed.
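
A minimal sketch of this deduplication heuristic, reading "complete word overlap" as equality of the normalized word sets; the whitespace tokenizer and the tiny Bulgarian stop-word list below are placeholders, not the exact ones used for the benchmark:

```python
# Placeholder stop-word list; the benchmark's actual Bulgarian list
# is not reproduced here.
STOP_WORDS = {"и", "в", "на", "с", "за", "от", "да", "се", "е"}

def normalize(text: str) -> frozenset[str]:
    """Lowercase, split on whitespace, and drop stop words."""
    return frozenset(w for w in text.lower().split() if w not in STOP_WORDS)

def deduplicate(examples: list[str]) -> list[str]:
    """Keep the first occurrence of each normalized word set."""
    seen: set[frozenset[str]] = set()
    kept: list[str] = []
    for text in examples:
        key = normalize(text)
        if key not in seen:  # complete word overlap => duplicate
            seen.add(key)
            kept.append(text)
    return kept
```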

## Considerations for Using the Data

### Discussion of Biases

The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.

### Other Known Limitations

#### Tasks in bgGLUE 
The bgGLUE benchmark comprises nine challenging NLU tasks: three token classification tasks, one ranking task, and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the resources available for Bulgarian, and thus could not include some other NLP tasks, such as language generation. We also consider only NLP tasks and do not include tasks with other/multiple modalities. Finally, some of the tasks are of a similar nature, e.g., we include two datasets for NER and two for credibility/fake news classification.

#### Domains in bgGLUE
The tasks included in bgGLUE span multiple domains, such as social media posts, Wikipedia, and news articles, and can test for both short- and long-document understanding. However, each task is limited to one domain, and the topics within a domain do not necessarily cover all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.

## Additional Information

### Licensing Information

The primary bgGLUE tasks are built on and derived from existing datasets. 
We refer users to the original licenses accompanying each dataset. 
For each dataset the license is listed on its ["Tasks" page](https://bgglue.github.io/tasks/) on the bgGLUE website.

### Citation Information

```
@inproceedings{hardalov-etal-2023-bgglue,
    title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
    author = "Hardalov, Momchil  and
      Atanasova, Pepa  and
      Mihaylov, Todor  and
      Angelova, Galia  and
      Simov, Kiril  and
      Osenova, Petya  and
      Stoyanov, Veselin  and
      Koychev, Ivan  and
      Nakov, Preslav  and
      Radev, Dragomir",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.487",
    pages = "8733--8759",
}
```


### Contributions

[List of bgGLUE contributors](https://bgglue.github.io/people/)