---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: answer
    dtype: string
  - name: answer_len
    dtype: int64
  splits:
  - name: train
    num_bytes: 513179
    num_examples: 417
  download_size: 346284
  dataset_size: 513179
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# MLQA filtered version

For a full dataset description, please visit the page of the source dataset: [LINK](https://huggingface.co/datasets/mlqa)

**This dataset was prepared by converting the MLQA dataset.** I concatenated the versions of the dataset for the languages of interest and retrieved the text answers from the "answers" column.

**I additionally share the code I used to convert the original dataset, to make everything clearer:**
```python
import pandas as pd
from datasets import load_dataset
from tqdm import tqdm

def download_mlqa(subset_name):
    # MLQA only ships "validation" and "test" splits, so merge both.
    dataset_valid = load_dataset("mlqa", subset_name, split="validation").to_pandas()
    dataset_test  = load_dataset("mlqa", subset_name, split="test").to_pandas()
    full_dataset = pd.concat([dataset_valid, dataset_test])
    full_dataset.reset_index(drop=True, inplace=True)
    return full_dataset

# Monolingual context/question configs for the six languages of interest.
needed_langs = ["mlqa.en.en", "mlqa.de.de", "mlqa.ar.ar", "mlqa.es.es", "mlqa.vi.vi", "mlqa.zh.zh"]
datasets = []
for lang in tqdm(needed_langs):
    dataset = download_mlqa(lang)
    dataset["lang"] = lang.split(".")[2]  # e.g. "mlqa.en.en" -> "en"
    datasets.append(dataset)

full_mlqa = pd.concat(datasets)
full_mlqa.reset_index(drop=True, inplace=True)
# Keep only the first answer text from each "answers" dict.
full_mlqa["answer"] = [answer_dict["text"][0] for answer_dict in full_mlqa["answers"]]
full_mlqa.drop("answers", axis=1, inplace=True)
```
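For readers unfamiliar with the MLQA schema: each entry of the "answers" column is a dict with parallel `text` and `answer_start` lists. The snippet below, using made-up toy rows, illustrates the extraction step from the conversion code above.

```python
# Toy rows mimicking the MLQA "answers" column (values are invented):
# a dict with parallel "text" and "answer_start" lists per example.
rows = [
    {"answers": {"text": ["Warsaw"], "answer_start": [17]}},
    {"answers": {"text": ["1989", "in 1989"], "answer_start": [42, 39]}},
]

# Keep only the first answer text, as the conversion script does.
answers = [row["answers"]["text"][0] for row in rows]
answer_lens = [len(a) for a in answers]

print(answers)      # ['Warsaw', '1989']
print(answer_lens)  # [6, 4]
```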

**How to download**

```python
from datasets import load_dataset

data = load_dataset("dkoterwa/mlqa_filtered")
```