Update README.md
README.md
CHANGED
@@ -23,3 +23,37 @@ configs:
  - split: train
    path: data/train-*
---

# mlqa filtered version

For a better dataset description, please visit the official page of the source dataset: [LINK](https://huggingface.co/datasets/mlqa) <br>
<br>
**This dataset was prepared by converting the mlqa dataset.** I concatenated the versions of the dataset for the languages of interest and retrieved the text answers from the "answers" column.
**I additionally share the code I used to convert the original dataset, to make everything clearer:**

```python
import pandas as pd
from datasets import load_dataset
from tqdm import tqdm

def download_mlqa(subset_name):
    # Load the validation and test splits and merge them into one DataFrame
    dataset_valid = load_dataset("mlqa", subset_name, split="validation").to_pandas()
    dataset_test = load_dataset("mlqa", subset_name, split="test").to_pandas()
    full_dataset = pd.concat([dataset_valid, dataset_test])
    full_dataset.reset_index(drop=True, inplace=True)
    return full_dataset

needed_langs = ["mlqa.en.en", "mlqa.de.de", "mlqa.ar.ar", "mlqa.es.es", "mlqa.vi.vi", "mlqa.zh.zh"]
datasets = []
for lang in tqdm(needed_langs):
    dataset = download_mlqa(lang)
    dataset["lang"] = lang.split(".")[2]  # e.g. "mlqa.en.en" -> "en"
    datasets.append(dataset)

full_mlqa = pd.concat(datasets)
full_mlqa.reset_index(drop=True, inplace=True)
# Keep only the first answer text from each "answers" dict
full_mlqa["answer"] = [answer_dict["text"][0] for answer_dict in full_mlqa["answers"]]
full_mlqa.drop("answers", axis=1, inplace=True)
```
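The answer-extraction step above can be illustrated on a toy frame. The rows here are hypothetical; only the shape of the "answers" column (a dict with parallel "text" and "answer_start" lists, as in MLQA) is taken from the conversion code:

```python
import pandas as pd

# Hypothetical rows mimicking MLQA's "answers" column structure
toy = pd.DataFrame({
    "question": ["Who wrote Faust?", "Capital of Spain?"],
    "answers": [
        {"text": ["Goethe"], "answer_start": [10]},
        {"text": ["Madrid"], "answer_start": [4]},
    ],
})

# Same extraction as in the conversion script: keep the first answer text
toy["answer"] = [a["text"][0] for a in toy["answers"]]
toy.drop("answers", axis=1, inplace=True)
print(toy["answer"].tolist())  # -> ['Goethe', 'Madrid']
```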

**How to download**

```python
from datasets import load_dataset

data = load_dataset("dkoterwa/mlqa_filtered")
```