---
dataset_info:
  features:
  - name: lang
    dtype: string
  - name: example_id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 4193271
    num_examples: 40548
  download_size: 2118715
  dataset_size: 4193271
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# mkqa filtered version

For a full dataset description, please visit the official page of the source dataset: [LINK](https://huggingface.co/datasets/mkqa)

**This dataset was prepared by converting the mkqa dataset.** **I additionally share the code I used to convert the original dataset, to make everything clearer.**

```
from datasets import load_dataset
from tqdm import tqdm
import pandas as pd

# Load the original MKQA train split as a pandas DataFrame
mkqa = load_dataset("mkqa", split="train").to_pandas()

needed_langs = ["en", "ar", "de", "es", "vi", "zh_cn"]

# Flatten the per-language queries and answers into one row per (example, language)
rows = []
for i, row in tqdm(mkqa.iterrows(), total=mkqa.shape[0]):
    for lang in needed_langs:
        rows.append([lang, row["example_id"], row["queries"][lang], row["answers"][lang][0]["text"]])

filtered_dataset = pd.DataFrame(rows, columns=["lang", "example_id", "query", "answer"])

# Drop rows with missing values (e.g. unanswerable queries) and reindex
filtered_dataset.dropna(inplace=True)
filtered_dataset.reset_index(drop=True, inplace=True)
```

**How to download**

```
from datasets import load_dataset

data = load_dataset("dkoterwa/oasst1_filtered_retrieval")
```
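As a quick usage sketch (assuming the schema above with the columns `lang`, `example_id`, `query`, and `answer`), the loaded split can be filtered down to a single language like this; the repository id is taken from the download snippet above:

```
from datasets import load_dataset

data = load_dataset("dkoterwa/oasst1_filtered_retrieval", split="train")

# Keep only the German rows and inspect one query/answer pair
german = data.filter(lambda row: row["lang"] == "de")
print(german[0]["query"], "->", german[0]["answer"])
```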