---
license: cc-by-sa-3.0
configs:
- config_name: ara_Arab
  data_files:
  - split: validation
    path: data/ara_Arab/validation*
  - split: test
    path: data/ara_Arab/test*
- config_name: deu_Latn
  data_files:
  - split: validation
    path: data/deu_Latn/validation*
  - split: test
    path: data/deu_Latn/test*
- config_name: eng_Latn
  data_files:
  - split: validation
    path: data/eng_Latn/validation*
  - split: test
    path: data/eng_Latn/test*
- config_name: hin_Deva
  data_files:
  - split: validation
    path: data/hin_Deva/validation*
  - split: test
    path: data/hin_Deva/test*
- config_name: hin_Latn
  data_files:
  - split: validation
    path: data/hin_Latn/validation*
  - split: test
    path: data/hin_Latn/test*
- config_name: spa_Latn
  data_files:
  - split: validation
    path: data/spa_Latn/validation*
  - split: test
    path: data/spa_Latn/test*
- config_name: vie_Latn
  data_files:
  - split: validation
    path: data/vie_Latn/validation*
  - split: test
    path: data/vie_Latn/test*
- config_name: zho_Hans
  data_files:
  - split: validation
    path: data/zho_Hans/validation*
  - split: test
    path: data/zho_Hans/test*
task_categories:
- question-answering
language:
- en
- hi
- ar
- de
- es
- vi
- zh
---

**Source Dataset**

- Link: [facebook/mlqa](https://huggingface.co/datasets/facebook/mlqa)
- Revision: `397ed406c1a7902140303e7faf60fff35b58d285`

**MLQA**

MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages: English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between 4 different languages on average.

**MLQA Plus**

MLQA Plus additionally includes a hin_Latn configuration, with data generated using the indictrans library.
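
A minimal loading sketch with the `datasets` library is shown below. The repo id is a placeholder (replace it with this dataset's actual Hub id); each language/script pair above is a separate config, and only `validation` and `test` splits are defined.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset.
REPO_ID = "your-org/mlqa-plus"

# Load one language/script config; MLQA provides only validation and test splits.
ds = load_dataset(REPO_ID, "hin_Latn", split="test")

# Instances follow the SQuAD-style extractive QA format described above
# (question, context, and answer spans).
print(ds[0])
```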