---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
size_categories:
- 1M<n<10M
---

# Dataset Card for ParsiNLU (Machine Translation)

## Dataset Description

- **Homepage:** https://github.com/persiannlp/parsinlu/
- **Paper:** https://arxiv.org/abs/2012.06154

### Dataset Summary

A Persian machine translation dataset (Persian -> English).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text dataset is in Persian (`fa`) and English (`en`).

## Dataset Structure

### Data Instances

Here is an example from the dataset:

```json
{
  "source": "چه زحمت‌ها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
  "targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
  "category": "mizan_dev_en_fa"
}
```

### Data Fields

- `source`: the input sentence, in Persian.
- `targets`: the list of gold target translations in English.
- `category`: the source corpus from which the example is mined.

A minimal loading sketch appears at the end of this card.

### Data Splits

The train/dev/test splits contain 1,622,281/2,138/47,745 samples, respectively.

## Dataset Creation

### Curation Rationale

For details, see [the corresponding paper](https://arxiv.org/abs/2012.06154).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC BY-NC-SA 4.0 License

### Citation Information

```bibtex
@article{khashabi2020parsinlu,
  title   = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
  author  = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
  year    = {2020},
  journal = {arXiv e-prints},
  eprint  = {2012.06154},
}
```

### Contributions

Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
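### Usage Example

To make the schema described under Data Fields concrete, here is a minimal loading sketch using the Hugging Face `datasets` library. The Hub ID `persiannlp/parsinlu_translation_fa_en` is an assumption about where the dataset is hosted and may need to be adjusted.

```python
# Minimal sketch for loading and inspecting the dataset.
# NOTE: the Hub ID below is an assumption; replace it with the actual dataset ID.
from datasets import load_dataset

dataset = load_dataset("persiannlp/parsinlu_translation_fa_en")

# Each example carries a Persian `source`, a list of gold English
# `targets`, and a `category` naming the corpus it was mined from.
example = dataset["train"][0]
print(example["source"])    # Persian input sentence
print(example["targets"])   # list of English reference translations
print(example["category"])  # e.g. "mizan_dev_en_fa"
```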