rcds/swiss_judgment_prediction
--- pretty_name: Swiss-Judgment-Prediction annotations_creators: - found language_creators: - found language: - de - fr - it - en license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] tags: - judgement-prediction dataset_info: - config_name: de features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 104270719 num_examples: 35458 - name: validation num_bytes: 12131878 num_examples: 4705 - name: test num_bytes: 26056177 num_examples: 9725 download_size: 1000382331 dataset_size: 142458774 - config_name: fr features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 96807957 num_examples: 21179 - name: validation num_bytes: 13031904 num_examples: 3095 - name: test num_bytes: 33318359 num_examples: 6820 download_size: 1000382331 dataset_size: 143158220 - config_name: it features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 10773516 num_examples: 3072 - name: validation num_bytes: 1045551 num_examples: 408 - name: test num_bytes: 2474761 num_examples: 812 download_size: 1000382331 dataset_size: 14293828 - 
config_name: mt_de features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 106990696 num_examples: 24251 - name: validation - name: test download_size: 1000382331 dataset_size: 106990696 - config_name: mt_fr features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 117932134 num_examples: 38524 - name: validation - name: test download_size: 1000382331 dataset_size: 117932134 - config_name: mt_it features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 201749076 num_examples: 56631 - name: validation - name: test download_size: 1000382331 dataset_size: 201749076 - config_name: mt_en features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 196352783 num_examples: 59703 - name: validation - name: test download_size: 1000382331 dataset_size: 196352783 - config_name: all features: - name: id dtype: int32 - name: year dtype: int32 - 
name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 211852192 num_examples: 59709 - name: validation num_bytes: 26209333 num_examples: 8208 - name: test num_bytes: 61849297 num_examples: 17357 download_size: 1000382331 dataset_size: 299910822 - config_name: all+mt features: - name: id dtype: int32 - name: year dtype: int32 - name: text dtype: string - name: label dtype: class_label: names: 0: dismissal 1: approval - name: language dtype: string - name: region dtype: string - name: canton dtype: string - name: legal area dtype: string - name: source_language dtype: string splits: - name: train num_bytes: 834876881 num_examples: 238818 - name: validation num_bytes: 26209333 num_examples: 8208 - name: test num_bytes: 61849297 num_examples: 17357 download_size: 1000382331 dataset_size: 922935511 --- # Dataset Card for "SwissJudgmentPrediction" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing 
Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus - **Repository:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus - **Paper:** https://arxiv.org/abs/2110.00806 - **Leaderboard:** N/A - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus@inf.unibe.ch) ### Dataset Summary **Documents** Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP. ### Supported Tasks and Leaderboards SwissJudgmentPrediction can be used for the legal judgment prediction task. The dataset is not yet part of an established benchmark. ### Languages Switzerland has four official languages, with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme Court decisions. The decisions are written by the judges and clerks in the language of the proceedings. ## Dataset Structure In version 2 we added machine-translated data, produced with [EasyNMT](https://github.com/UKPLab/EasyNMT) by translating all documents into German, French, Italian and English, as an additional training set. ### Data Instances **Multilingual use of the dataset** When the dataset is used in a multilingual setting, select the 'all_languages' flag: ```python from datasets import load_dataset dataset = load_dataset('swiss_judgment_prediction', 'all_languages') ``` ``` { "id": 48757, "year": 2015, "facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. 
Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. 
April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. ", "labels": 0, # dismissal "language": "de", "region": "Espace Mittelland", "canton": "be", "legal area": "penal law" } ``` **Monolingual use of the dataset** When the dataset is used in a monolingual setting selecting the ISO language code for one of the 3 supported languages. For example: ```python from datasets import load_dataset dataset = load_dataset('swiss_judgment_prediction', 'de') ``` ``` { "id": 48757, "year": 2015, "facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. 
A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. 
", "labels": 0, # dismissal "language": "de", "region": "Espace Mittelland", "canton": "be", "legal area": "penal law" } ``` ### Data Fields **Multilingual use of the dataset** The following data fields are provided for documents (`train`, `validation`, `test`): `id`: (**int**) a unique identifier for the document \ `year`: (**int**) the publication year \ `text`: (**str**) the facts of the case \ `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) \ `language`: (**str**) one of (de, fr, it) \ `region`: (**str**) the region of the lower court \ `canton`: (**str**) the canton of the lower court \ `legal area`: (**str**) the legal area of the case **Monolingual use of the dataset** The same data fields as above are provided for documents (`train`, `validation`, `test`). ### Data Splits | Language | Subset | Number of Documents (Training/Validation/Test) | |------------|------------|------------------------------------------------| | German | **de** | 35'452 / 4'705 / 9'725 | | French | **fr** | 21'179 / 3'095 / 6'820 | | Italian | **it** | 3'072 / 408 / 812 | | All | **all** | 59'709 / 8'208 / 17'357 | | MT German | **mt_de** | 24'251 / 0 / 0 | | MT French | **mt_fr** | 38'524 / 0 / 0 | | MT Italian | **mt_it** | 56'631 / 0 / 0 | | MT All | **all+mt** | 238'818 / 8'208 / 17'357 | ## Dataset Creation ### Curation Rationale The dataset was curated by Niklaus et al. (2021). 
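The split sizes in the table above imply roughly a 70/10/20 partition per language. This can be checked with a few lines of plain Python (a minimal sketch; the numbers are copied from the table, nothing is downloaded):

```python
# Split sizes (train, validation, test) copied from the Data Splits table.
splits = {
    "de": (35_452, 4_705, 9_725),
    "fr": (21_179, 3_095, 6_820),
    "it": (3_072, 408, 812),
}

def split_fractions(train, validation, test):
    """Return each split's share of the total, rounded to three decimals."""
    total = train + validation + test
    return tuple(round(n / total, 3) for n in (train, validation, test))

for lang, sizes in splits.items():
    print(lang, split_fractions(*sizes))
```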
### Source Data #### Initial Data Collection and Normalization The original data are available at the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process The decisions have been annotated with the binarized judgment outcome using parsers and regular expressions. #### Who are the annotators? Joel Niklaus and Adrian Jörg annotated the binarized judgment outcomes. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Niklaus et al. (2021) ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2000-2020 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. 
This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information *Joel Niklaus, Ilias Chalkidis, and Matthias Stürmer.* *Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark* *Proceedings of the 2021 Natural Legal Language Processing Workshop. Punta Cana, Dominican Republic. 2021* ``` @InProceedings{niklaus-etal-2021-swiss, author = {Niklaus, Joel and Chalkidis, Ilias and Stürmer, Matthias}, title = {Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark}, booktitle = {Proceedings of the 2021 Natural Legal Language Processing Workshop}, year = {2021}, location = {Punta Cana, Dominican Republic}, } ``` and the new citation ``` @misc{niklaus2022empirical, title={An Empirical Study on Cross-X Transfer for Legal Judgment Prediction}, author={Joel Niklaus and Matthias Stürmer and Ilias Chalkidis}, year={2022}, eprint={2209.12325}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@joelniklaus](https://github.com/joelniklaus) for adding this dataset.
tab_fact
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - fact-checking paperswithcode_id: tabfact pretty_name: TabFact dataset_info: - config_name: tab_fact features: - name: id dtype: int32 - name: table_id dtype: string - name: table_text dtype: string - name: table_caption dtype: string - name: statement dtype: string - name: label dtype: class_label: names: '0': refuted '1': entailed splits: - name: train num_bytes: 99852664 num_examples: 92283 - name: validation num_bytes: 13846872 num_examples: 12792 - name: test num_bytes: 13493391 num_examples: 12779 download_size: 196508436 dataset_size: 127192927 - config_name: blind_test features: - name: id dtype: int32 - name: table_id dtype: string - name: table_text dtype: string - name: table_caption dtype: string - name: statement dtype: string - name: test_id dtype: string splits: - name: test num_bytes: 10954442 num_examples: 9750 download_size: 196508436 dataset_size: 10954442 --- # Dataset Card for TabFact ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TabFact](https://tabfact.github.io/index.html) - **Repository:** [GitHub](https://github.com/wenhuchen/Table-Fact-Checking) - **Paper:** [TabFact: A Large-scale Dataset for Table-based Fact Verification](https://arxiv.org/abs/1909.02164) - **Leaderboard:** [Leaderboard](https://competitions.codalab.org/competitions/21611) - **Point of Contact:** [Wenhu Chen](wenhuchen@cs.ucsb.edu) ### Dataset Summary The problem of verifying whether a textual hypothesis holds true based on given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are restricted to dealing with unstructured textual evidence (e.g., sentences, passages, or a pool of passages), while verification using structured forms of evidence, such as tables, graphs, and databases, remains unexplored. TabFact is a large-scale dataset with 16k Wikipedia tables as evidence for 118k human-annotated statements, designed for fact verification with semi-structured evidence. The statements are labeled as either ENTAILED or REFUTED. TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen and Hongmin Wang and Jianshu Chen and Yunkai Zhang and Hong Wang and Shiyang Li and Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
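The "hard symbolic reasoning" mentioned in the summary can be illustrated with a toy verifier that checks a superlative claim against a small table and emits the card's labels (0 = refuted, 1 = entailed). The table and claim here are invented for illustration; real TabFact tables come from Wikipedia and are serialized as strings in `table_text`:

```python
# Toy column-oriented table standing in for a parsed TabFact table.
table = {
    "player": ["ann", "bob", "cho"],
    "goals": [3, 5, 2],
}

def verify_top_scorer(table, player):
    """Return 1 (entailed) if `player` has the most goals, else 0 (refuted)."""
    top = max(range(len(table["goals"])), key=lambda i: table["goals"][i])
    return 1 if table["player"][top] == player else 0

print(verify_top_scorer(table, "bob"))  # 1 -> entailed
print(verify_top_scorer(table, "ann"))  # 0 -> refuted
```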
tamilmixsentiment
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en - ta license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Tamilmixsentiment dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': Positive '1': Negative '2': Mixed_feelings '3': unknown_state '4': not-Tamil splits: - name: train num_bytes: 790132 num_examples: 11335 - name: validation num_bytes: 89618 num_examples: 1260 - name: test num_bytes: 218764 num_examples: 3149 download_size: 1150792 dataset_size: 1098514 --- # Dataset Card for Tamilmixsentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tamilmixsentiment Homepage](https://dravidian-codemix.github.io/2020/index.html) - **Repository:** [Tamilmixsentiment repository](https://dravidian-codemix.github.io/2020/datasets.html) - **Paper:** 
[Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text](https://www.aclweb.org/anthology/2020.sltu-1.28/) - **Leaderboard:** [Rank list](https://drive.google.com/file/d/1Mf8-No-63koGRwdF13RrO01NAFBlNmI0/view?usp=sharing) - **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com) ### Dataset Summary Tamilmixsentiment is the first gold-standard Tamil-English code-switched, sentiment-annotated corpus, containing 15,744 comment posts from YouTube. This makes it the largest general-domain sentiment dataset for this relatively low-resource language with the code-mixing phenomenon. A comment/post may contain more than one sentence, but the average sentence length of the corpus is 1. Each comment/post is annotated with sentiment polarity at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world scenarios. ### Supported Tasks and Leaderboards The task is to identify the sentiment polarity of the code-mixed dataset of comments/posts in Tamil-English collected from social media. ### Languages Tamil-English code-switched. The dataset contains all three types of code-mixed sentences: inter-sentential switching, intra-sentential switching and tag switching. Most comments were written in Roman script, with either Tamil grammar and English lexicon or English grammar and Tamil lexicon. Some comments were written in Tamil script with English expressions in between. ## Dataset Structure ### Data Instances An example from the Tamilmixsentiment train set looks as follows: ``` text label Trailer late ah parthavanga like podunga Positive ``` ### Data Fields - `text`: Tamil-English code-mixed comment. 
- `label`: the sentiment class, one of "Positive", "Negative", "Mixed_feelings", "unknown_state", "not-Tamil" ### Data Splits The entire dataset of 15,744 sentences was randomly shuffled and split into three parts as follows: | | train | validation | test | |------------------------------|------:|-----------:|-----:| | Tamilmixsentiment | 11335 | 1260 | 3149 | ## Dataset Creation ### Curation Rationale Sentiment analysis has become important in social media research (Yang and Eisenstein, 2017). Until recently, such applications were created for high-resource languages and analysed monolingual utterances. But social media in multilingual communities contains more code-mixed text. Code-mixing is common among speakers in a bilingual speech community. As English is seen as the language of prestige and education, the influence of English lexicon, connectives and phrases is common in spoken Tamil. Tamil has little annotated data for code-mixed scenarios. An annotated corpus developed for monolingual data cannot deal with code-mixed usage, and therefore fails to yield good results when languages mix at different levels of linguistic analysis. Hence this sentiment-annotated corpus of code-mixed Tamil-English was created. ### Source Data #### Initial Data Collection and Normalization The data were scraped from YouTube: in total, 184,573 Tamil sentences from comments on trailers of movies released in 2019. Many of them were written entirely in English, in code-mixed Tamil-English, or fully in Tamil. We therefore performed language identification at the comment level using the langdetect library; comments written fully in Tamil or fully in English were discarded, since monolingual resources are available for those languages. 
We also identified sentences written in other languages such as Hindi, Malayalam, Urdu, Telugu, and Kannada. We preprocessed the comments by removing emoticons and applying a sentence-length filter. To create a code-mixed corpus of reasonable size, with sentences that have fairly well-defined sentiments and will be useful for future research, the filter removed sentences with fewer than five or more than 15 words after cleaning the data. In the end we obtained 15,744 Tanglish sentences. #### Who are the source language producers? YouTube users. ### Annotations #### Annotation process The annotation proceeded in three steps. First, each sentence was annotated by two people. Second, a sentence was accepted if both annotators agreed; in case of conflict, a third person annotated it. Third, if all three still disagreed, two more annotators annotated the sentence. #### Who are the annotators? Eleven volunteers were involved in the process. All of them were native speakers of Tamil, with diversity in gender, educational level and medium of instruction in their school education. 
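The sentence-length filter described under "Initial Data Collection and Normalization" can be sketched in a few lines (a minimal sketch; whitespace tokenization is an assumption, as the card does not specify the tokenizer):

```python
import re

def keep_comment(text, lo=5, hi=15):
    """Keep a comment only if it has between `lo` and `hi` words
    (inclusive) after cleaning, per the filter described in the card."""
    n_words = len(re.findall(r"\S+", text))
    return lo <= n_words <= hi

print(keep_comment("Trailer late ah parthavanga like podunga"))  # True: 6 words
print(keep_comment("super movie"))                               # False: too short
```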
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{chakravarthi-etal-2020-corpus, title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text", author = "Chakravarthi, Bharathi Raja and Muralidaran, Vigneshwaran and Priyadharshini, Ruba and McCrae, John Philip", booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources association", url = "https://www.aclweb.org/anthology/2020.sltu-1.28", pages = "202--210", abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. 
We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.", language = "English", ISBN = "979-10-95546-35-1", } ``` ### Contributions Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
tanzil
--- annotations_creators: - found language_creators: - found language: - am - ar - az - bg - bn - bs - cs - de - dv - en - es - fa - fr - ha - hi - id - it - ja - ko - ku - ml - ms - nl - 'no' - pl - pt - ro - ru - sd - so - sq - sv - sw - ta - tg - th - tr - tt - ug - ur - uz - zh license: - unknown multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: tanzil dataset_info: - config_name: bg-en features: - name: id dtype: string - name: translation dtype: translation: languages: - bg - en splits: - name: train num_bytes: 34473016 num_examples: 135477 download_size: 9305292 dataset_size: 34473016 - config_name: bn-hi features: - name: id dtype: string - name: translation dtype: translation: languages: - bn - hi splits: - name: train num_bytes: 18869103 num_examples: 24942 download_size: 3542740 dataset_size: 18869103 - config_name: fa-sv features: - name: id dtype: string - name: translation dtype: translation: languages: - fa - sv splits: - name: train num_bytes: 29281634 num_examples: 68601 download_size: 8550826 dataset_size: 29281634 - config_name: ru-zh features: - name: id dtype: string - name: translation dtype: translation: languages: - ru - zh splits: - name: train num_bytes: 59736143 num_examples: 99779 download_size: 16214659 dataset_size: 59736143 - config_name: en-tr features: - name: id dtype: string - name: translation dtype: translation: languages: - en - tr splits: - name: train num_bytes: 255891913 num_examples: 1189967 download_size: 82954694 dataset_size: 255891913 --- # Dataset Card for tanzil ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - 
[Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/Tanzil.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair that isn't part of the predefined configs, simply specify the two language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tanzil.php E.g. `dataset = load_dataset("tanzil", lang1="en", lang2="ru")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
tapaco
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - af - ar - az - be - ber - bg - bn - br - ca - cbk - cmn - cs - da - de - el - en - eo - es - et - eu - fi - fr - gl - gos - he - hi - hr - hu - hy - ia - id - ie - io - is - it - ja - jbo - kab - ko - kw - la - lfn - lt - mk - mr - nb - nds - nl - orv - ota - pes - pl - pt - rn - ro - ru - sl - sr - sv - tk - tl - tlh - tok - tr - tt - ug - uk - ur - vi - vo - war - wuu - yue license: - cc-by-2.0 multilinguality: - multilingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K - 1M<n<10M - n<1K source_datasets: - extended|other-tatoeba task_categories: - text2text-generation - translation - text-classification task_ids: - semantic-similarity-classification paperswithcode_id: tapaco pretty_name: TaPaCo Corpus configs: - af - all_languages - ar - az - be - ber - bg - bn - br - ca - cbk - cmn - cs - da - de - el - en - eo - es - et - eu - fi - fr - gl - gos - he - hi - hr - hu - hy - ia - id - ie - io - is - it - ja - jbo - kab - ko - kw - la - lfn - lt - mk - mr - nb - nds - nl - orv - ota - pes - pl - pt - rn - ro - ru - sl - sr - sv - tk - tl - tlh - tok - tr - tt - ug - uk - ur - vi - vo - war - wuu - yue tags: - paraphrase-generation dataset_info: - config_name: all_languages features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 162802556 num_examples: 1926192 download_size: 32213126 dataset_size: 162802556 - config_name: af features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 21219 num_examples: 307 download_size: 32213126 dataset_size: 21219 - config_name: ar features: - name: paraphrase_set_id 
dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 546200 num_examples: 6446 download_size: 32213126 dataset_size: 546200 - config_name: az features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 44461 num_examples: 624 download_size: 32213126 dataset_size: 44461 - config_name: be features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 140376 num_examples: 1512 download_size: 32213126 dataset_size: 140376 - config_name: ber features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 5118620 num_examples: 67484 download_size: 32213126 dataset_size: 5118620 - config_name: bg features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 590535 num_examples: 6324 download_size: 32213126 dataset_size: 590535 - config_name: bn features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 146654 num_examples: 1440 download_size: 32213126 dataset_size: 146654 - config_name: br features: - name: paraphrase_set_id dtype: 
string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 177919 num_examples: 2536 download_size: 32213126 dataset_size: 177919 - config_name: ca features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 39404 num_examples: 518 download_size: 32213126 dataset_size: 39404 - config_name: cbk features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 19404 num_examples: 262 download_size: 32213126 dataset_size: 19404 - config_name: cmn features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 964514 num_examples: 12549 download_size: 32213126 dataset_size: 964514 - config_name: cs features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 482292 num_examples: 6659 download_size: 32213126 dataset_size: 482292 - config_name: da features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 848886 num_examples: 11220 download_size: 32213126 dataset_size: 848886 - config_name: de features: - name: paraphrase_set_id dtype: string - name: 
sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 10593377 num_examples: 125091 download_size: 32213126 dataset_size: 10593377 - config_name: el features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 926054 num_examples: 10072 download_size: 32213126 dataset_size: 926054 - config_name: en features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 15070349 num_examples: 158053 download_size: 32213126 dataset_size: 15070349 - config_name: eo features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 16810965 num_examples: 207105 download_size: 32213126 dataset_size: 16810965 - config_name: es features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 6851135 num_examples: 85064 download_size: 32213126 dataset_size: 6851135 - config_name: et features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 17127 num_examples: 241 download_size: 32213126 dataset_size: 17127 - config_name: eu features: - name: paraphrase_set_id dtype: string - 
name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 42702 num_examples: 573 download_size: 32213126 dataset_size: 42702 - config_name: fi features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 2520167 num_examples: 31753 download_size: 32213126 dataset_size: 2520167 - config_name: fr features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 9481426 num_examples: 116733 download_size: 32213126 dataset_size: 9481426 - config_name: gl features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 26551 num_examples: 351 download_size: 32213126 dataset_size: 26551 - config_name: gos features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 18442 num_examples: 279 download_size: 32213126 dataset_size: 18442 - config_name: he features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 6024345 num_examples: 68350 download_size: 32213126 dataset_size: 6024345 - config_name: hi features: - name: paraphrase_set_id dtype: string - name: 
sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 209382 num_examples: 1913 download_size: 32213126 dataset_size: 209382 - config_name: hr features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 36638 num_examples: 505 download_size: 32213126 dataset_size: 36638 - config_name: hu features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 5289610 num_examples: 67964 download_size: 32213126 dataset_size: 5289610 - config_name: hy features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 49230 num_examples: 603 download_size: 32213126 dataset_size: 49230 - config_name: ia features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 194035 num_examples: 2548 download_size: 32213126 dataset_size: 194035 - config_name: id features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 124568 num_examples: 1602 download_size: 32213126 dataset_size: 124568 - config_name: ie features: - name: paraphrase_set_id dtype: string - name: sentence_id 
dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 31956 num_examples: 488 download_size: 32213126 dataset_size: 31956 - config_name: io features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 33892 num_examples: 480 download_size: 32213126 dataset_size: 33892 - config_name: is features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 132062 num_examples: 1641 download_size: 32213126 dataset_size: 132062 - config_name: it features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 15073750 num_examples: 198919 download_size: 32213126 dataset_size: 15073750 - config_name: ja features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 4314423 num_examples: 44267 download_size: 32213126 dataset_size: 4314423 - config_name: jbo features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 201564 num_examples: 2704 download_size: 32213126 dataset_size: 201564 - config_name: kab features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: 
string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 1211051 num_examples: 15944 download_size: 32213126 dataset_size: 1211051 - config_name: ko features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 40458 num_examples: 503 download_size: 32213126 dataset_size: 40458 - config_name: kw features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 88577 num_examples: 1328 download_size: 32213126 dataset_size: 88577 - config_name: la features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 485749 num_examples: 6889 download_size: 32213126 dataset_size: 485749 - config_name: lfn features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 203383 num_examples: 2313 download_size: 32213126 dataset_size: 203383 - config_name: lt features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 599166 num_examples: 8042 download_size: 32213126 dataset_size: 599166 - config_name: mk features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: 
paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 1240185 num_examples: 14678 download_size: 32213126 dataset_size: 1240185 - config_name: mr features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 1838921 num_examples: 16413 download_size: 32213126 dataset_size: 1838921 - config_name: nb features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 85371 num_examples: 1094 download_size: 32213126 dataset_size: 85371 - config_name: nds features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 195021 num_examples: 2633 download_size: 32213126 dataset_size: 195021 - config_name: nl features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 1790975 num_examples: 23561 download_size: 32213126 dataset_size: 1790975 - config_name: orv features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 40484 num_examples: 471 download_size: 32213126 dataset_size: 40484 - config_name: ota features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: 
paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 44996 num_examples: 486 download_size: 32213126 dataset_size: 44996 - config_name: pes features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 433406 num_examples: 4285 download_size: 32213126 dataset_size: 433406 - config_name: pl features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 1722188 num_examples: 22391 download_size: 32213126 dataset_size: 1722188 - config_name: pt features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 6141178 num_examples: 78430 download_size: 32213126 dataset_size: 6141178 - config_name: rn features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 47387 num_examples: 648 download_size: 32213126 dataset_size: 47387 - config_name: ro features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 162955 num_examples: 2092 download_size: 32213126 dataset_size: 162955 - config_name: ru features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase 
dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 24540667 num_examples: 251263 download_size: 32213126 dataset_size: 24540667 - config_name: sl features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 49610 num_examples: 706 download_size: 32213126 dataset_size: 49610 - config_name: sr features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 667308 num_examples: 8175 download_size: 32213126 dataset_size: 667308 - config_name: sv features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 509884 num_examples: 7005 download_size: 32213126 dataset_size: 509884 - config_name: tk features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 95047 num_examples: 1165 download_size: 32213126 dataset_size: 95047 - config_name: tl features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 76059 num_examples: 1017 download_size: 32213126 dataset_size: 76059 - config_name: tlh features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string 
- name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 185309 num_examples: 2804 download_size: 32213126 dataset_size: 185309 - config_name: toki features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 310864 num_examples: 3738 download_size: 32213126 dataset_size: 310864 - config_name: tr features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 11271158 num_examples: 142088 download_size: 32213126 dataset_size: 11271158 - config_name: tt features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 277269 num_examples: 2398 download_size: 32213126 dataset_size: 277269 - config_name: ug features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 118474 num_examples: 1183 download_size: 32213126 dataset_size: 118474 - config_name: uk features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 4885677 num_examples: 54431 download_size: 32213126 dataset_size: 4885677 - config_name: ur features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - 
name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 24075 num_examples: 252 download_size: 32213126 dataset_size: 24075 - config_name: vi features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 84773 num_examples: 962 download_size: 32213126 dataset_size: 84773 - config_name: vo features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 22164 num_examples: 328 download_size: 32213126 dataset_size: 22164 - config_name: war features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 25759 num_examples: 327 download_size: 32213126 dataset_size: 25759 - config_name: wuu features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 31640 num_examples: 408 download_size: 32213126 dataset_size: 31640 - config_name: yue features: - name: paraphrase_set_id dtype: string - name: sentence_id dtype: string - name: paraphrase dtype: string - name: lists sequence: string - name: tags sequence: string - name: language dtype: string splits: - name: train num_bytes: 42766 num_examples: 561 download_size: 32213126 dataset_size: 42766 --- # Dataset Card for TaPaCo Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and 
Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages](https://zenodo.org/record/3707949#.X9Dh0cYza3I) - **Paper:** [TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages](https://www.aclweb.org/anthology/2020.lrec-1.848.pdf) - **Point of Contact:** [Yves Scherrer](https://blogs.helsinki.fi/yvesscherrer/) ### Dataset Summary A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words. The paraphrase corpus is created by populating a graph with Tatoeba sentences and equivalence links between sentences “meaning the same thing”. This graph is then traversed to extract sets of paraphrases. Several language-independent filters and pruning steps are applied to remove uninteresting sentences. 
A manual evaluation performed on three languages shows that between half and three quarters of inferred paraphrases are correct and that most remaining ones are either correct but trivial, or near-paraphrases that neutralize a morphological distinction. The corpus contains a total of 1.9 million sentences, with 200 to 250,000 sentences per language. It covers a range of languages for which, to our knowledge, no other paraphrase dataset exists. ### Supported Tasks and Leaderboards Paraphrase detection and generation have become popular tasks in NLP and are increasingly integrated into a wide variety of common downstream tasks such as machine translation, information retrieval, question answering, and semantic parsing. Most of the existing datasets cover only a single language – in most cases English – or a small number of languages. Furthermore, some paraphrase datasets focus on lexical and phrasal rather than sentential paraphrases, while others are created (semi-)automatically using machine translation. The number of sentences per language ranges from 200 to 250,000, which makes the dataset more suitable for fine-tuning and evaluation purposes than for training. It is well-suited for multi-reference evaluation of paraphrase generation models, as there is generally not a single correct way of paraphrasing a given input sentence.
### Languages The dataset contains paraphrases in Afrikaans, Arabic, Azerbaijani, Belarusian, Berber languages, Bulgarian, Bengali, Breton, Catalan; Valencian, Chavacano, Mandarin, Czech, Danish, German, Greek, Modern (1453-), English, Esperanto, Spanish; Castilian, Estonian, Basque, Finnish, French, Galician, Gronings, Hebrew, Hindi, Croatian, Hungarian, Armenian, Interlingua (International Auxiliary Language Association), Indonesian, Interlingue; Occidental, Ido, Icelandic, Italian, Japanese, Lojban, Kabyle, Korean, Cornish, Latin, Lingua Franca Nova, Lithuanian, Macedonian, Marathi, Bokmål, Norwegian; Norwegian Bokmål, Low German; Low Saxon; German, Low; Saxon, Low, Dutch; Flemish, Old Russian, Turkish, Ottoman (1500-1928), Iranian Persian, Polish, Portuguese, Rundi, Romanian; Moldavian; Moldovan, Russian, Slovenian, Serbian, Swedish, Turkmen, Tagalog, Klingon; tlhIngan-Hol, Toki Pona, Turkish, Tatar, Uighur; Uyghur, Ukrainian, Urdu, Vietnamese, Volapük, Waray, Wu Chinese and Yue Chinese ## Dataset Structure ### Data Instances Each data instance corresponds to a paraphrase, e.g.: ``` { 'paraphrase_set_id': '1483', 'sentence_id': '5778896', 'paraphrase': 'Ɣremt adlis-a.', 'lists': ['7546'], 'tags': [''], 'language': 'ber' } ``` ### Data Fields Each data instance has the following fields: - `paraphrase_set_id`: a running number that groups together all sentences that are considered paraphrases of each other - `sentence_id`: OPUS sentence id - `paraphrase`: Sentential paraphrase in a given language for a given paraphrase_set_id - `lists`: Contributors can add sentences to lists in order to specify the original source of the data - `tags`: Indicates morphological or phonological properties of the sentence when available - `language`: Language identifier, one of the 73 languages that belong to this dataset.
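Since every sentence sharing a `paraphrase_set_id` is a paraphrase of the others in that set, a common first step is to group records by that id, which directly yields the multi-reference sets this corpus is suited for. A minimal sketch in Python — the inline records are toy data mirroring the fields above; real rows would come from `datasets.load_dataset("tapaco", "<language code>")`:

```python
from collections import defaultdict

# Toy records mirroring the fields documented above; real rows would come
# from datasets.load_dataset("tapaco", "de")["train"] (any of the 73 codes).
records = [
    {"paraphrase_set_id": "1483", "paraphrase": "I read this book."},
    {"paraphrase_set_id": "1483", "paraphrase": "I have read this book."},
    {"paraphrase_set_id": "9", "paraphrase": "Hello there."},
]

def group_paraphrases(rows):
    """Map each paraphrase_set_id to all sentences in that paraphrase set."""
    sets = defaultdict(list)
    for row in rows:
        sets[row["paraphrase_set_id"]].append(row["paraphrase"])
    return dict(sets)

groups = group_paraphrases(records)

# For paraphrase generation, each sentence can serve as an input whose
# references are the remaining members of its set:
inputs_and_refs = [
    (sent, [s for s in sents if s != sent])
    for sents in groups.values()
    for sent in sents
]
```

Singleton sets (a sentence with no remaining references) would need to be filtered out before multi-reference evaluation.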
### Data Splits The dataset has a single `train` split, containing a total of 1.9 million sentences, with 200 to 250,000 sentences per language. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution 2.0 Generic ### Citation Information ``` @dataset{scherrer_yves_2020_3707949, author = {Scherrer, Yves}, title = {{TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages}}, month = mar, year = 2020, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3707949}, url = {https://doi.org/10.5281/zenodo.3707949} } ``` ### Contributions Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
tashkeela
--- annotations_creators: - no-annotation language_creators: - found language: - ar license: - gpl-2.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Tashkeela tags: - diacritics-prediction dataset_info: features: - name: text dtype: string - name: book dtype: string config_name: plain_text splits: - name: train num_bytes: 1081110249 num_examples: 97 download_size: 183393530 dataset_size: 1081110249 --- # Dataset Card for Tashkeela ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tashkeela](https://sourceforge.net/projects/tashkeela/) - **Repository:** [Tashkeela](https://sourceforge.net/projects/tashkeela/) - **Paper:** [Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems](https://www.sciencedirect.com/science/article/pii/S2352340917300112) - **Point of Contact:** 
[Taha Zerrouki](mailto:t_zerrouki@esi.dz) ### Dataset Summary It contains 75 million fully vocalized words, mainly from 97 books of classical and modern Arabic. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is based on Arabic. ## Dataset Structure ### Data Instances ``` {'book': 'zip://Tashkeela-arabic-diacritized-text-utf8-0.3/texts.txt/msa/al-kalema.org/أشكال-التجارب-في-مَثَل-الزارع.htm.txt::https://sourceforge.net/projects/tashkeela/files/latest/download', 'text': 'الكلمة\n\n\nصفحه اصلی\nاشترك\nالكتاب المقدس\nجميع المقالات\nالترتيب بالموضوع\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nهذا المقال على نسخة PDF\n\n\nأشكال التجارب في مَثَل الزارع\n\n\tقد رأينا في مقال " \nوسائل واشكال التجارب" الأشكال التي من الممكن أن تتخذها التجارب (وخاصة الاختبارات التي تأتي من خلال الآلام والاضطهاد وأشراك إطاعة شهوات الإنسان العتيق، الجسد)، نستطيع أيضاً أن نرى هذه الأقسام عاملة في مثال الزارع. هناك مجموعتين في مثال الزارع أنه برغم من سماعهم واستقبالهم للكلمة، إلا أنهم لم يجلبوا ثماراً. والسؤال هو لماذا؟\n\n1. التجارب في القسم الثاني من مثال الزارع\n\nفيما يخص القسم الثاني من مثال الزارع، تخبرنا عنها متى 13: 20- 21 ولوقا 8: 13 \nمتى 13: 20- 21\n" وَالْمَزْرُوعُ عَلَى الأَمَاكِنِ الْمُحْجِرَةِ هُوَ الَّذِي يَسْمَعُ الْكَلِمَةَ، وَحَالاً يَقْبَلُهَا بِفَرَحٍ، وَلكِنْ لَيْسَ لَهُ أَصْلٌ فِي ذَاتِهِ، بَلْ هُوَ إِلَى حِينٍ. فَإِذَا حَدَثَ ضِيقٌ أَوِ اضْطِهَادٌ مِنْ أَجْلِ الْكَلِمَةِ فَحَالاً يَعْثُرُ."\nلوقا 8: 13\n" وَالَّذِينَ عَلَى الصَّخْرِ هُمُ الَّذِينَ مَتَى سَمِعُوا يَقْبَلُونَ الْكَلِمَةَ بِفَرَحٍ، وَهؤُلاَءِ لَيْسَ لَهُمْ أَصْلٌ، فَيُؤْمِنُونَ إِلَى حِينٍ، وَفِي وَقْتِ التَّجْرِبَةِ يَرْتَدُّونَ."\n\nكما نرى، الناس في هذا القسم سمعوا الكلمة وحالاً قبلوها بفرح! بمعنى آخر، لقد كانوا متحمسين جداً تجاه الكلمة. ثم جاءت التجارب والاختبارات في شكل ضيق واضطهاد من أجل الكلمة، أي أنه بسبب الكلمة، اضطهد هؤلاء الناس. وعندئذ توقفوا.
عوضاً عن أن يحفظوا ويتمسكوا بالكلمة التي قد حدث واستقبلوها بفرح، تراجعوا وسقطوا بعيداً، إن كنت مؤمناً صغيراً مليء بالحماسة تجاه الله، وبالرغم من أنه قد يبدو أنه لا يوجد شيطان من حولك، فهذا لن يستمر إلى الأبد. فالتجارب والاختبارات آتية. ستحتاج إلى أن تحفظ وتتمسك بالإيمان وبالكلمة التي قد حدث واستقبلتها بفرح. كما تقول لنا الكلمة:\nعبرانيين 10: 35- 39\n" فَلاَ تَطْرَحُوا ثِقَتَكُمُ الَّتِي لَهَا مُجَازَاةٌ عَظِيمَةٌ. لأَنَّكُمْ تَحْتَاجُونَ إِلَى الصَّبْرِ، حَتَّى إِذَا صَنَعْتُمْ مَشِيئَةَ اللهِ تَنَالُونَ الْمَوْعِدَ. لأَنَّهُ بَعْدَ قَلِيل جِدًّا «سَيَأْتِي الآتِي وَلاَ يُبْطِئُ. أَمَّا الْبَارُّ فَبِالإِيمَانِ يَحْيَا، وَإِنِ ارْتَدَّ لاَ تُسَرُّ بِهِ نَفْسِي». وَأَمَّا نَحْنُ فَلَسْنَا مِنَ الارْتِدَادِ لِلْهَلاَكِ، بَلْ مِنَ الإِيمَانِ لاقْتِنَاءِ النَّفْسِ."\n\nوالضيق قد يأخذ أشكالاً عديدة. رأيت أناساً يسقطون، تاركين الإيمان لأن آبائهم أو أقاربهم وأصدقائهم قد عارضوهم ورفضوهم بسبب إيمانهم. بالطبع قد يأخذ الاضطهاد أشكالاً أكثر من ذلك أيضاً، مثل أن تلقى في سجن أو أن تعذب لأجل إيمانك. قد يسبب الموت كذلك، كما حدث مع اسطفانوس ويعقوب أخو يوحنا. وتقول الكلمة من أجلك ومن أجل كل الذين حوكموا:\nرومية 16: 19- 20\n" لأَنَّ طَاعَتَكُمْ ذَاعَتْ إِلَى الْجَمِيعِ، فَأَفْرَحُ أَنَا بِكُمْ، وَأُرِيدُ أَنْ تَكُونُوا حُكَمَاءَ لِلْخَيْرِ وَبُسَطَاءَ لِلشَّرِّ. وَإِلهُ السَّلاَمِ سَيَسْحَقُ الشَّيْطَانَ تَحْتَ أَرْجُلِكُمْ سَرِيعًا."\nو بطرس الأولى 5: 8- 10\n" اُصْحُوا وَاسْهَرُوا. لأَنَّ إِبْلِيسَ خَصْمَكُمْ كَأَسَدٍ زَائِرٍ، يَجُولُ مُلْتَمِسًا مَنْ يَبْتَلِعُهُ هُوَ. فَقَاوِمُوهُ، رَاسِخِينَ فِي الإِيمَانِ، عَالِمِينَ أَنَّ نَفْسَ هذِهِ الآلاَمِ تُجْرَى عَلَى إِخْوَتِكُمُ الَّذِينَ فِي الْعَالَمِ. وَإِلهُ كُلِّ نِعْمَةٍ الَّذِي دَعَانَا إِلَى مَجْدِهِ الأَبَدِيِّ فِي الْمَسِيحِ يَسُوعَ، بَعْدَمَا تَأَلَّمْتُمْ يَسِيرًا، هُوَ يُكَمِّلُكُمْ، وَيُثَبِّتُكُمْ، وَيُقَوِّيكُمْ، وَيُمَكِّنُكُمْ."\n\nتمسك بالإيمان حتى النهاية. ضع حياتك ووضعك بين يدي الله وكن مستعداً لمواجهة أي شيء قد يحدث، أجل وحتى السخرية والعذاب. الله معك، سيقويك وسيعينك تماماً مثلما فعل مع يسوع في بستان جسثيماني. 
وتماماً مثلما فعل مع بولس في السجن عندما اضطهد من قِبَل اليهود (أعمال الرسل 23: 11). وكما قال بولس في كورنثوس الثانية 1: 7:" عَالِمِينَ أَنَّكُمْ كَمَا أَنْتُمْ شُرَكَاءُ فِي الآلاَمِ، كَذلِكَ فِي التَّعْزِيَةِ أَيْضًا." فالعزاء الآتي من الله يوازن أي سخرية أو أي عذاب قد يأتي إلينا من أي إنسان.\n\n2. التجارب في القسم الثالث من مثال الزارع\n\nبخصوص القسم الثالث من مثال الزارع، فنقرأ عنه في مرقس 4: 18- 19\n\n" وَهؤُلاَءِ هُمُ الَّذِينَ زُرِعُوا بَيْنَ الشَّوْكِ: هؤُلاَءِ هُمُ الَّذِينَ يَسْمَعُونَ الْكَلِمَةَ، وَهُمُومُ هذَا الْعَالَمِ وَغُرُورُ الْغِنَى وَشَهَوَاتُ سَائِرِ الأَشْيَاءِ تَدْخُلُ وَتَخْنُقُ الْكَلِمَةَ فَتَصِيرُ بِلاَ ثَمَرٍ."\nو لوقا 8: 14\n" وَالَّذِي سَقَطَ بَيْنَ الشَّوْكِ هُمُ الَّذِينَ يَسْمَعُونَ، ثُمَّ يَذْهَبُونَ فَيَخْتَنِقُونَ مِنْ هُمُومِ الْحَيَاةِ وَغِنَاهَا وَلَذَّاتِهَا، وَلاَ يُنْضِجُونَ ثَمَرًا."\n\nهؤلاء قد سمعوا الكلمة وفهموها ولكنهم صاروا بلا ثمر، وما هو السبب؟ السبب هو لأنهم تركوا أبواب قلوبهم مفتوحة لأشواك " وَهُمُومُ هذَا الْعَالَمِ وَغُرُورُ الْغِنَى وَشَهَوَاتُ سَائِرِ الأَشْيَاءِ" (مرقس 4: 19)، والتي تدخل فتخنق الكلمة، كما رأينا يعقوب دائماً ما يقول:\nيعقوب 1: 13- 15\n" لاَ يَقُلْ أَحَدٌ إِذَا جُرِّبَ: «إِنِّي أُجَرَّبُ مِنْ قِبَلِ اللهِ»، لأَنَّ اللهَ غَيْرُ مُجَرَّبٍ بِالشُّرُورِ، وَهُوَ لاَ يُجَرِّبُ أَحَدًا. وَلكِنَّ كُلَّ وَاحِدٍ يُجَرَّبُ إِذَا انْجَذَبَ وَانْخَدَعَ مِنْ شَهْوَتِهِ. ثُمَّ الشَّهْوَةُ إِذَا حَبِلَتْ تَلِدُ خَطِيَّةً، وَالْخَطِيَّةُ إِذَا كَمَلَتْ تُنْتِجُ مَوْتًا."\nوتيموثاوس الأولى 6: 9 تقول لنا\n" وَأَمَّا الَّذِينَ يُرِيدُونَ أَنْ يَكُونُوا أَغْنِيَاءَ، فَيَسْقُطُونَ فِي تَجْرِبَةٍ وَفَخٍّ وَشَهَوَاتٍ كَثِيرَةٍ غَبِيَّةٍ وَمُضِرَّةٍ، تُغَرِّقُ النَّاسَ فِي الْعَطَبِ وَالْهَلاَكِ."\n\nيجب أن نلاحظ شيئاً هنا: أن تأثير هموم الحياة هو نفس التأثير الذي لتجارب الغنى وشهوات الأشياء الأخرى. فهموم الحياة أيضاً لا تجلب الثمار، إذاً فإن اردت أن تكون مسيحياً مثمراً، أي مسيحي حقيقي وليس فقط مسيحي اسمي، فيجب عليك أن تزيل أشواك الهموم والغنى وملذات الحياة وأن تمنعهم من العودة مرة أخرى. 
تحتاج إلى أن تفعل شيئاً، تحتاج إلى أن تتغير والله سيعينك في هذا إن كنت حقاً تريده. التجارب في القسم الثالث من مثال الزارع لا تأتي من خلال الاضطهاد والآلام عن طريق الشيطان. ولكن هنا تأخذ التجارب صوراً أكثر مكراً والتي مع هذا تتطلب مقاومتنا. الاهتمام بما يهتم به هذا العالم ("هموم هذا العالم")، الرغبة في الغنى أو اشتهاء الأشياء الأخرى هي أمور خطيرة جداً. إنها أشواك يجب إزالتها. كما رأينا بولس يقول:\nرومية 13: 14\n" بَلِ الْبَسُوا الرَّبَّ يَسُوعَ الْمَسِيحَ، وَلاَ تَصْنَعُوا تَدْبِيرًا لِلْجَسَدِ لأَجْلِ الشَّهَوَاتِ."\n\n" لاَ تَصْنَعُوا تَدْبِيرًا لِلْجَسَدِ" والتي تعني أنه يجب علينا أن لا نهتم بالجسد وشهواته. ولكن عوضاً عن ذلك ينبغي لنا أن نطعم أنفسنا بلبن الكلمة الصافي الذي ننمو بواستطه (بطرس الأولى 2: 2).\n\n\nتاسوس كيولاشوجلو'} ``` ### Data Fields - `book` (str): Book filename. - `text` (str): Text of the book. ### Data Splits The dataset is not split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization The Modern Standard Arabic texts were crawled from the Internet. #### Who are the source language producers? Websites. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [GNU General Public License, version 2 (GPLv2)](https://opensource.org/licenses/GPL-2.0). 
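Because the corpus is aimed at training auto-diacritization systems, a typical preprocessing step is to strip the diacritics from `text` to obtain the model input, keeping the vocalized original as the prediction target. A minimal sketch, assuming the standard harakat range U+064B–U+0652 covers the marks of interest (extend it, e.g. with the superscript alef U+0670, if needed):

```python
import re

# Arabic harakat (fathatan .. sukun). This range is an assumption; the
# corpus may contain additional marks such as the superscript alef U+0670.
DIACRITICS = re.compile("[\u064B-\u0652]")

def strip_diacritics(text: str) -> str:
    """Remove vocalization marks, yielding the undiacritized input side."""
    return DIACRITICS.sub("", text)

vocalized = "\u0643\u064E\u062A\u064E\u0628\u064E"  # كَتَبَ, fully vocalized
bare = strip_diacritics(vocalized)                  # كتب, diacritics removed
pair = (bare, vocalized)  # (model input, prediction target)
```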
### Citation Information The dataset was published on this [paper](https://www.sciencedirect.com/science/article/pii/S2352340917300112#!): ``` @article{zerrouki2017tashkeela, title={Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems}, author={Zerrouki, Taha and Balla, Amar}, journal={Data in brief}, volume={11}, pages={147}, year={2017}, publisher={Elsevier} } ``` ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
taskmaster1
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: taskmaster-1 pretty_name: Taskmaster-1 dataset_info: - config_name: one_person_dialogs features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 18037058 num_examples: 6168 - name: validation num_bytes: 2239656 num_examples: 770 - name: test num_bytes: 2224163 num_examples: 770 download_size: 103276427 dataset_size: 22500877 - config_name: woz_dialogs features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 13028593 num_examples: 5507 download_size: 103276427 dataset_size: 13028593 --- # Dataset Card for Taskmaster-1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and 
Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Taskmaster-1](https://research.google/tools/datasets/taskmaster-1/) - **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019) - **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358) - **Leaderboard:** N/A - **Point of Contact:** [Taskmaster Googlegroup](mailto:taskmaster-datasets@googlegroups.com) ### Dataset Summary Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task, while the second is "self-dialog", in which crowdsourced workers write the entire dialog themselves. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English. ## Dataset Structure ### Data Instances A typical example looks like this ``` { "conversation_id":"dlg-336c8165-068e-4b4b-803d-18ef0676f668", "instruction_id":"restaurant-table-2", "utterances":[ { "index":0, "segments":[ ], "speaker":"USER", "text":"Hi, I'm looking for a place that sells spicy wet hotdogs, can you think of any?" 
}, { "index":1, "segments":[ { "annotations":[ { "name":"restaurant_reservation.name.restaurant.reject" } ], "end_index":37, "start_index":16, "text":"Spicy Wet Hotdogs LLC" } ], "speaker":"ASSISTANT", "text":"You might enjoy Spicy Wet Hotdogs LLC." }, { "index":2, "segments":[ ], "speaker":"USER", "text":"That sounds really good, can you make me a reservation?" }, { "index":3, "segments":[ ], "speaker":"ASSISTANT", "text":"Certainly, when would you like a reservation?" }, { "index":4, "segments":[ { "annotations":[ { "name":"restaurant_reservation.num.guests" }, { "name":"restaurant_reservation.num.guests" } ], "end_index":20, "start_index":18, "text":"50" } ], "speaker":"USER", "text":"I have a party of 50 who want a really sloppy dog on Saturday at noon." } ] } ``` ### Data Fields Each conversation in the data file has the following structure: - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no meaning. - `utterances`: A list of utterances that make up the conversation. - `instruction_id`: A reference to the file(s) containing the user (and, if applicable, agent) instructions for this conversation. Each utterance has the following fields: - `index`: A 0-based index indicating the order of the utterances in the conversation. - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. - `text`: The raw text of the utterance. In case of self dialogs (one_person_dialogs), this is written by the crowdsourced worker. In case of the WOz dialogs, 'ASSISTANT' turns are written and 'USER' turns are transcribed from the spoken recordings of crowdsourced workers. - `segments`: A list of various text spans with semantic annotations. Each segment has the following fields: - `start_index`: The position of the start of the annotation in the utterance text. - `end_index`: The position of the end of the annotation in the utterance text. - `text`: The raw text that has been annotated. 
- `annotations`: A list of annotation details for this segment. Each annotation has a single field: - `name`: The annotation name. ### Data Splits - one_person_dialogs The data in the `one_person_dialogs` config is split into `train`, `validation` and `test` splits. | | train | validation | test | |--------------|-------:|------------:|------:| | N. Instances | 6168 | 770 | 770 | - woz_dialogs The data in the `woz_dialogs` config has a single `train` split. | | train | |--------------|-------:| | N. Instances | 5507 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under `Creative Commons Attribution 4.0 License` ### Citation Information ``` @inproceedings{48484, title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset}, author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik}, year = {2019} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
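To make the segment schema described under Data Fields concrete, annotated spans can be flattened into (speaker, annotation name, span text) triples, and the `start_index`/`end_index` offsets can be checked against the raw utterance text. A minimal sketch on a toy utterance mirroring that schema — real dialogs would come from `datasets.load_dataset("taskmaster1", "one_person_dialogs")`:

```python
# Toy utterance mirroring the Taskmaster-1 schema documented above; real
# dialogs would come from datasets.load_dataset("taskmaster1", "one_person_dialogs").
utterance = {
    "speaker": "ASSISTANT",
    "text": "You might enjoy Spicy Wet Hotdogs LLC.",
    "segments": [
        {
            "start_index": 16,
            "end_index": 37,
            "text": "Spicy Wet Hotdogs LLC",
            "annotations": [
                {"name": "restaurant_reservation.name.restaurant.reject"}
            ],
        }
    ],
}

def extract_slots(utt):
    """Yield one (speaker, annotation name, span text) triple per annotation."""
    for seg in utt["segments"]:
        for ann in seg["annotations"]:
            yield (utt["speaker"], ann["name"], seg["text"])

triples = list(extract_slots(utterance))

# The offsets index directly into the raw utterance text:
seg = utterance["segments"][0]
assert utterance["text"][seg["start_index"]:seg["end_index"]] == seg["text"]
```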
taskmaster2
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: taskmaster-2 pretty_name: Taskmaster-2 dataset_info: - config_name: flights features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 7073487 num_examples: 2481 download_size: 23029880 dataset_size: 7073487 - config_name: food-ordering features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 1734825 num_examples: 1050 download_size: 5376675 dataset_size: 1734825 - config_name: hotels features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 7436667 num_examples: 2357 download_size: 22507266 dataset_size: 7436667 - config_name: movies features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker 
dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 7112301 num_examples: 3056 download_size: 21189893 dataset_size: 7112301 - config_name: music features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 2814030 num_examples: 1603 download_size: 8981720 dataset_size: 2814030 - config_name: restaurant-search features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 7341998 num_examples: 3276 download_size: 21472680 dataset_size: 7341998 - config_name: sports features: - name: conversation_id dtype: string - name: instruction_id dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 5738818 num_examples: 3481 download_size: 19549440 dataset_size: 5738818 --- # Dataset Card for Taskmaster-2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and 
Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Taskmaster-1](https://research.google/tools/datasets/taskmaster-1/) - **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020) - **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358) - **Leaderboard:** N/A - **Point of Contact:** [Taskmaster Googlegroup](mailto:taskmaster-datasets@googlegroups.com) ### Dataset Summary Taskmaster-2 is a dataset for goal-oriented conversations. It consists of 17,289 dialogs in seven domains: restaurants, food ordering, movies, hotels, flights, music, and sports. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs. 
All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English. ## Dataset Structure ### Data Instances A typical example looks like this ``` { "conversation_id": "dlg-0047a087-6a3c-4f27-b0e6-268f53a2e013", "instruction_id": "flight-6", "utterances": [ { "index": 0, "segments": [], "speaker": "USER", "text": "Hi, I'm looking for a flight. I need to visit a friend." }, { "index": 1, "segments": [], "speaker": "ASSISTANT", "text": "Hello, how can I help you?" }, { "index": 2, "segments": [], "speaker": "ASSISTANT", "text": "Sure, I can help you with that." }, { "index": 3, "segments": [], "speaker": "ASSISTANT", "text": "On what dates?" }, { "index": 4, "segments": [ { "annotations": [ { "name": "flight_search.date.depart_origin" } ], "end_index": 37, "start_index": 27, "text": "March 20th" }, { "annotations": [ { "name": "flight_search.date.return" } ], "end_index": 45, "start_index": 41, "text": "22nd" } ], "speaker": "USER", "text": "I'm looking to travel from March 20th to 22nd." } ] } ``` ### Data Fields Each conversation in the data file has the following structure: - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no meaning. - `utterances`: A list of utterances that make up the conversation. - `instruction_id`: A reference to the file(s) containing the user (and, if applicable, agent) instructions for this conversation. 
Each utterance has the following fields: - `index`: A 0-based index indicating the order of the utterances in the conversation. - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. - `text`: The raw text of the utterance. As all Taskmaster-2 dialogs are spoken two-person WOz dialogs, 'ASSISTANT' turns were typed by the call center operators (and played to the user via TTS), while 'USER' turns are transcribed from the spoken recordings of crowdsourced workers. - `segments`: A list of various text spans with semantic annotations. Each segment has the following fields: - `start_index`: The position of the start of the annotation in the utterance text. - `end_index`: The position of the end of the annotation in the utterance text. - `text`: The raw text that has been annotated. - `annotations`: A list of annotation details for this segment. Each annotation has a single field: - `name`: The annotation name. ### Data Splits There are no default splits for any config. The table below lists the number of examples in each config. | Config | Train | |-------------------|--------| | flights | 2481 | | food-ordering | 1050 | | hotels | 2357 | | movies | 3056 | | music | 1603 | | restaurant-search | 3276 | | sports | 3481 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under `Creative Commons Attribution 4.0 License` ### Citation Information ``` @inproceedings{48484, title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset}, author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik}, year = {2019} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
taskmaster3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: null pretty_name: taskmaster3 dataset_info: features: - name: conversation_id dtype: string - name: vertical dtype: string - name: instructions dtype: string - name: scenario dtype: string - name: utterances list: - name: index dtype: int32 - name: speaker dtype: string - name: text dtype: string - name: apis list: - name: name dtype: string - name: index dtype: int32 - name: args list: - name: arg_name dtype: string - name: arg_value dtype: string - name: response list: - name: response_name dtype: string - name: response_value dtype: string - name: segments list: - name: start_index dtype: int32 - name: end_index dtype: int32 - name: text dtype: string - name: annotations list: - name: name dtype: string splits: - name: train num_bytes: 143609327 num_examples: 23757 download_size: 313402141 dataset_size: 143609327 --- # Dataset Card for taskmaster3 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Taskmaster](https://research.google/tools/datasets/taskmaster-1/)
- **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
- **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358)
- **Leaderboard:** N/A
- **Point of Contact:** [Taskmaster Googlegroup](mailto:taskmaster-datasets@googlegroups.com)

### Dataset Summary

Taskmaster is a dataset for goal-oriented conversations. The Taskmaster-3 dataset consists of 23,757 movie ticketing dialogs. By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or to opt out of the transaction. This collection was created using the "self-dialog" method: a single crowdsourced worker is paid to create a conversation, writing the turns for both speakers, i.e. the customer and the ticketing agent.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

A typical example looks like this:

```
{ "conversation_id": "dlg-ddee80da-9ffa-4773-9ce7-f73f727cb79c", "instructions": "SCENARIO: Pretend you’re *using a digital assistant to purchase tickets for a movie currently showing in theaters*. ...", "scenario": "4 exchanges with 1 error and predefined variables", "utterances": [ { "apis": [], "index": 0, "segments": [ { "annotations": [ { "name": "num.tickets" } ], "end_index": 21, "start_index": 20, "text": "2" }, { "annotations": [ { "name": "name.movie" } ], "end_index": 42, "start_index": 37, "text": "Mulan" } ], "speaker": "user", "text": "I would like to buy 2 tickets to see Mulan."
}, { "index": 6, "segments": [], "speaker": "user", "text": "Yes.", "apis": [ { "args": [ { "arg_name": "name.movie", "arg_value": "Mulan" }, { "arg_name": "name.theater", "arg_value": "Mountain AMC 16" } ], "index": 6, "name": "book_tickets", "response": [ { "response_name": "status", "response_value": "success" } ] } ] } ], "vertical": "Movie Tickets" } ``` ### Data Fields Each conversation in the data file has the following structure: - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no meaning. - `utterances`: A list of utterances that make up the conversation. - `instructions`: Instructions for the crowdsourced worker used in creating the conversation. - `vertical`: In this dataset the vertical for all dialogs is "Movie Tickets". - `scenario`: This is the title of the instructions for each dialog. Each utterance has the following fields: - `index`: A 0-based index indicating the order of the utterances in the conversation. - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. - `text`: The raw text of the utterance. In case of self dialogs (one_person_dialogs), this is written by the crowdsourced worker. In case of the WOz dialogs, 'ASSISTANT' turns are written and 'USER' turns are transcribed from the spoken recordings of crowdsourced workers. - `segments`: A list of various text spans with semantic annotations. - `apis`: An array of API invocations made during the utterance. Each API has the following structure: - `name`: The name of the API invoked (e.g. find_movies). - `index`: The index of the parent utterance. - `args`: A `list` of `dict` with keys `arg_name` and `arg_value` which represent the name of the argument and the value for the argument respectively. - `response`: A `list` of `dict`s with keys `response_name` and `response_value` which represent the name of the response and the value for the response respectively. 
Each segment has the following fields:

- `start_index`: The position of the start of the annotation in the utterance text.
- `end_index`: The position of the end of the annotation in the utterance text.
- `text`: The raw text that has been annotated.
- `annotations`: A list of annotation details for this segment.

Each annotation has a single field:

- `name`: The annotation name.

### Data Splits

There are no default splits for this config. The table below lists the number of examples.

|             | Train |
|-------------|-------|
| n_instances | 23757 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is licensed under the `Creative Commons Attribution 4.0 License`.

### Citation Information

```
@inproceedings{48484,
  title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
  author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
  year = {2019}
}
```

### Contributions

Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
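As an offline sanity check of the `segments` annotation scheme documented in the Data Fields section above, the span indices can be verified directly against the utterance text. The sketch below is built from the example record shown in this card; no download is required:

```python
# Example utterance taken from the Data Instances section of this card.
utterance = {
    "text": "I would like to buy 2 tickets to see Mulan.",
    "segments": [
        {"start_index": 20, "end_index": 21, "text": "2",
         "annotations": [{"name": "num.tickets"}]},
        {"start_index": 37, "end_index": 42, "text": "Mulan",
         "annotations": [{"name": "name.movie"}]},
    ],
}

def extract_annotations(utt):
    """Return (annotation_name, span_text) pairs, checking that each
    segment's indices actually slice out the annotated text."""
    pairs = []
    for seg in utt["segments"]:
        span = utt["text"][seg["start_index"]:seg["end_index"]]
        assert span == seg["text"], f"span mismatch: {span!r} != {seg['text']!r}"
        for ann in seg["annotations"]:
            pairs.append((ann["name"], span))
    return pairs

annotations = extract_annotations(utterance)
# → [('num.tickets', '2'), ('name.movie', 'Mulan')]
```

The same access pattern applies to records loaded with `load_dataset`, since `end_index` is exclusive and indices refer to the raw utterance text.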
tatoeba
--- annotations_creators: - found language_creators: - found language: - ab - acm - ady - af - afb - afh - aii - ain - ajp - akl - aln - am - an - ang - aoz - apc - ar - arq - ary - arz - as - ast - avk - awa - ayl - az - ba - bal - bar - be - ber - bg - bho - bjn - bm - bn - bo - br - brx - bs - bua - bvy - bzt - ca - cay - cbk - ce - ceb - ch - chg - chn - cho - chr - cjy - ckb - ckt - cmn - co - code - cpi - crh - crk - cs - csb - cv - cy - da - de - dng - drt - dsb - dtp - dv - dws - ee - egl - el - emx - en - enm - eo - es - et - eu - ext - fi - fj - fkv - fo - fr - frm - fro - frr - fuc - fur - fuv - fy - ga - gag - gan - gbm - gcf - gd - gil - gl - gn - gom - gos - got - grc - gsw - gu - gv - ha - hak - haw - hbo - he - hi - hif - hil - hnj - hoc - hr - hrx - hsb - hsn - ht - hu - hy - ia - iba - id - ie - ig - ii - ike - ilo - io - is - it - izh - ja - jam - jbo - jdt - jpa - jv - ka - kaa - kab - kam - kek - kha - kjh - kk - kl - km - kmr - kn - ko - koi - kpv - krc - krl - ksh - ku - kum - kw - kxi - ky - la - laa - lad - lb - ldn - lfn - lg - lij - liv - lkt - lld - lmo - ln - lo - lt - ltg - lut - lv - lzh - lzz - mad - mai - max - mdf - mfe - mg - mgm - mh - mhr - mi - mic - min - mk - ml - mn - mni - mnw - moh - mr - mt - mvv - mwl - mww - my - myv - na - nah - nan - nb - nch - nds - ngt - ngu - niu - nl - nlv - nn - nog - non - nov - npi - nst - nus - nv - ny - nys - oar - oc - ofs - ood - or - orv - os - osp - ota - otk - pa - pag - pal - pam - pap - pau - pcd - pdc - pes - phn - pi - pl - pms - pnb - ppl - prg - ps - pt - qu - quc - qya - rap - rif - rm - rn - ro - rom - ru - rue - rw - sa - sah - sc - scn - sco - sd - sdh - se - sg - sgs - shs - shy - si - sjn - sl - sm - sma - sn - so - sq - sr - stq - su - sux - sv - swg - swh - syc - ta - te - tet - tg - th - thv - ti - tig - tk - tl - tlh - tly - tmr - tmw - tn - to - toi - tok - tpi - tpw - tr - ts - tt - tts - tvl - ty - tyv - tzl - udm - ug - uk - umb - ur - uz - vec - vep - vi - vo - vro - 
wa - war - wo - wuu - xal - xh - xqa - yi - yo - yue - zlm - zsm - zu - zza license: - cc-by-2.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: tatoeba pretty_name: Tatoeba dataset_info: - config_name: en-mr features: - name: id dtype: string - name: translation dtype: translation: languages: - en - mr splits: - name: train num_bytes: 6190484 num_examples: 53462 download_size: 1436200 dataset_size: 6190484 - config_name: eo-nl features: - name: id dtype: string - name: translation dtype: translation: languages: - eo - nl splits: - name: train num_bytes: 8150048 num_examples: 93650 download_size: 3020382 dataset_size: 8150048 - config_name: es-pt features: - name: id dtype: string - name: translation dtype: translation: languages: - es - pt splits: - name: train num_bytes: 6180464 num_examples: 67782 download_size: 2340361 dataset_size: 6180464 - config_name: fr-ru features: - name: id dtype: string - name: translation dtype: translation: languages: - fr - ru splits: - name: train num_bytes: 19775390 num_examples: 195161 download_size: 5509784 dataset_size: 19775390 - config_name: es-gl features: - name: id dtype: string - name: translation dtype: translation: languages: - es - gl splits: - name: train num_bytes: 287683 num_examples: 3135 download_size: 128506 dataset_size: 287683 --- # Dataset Card for Tatoeba ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) 
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]

### Dataset Summary

Tatoeba is a collection of sentences and translations. To load a language pair that isn't among the predefined configs, simply pass the two language codes when loading; the valid pairs are listed on the homepage (http://opus.nlpl.eu/Tatoeba.php). E.g.
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")` The default date is v2021-07-22, but you can also change the date with `dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The languages in the dataset are: - ab - acm - ady - af - afb - afh - aii - ain - ajp - akl - aln - am - an - ang - aoz - apc - ar - arq - ary - arz - as - ast - avk - awa - ayl - az - ba - bal - bar - be - ber - bg - bho - bjn - bm - bn - bo - br - brx - bs - bua - bvy - bzt - ca - cay - cbk - ce - ceb - ch - chg - chn - cho - chr - cjy - ckb - ckt - cmn - co - code - cpi - crh - crk - cs - csb - cv - cy - da - de - dng - drt - dsb - dtp - dv - dws - ee - egl - el - emx - en - enm - eo - es - et - eu - ext - fi - fj - fkv - fo - fr - frm - fro - frr - fuc - fur - fuv - fy - ga - gag - gan - gbm - gcf - gd - gil - gl - gn - gom - gos - got - grc - gsw - gu - gv - ha - hak - haw - hbo - he - hi - hif - hil - hnj - hoc - hr - hrx - hsb - hsn - ht - hu - hy - ia - iba - id - ie - ig - ii - ike - ilo - io - is - it - izh - ja - jam - jbo - jdt - jpa - jv - ka - kaa - kab - kam - kek - kha - kjh - kk - kl - km - kmr - kn - ko - koi - kpv - krc - krl - ksh - ku - kum - kw - kxi - ky - kzj: Coastal Kadazan (deprecated tag; preferred value: Kadazan Dusun; Central Dusun (`dtp`)) - la - laa - lad - lb - ldn - lfn - lg - lij - liv - lkt - lld - lmo - ln - lo - lt - ltg - lut - lv - lzh - lzz - mad - mai - max - mdf - mfe - mg - mgm - mh - mhr - mi - mic - min - mk - ml - mn - mni - mnw - moh - mr - mt - mvv - mwl - mww - my - myv - na - nah - nan - nb - nch - nds - ngt - ngu - niu - nl - nlv - nn - nog - non - nov - npi - nst - nus - nv - ny - nys - oar - oc - ofs - ood - or - orv - os - osp - ota - otk - pa - pag - pal - pam - pap - pau - pcd - pdc - pes - phn - pi - pl - pms - pnb - ppl - prg - ps - pt - qu - quc - qya - rap - rif - rm - rn - ro - rom - ru - rue - rw - sa - sah - sc - scn - 
sco - sd - sdh - se - sg - sgs - shs - shy - si - sjn - sl - sm - sma - sn - so - sq - sr - stq - su - sux - sv - swg - swh - syc - ta - te - tet - tg - th - thv - ti - tig - tk - tl - tlh - tly - tmr - tmw - tn - to - toi - tok - tpi - tpw - tr - ts - tt - tts - tvl - ty - tyv - tzl - udm - ug - uk - umb - ur - uz - vec - vep - vi - vo - vro - wa - war - wo - wuu - xal - xh - xqa - yi - yo - yue - zlm - zsm - zu - zza

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
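For reference, records in a Tatoeba pair config follow the `id`/`translation` features declared in the YAML header above. The sketch below shows the access pattern offline; the sentence pair is a made-up placeholder, not real Tatoeba data, and the commented-out `load_dataset` call mirrors the usage shown in the Dataset Summary:

```python
# Offline sketch of a Tatoeba record, following the `id` / `translation`
# features declared in this card's metadata. The sentence pair below is a
# made-up placeholder, not real Tatoeba data.
record = {
    "id": "0",
    "translation": {"en": "Hello.", "he": "שלום."},
}

# After `load_dataset("tatoeba", lang1="en", lang2="he")`, each row of the
# train split has this shape, so access mirrors:
en_text = record["translation"]["en"]
he_text = record["translation"]["he"]
```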
ted_hrlr
--- annotations_creators: - crowdsourced language: - az - be - en - es - fr - gl - he - it - pt - ru - tr language_creators: - expert-generated license: - cc-by-nc-nd-4.0 multilinguality: - translation pretty_name: TEDHrlr size_categories: - 1M<n<10M source_datasets: - extended|ted_talks_iwslt task_categories: - translation task_ids: [] paperswithcode_id: null dataset_info: - config_name: az_to_en features: - name: translation dtype: translation: languages: - az - en splits: - name: test num_bytes: 186540 num_examples: 904 - name: train num_bytes: 1226853 num_examples: 5947 - name: validation num_bytes: 122709 num_examples: 672 download_size: 131005909 dataset_size: 1536102 - config_name: aztr_to_en features: - name: translation dtype: translation: languages: - az_tr - en splits: - name: test num_bytes: 186540 num_examples: 904 - name: train num_bytes: 39834469 num_examples: 188397 - name: validation num_bytes: 122709 num_examples: 672 download_size: 131005909 dataset_size: 40143718 - config_name: be_to_en features: - name: translation dtype: translation: languages: - be - en splits: - name: test num_bytes: 186606 num_examples: 665 - name: train num_bytes: 1176899 num_examples: 4510 - name: validation num_bytes: 59328 num_examples: 249 download_size: 131005909 dataset_size: 1422833 - config_name: beru_to_en features: - name: translation dtype: translation: languages: - be_ru - en splits: - name: test num_bytes: 186606 num_examples: 665 - name: train num_bytes: 59953616 num_examples: 212615 - name: validation num_bytes: 59328 num_examples: 249 download_size: 131005909 dataset_size: 60199550 - config_name: es_to_pt features: - name: translation dtype: translation: languages: - es - pt splits: - name: test num_bytes: 343640 num_examples: 1764 - name: train num_bytes: 8611393 num_examples: 44939 - name: validation num_bytes: 181535 num_examples: 1017 download_size: 131005909 dataset_size: 9136568 - config_name: fr_to_pt features: - name: translation dtype: translation: 
languages: - fr - pt splits: - name: test num_bytes: 311650 num_examples: 1495 - name: train num_bytes: 8755387 num_examples: 43874 - name: validation num_bytes: 212317 num_examples: 1132 download_size: 131005909 dataset_size: 9279354 - config_name: gl_to_en features: - name: translation dtype: translation: languages: - gl - en splits: - name: test num_bytes: 193213 num_examples: 1008 - name: train num_bytes: 1961363 num_examples: 10018 - name: validation num_bytes: 137929 num_examples: 683 download_size: 131005909 dataset_size: 2292505 - config_name: glpt_to_en features: - name: translation dtype: translation: languages: - gl_pt - en splits: - name: test num_bytes: 193213 num_examples: 1008 - name: train num_bytes: 11734254 num_examples: 61803 - name: validation num_bytes: 137929 num_examples: 683 download_size: 131005909 dataset_size: 12065396 - config_name: he_to_pt features: - name: translation dtype: translation: languages: - he - pt splits: - name: test num_bytes: 361378 num_examples: 1624 - name: train num_bytes: 10627615 num_examples: 48512 - name: validation num_bytes: 230725 num_examples: 1146 download_size: 131005909 dataset_size: 11219718 - config_name: it_to_pt features: - name: translation dtype: translation: languages: - it - pt splits: - name: test num_bytes: 324726 num_examples: 1670 - name: train num_bytes: 8905825 num_examples: 46260 - name: validation num_bytes: 210375 num_examples: 1163 download_size: 131005909 dataset_size: 9440926 - config_name: pt_to_en features: - name: translation dtype: translation: languages: - pt - en splits: - name: test num_bytes: 347803 num_examples: 1804 - name: train num_bytes: 9772911 num_examples: 51786 - name: validation num_bytes: 207960 num_examples: 1194 download_size: 131005909 dataset_size: 10328674 - config_name: ru_to_en features: - name: translation dtype: translation: languages: - ru - en splits: - name: test num_bytes: 1459576 num_examples: 5477 - name: train num_bytes: 58778442 num_examples: 208107 - 
name: validation num_bytes: 1318357 num_examples: 4806 download_size: 131005909 dataset_size: 61556375 - config_name: ru_to_pt features: - name: translation dtype: translation: languages: - ru - pt splits: - name: test num_bytes: 409062 num_examples: 1589 - name: train num_bytes: 11882860 num_examples: 47279 - name: validation num_bytes: 276866 num_examples: 1185 download_size: 131005909 dataset_size: 12568788 - config_name: tr_to_en features: - name: translation dtype: translation: languages: - tr - en splits: - name: test num_bytes: 1026406 num_examples: 5030 - name: train num_bytes: 38607636 num_examples: 182451 - name: validation num_bytes: 832358 num_examples: 4046 download_size: 131005909 dataset_size: 40466400 --- # Dataset Card for "ted_hrlr" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/neulab/word-embeddings-for-nmt - **Paper:** [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine 
Translation?](https://aclanthology.org/N18-2084/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.83 GB
- **Size of the generated dataset:** 281.66 MB
- **Total amount of disk used:** 2.12 GB

### Dataset Summary

Datasets derived from TED talk transcripts, for comparing similar language pairs in which one language is high-resource and the other low-resource.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### az_to_en

- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 132.54 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### aztr_to_en

- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 40.14 MB
- **Total amount of disk used:** 171.15 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "az_tr": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### be_to_en

- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 1.43 MB
- **Total amount of disk used:** 132.42 MB

An example of 'train' looks as follows.

```
{
    "translation": {
        "be": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### beru_to_en

- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 60.20 MB
- **Total amount of disk used:** 191.21 MB

An example of 'validation' looks as follows.

```
This example was too long and was cropped:

{
    "translation": "{\"be_ru\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"en\": \"when i was..."
}
```

#### es_to_pt

- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 9.13 MB
- **Total amount of disk used:** 140.14 MB

An example of 'validation' looks as follows.

```
This example was too long and was cropped:

{
    "translation": "{\"es\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"pt\": \"when i was 11..."
}
```

### Data Fields

The data fields are the same among all splits.

#### az_to_en

- `translation`: a multilingual `string` variable, with possible languages including `az`, `en`.

#### aztr_to_en

- `translation`: a multilingual `string` variable, with possible languages including `az_tr`, `en`.

#### be_to_en

- `translation`: a multilingual `string` variable, with possible languages including `be`, `en`.

#### beru_to_en

- `translation`: a multilingual `string` variable, with possible languages including `be_ru`, `en`.

#### es_to_pt

- `translation`: a multilingual `string` variable, with possible languages including `es`, `pt`.
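A minimal, offline sketch of consuming the `translation` field described above: flatten records into parallel source/target sentence lists (e.g. as preprocessing for an NMT pipeline). The record below reuses the "az_to_en" example from this card:

```python
def to_parallel(records, src_lang, tgt_lang):
    """Split translation records into (source, target) sentence lists."""
    src = [r["translation"][src_lang] for r in records]
    tgt = [r["translation"][tgt_lang] for r in records]
    return src, tgt

# Example record reused from the Data Instances section above.
records = [{
    "translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you .",
    }
}]

az_sents, en_sents = to_parallel(records, "az", "en")
```

The same function applies unchanged to a split loaded with `load_dataset("ted_hrlr", "az_to_en", split="train")`, since every config shares the `translation` layout.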
### Data Splits

| name       |  train | validation | test |
|------------|-------:|-----------:|-----:|
| az_to_en   |   5947 |        672 |  904 |
| aztr_to_en | 188397 |        672 |  904 |
| be_to_en   |   4510 |        249 |  665 |
| beru_to_en | 212615 |        249 |  665 |
| es_to_pt   |  44939 |       1017 | 1764 |
| fr_to_pt   |  43874 |       1132 | 1495 |
| gl_to_en   |  10018 |        683 | 1008 |
| glpt_to_en |  61803 |        683 | 1008 |
| he_to_pt   |  48512 |       1146 | 1624 |
| it_to_pt   |  46260 |       1163 | 1670 |
| pt_to_en   |  51786 |       1194 | 1804 |
| ru_to_en   | 208107 |       4806 | 5477 |
| ru_to_pt   |  47279 |       1185 | 1589 |
| tr_to_en   | 182451 |       4046 | 5030 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{qi-etal-2018-pre,
    title = "When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?",
    author = "Qi, Ye and
      Sachan, Devendra and
      Felix, Matthieu and
      Padmanabhan, Sarguna and
      Neubig, Graham",
    booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
    month = jun,
    year = "2018",
    address = "New Orleans, Louisiana",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N18-2084",
    doi = "10.18653/v1/N18-2084",
    pages = "529--535",
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
ted_iwlst2013
--- annotations_creators: - found language_creators: - found language: - ar - de - en - es - fa - fr - it - nl - pl - pt - ro - ru - sl - tr - zh license: - unknown multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: TedIwlst2013 configs: - ar-en - de-en - en-es - en-fa - en-fr - en-it - en-nl - en-pl - en-pt - en-ro - en-ru - en-sl - en-tr - en-zh dataset_info: - config_name: ar-en features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - en splits: - name: train num_bytes: 37413446 num_examples: 152838 download_size: 12065234 dataset_size: 37413446 - config_name: de-en features: - name: id dtype: string - name: translation dtype: translation: languages: - de - en splits: - name: train num_bytes: 30295518 num_examples: 143836 download_size: 10931406 dataset_size: 30295518 - config_name: en-es features: - name: id dtype: string - name: translation dtype: translation: languages: - en - es splits: - name: train num_bytes: 32522545 num_examples: 157895 download_size: 11642092 dataset_size: 32522545 - config_name: en-fa features: - name: id dtype: string - name: translation dtype: translation: languages: - en - fa splits: - name: train num_bytes: 22228781 num_examples: 80510 download_size: 6579696 dataset_size: 22228781 - config_name: en-fr features: - name: id dtype: string - name: translation dtype: translation: languages: - en - fr splits: - name: train num_bytes: 34355481 num_examples: 160420 download_size: 12061420 dataset_size: 34355481 - config_name: en-it features: - name: id dtype: string - name: translation dtype: translation: languages: - en - it splits: - name: train num_bytes: 32916537 num_examples: 159391 download_size: 11774644 dataset_size: 32916537 - config_name: en-nl features: - name: id dtype: string - name: translation dtype: translation: languages: - en - nl splits: - name: train num_bytes: 
29679822 num_examples: 145951 download_size: 10712032 dataset_size: 29679822 - config_name: en-pl features: - name: id dtype: string - name: translation dtype: translation: languages: - en - pl splits: - name: train num_bytes: 29776339 num_examples: 149120 download_size: 10999482 dataset_size: 29776339 - config_name: en-pt features: - name: id dtype: string - name: translation dtype: translation: languages: - en - pt splits: - name: train num_bytes: 32179607 num_examples: 155995 download_size: 11493053 dataset_size: 32179607 - config_name: en-ro features: - name: id dtype: string - name: translation dtype: translation: languages: - en - ro splits: - name: train num_bytes: 32958421 num_examples: 158483 download_size: 11936172 dataset_size: 32958421 - config_name: en-ru features: - name: id dtype: string - name: translation dtype: translation: languages: - en - ru splits: - name: train num_bytes: 36529465 num_examples: 133660 download_size: 11167700 dataset_size: 36529465 - config_name: en-sl features: - name: id dtype: string - name: translation dtype: translation: languages: - en - sl splits: - name: train num_bytes: 2831344 num_examples: 14960 download_size: 1060712 dataset_size: 2831344 - config_name: en-tr features: - name: id dtype: string - name: translation dtype: translation: languages: - en - tr splits: - name: train num_bytes: 28016103 num_examples: 137028 download_size: 10038531 dataset_size: 28016103 - config_name: en-zh features: - name: id dtype: string - name: translation dtype: translation: languages: - en - zh splits: - name: train num_bytes: 30205477 num_examples: 154579 download_size: 11714497 dataset_size: 30205477 --- # Dataset Card for TedIwlst2013 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data 
Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/TED2013.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** None - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
ted_multi
--- pretty_name: TEDMulti paperswithcode_id: null dataset_info: features: - name: translations dtype: translation_variable_languages: languages: - ar - az - be - bg - bn - bs - calv - cs - da - de - el - en - eo - es - et - eu - fa - fi - fr - fr-ca - gl - he - hi - hr - hu - hy - id - it - ja - ka - kk - ko - ku - lt - mk - mn - mr - ms - my - nb - nl - pl - pt - pt-br - ro - ru - sk - sl - sq - sr - sv - ta - th - tr - uk - ur - vi - zh - zh-cn - zh-tw num_languages: 60 - name: talk_name dtype: string config_name: plain_text splits: - name: test num_bytes: 23364983 num_examples: 7213 - name: train num_bytes: 748209995 num_examples: 258098 - name: validation num_bytes: 19435383 num_examples: 6049 download_size: 352222045 dataset_size: 791010361 --- # Dataset Card for "ted_multi" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/neulab/word-embeddings-for-nmt](https://github.com/neulab/word-embeddings-for-nmt) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 352.23 MB - **Size of the generated dataset:** 791.01 MB - **Total amount of disk used:** 1.14 GB ### Dataset Summary Massively multilingual (60 language) data set derived from TED Talk transcripts. Each record consists of parallel arrays of language and text. Missing and incomplete translations will be filtered out. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 352.23 MB - **Size of the generated dataset:** 791.01 MB - **Total amount of disk used:** 1.14 GB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "talk_name": "shabana_basij_rasikh_dare_to_educate_afghan_girls", "translations": "{\"language\": [\"ar\", \"az\", \"bg\", \"bn\", \"cs\", \"da\", \"de\", \"el\", \"en\", \"es\", \"fa\", \"fr\", \"he\", \"hi\", \"hr\", \"hu\", \"hy\", \"id\", \"it\", ..." } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `translations`: a multilingual `string` variable, with possible languages including `ar`, `az`, `be`, `bg`, `bn`. - `talk_name`: a `string` feature. 
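Because each record stores its translations as parallel `language`/`text` arrays, extracting a specific bilingual pair means zipping the two arrays and skipping talks that lack either language. A minimal sketch of that step (the `extract_pair` helper and the sample record are illustrative, not part of the dataset's API or corpus):

```python
# Extract an aligned (src, tgt) sentence pair from a ted_multi-style record,
# whose "translations" field holds parallel "language" and "text" arrays.
def extract_pair(record, src, tgt):
    trans = record["translations"]
    by_lang = dict(zip(trans["language"], trans["text"]))
    # A talk may be missing either side; callers should skip those records.
    if src in by_lang and tgt in by_lang:
        return by_lang[src], by_lang[tgt]
    return None

# Illustrative record matching the documented schema (not real corpus text).
sample = {
    "talk_name": "example_talk",
    "translations": {
        "language": ["de", "en", "fr"],
        "text": ["Hallo Welt.", "Hello world.", "Bonjour le monde."],
    },
}

print(extract_pair(sample, "en", "fr"))  # ('Hello world.', 'Bonjour le monde.')
print(extract_pair(sample, "en", "ja"))  # None: no Japanese side for this talk
```

Mapping over the full `train` split with this helper yields a conventional bitext for any pair of the 60 languages.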
### Data Splits | name |train |validation|test| |----------|-----:|---------:|---:| |plain_text|258098| 6049|7213| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{qi-EtAl:2018:N18-2, author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham}, title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?}, booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)}, month = {June}, year = {2018}, address = {New Orleans, Louisiana}, publisher = {Association for Computational Linguistics}, pages = {529--535}, abstract = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.}, url = {http://www.aclweb.org/anthology/N18-2084} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
ted_talks_iwslt
--- annotations_creators: - expert-generated language_creators: - crowdsourced - expert-generated language: - af - am - ar - arq - art - as - ast - az - be - bg - bi - bn - bo - bs - ca - ceb - cnh - cs - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - ga - gl - gu - ha - he - hi - hr - ht - hu - hup - hy - id - ig - inh - is - it - ja - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - ltg - lv - mg - mk - ml - mn - mr - ms - mt - my - nb - ne - nl - nn - oc - pa - pl - ps - pt - ro - ru - rup - sh - si - sk - sl - so - sq - sr - sv - sw - szl - ta - te - tg - th - tl - tlh - tr - tt - ug - uk - ur - uz - vi - zh language_bcp47: - art-x-bork - fr-CA - pt-BR - zh-CN - zh-TW license: - cc-by-nc-nd-4.0 multilinguality: - translation size_categories: - 1K<n<10K - n<1K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: Web Inventory of Transcribed & Translated (WIT) Ted Talks configs: - de_ja_2014 - de_ja_2015 - de_ja_2016 - eu_ca_2014 - eu_ca_2015 - eu_ca_2016 - fr-ca_hi_2014 - fr-ca_hi_2015 - fr-ca_hi_2016 - nl_en_2014 - nl_en_2015 - nl_en_2016 - nl_hi_2014 - nl_hi_2015 - nl_hi_2016 dataset_info: - config_name: eu_ca_2014 features: - name: translation dtype: translation: languages: - eu - ca splits: - name: train num_bytes: 15192 num_examples: 44 download_size: 1666674366 dataset_size: 15192 - config_name: eu_ca_2015 features: - name: translation dtype: translation: languages: - eu - ca splits: - name: train num_bytes: 18768 num_examples: 52 download_size: 1666674366 dataset_size: 18768 - config_name: eu_ca_2016 features: - name: translation dtype: translation: languages: - eu - ca splits: - name: train num_bytes: 19506 num_examples: 54 download_size: 1666674366 dataset_size: 19506 - config_name: nl_en_2014 features: - name: translation dtype: translation: languages: - nl - en splits: - name: train num_bytes: 1035545 num_examples: 2966 download_size: 1666674366 dataset_size: 1035545 - 
config_name: nl_en_2015 features: - name: translation dtype: translation: languages: - nl - en splits: - name: train num_bytes: 1292610 num_examples: 3550 download_size: 1666674366 dataset_size: 1292610 - config_name: nl_en_2016 features: - name: translation dtype: translation: languages: - nl - en splits: - name: train num_bytes: 1434207 num_examples: 3852 download_size: 1666674366 dataset_size: 1434207 - config_name: nl_hi_2014 features: - name: translation dtype: translation: languages: - nl - hi splits: - name: train num_bytes: 214870 num_examples: 367 download_size: 1666674366 dataset_size: 214870 - config_name: nl_hi_2015 features: - name: translation dtype: translation: languages: - nl - hi splits: - name: train num_bytes: 252192 num_examples: 421 download_size: 1666674366 dataset_size: 252192 - config_name: nl_hi_2016 features: - name: translation dtype: translation: languages: - nl - hi splits: - name: train num_bytes: 310922 num_examples: 496 download_size: 1666674366 dataset_size: 310922 - config_name: de_ja_2014 features: - name: translation dtype: translation: languages: - de - ja splits: - name: train num_bytes: 1074403 num_examples: 2536 download_size: 1666674366 dataset_size: 1074403 - config_name: de_ja_2015 features: - name: translation dtype: translation: languages: - de - ja splits: - name: train num_bytes: 1442047 num_examples: 3247 download_size: 1666674366 dataset_size: 1442047 - config_name: de_ja_2016 features: - name: translation dtype: translation: languages: - de - ja splits: - name: train num_bytes: 1630729 num_examples: 3590 download_size: 1666674366 dataset_size: 1630729 - config_name: fr-ca_hi_2014 features: - name: translation dtype: translation: languages: - fr-ca - hi splits: - name: train num_bytes: 74472 num_examples: 127 download_size: 1666674366 dataset_size: 74472 - config_name: fr-ca_hi_2015 features: - name: translation dtype: translation: languages: - fr-ca - hi splits: - name: train num_bytes: 82448 num_examples: 141 
download_size: 1666674366 dataset_size: 82448 - config_name: fr-ca_hi_2016 features: - name: translation dtype: translation: languages: - fr-ca - hi splits: - name: train num_bytes: 93425 num_examples: 156 download_size: 1666674366 dataset_size: 93425 --- # Dataset Card for Web Inventory of Transcribed & Translated (WIT) Ted Talks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://wit3.fbk.eu/home - **Repository:** https://drive.google.com/file/d/1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z/view?usp=sharing - **Paper:** https://www.aclweb.org/anthology/2012.eamt-1.60.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Mauro Cettolo](mailto:cettolo@fbk.eu) [Roldano Cattoni](mailto:cattoni@fbk.eu) ### Dataset Summary The Web Inventory Talk is a collection of the original Ted talks and their translated versions. The translations are available in 109+ languages, though the distribution is not uniform.
To load a language pair which isn't part of the config, all you need to do is specify the language codes as a pair. E.g. `dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")` The full list of languages is: 'af', 'am', 'ar', 'arq', 'art-x-bork', 'as', 'ast', 'az', 'be', 'bg', 'bi', 'bn', 'bo', 'bs', 'ca', 'ceb', 'cnh', 'cs', 'da', 'de', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fr-ca', 'ga', 'gl', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu', 'hup', 'hy', 'id', 'ig', 'inh', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'ltg', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'nb', 'ne', 'nl', 'nn', 'oc', 'pa', 'pl', 'ps', 'pt', 'pt-br', 'ro', 'ru', 'rup', 'sh', 'si', 'sk', 'sl', 'so', 'sq', 'sr', 'srp', 'sv', 'sw', 'szl', 'ta', 'te', 'tg', 'th', 'tl', 'tlh', 'tr', 'tt', 'ug', 'uk', 'ur', 'uz', 'vi', 'zh', 'zh-cn', 'zh-tw'. The full list of years is: '2014', '2015', '2016'. ### Supported Tasks and Leaderboards Machine translation, language modeling, and generation ### Languages Ted talks are mostly held in English (`en`). Almost all of the talks have been translated, by volunteers, into Arabic, Bulgarian, Chinese (simplified), French, Italian, Korean, Portuguese (Brazil) and Spanish. For about 70 other languages, the number of translated talks ranges from several hundred (e.g. Dutch, German, Hebrew, Romanian) to one (e.g. Hausa, Hupa, Bislama, Ingush, Maltese).
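Since the loading pattern above takes free-form `language_pair` and `year` arguments, a small guard can check a request against the documented lists and fail fast before any download is attempted. A sketch (the abbreviated `LANGUAGES` set and the `check_request` helper are illustrative conveniences, not part of the dataset's API):

```python
# Validate a (language_pair, year) request for ted_talks_iwslt against the
# documented codes before calling load_dataset. LANGUAGES is abbreviated
# here; the full list of codes appears in the summary above.
LANGUAGES = {
    "af", "am", "ar", "de", "en", "fr", "fr-ca", "hi", "it", "ja",
    "nl", "pl", "pt", "pt-br", "ro", "zh", "zh-cn", "zh-tw",
}
YEARS = {"2014", "2015", "2016"}

def check_request(language_pair, year):
    for code in language_pair:
        if code not in LANGUAGES:
            raise ValueError(f"unknown language code: {code!r}")
    if year not in YEARS:
        raise ValueError(f"year must be one of {sorted(YEARS)}, got {year!r}")
    return language_pair, year

check_request(("it", "pl"), "2014")  # OK: both codes and the year are valid
# dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```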
The languages in the dataset are: - af - am - ar - arq - art - as - ast - az - be - bg - bi - bn - bo - bs - ca - ceb - cnh - cs - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - ga - gl - gu - ha - he - hi - hr - ht - hu - hup - hy - id - ig - inh - is - it - ja - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - ltg - lv - mg - mk - ml - mn - mr - ms - mt - my - nb - ne - nl - nn - oc - pa - pl - ps - pt - ro - ru - rup - sh - si - sk - sl - so - sq - sr - srp: Serbian (`sr`) - sv - sw - szl - ta - te - tg - th - tl - tlh - tr - tt - ug - uk - ur - uz - vi - zh ## Dataset Structure ### Data Instances One example from the dataset is: ``` {'translation': {'hi': 'जब मार्च २०१४ में इबोला का प्रकोप छाया, पर्डिस सबेटी और उनकी टीम को वाइरस के जीनोम का अनुक्रमण करना था, सीखना था कि यह कैसे परवतिर्त होते हैं और फैलते हैं। सबेटी ने तुरंत ही अपने अनुसंधान को वेब में जारी किया, ताकि दुनिया भर के वाइरस ट्रैकर्स और वैज्ञानिक इस तत्काल लड़ाई में शामिल हो सकें। इस बातचीत में, वह दिखाती हैं कि सबका सहयोग ही कुंजी है वाइरस को रोकने के लिए--और लड़ने के लिए आगे आने वाले हमलों से। सबेटी ने कहा,"हमने खुले तौर पर काम किया, साझा किया और साथ काम किया"। "हमे दुनिया को एक वाइरस के विनाश से नहीं, पर अरबों दिलों और दिमागों की एकता से परिभाषित करना है"।', 'nl': 'Toen Ebola in maart 2014 uitbrak, zijn Pardis Sabeti en haar team aan het werk gegaan om het genoom in kaart te brengen. Zo ontdekten ze hoe het virus zich verspreidde en muteerde. Sabeti zette direct haar onderzoek op het internet, zodat wereldwijd virus-jagers en wetenschappers mee konden werken aan de strijd. In deze talk laat ze zien hoe die openheid geholpen heeft bij het stoppen van het virus en hoe het kan helpen bij de strijd tegen het volgende virus. "We moesten transparant werken, delen en samenwerken". 
Sabeti zegt:"Laat de wereld niet ten onder gaan aan een virus, maar verlicht worden door miljoenen harten en geesten die samenwerken."'}} ``` The original XML files are formatted like this example: ``` <file id="1"> <head> <url>http://www.ted.com/talks/ryan_holladay_to_hear_this_music_you_have_to_be_there_literally.html</url> <pagesize>66634</pagesize> <dtime>Sun Jan 12 15:17:32 CET 2014</dtime> <content-type>text/html; charset=utf-8</content-type> <encoding>utf-8</encoding> <videourl>http://download.ted.com/talks/RyanHolladay_2013S.mp4</videourl> <videopath>talks/RyanHolladay_2013S.mp4</videopath> <transcription> <seekvideo id="2939">(Music)</seekvideo> <seekvideo id="7555">For any of you who have visited or lived in New York City,</seekvideo> <seekvideo id="11221">these shots might start to look familiar.</seekvideo> <seekvideo id="16116">This is Central Park,</seekvideo> . . . <seekvideo id="361992">for people to interact with</seekvideo> <seekvideo id="363709">and experience music.</seekvideo> <seekvideo id="365451">Thank you.</seekvideo> <seekvideo id="367495">(Applause)</seekvideo> </transcription> <talkid>1903</talkid> <title>Ryan Holladay: To hear this music you have to be there. Literally</title> <description>The music industry ......segments of sounds that only play when a listener is physically nearby. (Filmed at TED@BCG.)</description> <keywords>entertainment,music,technology</keywords> <image>http://images.ted.com/images/ted/d98c17773da6f84e9f915895c270c7ffd2de3778_389x292.jpg</image> <date>2014/01/12</date> <wordnum>885</wordnum> <charnum>5051</charnum> </head> <content>(Music) For any of you who have visited or lived in New York City, these shots might start to look familiar. This is Central Park, ............new ways for people to interact with and experience music. Thank you. 
(Applause)</content> </file> ``` ### Data Fields The fields of the dataset are: - translation: - <lang1>: text in <lang1> - <lang2>: translated text in <lang2> Information about the original data files: For each language, a single XML file is generated which includes all talks subtitled in that language. Each talk is enclosed in tags `<file id="int">` and `</file>` and includes, among other tags: | Tags | Description | |---|:---| | `<url>`| the address of the original HTML document of the talk | | `<speaker>` | the name of the talk speaker | | `<talkid>` | the numeric talk identifier | | `<transcript>` | talk subtitles split in captions | | `<date>` | the issue date of the talk | | `<content>` | talk subtitles | ### Data Splits The paper doesn't provide any specific train-test-dev splits. However, the data can be split by the available years (2014, 2015, 2016). ## Dataset Creation ### Curation Rationale TED Conference, based in California, has been posting all video recordings of its talks together with subtitles in English and their translations in more than 80 languages. Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community, thanks to its size, variety of topics, and covered languages. ### Source Data #### Initial Data Collection and Normalization The talks were collected from the [Ted Conference website](http://www.ted.com/). #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? Translations have been contributed by volunteers. ### Personal and Sensitive Information No personal and sensitive information is provided in the dataset.
All talks are publicly available. ## Considerations for Using the Data ### Social Impact of Dataset In statistical machine translation, large amounts of in-domain parallel data are usually required to properly train translation and reordering models. With 900+ TED talks (as of 2011) and translations in 90+ languages, this dataset provides a useful resource for the MT research community. In turn, this enables easy access to a vast treasure trove of human knowledge. ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The original dataset was curated by: [Mauro Cettolo](mailto:cettolo@fbk.eu) [Roldano Cattoni](mailto:cattoni@fbk.eu) Author: Christian Girardi For issues with the HuggingFace Dataset implementation, reach out: [Aakash Gupta](mailto:aakashg80@gmail.com) ### Licensing Information cc-by-nc-nd-4.0 ### Citation Information ``` @inproceedings{cettolo-etal-2012-wit3, title = "{WIT}3: Web Inventory of Transcribed and Translated Talks", author = "Cettolo, Mauro and Girardi, Christian and Federico, Marcello", booktitle = "Proceedings of the 16th Annual conference of the European Association for Machine Translation", month = may # " 28{--}30", year = "2012", address = "Trento, Italy", publisher = "European Association for Machine Translation", url = "https://www.aclweb.org/anthology/2012.eamt-1.60", pages = "261--268", } ``` ### Contributions Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
telugu_books
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - te license: - unknown multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: TeluguBooks dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 315076011 num_examples: 25794 download_size: 0 dataset_size: 315076011 --- # Dataset Card for telugu_books ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Telugu Books](https://www.kaggle.com/sudalairajkumar/telugu-nlp) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset was created by scraping Telugu novels from teluguone.com. It can be used for NLP tasks like topic modeling, word embeddings, and transfer learning. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages TE - Telugu ## Dataset Structure ###
Data Instances [More Information Needed] ### Data Fields - Text: Sentence from a novel ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Anusha Motamarri ### Annotations #### Annotation process Anusha Motamarri #### Who are the annotators? Anusha Motamarri ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
telugu_news
--- annotations_creators: - machine-generated language_creators: - other language: - te license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask - text-classification task_ids: - language-modeling - masked-language-modeling - multi-class-classification - topic-classification pretty_name: TeluguNews dataset_info: features: - name: sno dtype: int32 - name: date dtype: string - name: heading dtype: string - name: body dtype: string - name: topic dtype: class_label: names: '0': business '1': editorial '2': entertainment '3': nation '4': sports splits: - name: train num_bytes: 69400234 num_examples: 17312 - name: test num_bytes: 17265514 num_examples: 4329 download_size: 0 dataset_size: 86665748 --- # Dataset Card for telugu_news ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news - **Repository:**
https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset ### Dataset Summary This dataset contains Telugu-language news articles along with their topic labels (business, editorial, entertainment, nation, sports) extracted from the daily Andhra Jyoti. This dataset can be used to build classification and language models. ### Supported Tasks and Leaderboards Multi-class classification, topic classification, language modeling ### Languages TE - Telugu, India ## Dataset Structure ### Data Instances Two CSV files (train, test) with five columns (sno, date, heading, body, topic). ### Data Fields - sno: id - date: publish date of the news article - heading: article heading/title - body: article body/content - topic: one of the following topics (business, editorial, entertainment, nation, sports) ### Data Splits Train and Test ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data - https://www.kaggle.com/sudalairajkumar/telugu-nlp?select=telugu_news - https://github.com/AnushaMotamarri/Telugu-Newspaper-Article-Dataset #### Initial Data Collection and Normalization The source data consists of articles scraped from the archives of the Telugu newspaper website Andhra Jyoti. A set of queries were created and the corresponding ground truth answers were retrieved by a combination of BM25 and tf-idf. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Sudalai Rajkumar, Anusha Motamarri ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{kaggle:dataset, title = {Telugu News - Natural Language Processing for Indian Languages}, authors={Sudalai Rajkumar, Anusha Motamarri}, year={2019} } ``` ### Contributions Thanks to [@oostopitre](https://github.com/oostopitre) for adding this dataset.
tep_en_fa_para
--- annotations_creators: - found language_creators: - found language: - en - fa license: - unknown multilinguality: - translation size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: TepEnFaPara dataset_info: features: - name: translation dtype: translation: languages: - en - fa config_name: en-fa splits: - name: train num_bytes: 58735557 num_examples: 612087 download_size: 16353318 dataset_size: 58735557 --- # Dataset Card for tep_en_fa_para ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary TEP: Tehran English-Persian parallel corpus. The first free Eng-Per corpus, provided by the Natural Language and Text Processing Laboratory, University of Tehran.
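Each row follows the `translation` feature declared in the metadata header: a dict with `en` and `fa` keys. MT toolkits often expect the corpus as two line-aligned plain-text files instead; a minimal conversion sketch (the `write_parallel` helper and the sample records are illustrative, not corpus text):

```python
# Write tep_en_fa_para-style records as a line-aligned parallel corpus:
# one English file and one Persian file with matching line numbers.
def write_parallel(records, en_path, fa_path):
    with open(en_path, "w", encoding="utf-8") as en_f, \
         open(fa_path, "w", encoding="utf-8") as fa_f:
        for rec in records:
            pair = rec["translation"]
            en_f.write(pair["en"].strip() + "\n")
            fa_f.write(pair["fa"].strip() + "\n")

# Illustrative records matching the documented en-fa schema.
records = [
    {"translation": {"en": "hello", "fa": "سلام"}},
    {"translation": {"en": "thank you", "fa": "متشکرم"}},
]
write_parallel(records, "corpus.en", "corpus.fa")
```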
### Supported Tasks and Leaderboards The underlying task is machine translation for the English-Persian language pair. ### Languages English, Persian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information M. T. Pilevar, H. Faili, and A. H. Pilevar, “TEP: Tehran English-Persian Parallel Corpus”, in Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2011). ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
text2log
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - en license: - unknown multilinguality: - monolingual pretty_name: text2log size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] dataset_info: features: - name: sentence dtype: string - name: fol_translation dtype: string splits: - name: train num_bytes: 10358134 num_examples: 101931 download_size: 9746473 dataset_size: 10358134 --- # Dataset Card for text2log ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** - **Repository:** [GitHub](https://github.com/alevkov/text2log) - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/alevkov ### Dataset Summary The dataset contains 101,931 simple English sentences selected and filtered from `enTenTen15` and their translation into First Order Logic (FOL) using `ccg2lambda`.
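The FOL side of each pair is a plain-text formula in which predicate symbols carry a leading underscore, as in the example shown under Data Instances. A small sketch of inspecting such a string; the helper and the regex are mine, based only on that example, and are not part of the dataset tooling:

```python
import re

# Hypothetical helper (not part of the dataset tooling): pull predicate
# symbols out of a ccg2lambda-style FOL string like the one in Data Instances.
def predicates(fol):
    """Return the underscore-prefixed predicate names in order of appearance."""
    return re.findall(r"_([A-Za-z]\w*)", fol)

formula = "all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))"
print(predicates(formula))  # ['thing', 'new', 'good']
```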
### Supported Tasks and Leaderboards 'semantic-parsing': The dataset is used to train models which can generate FOL statements from natural language text. ### Languages en-US ## Dataset Structure ### Data Instances ``` { 'clean':'All things that are new are good.', 'trans':'all x1.(_thing(x1) -> (_new(x1) -> _good(x1)))' } ``` ### Data Fields - 'clean': a simple English sentence - 'trans': the corresponding translation into Lambda Dependency-based Compositional Semantics ### Data Splits No predefined train/test split is given. The authors used an 80/20 split. ## Dataset Creation ### Curation Rationale The text2log dataset is used to improve FOL statement generation from natural text. ### Source Data #### Initial Data Collection and Normalization Short text samples selected from enTenTen15 #### Who are the source language producers? See https://www.sketchengine.eu/ententen-english-corpus/ ### Annotations #### Annotation process Machine-generated using https://github.com/mynlp/ccg2lambda #### Who are the annotators? None (machine-generated) ### Personal and Sensitive Information The dataset does not contain personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information None given ### Citation Information ```bibtex @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852} } ``` ### Contributions Thanks to [@apergo-ai](https://github.com/apergo-ai) for adding this dataset.
thai_toxicity_tweet
--- annotations_creators: - expert-generated language_creators: - found language: - th license: - cc-by-nc-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: ThaiToxicityTweet dataset_info: features: - name: tweet_id dtype: string - name: tweet_text dtype: string - name: toxic_votes dtype: int32 - name: nontoxic_votes dtype: int32 - name: is_toxic dtype: class_label: names: '0': neg '1': pos config_name: thai_toxicity_tweet splits: - name: train num_bytes: 637387 num_examples: 3300 download_size: 194740 dataset_size: 637387 --- # Dataset Card for `thai_toxicity_tweet` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/ - **Repository:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/ - **Paper:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf - **Leaderboard:** - **Point of Contact:** 
https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf ### Dataset Summary Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The authors obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The corpus analysis indicates that tweets containing toxic words are not always toxic. Further, a tweet is more likely to be toxic if it contains toxic words used in their original meaning. Moreover, disagreements in annotation are primarily due to sarcasm, unclear targets, and word sense ambiguity. Notes from data cleaner: The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in December 2020. By that time, 506 of the tweets were no longer publicly available; these are denoted by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1). ### Supported Tasks and Leaderboards text classification ### Languages Thai (`th`) ## Dataset Structure ### Data Instances ``` {'is_toxic': 0, 'nontoxic_votes': 3, 'toxic_votes': 0, 'tweet_id': '898576382384418817', 'tweet_text': 'วันๆ นี่คุยกะหมา แมว หมู ไก่ ม้า ควาย มากกว่าคุยกับคนไปละ'} {'is_toxic': 1, 'nontoxic_votes': 0, 'toxic_votes': 3, 'tweet_id': '898573084981985280', 'tweet_text': 'ควายแดงเมิงด่ารัฐบาลจนรองนายกป่วย พวกมึงกำลังทำลายชาติรู้มั้ย มั้ย มั้ย มั้ยยยยยยยยย news.voicetv.co.th/thailand/51672…'} ``` ### Data Fields - `tweet_id`: ID of the tweet on Twitter - `tweet_text`: text of the tweet - `toxic_votes`: number of the 3 annotators who labeled the tweet toxic - `nontoxic_votes`: number of the 3 annotators who labeled the tweet NOT toxic - `is_toxic`: 1 if the tweet is toxic, else 0 (majority rule) ### Data Splits No explicit split is given.
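The labeling conventions above (majority vote over three annotators, plus the `TWEET_NOT_FOUND` placeholder for retracted tweets) can be sketched as follows; the function names and placeholder rows are mine, while the field names and placeholder string are the dataset's:

```python
# Sketch of the labeling conventions described above; function names are
# mine, the field names and TWEET_NOT_FOUND placeholder are the dataset's.
def majority_label(toxic_votes, nontoxic_votes):
    """Majority rule over the 3 annotators (see Data Fields)."""
    return 1 if toxic_votes > nontoxic_votes else 0

def usable(example):
    """Drop rows whose text is no longer publicly available."""
    return example["tweet_text"] != "TWEET_NOT_FOUND"

rows = [
    {"tweet_text": "TWEET_NOT_FOUND", "toxic_votes": 2, "nontoxic_votes": 1},
    {"tweet_text": "some tweet text", "toxic_votes": 0, "nontoxic_votes": 3},
]
kept = [r for r in rows if usable(r)]
print(len(kept), majority_label(2, 1))  # 1 1
```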
## Dataset Creation ### Curation Rationale The dataset was created as part of [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf). ### Source Data #### Initial Data Collection and Normalization The authors used the public Twitter Search API to collect 9,819 tweets from January–December 2017 based on their keyword dictionary. Then, they selected 75 tweets for each keyword. In total, they collected 3,300 tweets for annotation. To ensure data quality, they set the following selection criteria. 1. All tweets are selected by humans to prevent word ambiguity. (The Twitter API selected the tweets based on characters in the keyword. For example, in the case of “บ้า(crazy),” the API will also select “บ้านนอก” (countryside), which is not their target.) 2. The length of the tweet should be sufficiently long to discern the context of the tweet. Hence, they set five words as the minimum limit. 3. Tweets that contain only extremely toxic words (for example: “damn, retard, bitch, f*ck, slut!!!”) are not considered. 4. In addition, they allowed tweets with English words if they were not critical elements in the labeling decision, for example, the word “f*ck.” As a result, the corpus contains English words, but they are less than 2% of the total. All hashtags, re-tweets, and links were removed from these tweets. However, they did not delete emoticons because these emotional icons can imply the real intent of the post owners. Furthermore, only in the case of annotation, some entries such as the names of famous people were replaced with the tag `<ไม่ขอเปิดเผยชื่อ>`, for anonymity to prevent individual bias. #### Who are the source language producers? Twitter users in Thailand ### Annotations #### Annotation process The authors manually annotated the dataset with two labels: Toxic and Non-Toxic. They define a message as toxic if it indicates any harmful, damaging, or negative intent based on their definition of toxicity.
Furthermore, all the tweets were annotated by three annotators to identify toxicity; the conditions used for this identification are presented in the following list. - A toxic message is a message that should be deleted or not be allowed in public. - A message’s target or consequence must exist. It can either be an individual or a generalized group based on a commonality such as religion or ethnicity, or an entire community. - Self-complaint is not considered toxic, because it is not harmful to anyone. However, if a self-complaint is intended to indicate something bad, it will be considered toxic. - Both direct and indirect messages, including those with sarcasm, are taken into consideration. The authors strictly instructed all the annotators about these concepts and asked them to perform a small test to ensure they understood these conditions. The annotation process was divided into two rounds. In the first round, candidates annotated a sample to learn the annotation standard. They were then asked to annotate a different dataset, and those who obtained a full score were selected as annotators for the second round. 20% of the candidates failed the first round and were not involved in the final annotation. #### Who are the annotators? Three annotators hired by [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf) ### Personal and Sensitive Information Despite all tweets being public, due to the nature of toxic tweets, there might be personal attacks and toxic language used. ## Considerations for Using the Data ### Social Impact of Dataset - toxic social media message classification dataset ### Discussion of Biases - Users are masked before annotation by the annotators to prevent biases based on tweet authors. ### Other Known Limitations - The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in December 2020.
By that time, 506 of the tweets were no longer publicly available; these are denoted by `TWEET_NOT_FOUND` in `tweet_text`. ## Additional Information ### Dataset Curators [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf) ### Licensing Information CC-BY-NC 3.0 ### Citation Information Please cite the following if you make use of the dataset: ``` @article{sirihattasak2019annotation, title={Annotation and Classification of Toxicity for Thai Twitter}, author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi}, year={2019} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
thainer
--- annotations_creators: - expert-generated - machine-generated language_creators: - found - expert-generated language: - th license: - cc-by-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-tirasaroj-aroonmanakun task_categories: - token-classification task_ids: - named-entity-recognition - part-of-speech pretty_name: thainer dataset_info: features: - name: id dtype: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': ADJ '1': ADP '2': ADV '3': AUX '4': CCONJ '5': DET '6': NOUN '7': NUM '8': PART '9': PRON '10': PROPN '11': PUNCT '12': SCONJ '13': VERB - name: ner_tags sequence: class_label: names: '0': B-DATE '1': B-EMAIL '2': B-LAW '3': B-LEN '4': B-LOCATION '5': B-MONEY '6': B-ORGANIZATION '7': B-PERCENT '8': B-PERSON '9': B-PHONE '10': B-TIME '11': B-URL '12': B-ZIP '13': B-ไม่ยืนยัน '14': I-DATE '15': I-EMAIL '16': I-LAW '17': I-LEN '18': I-LOCATION '19': I-MONEY '20': I-ORGANIZATION '21': I-PERCENT '22': I-PERSON '23': I-PHONE '24': I-TIME '25': I-URL '26': I-ไม่ยืนยัน '27': O config_name: thainer splits: - name: train num_bytes: 8117902 num_examples: 6348 download_size: 5456461 dataset_size: 8117902 --- # Dataset Card for `thainer` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/wannaphong/thai-ner - **Repository:** https://github.com/wannaphong/thai-ner - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/wannaphong/ ### Dataset Summary ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are generated by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the sole maintainer of this dataset.
### Supported Tasks and Leaderboards - named entity recognition - pos tagging ### Languages Thai ## Dataset Structure ### Data Instances ``` {'id': 100, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [6, 12, 13, 1, 6, 5, 11, 7, 11, 6, 5, 13, 6, 6, 6, 11, 6, 6, 11, 6, 6, 11, 6, 6, 13, 6, 11, 11, 6, 11, 6, 11, 6, 11, 6, 11, 11, 6, 6, 11, 12, 6, 13, 5, 11, 7, 11, 6, 3, 11, 12, 3, 13, 6, 1, 6, 12, 13, 1, 6, 6, 5, 11, 3, 11, 5, 4, 6, 13, 6, 13, 6, 10, 3, 13, 13, 12, 13, 12, 0, 1, 10, 11, 6, 6, 11, 6, 11, 6, 12, 13, 5, 12, 3, 13, 13, 1, 6, 1, 6, 13], 'tokens': ['เชื้อโรค', 'ที่', 'ปรากฏ', 'ใน', 'สัตว์', 'ทั้ง', ' ', '4', ' ', 'ชนิด', 'นี้', 'เป็น', 'เชื้อ', 'โรคไข้หวัด', 'นก', ' ', 'เอช', 'พี', ' ', 'เอ', 'เวียน', ' ', 'อิน', 'ฟลู', 'เอน', 'ซา', ' ', '(', 'Hight', ' ', 'Polygenic', ' ', 'Avain', ' ', 'Influenza', ')', ' ', 'ชนิด', 'รุนแรง', ' ', 'ซึ่ง', 'การ', 'ตั้งชื่อ', 'ทั้ง', ' ', '4', ' ', 'ขึ้น', 'มา', ' ', 'เพื่อที่จะ', 'สามารถ', 'ระบุ', 'เชื้อ', 'ของ', 'ไวรัส', 'ที่', 'ทำอันตราย', 'ตาม', 'สิ่งมีชีวิต', 'ประเภท', 'ต่างๆ', ' ', 'ได้', ' ', 'อีก', 'ทั้ง', 'การ', 'ระบุ', 'สถานที่', 'คือ', 'ประเทศ', 'ไทย', 'จะ', 'ทำให้', 'รู้', 'ว่า', 'พบ', 'ที่', 'แรก', 'ใน', 'ไทย', ' ', 'ส่วน', 'วัน', ' ', 'เดือน', ' ', 'ปี', 'ที่', 'พบ', 'นั้น', 'ก็', 'จะ', 'ทำให้', 'ทราบ', 'ถึง', 'ครั้งแรก', 'ของ', 'การ', 'ค้นพบ']} {'id': 107, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 
27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [0, 1, 6, 5, 11, 12, 3, 3, 13, 6, 13, 12, 0, 2, 12, 11, 6, 5, 13, 6, 5, 1, 6, 6, 1, 10, 11, 4, 13, 6, 11, 12, 6, 6, 10, 11, 13, 6, 1, 6, 4, 6, 1, 6, 6, 11, 4, 6, 1, 5, 6, 12, 2, 13, 6, 6, 5, 1, 11, 12, 13, 1, 6, 6, 11, 13, 11, 6, 6, 6, 11, 11, 6, 11, 11, 4, 10, 11, 11, 6, 11], 'tokens': ['ล่าสุด', 'ใน', 'เรื่อง', 'นี้', ' ', 'ทั้งนี้', 'คง', 'ต้อง', 'มี', 'การ', 'ตรวจสอบ', 'ให้', 'ชัดเจน', 'อีกครั้ง', 'ว่า', ' ', 'ไวรัส', 'นี้', 'เป็น', 'ชนิด', 'เดียว', 'กับ', 'ไข้หวัด', 'นก', 'ใน', 'ไทย', ' ', 'หรือ', 'เป็น', 'การกลายพันธุ์', ' ', 'โดยที่', 'คณะ', 'สัตวแพทย์', 'มหาวิทยาลัยเกษตรศาสตร์', ' ', 'จัด', 'ระดมสมอง', 'จาก', 'คณบดี', 'และ', 'ผู้เชี่ยวชาญ', 'จาก', 'คณะ', 'สัตวแพทย์', ' ', 'และ', 'ปศุสัตว์', 'ของ', 'หลาย', 'มหาวิทยาลัย', 'เพื่อ', 'ร่วมกัน', 'หา', 'ข้อมูล', 'เรื่อง', 'นี้', 'ด้วย', ' ', 'โดย', 'ประสาน', 'กับ', 'เจ้าหน้าที่', 'ระหว่างประเทศ', ' ', 'คือ', ' ', 'องค์การ', 'สุขภาพ', 'สัตว์โลก', ' ', '(', 'OIE', ')', ' ', 'และ', 'องค์การอนามัยโลก', ' ', '(', 'WHO', ')']} ``` ### Data Fields - `id`: sentence id - `tokens`: word tokens by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer `newmm` - `pos_tags`: POS tags tagged by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud` - `ner_tags`: NER tags tagged by humans ### Data Splits No explicit split is given ## Dataset Creation ### Curation Rationale ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). 
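The `ner_tags` field follows a B-/I-/O scheme over the labels listed in the metadata. As a quick illustration of how such tag sequences group into entity spans; the helper, the token sequence, and the label subset below are invented for the example, not drawn from the dataset:

```python
# Illustration only: how B-/I-/O tag sequences group into entity spans.
# Tokens and tags below are invented; real rows use the 28 labels above.
def bio_spans(tokens, tags):
    """Group B-/I- tagged tokens into (entity_type, surface_text) spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], tok)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # Thai is written without spaces, so concatenate directly
            current = (current[0], current[1] + tok)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

print(bio_spans(["ประเทศ", "ไทย", "จะ"], ["B-LOCATION", "I-LOCATION", "O"]))
# [('LOCATION', 'ประเทศไทย')]
```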
### Source Data #### Initial Data Collection and Normalization The earlier part of the dataset is all news articles, whereas the part added by [@wannaphong](https://github.com/wannaphong/) includes news articles, public announcements, and [@wannaphong](https://github.com/wannaphong/)'s own chat messages, with personal and sensitive information removed. #### Who are the source language producers? News articles and public announcements are created by their respective authors. Chat messages are created by [@wannaphong](https://github.com/wannaphong/). ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest ### Personal and Sensitive Information News articles and public announcements are not expected to include personal and sensitive information. [@wannaphong](https://github.com/wannaphong/) has removed such information from his own chat messages. ## Considerations for Using the Data ### Social Impact of Dataset - named entity recognition in Thai ### Discussion of Biases Since almost all of the collection and annotation was done by [@wannaphong](https://github.com/wannaphong/), his biases are expected to be reflected in the dataset.
### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest ### Licensing Information CC-BY 3.0 ### Citation Information ``` @misc{Wannaphong Phatthiyaphaibun_2019, title={wannaphongcom/thai-ner: ThaiNER 1.3}, url={https://zenodo.org/record/3550546}, DOI={10.5281/ZENODO.3550546}, abstractNote={Thai Named Entity Recognition}, publisher={Zenodo}, author={Wannaphong Phatthiyaphaibun}, year={2019}, month={Nov} } ``` Work extended from: [Tirasaroj, N. and Aroonmanakun, W. 2012. Thai NER using CRF model based on surface features. In Proceedings of SNLP-AOS 2011, 9-10 February, 2012, Bangkok, pages 176-180.](http://pioneer.chula.ac.th/~awirote/publications/) ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
thaiqa_squad
--- annotations_creators: - expert-generated language_creators: - found language: - th license: - cc-by-nc-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-thaiqa task_categories: - question-answering task_ids: - extractive-qa - open-domain-qa paperswithcode_id: null pretty_name: thaiqa-squad dataset_info: features: - name: question_id dtype: int32 - name: article_id dtype: int32 - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: answer dtype: string - name: answer_begin_position dtype: int32 - name: answer_end_position dtype: int32 config_name: thaiqa_squad splits: - name: train num_bytes: 47905050 num_examples: 4000 - name: validation num_bytes: 744813 num_examples: 74 download_size: 10003354 dataset_size: 48649863 --- # Dataset Card for `thaiqa-squad` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://github.com/pythainlp/thaiqa_squad (original `thaiqa` at 
https://aiforthai.in.th/) - **Repository:** http://github.com/pythainlp/thaiqa_squad - **Paper:** - **Leaderboard:** - **Point of Contact:**http://github.com/pythainlp/ (original `thaiqa` at https://aiforthai.in.th/) ### Dataset Summary `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/). ### Supported Tasks and Leaderboards extractive question answering ### Languages Thai ## Dataset Structure ### Data Instances ``` {'answers': {'answer': ['ฮิกกิ้นส์'], 'answer_begin_position': [528], 'answer_end_position': [537]}, 'article_id': 115035, 'context': '<doc id="115035" url="https://th.wikipedia.org/wiki?curid=115035" title="เบนจี้">เบนจี้ เบนจี้ () เป็นชื่อตัวละครหมาพันทางแสนรู้ ที่ปรากฏอยู่ในภาพยนตร์หลายเรื่องที่เขียนบท และกำกับโดย โจ แคมป์ ในช่วงทศวรรษ 1970 ถึง 1980 ภาพยนตร์เรื่องแรกในชุด ใช้ชื่อเรื่องว่า เบนจี้ เช่นเดียวกับตัวละคร ถ่ายทำที่เมืองดัลลัส รัฐเทกซัส ฉายครั้งแรกในปี พ.ศ. 2517 ภาพยนตร์ได้รับการเสนอชื่อเข้าชิงรางวัลออสการ์ และได้รางวัลลูกโลกทองคำ สาขาเพลงประกอบยอดเยี่ยม จากเพลง Benji\'s Theme (I Feel Love) ร้องโดย ชาร์ลี ริช หมาที่แสดงเป็นเบนจี้ตัวแรก ชื่อว่า ฮิกกิ้นส์ (พ.ศ. 2502 - พ.ศ. 2518) มีอายุถึง 15 ปีแล้วในขณะแสดง หลังจากภาพยนตร์ออกฉายได้ไม่นาน มันก็ตายในปี พ.ศ. 2518เบนจี้ในภาพยนตร์เบนจี้ในภาพยนตร์. - พ.ศ. 2517, Benji (ภาพยนตร์) - พ.ศ. 2520, For the Love of Benji (ภาพยนตร์) - พ.ศ. 2521, Benji\'s Very Own Christmas Story (ภาพยนตร์โทรทัศน์) - พ.ศ. 2523, Oh Heavenly Dog (ภาพยนตร์) - พ.ศ. 2523, Benji at Work (ภาพยนตร์โทรทัศน์) - พ.ศ. 2524, Benji Takes a Dive at Marineland (ภาพยนตร์โทรทัศน์) - พ.ศ. 2526, Benji, Zax & the Alien Prince (ภาพยนตร์ซีรีส์) - พ.ศ. 2530, Benji the Hunted (ภาพยนตร์) - พ.ศ. 
2547, Benji: Off the Leash! (ภาพยนตร์) - พ.ศ. 2550, Benji: The Barkening (ภาพยนตร์)</doc>\n', 'question': 'สุนัขตัวแรกรับบทเป็นเบนจี้ในภาพยนตร์เรื่อง Benji ที่ออกฉายในปี พ.ศ. 2517 มีชื่อว่าอะไร', 'question_id': 1} {'answers': {'answer': ['ชาร์ลี ริช'], 'answer_begin_position': [482], 'answer_end_position': [492]}, 'article_id': 115035, 'context': '<doc id="115035" url="https://th.wikipedia.org/wiki?curid=115035" title="เบนจี้">เบนจี้ เบนจี้ () เป็นชื่อตัวละครหมาพันทางแสนรู้ ที่ปรากฏอยู่ในภาพยนตร์หลายเรื่องที่เขียนบท และกำกับโดย โจ แคมป์ ในช่วงทศวรรษ 1970 ถึง 1980 ภาพยนตร์เรื่องแรกในชุด ใช้ชื่อเรื่องว่า เบนจี้ เช่นเดียวกับตัวละคร ถ่ายทำที่เมืองดัลลัส รัฐเทกซัส ฉายครั้งแรกในปี พ.ศ. 2517 ภาพยนตร์ได้รับการเสนอชื่อเข้าชิงรางวัลออสการ์ และได้รางวัลลูกโลกทองคำ สาขาเพลงประกอบยอดเยี่ยม จากเพลง Benji\'s Theme (I Feel Love) ร้องโดย ชาร์ลี ริช หมาที่แสดงเป็นเบนจี้ตัวแรก ชื่อว่า ฮิกกิ้นส์ (พ.ศ. 2502 - พ.ศ. 2518) มีอายุถึง 15 ปีแล้วในขณะแสดง หลังจากภาพยนตร์ออกฉายได้ไม่นาน มันก็ตายในปี พ.ศ. 2518เบนจี้ในภาพยนตร์เบนจี้ในภาพยนตร์. - พ.ศ. 2517, Benji (ภาพยนตร์) - พ.ศ. 2520, For the Love of Benji (ภาพยนตร์) - พ.ศ. 2521, Benji\'s Very Own Christmas Story (ภาพยนตร์โทรทัศน์) - พ.ศ. 2523, Oh Heavenly Dog (ภาพยนตร์) - พ.ศ. 2523, Benji at Work (ภาพยนตร์โทรทัศน์) - พ.ศ. 2524, Benji Takes a Dive at Marineland (ภาพยนตร์โทรทัศน์) - พ.ศ. 2526, Benji, Zax & the Alien Prince (ภาพยนตร์ซีรีส์) - พ.ศ. 2530, Benji the Hunted (ภาพยนตร์) - พ.ศ. 2547, Benji: Off the Leash! (ภาพยนตร์) - พ.ศ. 2550, Benji: The Barkening (ภาพยนตร์)</doc>\n', 'question': "เพลง Benji's Theme ใช้ประกอบภาพยนตร์เรื่อง Benji ในปีพ.ศ. 
2517 ขับร้องโดยใคร", 'question_id': 2035} ``` ### Data Fields ``` { "question_id": question id "article_id": article id "context": article text "question": question "answers": { "answer": answer text "answer_begin_position": position in context where the answer begins "answer_end_position": exclusive upper bound of the answer position in context } } ``` ### Data Splits | | train | valid | |-------------------------|-------------|-------------| | # questions | 4000 | 74 | | # avg words in context | 1186.740750 | 1016.459459 | | # avg words in question | 14.325500 | 12.743243 | | # avg words in answer | 3.279750 | 4.608108 | ## Dataset Creation ### Curation Rationale [PyThaiNLP](https://github.com/PyThaiNLP/) created `thaiqa_squad` as a [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) version of [thaiqa](http://copycatch.in.th/thai-qa-task.html). [thaiqa](https://aiforthai.in.th/corpus.php) is part of [The 2nd Question answering program from Thai Wikipedia](http://copycatch.in.th/thai-qa-task.html) of [National Software Contest 2020](http://nsc.siit.tu.ac.th/GENA2/login.php). ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Wikipedia authors for contexts and [NECTEC](https://www.nectec.or.th/en/) for questions and answer annotations ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [NECTEC](https://www.nectec.or.th/en/) ### Personal and Sensitive Information All contents are from Wikipedia. No personal and sensitive information is expected to be included. ## Considerations for Using the Data ### Social Impact of Dataset - open-domain, extractive question answering in Thai ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check the dataset license for additional information.
The contexts include `<doc>` tags at the start and end. ## Additional Information ### Dataset Curators [NECTEC](https://www.nectec.or.th/en/) for the original [thaiqa](https://aiforthai.in.th/corpus.php). SQuAD formatting by [PyThaiNLP](https://github.com/PyThaiNLP/). ### Licensing Information CC-BY-NC-SA 3.0 ### Citation Information No clear citation guidelines from source: https://aiforthai.in.th/corpus.php SQuAD version: https://github.com/PyThaiNLP/thaiqa_squad ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
thaisum
--- annotations_creators: - no-annotation language_creators: - found language: - th license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: ThaiSum dataset_info: features: - name: title dtype: string - name: body dtype: string - name: summary dtype: string - name: type dtype: string - name: tags dtype: string - name: url dtype: string config_name: thaisum splits: - name: train num_bytes: 2945472406 num_examples: 358868 - name: validation num_bytes: 118437310 num_examples: 11000 - name: test num_bytes: 119496704 num_examples: 11000 download_size: 647582078 dataset_size: 3183406420 --- # Dataset Card for ThaiSum ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/nakhunchumpolsathien/ThaiSum - **Repository:** https://github.com/nakhunchumpolsathien/ThaiSum - 
**Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/nakhunchumpolsathien ### Dataset Summary ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists. ### Supported Tasks and Leaderboards summarization, language modeling ### Languages Thai ## Dataset Structure ### Data Instances ``` {'body': 'กีเก ซานเชซ ฟลอเรส\xa0 กุนซือเลือดกระทิงของทีมวัตฟอร์ด\xa0 เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง,สำนักข่าวต่างประเทศรายงานวันที่ 27 ก.ย. ว่า กีเก ซานเชซ ฟลอเรส\xa0 ผู้จัดการทีมชาวสเปน ของ แตนอาละวาด วัตฟอร์ด\xa0 ยอมรับทีมของเขาเล่นได้ไม่ดีพอเอง ในเกมพรีเมียร์ลีก อังกฤษ นัดเปิดบ้านพ่าย อินทรีผงาด คริสตัล พาเลซ 0-1 เมื่อคืนวันอาทิตย์ที่ผ่านมา,เกมนี้จุดเปลี่ยนมาอยู่ที่การได้จุดโทษในช่วงครึ่งหลังของ คริสตัล พาเลซ ซึ่งไม่ค่อยชัดเจนเท่าไหร่ว่า อัลลัน นียอม นั้นไปทำฟาล์วใส่ วิลฟรีด ซาฮา ในเขตโทษหรือไม่ แต่ผู้ตัดสินก็ชี้เป็นจุดโทษ ซึ่ง โยอัน กาบาย สังหารไม่พลาด และเป็นประตูชัยช่วยให้ คริสตัล พาเลซ เอาชนะ วัตฟอร์ด ไป 1-0 และเป็นการพ่ายแพ้ในบ้านนัดแรกของวัตฟอร์ดในฤดูกาลนี้อีกด้วย,ฟลอเรส กล่าวว่า มันเป็นเรื่องยากในการหยุดเกมรุกของคริสตัล พาเลซ ซึ่งมันอึดอัดจริงๆสำหรับเรา เราเล่นกันได้ไม่ดีนักในตอนที่ได้ครองบอล เราต้องเล่นทางริมเส้นให้มากกว่านี้ เราไม่สามารถหยุดเกมสวนกลับของพวกเขาได้ และแนวรับของเราก็ยืนไม่เป็นระเบียบสักเท่าไหร่ในช่วงครึ่งแรก ส่วนเรื่องจุดโทษการตัดสินใจขั้นสุดท้ายมันอยู่ที่ผู้ตัดสิน ซึ่งมันเป็นการตัดสินใจที่สำคัญ ผมเองก็ไม่รู้ว่าเขาตัดสินถูกหรือเปล่า บางทีมันอาจเป็นจุดที่ตัดสินเกมนี้เลย แต่เราไม่ได้แพ้เกมนี้เพราะจุดโทษ เราแพ้ในวันนี้เพราะเราเล่นไม่ดีและคริสตัล พาเลซ เล่นดีกว่าเรา เราไม่ได้มีฟอร์มการเล่นที่ดีในเกมนี้เลย', 'summary': 'กีเก ซานเชซ ฟลอเรส กุนซือเลือดกระทิงของทีมวัตฟอร์ด เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 
0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง', 'tags': 'พรีเมียร์ลีก,วัตฟอร์ด,คริสตัล พาเลซ,กีเก ซานเชซ ฟลอเรส,ข่าวกีฬา,ข่าว,ไทยรัฐออนไลน์', 'title': 'ฟลอเรส รับ วัตฟอร์ดห่วยเองเกมพ่ายพาเลซคาบ้าน', 'type': '', 'url': 'https://www.thairath.co.th/content/528322'} ``` ### Data Fields - `title`: title of the article - `body`: body of the article - `summary`: summary of the article - `type`: type of the article, if any - `tags`: tags of the article, separated by `,` - `url`: URL of the article ### Data Splits train/valid/test: 358868 / 11000 / 11000 ## Dataset Creation ### Curation Rationale Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq models often require large-scale training data to achieve effective results. Although many impressive advancements have been made in the text summarization field, most summarization studies focus on resource-rich languages. The progress of Thai text summarization is still far behind. The dearth of large-scale datasets keeps Thai text summarization in its infancy. To the best of our knowledge, there is no large-scale dataset for Thai text summarization available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard. ### Source Data #### Initial Data Collection and Normalization We used a Python library named Scrapy to crawl articles from several news websites, namely Thairath, Prachatai, ThaiPBS, and The Standard. We first collected news URLs provided in their sitemaps. During web crawling, we used the HTML markup and metadata available in HTML pages to identify article text, summary, headline, tags, and labels. Collected articles were published online from 2014 to August 2020. <br> <br> We further performed a data-cleansing process to minimize noisy data. We filtered out articles whose article text or summary was missing. 
Articles whose article text contained fewer than 150 words or whose summary contained fewer than 15 words were removed. We also discarded articles containing at least one of the following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’ (funny video), and ‘สรุปข่าว’ (highlight news). Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated an abstractedness score between each summary and its article text. The abstractedness score is written formally as: <br> <center><a href="https://www.codecogs.com/eqnedit.php?latex=\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" target="_blank"><img src="https://latex.codecogs.com/gif.latex?\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" title="\begin{equation} \frac{|S-A|}{r} \times 100 \end{equation}" /></a></center><br> <br>where 𝑆 denotes the set of summary tokens, 𝐴 denotes the set of article tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles whose 1-gram abstractedness score was higher than 60%. <br><br> It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, with the newmm tokenizing engine, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are no clear word or sentence delimiters in the Thai language. Therefore, using different tokenization engines may result in different word/sentence segmentations. After the data-cleansing process, the ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to that of a well-known English document summarization dataset, the CNN/Daily Mail dataset. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compression rate, and content diversity. 
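The 1-gram abstractedness filter above can be sketched in Python. This is a minimal illustration under one reading of the formula — the score as the percentage of unique summary 1-grams that never occur in the article — with whitespace-split English tokens standing in for PyThaiNLP's newmm segmentation of Thai text:

```python
def abstractedness(summary_tokens, article_tokens):
    """Percentage of unique summary 1-grams that never appear in the article."""
    s, a = set(summary_tokens), set(article_tokens)
    if not s:
        return 0.0
    return len(s - a) / len(s) * 100

# Whitespace tokens stand in for PyThaiNLP's newmm segmentation:
article = "the match ended in a goalless draw after ninety minutes".split()
summary = "match ended in a goalless draw".split()
score = abstractedness(summary, article)  # 0.0 -- every summary token occurs in the article
keep = score <= 60  # the card's filter drops articles scoring above 60%
```

The denominator here is the number of unique summary tokens; the card defines 𝑟 as the total number of summary tokens, so a faithful reimplementation may differ slightly on summaries with repeated words.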
For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb). #### Dataset Statistics The ThaiSum dataset consists of 358,868 articles. The average lengths of article texts and summaries are approximately 530 and 37 words, respectively. As mentioned earlier, we also collected the headlines, tags, and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels. |Dataset Size| 358,868 | articles | |:---|---:|---:| |Avg. Article Length| 529.5 | words| |Avg. Summary Length | 37.3 | words| |Avg. Headline Length | 12.6 | words| |Unique Vocabulary Size | 407,355 | words| |Occurring > 10 times | 81,761 | words| |Unique News Tag Size | 538,059 | tags| |Unique News Label Size | 59 | labels| #### Who are the source language producers? Journalists of the respective articles ### Annotations #### Annotation process `summary`, `type` and `tags` were created by the journalists who wrote the articles and/or their publishers. #### Who are the annotators? `summary`, `type` and `tags` were created by the journalists who wrote the articles and/or their publishers. ### Personal and Sensitive Information All data are public news articles. No personal or sensitive information is expected to be included. ## Considerations for Using the Data ### Social Impact of Dataset - News summarization in Thai - Language modeling for Thai news ### Discussion of Biases - [ThaiPBS](https://www.thaipbs.or.th/home) [receives funding from the Thai government](https://www.bangkokbiznews.com/blog/detail/648740). 
- [Thairath](https://www.thairath.co.th/) is known as [the most popular newspaper in Thailand](https://mgronline.com/onlinesection/detail/9620000058532); it has no clear political leaning. - [The Standard](https://thestandard.co/) is a left-leaning online magazine. - [Prachathai](https://prachatai.com/) is a left-leaning, human-rights-focused news site. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [@nakhunchumpolsathien](https://github.com/nakhunchumpolsathien/) [@caramelWaffle](https://github.com/caramelWaffle) ### Licensing Information MIT License ### Citation Information ``` @mastersthesis{chumpolsathien_2020, title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization}, author={Chumpolsathien, Nakhun}, year={2020}, school={Beijing Institute of Technology} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
EleutherAI/the_pile
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - other multilinguality: - monolingual pretty_name: The Pile size_categories: - unknown source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: the-pile --- # Dataset Card for The Pile ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://pile.eleuther.ai/ - **Repository:** https://github.com/EleutherAI/the-pile - **Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027) - **Leaderboard:** - **Point of Contact:** [EleutherAI](mailto:contact@eleuther.ai) ### Dataset Summary The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset is in English (`EN`). ## Dataset Structure ### Data Instances #### all ``` { 'meta': {'pile_set_name': 'Pile-CC'}, 'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...' } ``` #### enron_emails ``` { 'text': 'Name\t\t\tNew Title\t\t\t\tEffective Date\t\t\tMid Year promotion Yes/No\n\nFloyd, Jodie\t\tSr Cust Svc Rep (no change)\t\t7/16/01\t\t\t\tNo\n\nBuehler, Craig\t\tSr Mkt/Sup Analyst (no change)\t\t7/16/01\t\t\t\tNo\n\nWagoner, Mike\t\tTeam Advisor - Gas Control\t\t7/1/01\t\t\t\tNo\n\nClapper, Karen\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nGreaney, Chris\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nWilkens, Jerry\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nMinton, Kevin\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nCox, Don\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nHanagriff, Richard\tSr Accounting Control Spec\t\t8/1/01\t\t\t\tYes\n\n\nThanks,\nMS' 'meta': "{}", } ``` #### europarl ``` { 'text': 'Uvádění biocidních přípravků na trh - Nový návrh revize týkající se biocidních přípravků (rozprava) \nPředsedající\nDalším bodem je společná rozprava o následujících tématech:\nzpráva paní Sârbuové za Výbor pro životní prostředí, veřejné zdraví a bezpečnost potravin o návrhu...' 'meta': "{'language': 'cs'}", } ``` #### free_law ``` { 'meta': "{'case_jurisdiction': 'scotus.tar.gz', 'case_ID': '110921.json','date_created': '2010-04-28T17:12:49Z'}", 'text': '\n461 U.S. 238 (1983)\nOLIM ET AL.\nv.\nWAKINEKONA\nNo. 81-1581.\nSupreme Court of United States.\nArgued...' 
} ``` #### hacker_news ``` { 'text': "\nChina Deserves Donald Trump - rm2889\nhttps://www.nytimes.com/2019/05/21/opinion/china-trump-trade.html\n======\nNotPaidToPost\n> so he’d be wise to curb his nationalistic “no-one-tells-China-what-to-do”\n> bluster\n\nThis comment highlights both ignorance of Chinese history and continuing\nAmerican arrogance.\n\nChina has been painfully dictated what to do during the last 200 years. This\nhas had a profound effect on the country and has led to the collapse of\nimperial rule and the drive to 'rejuvenate'...", 'meta': "{'id': '19979654'}", } ``` #### nih_exporter ``` { 'text': "The National Domestic Violence Hotline (NDVH) and the National Dating Abuse Helpline (NDAH), which are supported by the Division of Family Violence Prevention and Services within the Family and Youth Services Bureau, serve as critical partners in the intervention, prevention, and resource assistance efforts of the network of family violence, domestic violence, and dating violence service providers. They provide crisis intervention and support services; information about resources on domestic...", 'meta': " {'APPLICATION_ID': 100065}", } ``` #### pubmed ``` { 'meta': {'pmid': 11409574, 'language': 'eng'}, 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age. Systematic review of the published literature. Out-patient clinics, emergency departments and hospitalisation wards in 23 health centres from 10 countries. Cohort studies reporting the frequency of hypoxaemia in children under 5 years of age with ALRI, and the association between hypoxaemia and the risk of dying. 
Prevalence of hypoxaemia measured in children with ARI and relative risks for the association between the severity of illness and the frequency of hypoxaemia, and between hypoxaemia and the risk of dying. Seventeen published studies were found that included 4,021 children under 5 with acute respiratory infections (ARI) and reported the prevalence of hypoxaemia. Out-patient children and those with a clinical diagnosis of upper ARI had a low risk of hypoxaemia (pooled estimate of 6% to 9%). The prevalence increased to 31% and to 43% in patients in emergency departments and in cases with clinical pneumonia, respectively, and it was even higher among hospitalised children (47%) and in those with radiographically confirmed pneumonia (72%). The cumulated data also suggest that hypoxaemia is more frequent in children living at high altitude. Three papers reported an association between hypoxaemia and death, with relative risks varying between 1.4 and 4.6. Papers describing predictors of hypoxaemia have focused on clinical signs for detecting hypoxaemia rather than on identifying risk factors for developing this complication. Hypoxaemia is a common and potentially lethal complication of ALRI in children under 5, particularly among those with severe disease and those living at high altitude. Given the observed high prevalence of hypoxaemia and its likely association with increased mortality, efforts should be made to improve the detection of hypoxaemia and to provide oxygen earlier to more children with severe ALRI.' } ``` #### pubmed_central ``` { 'meta': "{id': 'PMC5595690'}", 'text': 'Introduction {#acel12642-sec-0001}\n============\n\nAlzheimer\\\'s disease (AD), the most common cause of...' 
} ``` #### ubuntu_irc ``` { 'text': "#ubuntu 2004-07-05\n* Window 3\n* \tServer: [0] <None>\n* \tScreen: 0x817e90c\n* \tGeometry Info: [0 11 0 11 11 11] \n* \tCO, LI are [94 49] \n* \tCurrent channel: #ubuntu\n* \tQuery User: <None> \n*\tPrompt: <None>\n* \tSecond status line is OFF\n* \tSplit line is ON triple is OFF\n* \tLogging is ON\n* \tLogfile is irclogs/ubuntu.log\n* \tNotification is OFF\n* \tHold mode is OFF\n* \tWindow level is NONE\n* \tLastlog level is ALL\n* \tNotify level is ALL\n<mdz> lifeless: using tla effectively for all packages in Warty requ...", 'meta': "{'channel': 'ubuntu', 'month': 7}" } ``` #### uspto ``` { 'text': "1. Field of the Invention\nIn an extensive plant breeding program, Grant Merrill, originator and now deceased, originated a large number of new and distinct varieties of fruit trees, and which included the herein-claimed variety of peach tree. Such plant breeding program was undertaken in originator's experimental orchard located near Exeter, Tulare County, Calif.\n2. Prior Varieties\nAmong the existent varieties of peach trees which were known to originator, particular reference is made to Gemfree (U.S. Plant Pat. No. 1,409) and June Lady (U.S. Plant Pat. No. 
3,022) hereinafter mentioned for the purpose of comparison.", 'meta': "{'bibliographic_information': {'Patent Number': 'PP0049700', 'Series Code': '6', 'Application Number': '2845415', 'Application Type': '6', 'Art unit': '337', 'Application Filing Date': '19810720', 'Title of Invention': 'Peach tree (A3-10)', 'Issue Date': '19830104', 'Number of Claims': '1', 'Exemplary Claim Number(s)': '1', 'Primary Examiner': 'Bagwill; Robert E.', 'Number of Drawing Sheets': '1', 'Number of figures': '1'}, 'source_file': 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1983/pftaps19830104_wk01.zip', 'abstract': 'A peach tree which is large, vigorous, and spreading; foliated with large, lanceolate leaves having a finely serrate margin, a petiole of medium length and thickness, and medium size, reniform glands; blooms from medium size, conic, plump, pubescent buds; the flowers, medium in blooming period compared with other varieties, being of medium size, and pink; and is a regular and very productive bearer of medium but variable size, round truncate, clingstone fruit having yellow skin substantially overspread with red, yellow flesh mottled with red adjacent the skin, and an amber stone.', 'classifications': [{'OCL': ['Plt', '43'], 'EDF': ['3'], 'ICL': ['A01H', '503'], 'FSC': ['Plt'], 'FSS': ['43']}], 'inventors': [{'inventor name': 'Merrill, deceased; Grant', 'Street': '325 Breese Ave.', 'City': 'late of Red Bluff', 'State': 'CA'}, {'inventor name': 'Merrill, executrix; by Lucile B.', 'Street': '325 Breese Ave.', 'City': 'Red Bluff', 'State': 'CA', 'Zip code': '96080'}]}" } ``` #### github ``` { 'text': "/* filesystem.c\n * Filesystem utility routines\n *\n * Wireshark - Network traffic analyzer\n * By Gerald Combs <gerald@wireshark.org>\n * Copyright 1998 Gerald Combs\n *\n * SPDX-License-Identifier: GPL-2.0-or-later\n */\n\n#include <config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n\n#include <glib.h>...", 'meta': 
"{'repo_name': 'wireshark/wireshark', 'stars': '2789', 'repo_language': 'C', 'file_name': 'packet-mpeg-audio-template.c', 'mime_type': 'text/x-c'}" } ``` ### Data Fields #### all - `text` (str): Text. - `meta` (dict): Metadata of the data instance with keys: - pile_set_name: Name of the subset. #### enron_emails - `text` (str): Text. - `meta` (str): Metadata of the data instance. #### europarl - `text` (str): Text. - `meta` (str): Metadata of the data instance with: language. #### free_law - `text` (str): Text. - `meta` (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created. #### hacker_news - `text` (str): Text. - `meta` (str): Metadata of the data instance with: id. #### nih_exporter - `text` (str): Text. - `meta` (str): Metadata of the data instance with: APPLICATION_ID. #### pubmed - `text` (str): Text. - `meta` (str): Metadata of the data instance with: pmid, language. #### pubmed_central - `text` (str): Text. - `meta` (str): Metadata of the data instance with: ID of the data instance. #### ubuntu_irc - `text` (str): Text. - `meta` (str): Metadata of the data instance with: channel, month. #### uspto - `text` (str): Text. - `meta` (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications, inventors. #### github - `text` (str): Text. - `meta` (str): Metadata of the data instance with: repo_name, stars, repo_language, file_name, mime_type. ### Data Splits The "all" configuration is composed of 3 splits: train, validation and test. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Please refer to the specific license depending on the subset you use: - PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE) ### Citation Information ``` @misc{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy}, year={2020}, eprint={2101.00027}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
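As the data instances above show, most subset configurations serialize `meta` as a stringified Python dict rather than a parsed dict (only `all` and a few others return a dict directly). A minimal standard-library sketch for recovering the dict, built around a record constructed from the `europarl` example above:

```python
import ast

def parse_meta(meta):
    """Return a Pile `meta` field as a dict, whether it arrives parsed or as a string repr."""
    if isinstance(meta, dict):
        return meta
    return ast.literal_eval(meta.strip()) if meta.strip() else {}

# Constructed record mirroring the `europarl` instance shown above:
record = {'text': 'Uvádění biocidních přípravků na trh ...', 'meta': "{'language': 'cs'}"}
language = parse_meta(record['meta']).get('language')  # 'cs'
```

Note that `ast.literal_eval` will raise `SyntaxError`/`ValueError` on malformed reprs such as the `pubmed_central` example above (`"{id': 'PMC5595690'}"`), so production code may want to catch those exceptions.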
the_pile_books3
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - mit multilinguality: - monolingual pretty_name: Books3 size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: features: - name: title dtype: string - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 108392037000 num_examples: 196639 download_size: 39516981435 dataset_size: 108392037000 --- # Dataset Card for the_pile_books3 ## Table of Contents - [Dataset Card for the_pile_books3](#dataset-card-for-the_pile_books3) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - 
[Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/soskek/bookcorpus/issues/27#issuecomment-716104208) - **Repository:** [Needs More Information] - **Paper:** [arXiv](https://arxiv.org/abs/2101.00027) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset is Shawn Presser's work and is part of the EleutherAI/The Pile dataset. It contains all of Bibliotik in plain .txt form, i.e., 197,000 books processed in exactly the same way as bookcorpusopen (a.k.a. books1). It seems to be similar to OpenAI's mysterious "books2" dataset referenced in their papers. Unfortunately, OpenAI will not give details, so we know very little about any differences. People suspect it's "all of libgen", but this is purely conjecture. |download_size|36.8 GiB| |dataset_size|100.9 GiB| ### Supported Tasks and Leaderboards This dataset is used for Language Modeling. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances ``` {'title': '07 LEGO Ninjago - The Search For Zane (Scholastic) - Kate Howard (retail)' 'text': '\n\nTITLE PAGE\n\nFROM THE JOURNAL OF SENSEI GARMADON\n\nCHAPTER 1\n\nCHAPTER 2\n\nCHAPTER 3\n\nCHAPTER 4\n\nCHAPTER 5\n\nCHAPTER 6\n\nCHAPTER 7\n\nCHAPTER 8\n\nCHAPTER 9\n\nCOPYRIGHT\n\nThroughout Ninjago", five ninja are well-known for their speed, strength, and  of course  the elemental powers that help them protect our world from evil. But there are others who possess some of the same powers as the ninja. Others who may not always use their powers for good.\n\nBefore now, the ninja believed they were special. 
They di.......'} ``` ### Data Fields - `title`: title of the book - `text`: text content of the book ### Data Splits |split|num examples| -------------------------------- |train|196640| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information MIT ### Citation Information ``` @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ``` ### Contributions Thanks to [@shawwn](https://github.com/shawwn) for creating this dataset. Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
the_pile_openwebtext2
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - mit multilinguality: - monolingual pretty_name: OpenWebText2 size_categories: - 10M<n<100M source_datasets: - original task_categories: - text-generation - fill-mask - text-classification task_ids: - language-modeling - masked-language-modeling - text-scoring dataset_info: features: - name: title dtype: string - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 68571017395 num_examples: 17103059 download_size: 29344276480 dataset_size: 68571017395 --- # Dataset Card for the_pile_openwebtext2 ## Table of Contents - [Dataset Card for the_pile_openwebtext2](#dataset-card-for-the_pile_openwebtext2) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) 
- [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://openwebtext2.readthedocs.io/en/latest/ - **Repository:** [GitHub](https://github.com/EleutherAI/openwebtext2) - **Paper:** https://arxiv.org/abs/2101.00027 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary OpenWebText2 is part of the EleutherAI/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus, covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released. |download_size|27.3 GiB| |dataset_size|63.8 GiB| ### Supported Tasks and Leaderboards This dataset is used for Language Modeling. ### Languages This dataset is in English. ## Dataset Structure ### Data Instances ``` This example was too long and was cropped: {'title': 'Xiaomi Mi Note 10 Gearbest Coupon Promo Code [6+128GB] [France Warehouse]', 'text': '27% off Xiaomi Mi Note 10 (CC9 Pro) 108MP Penta Camera Mobile Phone Global Version Online Smartphone – Black Gearbest Coupon Promo Code\n\nGearbest Coupon Price :$439.99\n\nRegular Price : $603.19 Your Save : $163.20 Coupon Limit: 100 times Warehouse: France Expires : September 30, 2020 Coupon Valid for...', 'reddit_scores': [6],} ``` ### Data Fields - `title`: title of the web page - `text`: text content of the web page - `reddit_scores`: scores of the Reddit submissions that mention this web page, as a list of integers ### Data Splits |split|num examples| -------------------------------- |train|17103059| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ``` ### Contributions [researcher2](https://github.com/researcher2) wrote much of this code, with inspiration and some straight copying of the scraping code found [here](https://github.com/yet-another-account/openwebtext/).<br/> [sdtblck](https://github.com/sdtblck/) kindly put together the Colab notebook and performed a chunk of the scraping. <br/> [leogao2](https://github.com/leogao2/) provided overall design guidance, lm_dataformat, and performed another chunk of scraping. <br /> [Colaboratory](https://colab.research.google.com/) VMs helped with about 10% of our overall scraping. <br /> [The Eye](http://the-eye.eu/) hosts the processed datasets.<br /> [Read The Docs](https://readthedocs.org/) hosts our documentation.<br /> [@richarddwang](https://github.com/richarddwang) added this dataset to HF/datasets.
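The `reddit_scores` field described above can be used to filter pages by the attention their submissions received on Reddit. A small sketch over toy records shaped like the data instance shown earlier; the threshold of 3 echoes the original OpenWebTextCorpus karma heuristic but is purely illustrative here:

```python
def best_reddit_score(record):
    """Highest score among the Reddit submissions linking to this page."""
    return max(record.get('reddit_scores') or [0])

def filter_by_score(records, min_score=3):
    """Keep pages whose best-scoring Reddit submission meets the threshold."""
    return [r for r in records if best_reddit_score(r) >= min_score]

# Toy records shaped like the data instance shown above:
records = [
    {'title': 'kept', 'text': '...', 'reddit_scores': [6]},
    {'title': 'dropped', 'text': '...', 'reddit_scores': [1, 2]},
]
high_attention = filter_by_score(records)  # only the record titled 'kept' survives
```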
the_pile_stack_exchange
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: Stack Exchange size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: features: - name: domain dtype: string - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 11075434609 num_examples: 5096117 download_size: 36802959360 dataset_size: 11075434609 --- # Dataset Card for Stack Exchange ## Table of Contents - [Dataset Card for Stack Exchange](#dataset-card-for-the_pile_stack_exchange) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation 
Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/EleutherAI/stackexchange-dataset) - **Repository:** [Needs More Information] - **Paper:** [arXiv](https://arxiv.org/abs/2101.00027) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset is part of the EleutherAI/The Pile dataset and was built for language modeling from the Stack Exchange data dump, an anonymized dump of all user-contributed content on the Stack Exchange network. |download_size|34.28 GiB| |dataset_size|10.3 GiB| ### Supported Tasks and Leaderboards The dataset is used for Language Modeling. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances ``` {'domain': 'chemistry', 'text':"\nQ: \n \nReviving old questions or asking a new one? \n \nI'm relatively new to the Chemistry SE community, and sometimes when I go to ask a question, I notice that the same (or similar) question has already been asked. However, the previous question doesn't have a good answer (or is unanswered). In this case, is it better to ask the question again in a new post (which might be marked as duplicate) or comment on the old post (which might be several years old)? In other words, what are the customs of this site in regards to reviving old questions/discussions?\n\nA:\n\nAs Martin commented, it really depends on the type of question. In any case, you always have the following possibilities:\n\nAsk a new question\nEdit the question to bump it to the first page\nAdd a bounty\nBring it to the attention of people in chat\n\nConsider the following cases:\n\nI have exactly the same question as asked and unanswered before!\n\nIf you ask a new question which turns out to be the same question, it may be closed as a dupe (depending on whether users remember the old question). Not the ideal option.\nIf you can find something substantial to edit and bump the question, do so. Maybe add a comment that you would really love an answer.\nIf you can spare some rep for a bounty (50 is usually enough), do so.\nYou can always bring it to the attention of people in chat.\n",} ``` ### Data Fields - `domain`: Stack Exchange domain of the sample - `text`: Text content containing both the question and the answer ### Data Splits |split|num examples| -------------------------------- |train|5096117| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ``` ### Contributions Thanks to [sdtblck](https://github.com/sdtblck) for creating the dataset. Thanks to [richarddwang](https://github.com/richarddwang) for adding the dataset.
tilde_model
--- annotations_creators: - found language_creators: - found language: - bg - cs - da - de - el - en - es - et - fi - fr - hr - hu - is - it - lt - lv - mt - nl - 'no' - pl - pt - ro - ru - sk - sl - sq - sr - sv - tr - uk license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - n<1K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: tilde-model-corpus pretty_name: Tilde Multilingual Open Data for European Languages dataset_info: - config_name: bg-el features: - name: id dtype: string - name: translation dtype: translation: languages: - bg - el splits: - name: train num_bytes: 258081 num_examples: 455 download_size: 64430 dataset_size: 258081 - config_name: cs-en features: - name: id dtype: string - name: translation dtype: translation: languages: - cs - en splits: - name: train num_bytes: 709168 num_examples: 3100 download_size: 201503 dataset_size: 709168 - config_name: de-hr features: - name: id dtype: string - name: translation dtype: translation: languages: - de - hr splits: - name: train num_bytes: 180148538 num_examples: 683194 download_size: 49585877 dataset_size: 180148538 - config_name: en-no features: - name: id dtype: string - name: translation dtype: translation: languages: - en - 'no' splits: - name: train num_bytes: 73797124 num_examples: 348141 download_size: 17852861 dataset_size: 73797124 - config_name: es-pt features: - name: id dtype: string - name: translation dtype: translation: languages: - es - pt splits: - name: train num_bytes: 3808423 num_examples: 13464 download_size: 1160892 dataset_size: 3808423 --- # Dataset Card for Tilde Multilingual Open Data for European Languages ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - 
[Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/TildeMODEL.php - **Repository:** None - **Paper:** https://www.aclweb.org/anthology/W17-0235.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/TildeMODEL.php E.g. `dataset = load_dataset("tilde_model", lang1="en", lang2="lv")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
time_dial
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: 'TimeDial: Temporal Commonsense Reasoning in Dialog' size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification paperswithcode_id: timedial tags: - dialog-act-classification dataset_info: features: - name: id dtype: int32 - name: conversation sequence: string - name: correct1 dtype: string - name: correct2 dtype: string - name: incorrect1 dtype: string - name: incorrect1_rule dtype: string - name: incorrect2 dtype: string - name: incorrect2_rule dtype: string splits: - name: test num_bytes: 1449879 num_examples: 1446 download_size: 1613806 dataset_size: 1449879 --- # Dataset Card for TimeDial: Temporal Commonsense Reasoning in Dialog ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
[TimeDial](https://github.com/google-research-datasets/timedial) - **Paper:** [TimeDial: Temporal Commonsense Reasoning in Dialog](https://arxiv.org/abs/2106.04571) - **Point of Contact:** [Please create an issue in the official repository](https://github.com/google-research-datasets/timedial) ### Dataset Summary TimeDial presents a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from DailyDialog ([Li et al., 2017](https://www.aclweb.org/anthology/I17-1099/)), a multi-turn dialog corpus. In order to establish strong baselines and inform future model development, the authors conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, their qualitative error analyses show that the models often rely on shallow, spurious features (particularly text matching) instead of truly reasoning over the context. Detailed experiments and analyses can be found in their [paper](https://arxiv.org/pdf/2106.04571.pdf). ### Supported Tasks and Leaderboards To be updated soon. ### Languages The dataset is in English only. ## Dataset Structure ### Data Instances ``` { "id": 1, "conversation": [ "A: We need to take the accounts system offline to carry out the upgrade . But don't worry , it won't cause too much inconvenience . We're going to do it over the weekend .", "B: How long will the system be down for ?", "A: We'll be taking everything offline in about two hours ' time . It'll be down for a minimum of twelve hours . If everything goes according to plan , it should be up again by 6 pm on Saturday .", "B: That's fine . We've allowed <MASK> to be on the safe side ."
], "correct1": "forty-eight hours", "correct2": "50 hours ", "incorrect1": "two hours ", "incorrect1_rule": "Rule 1", "incorrect2": "12 days ", "incorrect2_rule": "Rule 2" } ``` ### Data Fields - "id": Unique identifier, as a integer - "conversation": Dialog context with <MASK> span, as a string - "correct1": Original <MASK> span, as a string - "correct2": Additional correct option provided by annotators, as a string - "incorrect1": Incorrect option #1 provided by annotators, as a string - "incorrect1_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string - "incorrect2": Incorrect option #2 provided by annotators, as a string - "incorrect2_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string ### Data Splits TimeDial dataset consists only of a test set of 1,104 dialog instances with 2 correct and 2 incorrect options with the following statistics: | | Avg. | |-----|-----| |Turns per Dialog | 11.7 | |Words per Turn | 16.5 | |Time Spans per Dialog | 3 | ## Dataset Creation ### Curation Rationale Although previous works have studied temporal reasoning in natural language, they have either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction, and/or dealt with limited context, such as single-sentence-based question answering and natural language inference. In this work, they make the first systematic study of temporal commonsense reasoning in a multi-turn dialog setting. The task involves complex reasoning that requires operations like comparison and arithmetic reasoning over temporal expressions and the need for commonsense and world knowledge. ### Source Data #### Initial Data Collection and Normalization The TIMEDIAL dataset is derived from DailyDialog data (Li et al., 2017), which is a multi-turn dialog corpus containing over 13K English dialogs. 
Dialogs in this dataset consist of turn-taking between two people on topics over 10 broad categories, ranging from daily lives to financial topics. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process The data collection process involves two steps: (1) identifying dialogs that are rich in temporal expressions, and (2) asking human annotators to provide correct and incorrect options for cloze instances derived from these dialogs. More details about the two steps: 1) Temporal expression identification: Here, they select dialogs that are rich with temporal information, in order to focus on complex temporal reasoning that arises in natural dialogs. Temporal expressions are automatically identified with SUTime, an off-the-shelf temporal expression detector. They keep only the dialogs with more than 3 temporal expressions and at least one expression that contains numerals like “two weeks” (as opposed to non-numeric spans, like “summer”, “right now”, and “later”). In their initial experiment, they observe that language models can often correctly predict these non-numerical temporal phrases. 2) Human annotated options: Next, they mask spans in the dialogs. For a dialog, they mask out each temporal expression that contains numerals, each resulting in a cloze question that is then sent for human annotation. This resulted in 1,526 instances for annotation. For each masked span in each dialog, they obtain human annotation to derive a fixed set of correct and incorrect options given the context. Concretely, given a masked dialog and a seed correct answer (i.e., the original text) for the masked span, the annotators were asked to (1) come up with an alternative correct answer that makes sense in the dialog adhering to commonsense, and (2) formulate two incorrect answers that have no possibility of making sense in the dialog context.
They highlight all time expressions in the context to make it easier for annotators to select reasonable time expressions. #### Who are the annotators? They are English linguists. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information TimeDial dataset is licensed under CC BY-NC-SA 4.0. ### Citation Information ``` @inproceedings{qin-etal-2021-timedial, title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}", author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal", booktitle = "Proc. of ACL", year = "2021" } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
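The four options in each instance (`correct1`, `correct2`, `incorrect1`, `incorrect2`) can be assembled into a standard multiple-choice example with two gold labels. A minimal sketch over the documented fields (the helper name and deterministic shuffle are illustrative choices, not part of the dataset):

```python
import random

def to_multiple_choice(instance, seed=0):
    """Build (context, options, correct_indices) from a TimeDial instance."""
    labeled = [
        (instance["correct1"].strip(), True),
        (instance["correct2"].strip(), True),
        (instance["incorrect1"].strip(), False),
        (instance["incorrect2"].strip(), False),
    ]
    random.Random(seed).shuffle(labeled)  # deterministic option order
    context = "\n".join(instance["conversation"])
    options = [text for text, _ in labeled]
    correct = [i for i, (_, is_correct) in enumerate(labeled) if is_correct]
    return context, options, correct

# Hypothetical instance shaped like the Data Instances example.
instance = {
    "conversation": ["A: How long will the system be down?",
                     "B: We've allowed <MASK> to be on the safe side."],
    "correct1": "forty-eight hours",
    "correct2": "50 hours ",
    "incorrect1": "two hours ",
    "incorrect2": "12 days ",
}
context, options, correct = to_multiple_choice(instance)
# Exactly two of the four shuffled options are marked correct.
```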
times_of_india_news_headlines
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - en license: - cc0-1.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text2text-generation - text-retrieval task_ids: - document-retrieval - fact-checking-retrieval - text-simplification paperswithcode_id: null pretty_name: Times of India News Headlines dataset_info: features: - name: publish_date dtype: string - name: headline_category dtype: string - name: headline_text dtype: string splits: - name: train num_bytes: 260939306 num_examples: 3297173 download_size: 0 dataset_size: 260939306 --- # Dataset Card for Times of India News Headlines ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/J7BYRX - **Repository:** [More Information Needed] - **Paper:** [More Information Needed] - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More 
Information Needed] ### Dataset Summary This news dataset is a persistent historical archive of notable events in the Indian subcontinent from the start of 2001 to mid-2020, recorded in real time by the journalists of India. It contains approximately 3.3 million events published by the Times of India. As a news agency, the Times Group reaches a very wide audience across Asia and dwarfs every other agency in the quantity of English articles published per day. Due to the heavy daily volume over multiple years, this data offers a deep insight into Indian society, its priorities, events, issues and talking points, and how they have unfolded over time. It is possible to slice this dataset into smaller pieces for a more focused analysis, based on one or more facets. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances ``` { 'publish_date': '20010530', 'headline_category': 'city.kolkata', 'headline_text': 'Malda fake notes' } ``` ### Data Fields - `publish_date`: Date of publishing in yyyyMMdd format - `headline_category`: Category of the event as ASCII, dot-delimited values - `headline_text`: Headline of the article in English (2020-07-10) ### Data Splits This dataset has no splits. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Rohit Kulkarni.
### Licensing Information The data is under the [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/) ### Citation Information ``` @data{DVN/DPQMQH_2020, author = {Kulkarni, Rohit}, publisher = {Harvard Dataverse}, title = {{Times of India News Headlines}}, year = {2020}, version = {V1}, doi = {10.7910/DVN/DPQMQH}, url = {https://doi.org/10.7910/DVN/DPQMQH} } ``` ### Contributions Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
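The `publish_date` (yyyyMMdd) and dot-delimited `headline_category` fields decode with the standard library alone, which makes slicing the archive by year or top-level category straightforward. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
from datetime import datetime

def parse_row(row):
    """Decode one headline row into (date, category path, headline text)."""
    date = datetime.strptime(row["publish_date"], "%Y%m%d").date()
    categories = row["headline_category"].split(".")  # e.g. ['city', 'kolkata']
    return date, categories, row["headline_text"]

# Row shaped like the Data Instances example above.
row = {
    "publish_date": "20010530",
    "headline_category": "city.kolkata",
    "headline_text": "Malda fake notes",
}
date, categories, headline = parse_row(row)
# date -> 2001-05-30, top-level category -> 'city'
```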
timit_asr
--- pretty_name: TIMIT annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other license_details: "LDC-User-Agreement-for-Non-Members" multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - automatic-speech-recognition task_ids: [] paperswithcode_id: timit train-eval-index: - config: clean task: automatic-speech-recognition task_id: speech_recognition splits: train_split: train eval_split: test col_mapping: file: path text: text metrics: - type: wer name: WER - type: cer name: CER --- # Dataset Card for timit_asr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TIMIT Acoustic-Phonetic Continuous Speech Corpus](https://catalog.ldc.upenn.edu/LDC93S1) - **Repository:** [Needs More Information] - **Paper:** [TIMIT: Dataset designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition 
systems.](https://catalog.ldc.upenn.edu/LDC93S1) - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-timit) - **Point of Contact:** [Needs More Information] ### Dataset Summary The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems. TIMIT contains broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic and word transcriptions as well as a 16-bit, 16kHz speech waveform file for each utterance. Corpus design was a joint effort among the Massachusetts Institute of Technology (MIT), SRI International (SRI) and Texas Instruments, Inc. (TI). The speech was recorded at TI, transcribed at MIT and verified and prepared for CD-ROM production by the National Institute of Standards and Technology (NIST). The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1: ``` To use TIMIT you have to download it manually. Please create an account and download the dataset from https://catalog.ldc.upenn.edu/LDC93S1 Then extract all files in one folder and load the dataset with: `datasets.load_dataset('timit_asr', data_dir='path/to/folder/folder_name')` ``` ### Supported Tasks and Leaderboards - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-timit and ranks models based on their WER. ### Languages The audio is in English. The TIMIT corpus transcriptions have been hand verified. 
Test and training subsets, balanced for phonetic and dialectal coverage, are specified. Tabular computer-searchable information is included as well as written documentation. ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided. ``` { 'file': '/data/TRAIN/DR4/MMDM0/SI681.WAV', 'audio': {'path': '/data/TRAIN/DR4/MMDM0/SI681.WAV', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'text': 'Would such an act of refusal be useful?', 'phonetic_detail': [{'start': '0', 'stop': '1960', 'utterance': 'h#'}, {'start': '1960', 'stop': '2466', 'utterance': 'w'}, {'start': '2466', 'stop': '3480', 'utterance': 'ix'}, {'start': '3480', 'stop': '4000', 'utterance': 'dcl'}, {'start': '4000', 'stop': '5960', 'utterance': 's'}, {'start': '5960', 'stop': '7480', 'utterance': 'ah'}, {'start': '7480', 'stop': '7880', 'utterance': 'tcl'}, {'start': '7880', 'stop': '9400', 'utterance': 'ch'}, {'start': '9400', 'stop': '9960', 'utterance': 'ix'}, {'start': '9960', 'stop': '10680', 'utterance': 'n'}, {'start': '10680', 'stop': '13480', 'utterance': 'ae'}, {'start': '13480', 'stop': '15680', 'utterance': 'kcl'}, {'start': '15680', 'stop': '15880', 'utterance': 't'}, {'start': '15880', 'stop': '16920', 'utterance': 'ix'}, {'start': '16920', 'stop': '18297', 'utterance': 'v'}, {'start': '18297', 'stop': '18882', 'utterance': 'r'}, {'start': '18882', 'stop': '19480', 'utterance': 'ix'}, {'start': '19480', 'stop': '21723', 'utterance': 'f'}, {'start': '21723', 'stop': '22516', 'utterance': 'y'}, {'start': '22516', 'stop': '24040', 'utterance': 'ux'}, {'start': '24040', 'stop': '25190', 'utterance': 'zh'}, {'start': '25190', 'stop': '27080', 'utterance': 'el'}, {'start': '27080', 'stop': '28160', 
'utterance': 'bcl'}, {'start': '28160', 'stop': '28560', 'utterance': 'b'}, {'start': '28560', 'stop': '30120', 'utterance': 'iy'}, {'start': '30120', 'stop': '31832', 'utterance': 'y'}, {'start': '31832', 'stop': '33240', 'utterance': 'ux'}, {'start': '33240', 'stop': '34640', 'utterance': 's'}, {'start': '34640', 'stop': '35968', 'utterance': 'f'}, {'start': '35968', 'stop': '37720', 'utterance': 'el'}, {'start': '37720', 'stop': '39920', 'utterance': 'h#'}], 'word_detail': [{'start': '1960', 'stop': '4000', 'utterance': 'would'}, {'start': '4000', 'stop': '9400', 'utterance': 'such'}, {'start': '9400', 'stop': '10680', 'utterance': 'an'}, {'start': '10680', 'stop': '15880', 'utterance': 'act'}, {'start': '15880', 'stop': '18297', 'utterance': 'of'}, {'start': '18297', 'stop': '27080', 'utterance': 'refusal'}, {'start': '27080', 'stop': '30120', 'utterance': 'be'}, {'start': '30120', 'stop': '37720', 'utterance': 'useful'}], 'dialect_region': 'DR4', 'sentence_type': 'SI', 'speaker_id': 'MMDM0', 'id': 'SI681' } ``` ### Data Fields - file: A path to the downloaded audio file in .wav format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - text: The transcription of the audio file. - phonetic_detail: The phonemes that make up the sentence. The PHONCODE.DOC contains a table of all the phonemic and phonetic symbols used in TIMIT lexicon. - word_detail: Word level split of the transcript. - dialect_region: The dialect code of the recording. 
- sentence_type: The type of the sentence - 'SA':'Dialect', 'SX':'Compact' or 'SI':'Diverse'. - speaker_id: Unique id of the speaker. The same speaker id can be found for multiple data samples. - id: ID of the data sample. Contains the <SENTENCE_TYPE><SENTENCE_NUMBER>. ### Data Splits The speech material has been subdivided into portions for training and testing. The default train-test split will be made available on data download. The test data alone has a core portion containing 24 speakers: 2 males and 1 female from each of the 8 dialect regions. More information about the test set can be found [here](https://catalog.ldc.upenn.edu/docs/LDC93S1/TESTSET.TXT). ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of recordings of individual speakers, together with speaker metadata. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators The dataset was created by John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L.
Dahlgren, Victor Zue ### Licensing Information [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) ### Citation Information ``` @inproceedings{ title={TIMIT Acoustic-Phonetic Continuous Speech Corpus}, author={Garofolo, John S., et al}, ldc_catalog_no={LDC93S1}, DOI={https://doi.org/10.35111/17gk-bn40}, journal={Linguistic Data Consortium, Philadelphia}, year={1993} } ``` ### Contributions Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
tiny_shakespeare
--- paperswithcode_id: null pretty_name: TinyShakespeare dataset_info: features: - name: text dtype: string splits: - name: test num_bytes: 55780 num_examples: 1 - name: train num_bytes: 1003864 num_examples: 1 - name: validation num_bytes: 55780 num_examples: 1 download_size: 1115394 dataset_size: 1115424 --- # Dataset Card for "tiny_shakespeare" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/karpathy/char-rnn/blob/master/data/tinyshakespeare/input.txt](https://github.com/karpathy/char-rnn/blob/master/data/tinyshakespeare/input.txt) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.11 MB - **Size of the generated dataset:** 1.11 MB - **Total amount of disk used:** 2.23 MB ### Dataset Summary 40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/. To use for e.g. character modelling:

```python
import datasets

# each split holds its full text as a single example
d = datasets.load_dataset('tiny_shakespeare')['train']
text = d[0]['text']

# train split includes vocabulary for other splits
vocabulary = sorted(set(text))
char_to_id = {ch: i for i, ch in enumerate(vocabulary)}
ids = [char_to_id[ch] for ch in text]

# slice into fixed-length current/next character sequences
seq_len = 100
sequences = [ids[i:i + seq_len + 1] for i in range(0, len(ids) - seq_len, seq_len)]
examples = [{'cur_char': s[:-1], 'next_char': s[1:]} for s in sequences]
```

### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.11 MB - **Size of the generated dataset:** 1.11 MB - **Total amount of disk used:** 2.23 MB An example of 'train' looks as follows. ``` { "text": "First Citizen:\nBefore we proceed any further, hear me " } ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature.
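Because each split stores its entire text as one `string` feature, character-level preprocessing reduces to plain string operations. A minimal, self-contained sketch using the example instance above (names like `char_to_id` are illustrative, not part of the dataset):

```python
# Hypothetical text taken from the example instance above (not a download).
text = "First Citizen:\nBefore we proceed any further, hear me "

# Build a character vocabulary and an integer encoding.
vocabulary = sorted(set(text))
char_to_id = {ch: i for i, ch in enumerate(vocabulary)}
encoded = [char_to_id[ch] for ch in text]

# Decoding inverts the lookup exactly.
decoded = "".join(vocabulary[i] for i in encoded)
assert decoded == text
```

The same vocabulary built from the train split can be reused to encode the validation and test splits.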
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default| 1| 1| 1| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{ author={Karpathy, Andrej}, title={char-rnn}, year={2015}, howpublished={\url{https://github.com/karpathy/char-rnn}} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
tlc
--- pretty_name: Thai Literature Corpora (TLC) annotations_creators: - expert-generated - no-annotation language_creators: - expert-generated language: - th license: - unknown multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null dataset_info: - config_name: tlcv1.0 features: - name: ch_num dtype: string - name: title dtype: string - name: text sequence: sequence: string splits: - name: train num_bytes: 32498 num_examples: 1 download_size: 2904472 dataset_size: 32498 - config_name: tlcv2.0 features: - name: ch_num dtype: string - name: title dtype: string - name: text sequence: sequence: string splits: - name: train num_bytes: 32498 num_examples: 1 download_size: 5551710 dataset_size: 32498 - config_name: tnhcv1.0 features: - name: text sequence: string splits: - name: train num_bytes: 25198 num_examples: 152 download_size: 1465403 dataset_size: 25198 --- # Dataset Card for Thai Literature Corpora (TLC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing 
Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://attapol.github.io/tlc.html - **Paper:** - **Leaderboard:** - **Point of Contact:** Jitkapat Sawatphol, Attapol Rutherford; attapolrutherford at gmail.com ### Dataset Summary Thai Literature Corpora (TLC): Corpora of machine-ingestible Thai classical literature texts. It consists of two datasets: ## TLC set Texts from the [Vajirayana Digital Library](https://vajirayana.org/), stored by chapter and stanza (non-tokenized). tlc v.2.0 (6/17/19 : a total of 34 documents, 292,270 lines, 31,790,734 characters) tlc v.1.0 (6/11/19 : a total of 25 documents, 113,981 lines, 28,775,761 characters) ## TNHC set Texts from the Thai National Historical Corpus, stored by line (manually tokenized). tnhc v.1.0 (6/25/19 : a total of 47 documents, 756,478 lines, 13,361,142 characters) ### Supported Tasks and Leaderboards Language Modeling, Language Generation ### Languages Thai ## Dataset Structure ### Data Instances ``` { "ch_num": "๑", "title": "กากี กลอนสุภาพ", "text": [ [ "๏ จักกล่าวอดีตนิทานแต่ปางก่อน\n", "เมื่อครั้งองค์สมเด็จพระชินวร\tยังสัญจรแสวงหาโพธิญาณ\n", "เสวยชาติเป็นสกุณาพระยานก\tจึงชักเรื่องชาดกมาบรรหาร\n", "หวังแสดงแห่งจิตหญิงพาล\tให้ชายชาญรู้เชิงกระสัตรี ฯ\n" ] ] } ``` ### Data Fields - `ch_num`: chapter number in Thai numerals (๑, ๒, ๓, ๔, ๕, ๖, ๗, ๘, ๙, ๑๐, ...) - `title`: chapter name - `text`: each item corresponds to one stanza; each line is a couplet whose two halves are separated by `\t` ### Data Splits tlc v.2.0 (6/17/19 : a total of 34 documents, 292,270 lines, 31,790,734 characters) tlc v.1.0 (6/11/19 : a total of 25 documents, 113,981 lines, 28,775,761 characters)
tnhc v.1.0 (6/25/19 : a total of 47 documents, 756,478 lines, 13,361,142 characters) | | tlc2.0 | tlc1.0 | tnhc | |--------------|------------|------------|------------| | # documents | 34 | 25 | 47 | | # lines | 292,270 | 113,981 | 756,478 | | # characters | 31,790,734 | 28,775,761 | 13,361,142 | ## Dataset Creation ### Curation Rationale Originally, the dataset was compiled for the [Thai Poetry Generator](https://github.com/jitkapat/thaipoetrygenerator) at Chulalongkorn University as the final project for `2209372 Introduction to Computational Linguistics` by [Jitkapat Sawatphol](https://jitkapat.github.io/) (Faculty of Arts, Chulalongkorn University). ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information There is no personal information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Thanks to [Jitkapat Sawatphol](https://jitkapat.github.io/) (Faculty of Arts, Chulalongkorn University) and [Attapol Rutherford](https://attapol.github.io/) (Faculty of Arts, Chulalongkorn University) ### Licensing Information [More Information Needed] ### Citation Information Please cite the following if you make use of the dataset: Jitkapat Sawatphol, and Attapol Rutherford. 2019. **Thai Literature Corpora (TLC)**. BibTeX: ``` @misc{ author={Sawatphol, Jitkapat and Rutherford, Attapol}, title={Thai Literature Corpora}, year={2019}, howpublished={\url{https://attapol.github.io/tlc.html}} } ``` ### Contributions Thanks to [@chameleonTK](https://github.com/chameleonTK) for adding this dataset.
tmu_gfm_dataset
--- annotations_creators: - crowdsourced language_creators: - machine-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: null pretty_name: TMU-GFM-Dataset tags: - grammatical-error-correction dataset_info: features: - name: source dtype: string - name: output dtype: string - name: grammer sequence: int32 - name: fluency sequence: int32 - name: meaning sequence: int32 - name: system dtype: string - name: ave_g dtype: float32 - name: ave_f dtype: float32 - name: ave_m dtype: float32 splits: - name: train num_bytes: 1446144 num_examples: 4221 download_size: 1270197 dataset_size: 1446144 --- # Dataset Card for TMU-GFM-Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [N/A] - **Repository:** https://github.com/tmu-nlp/TMU-GFM-Dataset - **Paper:** [SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical 
Error Correction](https://www.aclweb.org/anthology/2020.coling-main.573.pdf) - **Leaderboard:** [N/A] - **Point of Contact:** Check the paper. ### Dataset Summary The authors collected manual evaluations of the grammaticality, fluency, and meaning preservation of the system outputs for 1,381 sentences from CoNLL 2013. To collect the manual evaluations for various system outputs, each source sentence was corrected by the following five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), recurrent neural network (RNN) (Luong et al., 2015), convolutional neural network (CNN) (Chollampatt and Ng, 2018), self-attention network (SAN) (Vaswani et al., 2017), and SAN with copy mechanism (SAN+Copy) (Zhao et al., 2019). Manual evaluations of grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences. ### Supported Tasks and Leaderboards Grammatical Error Correction ### Languages English ## Dataset Structure ### Data Instances An example from the TMU-GFM-Dataset looks as follows: ``` {'ave_f': 3.4000000953674316, 'ave_g': 3.4000000953674316, 'ave_m': 3.5999999046325684, 'fluency': [3, 4, 3, 4, 3], 'grammer': [3, 4, 3, 4, 3], 'meaning': [3, 4, 4, 4, 3], 'output': 'After all, there will be an endless battle between the technology and human mentality.', 'source': 'Afterall there will be an endless battle between the technology and human mentality.', 'system': 'lstm,cnn'} ``` ### Data Fields There are 9 columns in the TMU-GFM-Dataset. - source: source sentence. - output: system output sentence. - grammer: Grammaticality annotations by 5 annotators (the field name preserves the original spelling). - fluency: Fluency annotations by 5 annotators. - meaning: Meaning preservation annotations by 5 annotators. - system: Which system the output sentence is from. - ave_g: Average grammaticality score. - ave_f: Average fluency score. - ave_m: Average meaning score.
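As the instance above suggests, the `ave_*` columns are the means of the corresponding five-annotator score lists (stored as float32, hence the rounding in the example). A small sanity-check sketch over that single example, without loading the full dataset:

```python
# Example instance from the Data Instances section: the averaged columns
# should equal the means of the five per-annotator scores.
example = {
    'grammer': [3, 4, 3, 4, 3], 'ave_g': 3.4,
    'fluency': [3, 4, 3, 4, 3], 'ave_f': 3.4,
    'meaning': [3, 4, 4, 4, 3], 'ave_m': 3.6,
}
for scores_key, ave_key in [('grammer', 'ave_g'),
                            ('fluency', 'ave_f'),
                            ('meaning', 'ave_m')]:
    mean = sum(example[scores_key]) / len(example[scores_key])
    # allow a small tolerance for float32 storage
    assert abs(mean - example[ave_key]) < 1e-3
```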
### Data Splits The authors divided the dataset into train/dev/test splits of 3,376/422/423 sentences, which they used for fine-tuning BERT in their paper. ## Dataset Creation ### Curation Rationale The authors proposed a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). They note that previous studies have shown reference-less metrics to be promising; however, existing metrics were not optimized for manual evaluation of system output because no dataset of system outputs with manual evaluations existed. To achieve a better correlation with manual evaluation, they created a dataset to optimize each sub-metric to the manual evaluation of GEC systems. Their annotators evaluated the output of five typical GEC systems. ### Source Data #### Initial Data Collection and Normalization The authors collected manual evaluations of the grammaticality, fluency, and meaning preservation of the system outputs for 1,381 sentences from CoNLL 2013. To collect the manual evaluations for various system outputs, each source sentence was corrected by the following five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), recurrent neural network (RNN) (Luong et al., 2015), convolutional neural network (CNN) (Chollampatt and Ng, 2018), self-attention network (SAN) (Vaswani et al., 2017), and SAN with copy mechanism (SAN+Copy) (Zhao et al., 2019). #### Who are the source language producers? machine-generated ### Annotations #### Annotation process After excluding duplicate corrected sentences, manual evaluations of grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences, as follows: - Grammaticality: Annotators evaluated the grammatical correctness of the system output. The authors followed the five-point scale evaluation criteria (4: Perfect, 3: Comprehensible, 2: Somewhat comprehensible, 1: Incomprehensible, and 0: Other) proposed by Heilman et al. (2014).
- Fluency: Annotators evaluated how natural the sentence sounds to native speakers. The authors followed the criteria (4: Extremely natural, 3: Somewhat natural, 2: Somewhat unnatural, and 1: Extremely unnatural) proposed by Lau et al. (2015). - Meaning preservation: Annotators evaluated the extent to which the meaning of the source sentence is preserved in the system output. The authors followed the criteria (4: Identical, 3: Minor differences, 2: Moderate differences, 1: Substantially different, and 0: Other) proposed by Xu et al. (2016). Finally, the authors created a dataset with manual evaluations for a total of 4,221 sentences, excluding sentences for which three or more annotators answered “0: Other.” #### Who are the annotators? Five native English speakers recruited via Amazon Mechanical Turk. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{yoshimura-etal-2020-reference, title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction", author = "Yoshimura, Ryoma and Kaneko, Masahiro and Kajiwara, Tomoyuki and Komachi, Mamoru", booktitle = "Proceedings of the 28th International Conference on Computational Linguistics", month = dec, year = "2020", address = "Barcelona, Spain (Online)", publisher = "International Committee on Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.coling-main.573", pages = "6516--6522", abstract = "We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.", } ``` ### Contributions Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset.
told-br
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - pt license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: told-br pretty_name: ToLD-Br language_bcp47: - pt-BR tags: - hate-speech-detection dataset_info: - config_name: multilabel features: - name: text dtype: string - name: homophobia dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes - name: obscene dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes - name: insult dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes - name: racism dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes - name: misogyny dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes - name: xenophobia dtype: class_label: names: '0': zero_votes '1': one_vote '2': two_votes '3': three_votes splits: - name: train num_bytes: 2978006 num_examples: 21000 download_size: 2430416 dataset_size: 2978006 - config_name: binary features: - name: text dtype: string - name: label dtype: class_label: names: '0': not-toxic '1': toxic splits: - name: train num_bytes: 1709560 num_examples: 16800 - name: test num_bytes: 216297 num_examples: 2100 - name: validation num_bytes: 212153 num_examples: 2100 download_size: 853322 dataset_size: 2138010 --- # Dataset Card for "ToLD-Br" ## Table of Contents - [Dataset Card for "ToLD-Br"](#dataset-card-for-told-br) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset 
Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://paperswithcode.com/dataset/told-br - **Repository:** https://github.com/JAugusto97/ToLD-Br - **Paper:** https://arxiv.org/abs/2010.04543 - **Leaderboard:** https://paperswithcode.com/sota/hate-speech-detection-on-told-br - **Point of Contact:** joao.leite@estudante.ufscar.br ### Dataset Summary ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming to create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender). Each tweet was labeled by three annotators in 6 possible categories: LGBTQ+phobia, Xenophobia, Obscene, Insult, Misogyny and Racism. ### Supported Tasks and Leaderboards - `text-classification-other-hate-speech-detection`: The dataset can be used to train a model for Hate Speech Detection, either using its multi-label classes or by grouping them into a binary Hate vs. Non-Hate class.
A [BERT](https://huggingface.co/docs/transformers/model_doc/bert) model can be fine-tuned to perform this task and achieve a 0.75 F1-score on the binary version. ### Languages The text in the dataset is in Brazilian Portuguese, as spoken by Twitter users. The associated BCP-47 code is `pt-BR`. ## Dataset Structure ### Data Instances ToLD-Br has two versions: binary and multilabel. Multilabel: A data point consists of the tweet text (string) followed by 6 categories that have values ranging from 0 to 3, the number of votes from annotators for that specific class on homophobia, obscene, insult, racism, misogyny and xenophobia. An example from multilabel ToLD-Br looks as follows: ``` {'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso' 'homophobia': 0 'obscene': 0 'insult': 2 'racism': 0 'misogyny': 0 'xenophobia': 0} ``` Binary: A data point consists of the tweet text (string) followed by a binary class "toxic" with values 0 or 1. An example from binary ToLD-Br looks as follows: ``` {'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso' 'toxic': 1} ``` ### Data Fields Multilabel: - text: A string representing the tweet posted by a user. Mentions of other users are anonymized by replacing the mention with a @user tag. - homophobia: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as homophobic. - obscene: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as obscene. - insult: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as an insult. - racism: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as racist.
- misogyny: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as misogynistic. - xenophobia: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as xenophobic. Binary: - text: A string representing the tweet posted by a user. Mentions of other users are anonymized by replacing the mention with a @user tag. - label: numerical binary value {0, 1} representing whether the respective text is toxic/abusive or not. ### Data Splits Multilabel: The entire dataset consists of 21,000 examples. Binary: The train set consists of 16,800 examples, the validation set consists of 2,100 examples and the test set consists of 2,100 examples. ## Dataset Creation ### Curation Rationale Despite Portuguese being the 5th most spoken language in the world and Brazil being the 4th country with the most unique Twitter users, Brazilian Portuguese was underrepresented in the hate-speech detection task. Only two other datasets were available, one of them in European Portuguese. ToLD-Br is 4x bigger than both of these datasets combined, and neither of them had multiple annotators per instance. In addition, this work proposes a plural and diverse group of annotators carefully selected to avoid inserting bias into the annotation. ### Source Data #### Initial Data Collection and Normalization Data was collected over 15 days in August 2019 using GATE Cloud's Tweet Collector. Ten million tweets were collected using two methods: a keyword-based method and a user-mention method.
The first method collected tweets mentioning the following keywords: viado,veado,viadinho,veadinho,viadao,veadao,bicha,bixa,bichinha,bixinha,bichona,bixona,baitola,sapatão,sapatao,traveco,bambi,biba,boiola,marica,gayzão,gayzao,flor,florzinha,vagabundo,vagaba,desgraçada,desgraçado,desgracado,arrombado,arrombada,foder,fuder,fudido,fodido,cú,cu,pinto,pau,pal,caralho,caraio,carai,pica,cacete,rola,porra,escroto,buceta,fdp,pqp,vsf,tnc,vtnc,puto,putinho,acéfalo,acefalo,burro,idiota,trouxa,estúpido,estupido,estúpida,canalha,demente,retardado,retardada,verme,maldito,maldita,ridículo,ridiculo,ridícula,ridicula,morfético,morfetico,morfética,morfetica,lazarento,lazarenta,lixo,mongolóide,mongoloide,mongol,asqueroso,asquerosa,cretino,cretina,babaca,pilantra,neguinho,neguinha,pretinho,pretinha,escurinho,escurinha,pretinha,pretinho,crioulo,criolo,crioula,criola,macaco,macaca,gorila,puta,vagabunda,vagaba,mulherzinha,piranha,feminazi,putinha,piriguete,vaca,putinha,bahiano,baiano,baianagem,xingling,xing ling,xing-ling,carioca,paulista,sulista,mineiro,gringo The list of most followed Brazilian Twitter accounts can be found [here](https://assuperlistas.com/2022/01/21/os-100-brasileiros-mais-seguidos-do-twitter/). #### Who are the source language producers? The language producers are Twitter users from Brazil, speakers of Portuguese. ### Annotations #### Annotation process A form was published at the Federal University of São Carlos asking for volunteers to annotate our dataset. 129 people volunteered and 42 were selected according to their demographics in order to create a diverse and plural annotation group. Guidelines were produced and presented to the annotators. The entire process was done asynchronously because of the Covid-19 pandemic. The tool used was Google Sheets. Annotators were grouped into 14 teams of three annotators each. Each group annotated a respective file containing 1500 tweets. 
Annotators didn't have contact with each other, nor did they know that other annotators were labelling the same tweets as they were. #### Who are the annotators? Annotators were people from the Federal University of São Carlos' Facebook group. Their demographics are described below: | Gender | | |--------|--------| | Male | 18 | | Female | 24 | | Sexual Orientation | | |--------------------|----| | Heterosexual | 22 | | Bisexual | 12 | | Homosexual | 5 | | Pansexual | 3 | | Ethnicity | | |--------------|----| | White | 25 | | Brown | 9 | | Black | 5 | | Asian | 2 | | Non-Declared | 1 | Ages range from 18 to 37 years old. Annotators were paid R$50 ($10) to label 1500 examples each. ### Personal and Sensitive Information The dataset contains sensitive information for homophobia, obscene, insult, racism, misogyny and xenophobia. Tweets were anonymized by replacing user mentions with a @user tag. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better hate speech detection systems. A system that succeeds at this task would be able to identify hate speech tweets associated with the classes available in the dataset. ### Discussion of Biases An effort was made to reduce annotation bias by selecting annotators with a diverse demographic background. In terms of data collection, by using keywords and user mentions, we are introducing some bias to the data, restricting our scope to the list of keywords and users we created. ### Other Known Limitations Because of the massive data skew for the multilabel classes, it is extremely hard to train a robust model for this version of the dataset. We advise using it for analysis and experimentation only. The binary version of the dataset is robust enough to train a classifier with up to 76% F1-score. 
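The binary configuration collapses the six vote-count columns described above into a single toxic/not-toxic label. The exact aggregation rule is defined in the authors' repository; the sketch below assumes a simple majority vote (at least 2 of 3 annotators flagging any category), which is consistent with the example instance shown in the Data Instances section:

```python
# Multilabel instance from the Data Instances section: per-category
# vote counts, each in {0, 1, 2, 3}.
example = {
    'homophobia': 0, 'obscene': 0, 'insult': 2,
    'racism': 0, 'misogyny': 0, 'xenophobia': 0,
}

# Assumed rule (not the published conversion): toxic if a majority
# (>= 2 of 3 annotators) flagged at least one category.
toxic = int(any(votes >= 2 for votes in example.values()))
```

For this tweet the sketch yields a toxic label, matching the `'toxic': 1` shown for the same text in the binary configuration.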
## Additional Information ### Dataset Curators The dataset was created by João Augusto Leite and Diego Furtado Silva, both from the Federal University of São Carlos (BR), and by Carolina Scarton and Kalina Bontcheva, both from the University of Sheffield (UK). ### Licensing Information ToLD-Br is licensed under a Creative Commons BY-SA 4.0 license. ### Citation Information ``` @article{DBLP:journals/corr/abs-2010-04543, author = {Joao Augusto Leite and Diego F. Silva and Kalina Bontcheva and Carolina Scarton}, title = {Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis}, journal = {CoRR}, volume = {abs/2010.04543}, year = {2020}, url = {https://arxiv.org/abs/2010.04543}, eprinttype = {arXiv}, eprint = {2010.04543}, timestamp = {Tue, 15 Dec 2020 16:10:16 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@JAugusto97](https://github.com/JAugusto97) for adding this dataset.
totto
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - table-to-text task_ids: [] paperswithcode_id: totto pretty_name: ToTTo dataset_info: features: - name: id dtype: int32 - name: table_page_title dtype: string - name: table_webpage_url dtype: string - name: table_section_title dtype: string - name: table_section_text dtype: string - name: table list: list: - name: column_span dtype: int32 - name: is_header dtype: bool - name: row_span dtype: int32 - name: value dtype: string - name: highlighted_cells sequence: sequence: int32 - name: example_id dtype: string - name: sentence_annotations sequence: - name: original_sentence dtype: string - name: sentence_after_deletion dtype: string - name: sentence_after_ambiguity dtype: string - name: final_sentence dtype: string - name: overlap_subset dtype: string splits: - name: train num_bytes: 652754806 num_examples: 120761 - name: validation num_bytes: 47277039 num_examples: 7700 - name: test num_bytes: 40883586 num_examples: 7700 download_size: 187724372 dataset_size: 740915431 --- # Dataset Card for ToTTo ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/google-research-datasets/ToTTo - **Paper:** https://arxiv.org/abs/2004.14373 - **Leaderboard:** https://github.com/google-research-datasets/ToTTo#leaderboard - **Point of Contact:** [totto@google.com](mailto:totto@google.com) ### Dataset Summary ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A sample training set is provided below ``` {'example_id': '1762238357686640028', 'highlighted_cells': [[13, 2]], 'id': 0, 'overlap_subset': 'none', 'sentence_annotations': {'final_sentence': ['A Favorita is the telenovela aired in the 9 pm timeslot.'], 'original_sentence': ['It is also the first telenovela by the writer to air in the 9 pm timeslot.'], 'sentence_after_ambiguity': ['A Favorita is the telenovela aired in the 9 pm timeslot.'], 'sentence_after_deletion': ['It is the telenovela air in the 9 pm timeslot.']}, 'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Run'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Title'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Chapters'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Author'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Director'}, {'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 
'Ibope Rating'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '59'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'June 5, 2000— February 2, 2001'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Laços de Família'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Manoel Carlos'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Ricardo Waddington'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.9'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '60'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'February 5, 2001— September 28, 2001'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Porto dos Milagres'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Aguinaldo Silva Ricardo Linhares'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Marcos Paulo Simões'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '44.6'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '61'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'October 1, 2001— June 14, 2002'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'O Clone'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Glória Perez'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Jayme Monjardim'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '47.0'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '62'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'June 17, 2002— February 14, 2003'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Esperança'}, {'column_span': 1, 
'is_header': False, 'row_span': 1, 'value': '209'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Benedito Ruy Barbosa'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Luiz Fernando'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '37.7'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '63'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'February 17, 2003— October 10, 2003'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Mulheres Apaixonadas'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Manoel Carlos'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Ricardo Waddington'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.6'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '64'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'October 13, 2003— June 25, 2004'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Celebridade'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Gilberto Braga'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Dennis Carvalho'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.0'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '65'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'June 28, 2004— March 11, 2005'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Senhora do Destino'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '221'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Aguinaldo Silva'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '50.4'}], [{'column_span': 
1, 'is_header': False, 'row_span': 1, 'value': '66'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'March 14, 2005— November 4, 2005'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'América'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Glória Perez'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Jayme Monjardim Marcos Schechtman'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '49.4'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '67'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'November 7, 2005— July 7, 2006'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Belíssima'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Sílvio de Abreu'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Denise Saraceni'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '48.5'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '68'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'July 10, 2006— March 2, 2007'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Páginas da Vida'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Manoel Carlos'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Jayme Monjardim'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '46.8'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '69'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'March 5, 2007— September 28, 2007'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Paraíso Tropical'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '179'}, 
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Gilberto Braga Ricardo Linhares'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Dennis Carvalho'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '42.8'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '70'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'October 1, 2007— May 31, 2008'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Duas Caras'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '210'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Aguinaldo Silva'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Wolf Maya'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '41.1'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '71'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'June 2, 2008— January 16, 2009'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'A Favorita'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '197'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'João Emanuel Carneiro'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Ricardo Waddington'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '39.5'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '72'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'January 19, 2009— September 11, 2009'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Caminho das Índias'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '203'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Glória Perez'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Marcos Schechtman'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '38.8'}], [{'column_span': 1, 'is_header': False, 'row_span': 1, 
'value': '73'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'September 14, 2009— May 14, 2010'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Viver a Vida'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '209'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Manoel Carlos'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Jayme Monjardim'}, {'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '35.6'}]], 'table_page_title': 'List of 8/9 PM telenovelas of Rede Globo', 'table_section_text': '', 'table_section_title': '2000s', 'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_8/9_PM_telenovelas_of_Rede_Globo'} ``` Please note that in the test set, sentence annotations are not available, so the values inside `sentence_annotations` can be safely ignored. ### Data Fields - `table_webpage_url` (`str`): Table webpage URL. - `table_page_title` (`str`): Table metadata with context about the table. - `table_section_title` (`str`): Table metadata with context about the table. - `table_section_text` (`str`): Table metadata with context about the table. - `table` (`List[List[Dict]]`): The outer list represents rows and the inner lists represent columns. Each Dict has the fields: - `column_span` (`int`) - `is_header` (`bool`) - `row_span` (`int`) - `value` (`str`) - `highlighted_cells` (`List[[row_index, column_index]]`): Where each `[row_index, column_index]` pair indicates that `table[row_index][column_index]` is highlighted. - `example_id` (`str`): A unique id for this example. - `sentence_annotations`: Consists of the `original_sentence` and the sequence of revisions performed to produce the `final_sentence`.
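To make the relationship between `highlighted_cells` and `table` concrete, here is a minimal sketch using a hand-made toy table that mirrors the real schema (this is an illustration, not the official data loader):

```python
# Sketch: resolve highlighted_cells against the nested table structure
# described above. Each [row_index, column_index] pair indexes into table.
example = {
    "table": [
        [{"column_span": 1, "is_header": True, "row_span": 1, "value": "Title"},
         {"column_span": 1, "is_header": True, "row_span": 1, "value": "Chapters"}],
        [{"column_span": 1, "is_header": False, "row_span": 1, "value": "A Favorita"},
         {"column_span": 1, "is_header": False, "row_span": 1, "value": "197"}],
    ],
    "highlighted_cells": [[1, 0]],
}

highlighted_values = [
    example["table"][row][col]["value"]
    for row, col in example["highlighted_cells"]
]
print(highlighted_values)  # ['A Favorita']
```

Note that in the presence of `column_span`/`row_span` greater than 1, the indices refer to the cell's position in the nested list, not to its visual position in the rendered table.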
### Data Splits ``` DatasetDict({ train: Dataset({ features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'], num_rows: 120761 }) validation: Dataset({ features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'], num_rows: 7700 }) test: Dataset({ features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'], num_rows: 7700 }) }) ``` ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{parikh2020totto, title={{ToTTo}: A Controlled Table-To-Text Generation Dataset}, author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan}, booktitle={Proceedings of EMNLP}, year={2020} } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
trec
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification paperswithcode_id: trecqa pretty_name: Text Retrieval Conference Question Answering dataset_info: features: - name: text dtype: string - name: coarse_label dtype: class_label: names: '0': ABBR '1': ENTY '2': DESC '3': HUM '4': LOC '5': NUM - name: fine_label dtype: class_label: names: '0': ABBR:abb '1': ABBR:exp '2': ENTY:animal '3': ENTY:body '4': ENTY:color '5': ENTY:cremat '6': ENTY:currency '7': ENTY:dismed '8': ENTY:event '9': ENTY:food '10': ENTY:instru '11': ENTY:lang '12': ENTY:letter '13': ENTY:other '14': ENTY:plant '15': ENTY:product '16': ENTY:religion '17': ENTY:sport '18': ENTY:substance '19': ENTY:symbol '20': ENTY:techmeth '21': ENTY:termeq '22': ENTY:veh '23': ENTY:word '24': DESC:def '25': DESC:desc '26': DESC:manner '27': DESC:reason '28': HUM:gr '29': HUM:ind '30': HUM:title '31': HUM:desc '32': LOC:city '33': LOC:country '34': LOC:mount '35': LOC:other '36': LOC:state '37': NUM:code '38': NUM:count '39': NUM:date '40': NUM:dist '41': NUM:money '42': NUM:ord '43': NUM:other '44': NUM:period '45': NUM:perc '46': NUM:speed '47': NUM:temp '48': NUM:volsize '49': NUM:weight splits: - name: train num_bytes: 385090 num_examples: 5452 - name: test num_bytes: 27983 num_examples: 500 download_size: 359212 dataset_size: 413073 --- # Dataset Card for "trec" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - 
[Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://cogcomp.seas.upenn.edu/Data/QA/QC/](https://cogcomp.seas.upenn.edu/Data/QA/QC/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.36 MB - **Size of the generated dataset:** 0.41 MB - **Total amount of disk used:** 0.78 MB ### Dataset Summary The Text REtrieval Conference (TREC) Question Classification dataset contains 5,452 labeled questions in the training set and another 500 in the test set. The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words, and the vocabulary size is 8,700. Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and 500 questions from TREC 10, which serve as the test set. These questions were manually labeled.
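The integer class labels can be mapped back to their names with a small lookup. The sketch below uses the coarse label order listed in this card; it is illustrative only — when loading with the `datasets` library, `dataset.features["coarse_label"].int2str` provides the same mapping:

```python
# Coarse label names in the order given by this card (indices 0..5).
# Illustrative mapping, not an official utility.
COARSE_LABELS = ["ABBR", "ENTY", "DESC", "HUM", "LOC", "NUM"]

def decode_coarse(label: int) -> str:
    """Map an integer coarse_label to its class name."""
    return COARSE_LABELS[label]

example = {
    "text": "How did serfdom develop in and then leave Russia ?",
    "coarse_label": 2,
}
print(decode_coarse(example["coarse_label"]))  # DESC
```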
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language in this dataset is English (`en`). ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 0.36 MB - **Size of the generated dataset:** 0.41 MB - **Total amount of disk used:** 0.78 MB An example of 'train' looks as follows. ``` { 'text': 'How did serfdom develop in and then leave Russia ?', 'coarse_label': 2, 'fine_label': 26 } ``` ### Data Fields The data fields are the same among all splits. - `text` (`str`): Text of the question. - `coarse_label` (`ClassLabel`): Coarse class label. Possible values are: - 'ABBR' (0): Abbreviation. - 'ENTY' (1): Entity. - 'DESC' (2): Description and abstract concept. - 'HUM' (3): Human being. - 'LOC' (4): Location. - 'NUM' (5): Numeric value. - `fine_label` (`ClassLabel`): Fine class label. Possible values are: - ABBREVIATION: - 'ABBR:abb' (0): Abbreviation. - 'ABBR:exp' (1): Expression abbreviated. - ENTITY: - 'ENTY:animal' (2): Animal. - 'ENTY:body' (3): Organ of body. - 'ENTY:color' (4): Color. - 'ENTY:cremat' (5): Invention, book and other creative piece. - 'ENTY:currency' (6): Currency name. - 'ENTY:dismed' (7): Disease and medicine. - 'ENTY:event' (8): Event. - 'ENTY:food' (9): Food. - 'ENTY:instru' (10): Musical instrument. - 'ENTY:lang' (11): Language. - 'ENTY:letter' (12): Letter like a-z. - 'ENTY:other' (13): Other entity. - 'ENTY:plant' (14): Plant. - 'ENTY:product' (15): Product. - 'ENTY:religion' (16): Religion. - 'ENTY:sport' (17): Sport. - 'ENTY:substance' (18): Element and substance. - 'ENTY:symbol' (19): Symbols and sign. - 'ENTY:techmeth' (20): Techniques and method. - 'ENTY:termeq' (21): Equivalent term. - 'ENTY:veh' (22): Vehicle. - 'ENTY:word' (23): Word with a special property. - DESCRIPTION: - 'DESC:def' (24): Definition of something. - 'DESC:desc' (25): Description of something. 
- 'DESC:manner' (26): Manner of an action. - 'DESC:reason' (27): Reason. - HUMAN: - 'HUM:gr' (28): Group or organization of persons - 'HUM:ind' (29): Individual. - 'HUM:title' (30): Title of a person. - 'HUM:desc' (31): Description of a person. - LOCATION: - 'LOC:city' (32): City. - 'LOC:country' (33): Country. - 'LOC:mount' (34): Mountain. - 'LOC:other' (35): Other location. - 'LOC:state' (36): State. - NUMERIC: - 'NUM:code' (37): Postcode or other code. - 'NUM:count' (38): Number of something. - 'NUM:date' (39): Date. - 'NUM:dist' (40): Distance, linear measure. - 'NUM:money' (41): Price. - 'NUM:ord' (42): Order, rank. - 'NUM:other' (43): Other number. - 'NUM:period' (44): Lasting time of something - 'NUM:perc' (45): Percent, fraction. - 'NUM:speed' (46): Speed. - 'NUM:temp' (47): Temperature. - 'NUM:volsize' (48): Size, area and volume. - 'NUM:weight' (49): Weight. ### Data Splits | name | train | test | |---------|------:|-----:| | default | 5452 | 500 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{li-roth-2002-learning, title = "Learning Question Classifiers", author = "Li, Xin and Roth, Dan", booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics", year = "2002", url = "https://www.aclweb.org/anthology/C02-1150", } @inproceedings{hovy-etal-2001-toward, title = "Toward Semantics-Based Answer Pinpointing", author = "Hovy, Eduard and Gerber, Laurie and Hermjakob, Ulf and Lin, Chin-Yew and Ravichandran, Deepak", booktitle = "Proceedings of the First International Conference on Human Language Technology Research", year = "2001", url = "https://www.aclweb.org/anthology/H01-1069", } ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
trivia_qa
--- annotations_creators: - crowdsourced language_creators: - machine-generated language: - en license: - unknown multilinguality: - monolingual paperswithcode_id: triviaqa pretty_name: TriviaQA size_categories: - 10K<n<100K - 100K<n<1M source_datasets: - original task_categories: - question-answering - text2text-generation task_ids: - open-domain-qa - open-domain-abstractive-qa - extractive-qa - abstractive-qa dataset_info: - config_name: rc features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 12749652867 num_examples: 138384 - name: validation num_bytes: 1662321436 num_examples: 17944 - name: test num_bytes: 1577710751 num_examples: 17210 download_size: 2665779500 dataset_size: 15989685054 - config_name: rc.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer 
struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 106884466 num_examples: 138384 - name: validation num_bytes: 14060078 num_examples: 17944 - name: test num_bytes: 3668151 num_examples: 17210 download_size: 2665779500 dataset_size: 124612695 - config_name: unfiltered features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 25019623548 num_examples: 87622 - name: validation num_bytes: 3038803991 num_examples: 11313 - name: test num_bytes: 2906455559 num_examples: 10832 download_size: 3298328560 dataset_size: 30964883098 - config_name: unfiltered.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title 
dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 63301342 num_examples: 87622 - name: validation num_bytes: 8297118 num_examples: 11313 - name: test num_bytes: 2320908 num_examples: 10832 download_size: 632549060 dataset_size: 73919368 - config_name: rc.web features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 9408852131 num_examples: 76496 - name: validation num_bytes: 1232155262 num_examples: 9951 - name: test num_bytes: 1171664123 num_examples: 9509 download_size: 2665779500 dataset_size: 11812671516 - config_name: rc.web.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: 
string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 58524077 num_examples: 76496 - name: validation num_bytes: 7694681 num_examples: 9951 - name: test num_bytes: 2024871 num_examples: 9509 download_size: 2665779500 dataset_size: 68243629 - config_name: unfiltered.web features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train - name: validation - name: test download_size: 3298328560 dataset_size: 0 - config_name: unfiltered.web.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - 
name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train - name: validation - name: test download_size: 632549060 dataset_size: 0 - config_name: rc.wikipedia features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 3340800860 num_examples: 61888 - name: validation num_bytes: 430166174 num_examples: 7993 - name: test num_bytes: 406046628 num_examples: 7701 download_size: 2665779500 dataset_size: 4177013662 - config_name: rc.wikipedia.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title 
dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train num_bytes: 48360513 num_examples: 61888 - name: validation num_bytes: 6365397 num_examples: 7993 - name: test num_bytes: 1643280 num_examples: 7701 download_size: 2665779500 dataset_size: 56369190 - config_name: unfiltered.wikipedia features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train - name: validation - name: test download_size: 3298328560 dataset_size: 0 - config_name: unfiltered.wikipedia.nocontext features: - name: question dtype: string - name: question_id dtype: string - name: question_source dtype: string - name: entity_pages sequence: - name: doc_source dtype: string - name: filename dtype: string - name: title dtype: string - name: wiki_context dtype: string - name: search_results sequence: - name: description dtype: string - name: filename dtype: string - name: rank dtype: int32 - name: title dtype: string - name: url dtype: 
string - name: search_context dtype: string - name: answer struct: - name: aliases sequence: string - name: normalized_aliases sequence: string - name: matched_wiki_entity_name dtype: string - name: normalized_matched_wiki_entity_name dtype: string - name: normalized_value dtype: string - name: type dtype: string - name: value dtype: string splits: - name: train - name: validation - name: test download_size: 632549060 dataset_size: 0 --- # Dataset Card for "trivia_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://nlp.cs.washington.edu/triviaqa/](http://nlp.cs.washington.edu/triviaqa/) - **Repository:** [https://github.com/mandarjoshi90/triviaqa](https://github.com/mandarjoshi90/triviaqa) - **Paper:** [TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension](https://arxiv.org/abs/1705.03551) - **Leaderboard:** [CodaLab Leaderboard](https://competitions.codalab.org/competitions/17208#results) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 9.26 GB - **Size of the generated dataset:** 45.46 GB - **Total amount of disk used:** 54.72 GB ### Dataset Summary TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages English. ## Dataset Structure ### Data Instances #### rc - **Size of downloaded dataset files:** 2.67 GB - **Size of the generated dataset:** 16.02 GB - **Total amount of disk used:** 18.68 GB An example of 'train' looks as follows. ``` ``` #### rc.nocontext - **Size of downloaded dataset files:** 2.67 GB - **Size of the generated dataset:** 126.27 MB - **Total amount of disk used:** 2.79 GB An example of 'train' looks as follows. ``` ``` #### unfiltered - **Size of downloaded dataset files:** 3.30 GB - **Size of the generated dataset:** 29.24 GB - **Total amount of disk used:** 32.54 GB An example of 'validation' looks as follows. ``` ``` #### unfiltered.nocontext - **Size of downloaded dataset files:** 632.55 MB - **Size of the generated dataset:** 74.56 MB - **Total amount of disk used:** 707.11 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### rc - `question`: a `string` feature. - `question_id`: a `string` feature. - `question_source`: a `string` feature. - `entity_pages`: a dictionary feature containing: - `doc_source`: a `string` feature. - `filename`: a `string` feature. - `title`: a `string` feature. 
- `wiki_context`: a `string` feature. - `search_results`: a dictionary feature containing: - `description`: a `string` feature. - `filename`: a `string` feature. - `rank`: a `int32` feature. - `title`: a `string` feature. - `url`: a `string` feature. - `search_context`: a `string` feature. - `aliases`: a `list` of `string` features. - `normalized_aliases`: a `list` of `string` features. - `matched_wiki_entity_name`: a `string` feature. - `normalized_matched_wiki_entity_name`: a `string` feature. - `normalized_value`: a `string` feature. - `type`: a `string` feature. - `value`: a `string` feature. #### rc.nocontext - `question`: a `string` feature. - `question_id`: a `string` feature. - `question_source`: a `string` feature. - `entity_pages`: a dictionary feature containing: - `doc_source`: a `string` feature. - `filename`: a `string` feature. - `title`: a `string` feature. - `wiki_context`: a `string` feature. - `search_results`: a dictionary feature containing: - `description`: a `string` feature. - `filename`: a `string` feature. - `rank`: a `int32` feature. - `title`: a `string` feature. - `url`: a `string` feature. - `search_context`: a `string` feature. - `aliases`: a `list` of `string` features. - `normalized_aliases`: a `list` of `string` features. - `matched_wiki_entity_name`: a `string` feature. - `normalized_matched_wiki_entity_name`: a `string` feature. - `normalized_value`: a `string` feature. - `type`: a `string` feature. - `value`: a `string` feature. #### unfiltered - `question`: a `string` feature. - `question_id`: a `string` feature. - `question_source`: a `string` feature. - `entity_pages`: a dictionary feature containing: - `doc_source`: a `string` feature. - `filename`: a `string` feature. - `title`: a `string` feature. - `wiki_context`: a `string` feature. - `search_results`: a dictionary feature containing: - `description`: a `string` feature. - `filename`: a `string` feature. - `rank`: a `int32` feature. - `title`: a `string` feature. 
- `url`: a `string` feature. - `search_context`: a `string` feature. - `aliases`: a `list` of `string` features. - `normalized_aliases`: a `list` of `string` features. - `matched_wiki_entity_name`: a `string` feature. - `normalized_matched_wiki_entity_name`: a `string` feature. - `normalized_value`: a `string` feature. - `type`: a `string` feature. - `value`: a `string` feature. #### unfiltered.nocontext - `question`: a `string` feature. - `question_id`: a `string` feature. - `question_source`: a `string` feature. - `entity_pages`: a dictionary feature containing: - `doc_source`: a `string` feature. - `filename`: a `string` feature. - `title`: a `string` feature. - `wiki_context`: a `string` feature. - `search_results`: a dictionary feature containing: - `description`: a `string` feature. - `filename`: a `string` feature. - `rank`: a `int32` feature. - `title`: a `string` feature. - `url`: a `string` feature. - `search_context`: a `string` feature. - `aliases`: a `list` of `string` features. - `normalized_aliases`: a `list` of `string` features. - `matched_wiki_entity_name`: a `string` feature. - `normalized_matched_wiki_entity_name`: a `string` feature. - `normalized_value`: a `string` feature. - `type`: a `string` feature. - `value`: a `string` feature. ### Data Splits | name |train |validation|test | |--------------------|-----:|---------:|----:| |rc |138384| 18669|17210| |rc.nocontext |138384| 18669|17210| |unfiltered | 87622| 11313|10832| |unfiltered.nocontext| 87622| 11313|10832| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The University of Washington does not own the copyright of the questions and documents included in TriviaQA. 
### Citation Information ``` @article{2017arXivtriviaqa, author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld}, Daniel and {Zettlemoyer}, Luke}, title = "{TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}", journal = {arXiv e-prints}, year = 2017, eid = {arXiv:1705.03551}, pages = {arXiv:1705.03551}, archivePrefix = {arXiv}, eprint = {1705.03551}, } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
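The `answer` fields above include `normalized_aliases`, which supports the usual TriviaQA-style scoring: normalize a predicted string and test membership among the aliases. Below is a minimal Python sketch of that idea; the normalization rules (lowercasing, dropping English articles and punctuation) are an assumption here, not the official evaluation script from the repository linked above.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, then drop articles, punctuation, and extra whitespace."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def is_correct(prediction: str, normalized_aliases: list[str]) -> bool:
    """A prediction counts as correct if its normalized form matches an alias."""
    return normalize(prediction) in normalized_aliases

aliases = ["sunset boulevard", "sunset blvd"]
print(is_correct("The Sunset Boulevard!", aliases))  # True
```

Because the `normalized_aliases` are already lowercased and stripped, applying the same transform to the prediction keeps the comparison symmetric.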
tsac
--- annotations_creators: - expert-generated language_creators: - found language: - aeb license: - lgpl-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: tsac pretty_name: Tunisian Sentiment Analysis Corpus dataset_info: features: - name: id dtype: string - name: sentence dtype: string - name: target dtype: class_label: names: '0': '1' '1': '-1' splits: - name: train num_bytes: 1020146 num_examples: 13669 - name: test num_bytes: 268504 num_examples: 3400 download_size: 963015 dataset_size: 1288650 --- # Dataset Card for Tunisian Sentiment Analysis Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/fbougares/TSAC - **Paper:** https://www.aclweb.org/anthology/W17-1307 - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** Salima Mdhaffar 
(firstname.lastname@univ-lemans.fr) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
ttc4900
--- annotations_creators: - found language_creators: - found language: - tr license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: TTC4900 - A Benchmark Data for Turkish Text Categorization tags: - news-category-classification dataset_info: features: - name: category dtype: class_label: names: '0': siyaset '1': dunya '2': ekonomi '3': kultur '4': saglik '5': spor '6': teknoloji - name: text dtype: string config_name: ttc4900 splits: - name: train num_bytes: 10640831 num_examples: 4900 download_size: 10627541 dataset_size: 10640831 --- # Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TTC4900 Homepage](https://www.kaggle.com/savasy/ttc4900) - **Repository:** [TTC4900 Repository](https://github.com/savasy/TurkishTextClassification) - **Paper:** [A 
Comparison of Different Approaches to Document Representation in Turkish Language](https://dergipark.org.tr/en/pub/sdufenbed/issue/38975/456349) - **Point of Contact:** [Savaş Yıldırım](mailto:savasy@gmail.com) ### Dataset Summary The dataset is taken from the [kemik group](http://www.kemik.yildiz.edu.tr/). The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth. The name TTC4900 mimics the naming convention of the TTC 3600 dataset shared by the study ["A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis", Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014](https://link.springer.com/chapter/10.1007/978-3-642-54903-8_36). If you use the dataset in a paper, please reference https://www.kaggle.com/savasy/ttc4900 in a footnote and cite one of the following papers: - A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018 - A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science, Volume 25, Issue 5, 2018 - A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in Turkish. ## Dataset Structure ### Data Instances A text classification dataset with 7 different news categories. 
Here is an example from the dataset: ``` { "category": 0, # politics/siyaset "text": "paris teki infaz imralı ile başlayan sürece bir darbe mi elif_çakır ın sunduğu söz_bitmeden in bugünkü konuğu gazeteci melih altınok oldu programdan satıbaşları imralı ile görüşmeler hangi aşamada bundan sonra ne olacak hangi kesimler sürece engel oluyor psikolojik mayınlar neler türk solu bu dönemde evrensel sorumluluğunu yerine getirebiliyor mu elif_çakır sordu melih altınok söz_bitmeden de yanıtladı elif_çakır pkk nın silahsızlandırılmasına yönelik olarak öcalan ile görüşme sonrası 3 kadının infazı enteresan çünkü kurucu isimlerden birisi sen nasıl okudun bu infazı melih altınok herkesin ciddi anlamda şüpheleri var şu an yürüttüğümüz herşey bir delile dayanmadığı için komple teorisinden ibaret kalacak ama şöyle bir durum var imralı görüşmelerin ilk defa bir siyasi iktidar tarafından açıkça söylendiği bir dönem ardından geliyor bu sürecin gerçekleşmemesini isteyen kesimler yaptırmıştır dedi" } ``` ### Data Fields - **category** : Indicates to which category the news text belongs. (Such as "politics", "world", "economy", "culture", "health", "sports", "technology".) - **text** : Contains the text of the news. ### Data Splits It is not divided into Train set and Test set. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth. #### Who are the source language producers? Turkish online news sites. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by [Savaş Yıldırım](https://github.com/savasy) ### Licensing Information [More Information Needed] ### Citation Information ``` @article{doi:10.5505/pajes.2018.15931, author = {Yıldırım, Savaş and Yıldız, Tuğba}, title = {A comparative analysis of text classification for Turkish language}, journal = {Pamukkale Univ Muh Bilim Derg}, volume = {24}, number = {5}, pages = {879-886}, year = {2018}, doi = {10.5505/pajes.2018.15931}, note ={doi: 10.5505/pajes.2018.15931}, URL = {https://dx.doi.org/10.5505/pajes.2018.15931}, eprint = {https://dx.doi.org/10.5505/pajes.2018.15931} } ``` ### Contributions Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
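The `category` feature above is a class label whose integer values map to the seven names listed in the card's metadata, and the corpus ships as a single `train` split. A short Python sketch of decoding labels and carving out a reproducible held-out set; the split ratio and seed are arbitrary choices for illustration, not part of the dataset.

```python
import random

# Label names in class-label order, as listed in the card's metadata.
CATEGORIES = ["siyaset", "dunya", "ekonomi", "kultur", "saglik", "spor", "teknoloji"]

def label_name(category: int) -> str:
    """Decode an integer class label into its category name."""
    return CATEGORIES[category]

def train_test_split(rows, test_ratio=0.2, seed=42):
    """The dataset ships as a single split; carve out a reproducible test set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

# Toy stand-in for the 4,900 (category, text) examples.
rows = [(0, "..."), (5, "..."), (2, "...")] * 10
train, test = train_test_split(rows)
print(label_name(train[0][0]), len(train), len(test))
```

Fixing the seed keeps experiments comparable across runs, which matters because the card defines no official test split.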
tunizi
--- annotations_creators: - expert-generated language_creators: - found language: - aeb license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: tunizi pretty_name: TUNIZI dataset_info: features: - name: id dtype: string - name: sentence dtype: string - name: target dtype: class_label: names: '0': '1' '1': '-1' splits: - name: train num_bytes: 211166 num_examples: 3000 download_size: 162781 dataset_size: 211166 --- # Dataset Card for TUNIZI ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset - **Repository:** https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset - **Paper:** https://arxiv.org/abs/2004.14303 - **Point of Contact:** Chayma Fourati (chayma@icompass.digital) ### Dataset Summary [More Information Needed] ### 
Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset uses Tunisian Arabic written in the Latin script (BCP-47: aeb-Latn). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
tuple_ie
--- annotations_creators: - found language_creators: - machine-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: tupleinf-open-ie-dataset pretty_name: TupleInf Open IE tags: - open-information-extraction dataset_info: - config_name: all features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 115621096 num_examples: 267719 download_size: 18026102 dataset_size: 115621096 - config_name: 4th_grade features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 65363445 num_examples: 158910 download_size: 18026102 dataset_size: 65363445 - config_name: 8th_grade features: - name: sentence dtype: string - name: tuples sequence: - name: score dtype: float32 - name: tuple_text dtype: string - name: context dtype: string - name: arg1 dtype: string - name: rel dtype: string - name: arg2s sequence: string splits: - name: train num_bytes: 50257651 num_examples: 108809 download_size: 18026102 dataset_size: 50257651 --- # Dataset Card for TupleInf Open IE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - 
[Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tuple IE Homepage](https://allenai.org/data/tuple-ie) - **Repository:** - **Paper:** [Answering Complex Questions Using Open Information Extraction](https://www.semanticscholar.org/paper/Answering-Complex-Questions-Using-Open-Information-Khot-Sabharwal/0ff595f0645a3e25a2f37145768985b10ead0509) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries. This dataset contains 156K sentences collected for 4th grade questions and 107K sentences for 8th grade questions. Each sentence is followed by the Open IE v4 tuples using their simple format. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English, collected from a large Web corpus using training questions from 4th and 8th grade as queries. ## Dataset Structure ### Data Instances This dataset contains sentences with corresponding relation tuples extracted from each sentence. Each instance contains a sentence followed by the [Open IE v4](https://github.com/allenai/openie-standalone) tuples in their *simple format*. 
An example of an instance: ```JSON { "sentence": "0.04593 kg Used a triple beam balance to mass a golf ball.", "tuples": { "score": 0.8999999761581421, "tuple_text": "(0.04593 kg; Used; a triple beam balance; to mass a golf ball)", "context": "", "arg1": "0.04593 kg", "rel": "Used", "arg2s": ["a triple beam balance", "to mass a golf ball"] } } ``` ### Data Fields - `sentence`: the input text/sentence. - `tuples`: the extracted relation tuples from the sentence. - `score`: the confidence score for each tuple. - `tuple_text`: the relationship representation text of the extraction, in the *simple format* of [Open IE v4](https://github.com/allenai/openie-standalone). - `context`: an optional representation of the context for this extraction. Defaults to `""` if there's no context. - `arg1`: the first argument in the relationship. - `rel`: the relation. - `arg2s`: a sequence of the second arguments in the relationship. ### Data Splits | name | train| |-----------|-----:| | all |267719| | 4th_grade |158910| | 8th_grade |108809| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @article{Khot2017AnsweringCQ, title={Answering Complex Questions Using Open Information Extraction}, author={Tushar Khot and A. 
Sabharwal and Peter Clark}, journal={ArXiv}, year={2017}, volume={abs/1704.05572} } ``` ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
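The `tuple_text` field's *simple format* described above can be split back into its components with a short helper. This is an illustrative sketch, not part of the official Open IE tooling; it naively assumes `;` never occurs inside an argument:

```python
def parse_simple_tuple(tuple_text):
    """Split an Open IE v4 simple-format string such as
    "(arg1; rel; arg2; arg2; ...)" into its components."""
    inner = tuple_text.strip()
    # Drop the surrounding parentheses if present.
    if inner.startswith("(") and inner.endswith(")"):
        inner = inner[1:-1]
    parts = [part.strip() for part in inner.split(";")]
    return {"arg1": parts[0], "rel": parts[1], "arg2s": parts[2:]}

parsed = parse_simple_tuple(
    "(0.04593 kg; Used; a triple beam balance; to mass a golf ball)"
)
# parsed["arg2s"] == ["a triple beam balance", "to mass a golf ball"]
```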
turk
--- annotations_creators: - machine-generated language_creators: - found language: - en license: - gpl-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text2text-generation task_ids: - text-simplification paperswithcode_id: null pretty_name: TURK dataset_info: features: - name: original dtype: string - name: simplifications sequence: string config_name: simplification splits: - name: validation num_bytes: 2120187 num_examples: 2000 - name: test num_bytes: 396378 num_examples: 359 download_size: 2443394 dataset_size: 2516565 --- # Dataset Card for TURK ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** [TURK](https://github.com/cocoxu/simplification) - **Paper:** [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029/) - **Leaderboard:** N/A - **Point of Contact:** [Wei Xu](mailto:wei.xu@cc.gatech.edu) ### Dataset Summary TURK is a 
multi-reference dataset for the evaluation of sentence simplification in English. The dataset consists of 2,359 sentences from the [Parallel Wikipedia Simplification (PWKP) corpus](https://www.aclweb.org/anthology/C10-1152/). Each sentence is associated with 8 crowdsourced simplifications that focus on only lexical paraphrasing (no sentence splitting or deletion). ### Supported Tasks and Leaderboards No Leaderboard for the task. ### Languages TURK contains English text only (BCP-47: `en`). ## Dataset Structure ### Data Instances An instance consists of an original sentence and 8 possible reference simplifications that focus on lexical paraphrasing. ``` {'original': 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat region in sudan .', 'simplifications': ['one side of the armed conflicts is made of sudanese military and the janjaweed , a sudanese militia recruited from the afro-arab abbala tribes of the northern rizeigat region in sudan .', 'one side of the armed conflicts consist of the sudanese military and the sudanese militia group janjaweed .', 'one side of the armed conflicts is mainly sudanese military and the janjaweed , which recruited from the afro-arab abbala tribes .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes in sudan .', 'one side of the armed conflicts is made up mostly of the sudanese military and the janjaweed , a sudanese militia group whose recruits mostly come from the afro-arab abbala tribes from the northern rizeigat region in sudan .', 'the sudanese military and the janjaweed make up one of the armed conflicts , mostly from the afro-arab abbal tribes in sudan .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese 
militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat regime in sudan .', 'one side of the armed conflicts is composed mainly of the sudanese military and the janjaweed , a sudanese militia group recruited mostly from the afro-arab abbala tribes of the northern rizeigat region in sudan .']} ``` ### Data Fields - `original`: an original sentence from the source datasets - `simplifications`: a set of reference simplifications produced by crowd workers. ### Data Splits TURK does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training. Each input sentence has 8 associated reference simplified sentences. 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences. | | Dev | Test | Total | | ----- | ------ | ---- | ----- | | Input Sentences | 2000 | 359 | 2359 | | Reference Simplifications | 16000 | 2872 | 18872 | ## Dataset Creation ### Curation Rationale The TURK dataset was constructed to evaluate the task of text simplification. It contains multiple human-written references that focus on only lexical simplification. ### Source Data #### Initial Data Collection and Normalization The input sentences in the dataset are extracted from the [Parallel Wikipedia Simplification (PWKP) corpus](https://www.aclweb.org/anthology/C10-1152/). #### Who are the source language producers? The references are crowdsourced from Amazon Mechanical Turk. The annotators were asked to provide simplifications without losing any information or splitting the input sentence. No other demographic or compensation information is provided in the paper. ### Annotations #### Annotation process The instructions given to the annotators are available in the paper. #### Who are the annotators? The annotators are Amazon Mechanical Turk workers.
### Personal and Sensitive Information Since the dataset is created from English Wikipedia (August 22, 2009 version), all the information contained in the dataset is already in the public domain. ## Considerations for Using the Data ### Social Impact of Dataset The dataset advances research on text simplification by providing a higher-quality validation and test set. Progress in text simplification in turn has the potential to increase the accessibility of written documents to wider audiences. ### Discussion of Biases The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946). ### Other Known Limitations Since the dataset contains only 2,359 sentences that are derived from Wikipedia, it is limited to a small subset of topics present on Wikipedia. ## Additional Information ### Dataset Curators TURK was developed by researchers at the University of Pennsylvania. The work was supported by the NSF under grant IIS-1430651 and the NSF GRFP under grant 1232825. ### Licensing Information [GNU General Public License v3.0](https://github.com/cocoxu/simplification/blob/master/LICENSE) ### Citation Information ``` @article{Xu-EtAl:2016:TACL, author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch}, title = {Optimizing Statistical Machine Translation for Text Simplification}, journal = {Transactions of the Association for Computational Linguistics}, volume = {4}, year = {2016}, url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf}, pages = {401--415} } ``` ### Contributions Thanks to [@mounicam](https://github.com/mounicam) for adding this dataset.
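Because every TURK input carries 8 references, candidate simplifications are typically scored against all references at once with multi-reference metrics such as SARI or BLEU. The sketch below illustrates only the best-match idea behind such scoring, using a naive token-overlap F1 in place of a real simplification metric:

```python
def token_f1(candidate, reference):
    """Naive token-overlap F1 between two whitespace-tokenized sentences."""
    cand, ref = set(candidate.split()), set(reference.split())
    common = len(cand & ref)
    if common == 0:
        return 0.0
    precision, recall = common / len(cand), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def best_reference_score(candidate, references):
    """Score the candidate against every reference and keep the best match."""
    return max(token_f1(candidate, ref) for ref in references)

# Toy references standing in for the 8 crowdsourced simplifications.
refs = ["the cat sat on the mat .", "a cat sat on a mat ."]
score = best_reference_score("the cat sat on a mat .", refs)  # between 0 and 1
```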
turkic_xwmt
--- annotations_creators: - crowdsourced language_creators: - found language: - az - ba - en - kaa - kk - ky - ru - sah - tr - uz license: - mit multilinguality: - translation pretty_name: turkic_xwmt size_categories: - n<1K task_categories: - translation task_ids: [] source_datasets: - extended|WMT 2020 News Translation Task configs: - az-ba - az-en - az-kaa - az-kk - az-ky - az-ru - az-sah - az-tr - az-uz - ba-az - ba-en - ba-kaa - ba-kk - ba-ky - ba-ru - ba-sah - ba-tr - ba-uz - en-az - en-ba - en-kaa - en-kk - en-ky - en-ru - en-sah - en-tr - en-uz - kaa-az - kaa-ba - kaa-en - kaa-kk - kaa-ky - kaa-ru - kaa-sah - kaa-tr - kaa-uz - kk-az - kk-ba - kk-en - kk-kaa - kk-ky - kk-ru - kk-sah - kk-tr - kk-uz - ky-az - ky-ba - ky-en - ky-kaa - ky-kk - ky-ru - ky-sah - ky-tr - ky-uz - ru-az - ru-ba - ru-en - ru-kaa - ru-kk - ru-ky - ru-sah - ru-tr - ru-uz - sah-az - sah-ba - sah-en - sah-kaa - sah-kk - sah-ky - sah-ru - sah-tr - sah-uz - tr-az - tr-ba - tr-en - tr-kaa - tr-kk - tr-ky - tr-ru - tr-sah - tr-uz - uz-az - uz-ba - uz-en - uz-kaa - uz-kk - uz-ky - uz-ru - uz-sah - uz-tr dataset_info: - config_name: az-ba features: - name: translation dtype: translation: languages: - az - ba splits: - name: test num_bytes: 266801 num_examples: 600 download_size: 12862396 dataset_size: 266801 - config_name: az-en features: - name: translation dtype: translation: languages: - az - en splits: - name: test num_bytes: 181156 num_examples: 600 download_size: 12862396 dataset_size: 181156 - config_name: az-kaa features: - name: translation dtype: translation: languages: - az - kaa splits: - name: test num_bytes: 134071 num_examples: 300 download_size: 12862396 dataset_size: 134071 - config_name: az-kk features: - name: translation dtype: translation: languages: - az - kk splits: - name: test num_bytes: 203798 num_examples: 500 download_size: 12862396 dataset_size: 203798 - config_name: az-ky features: - name: translation dtype: translation: languages: - az - ky splits: - name: test 
num_bytes: 210549 num_examples: 500 download_size: 12862396 dataset_size: 210549 - config_name: az-ru features: - name: translation dtype: translation: languages: - az - ru splits: - name: test num_bytes: 262739 num_examples: 600 download_size: 12862396 dataset_size: 262739 - config_name: az-sah features: - name: translation dtype: translation: languages: - az - sah splits: - name: test num_bytes: 144198 num_examples: 300 download_size: 12862396 dataset_size: 144198 - config_name: az-tr features: - name: translation dtype: translation: languages: - az - tr splits: - name: test num_bytes: 162447 num_examples: 500 download_size: 12862396 dataset_size: 162447 - config_name: az-uz features: - name: translation dtype: translation: languages: - az - uz splits: - name: test num_bytes: 194231 num_examples: 600 download_size: 12862396 dataset_size: 194231 - config_name: ba-az features: - name: translation dtype: translation: languages: - ba - az splits: - name: test num_bytes: 266801 num_examples: 600 download_size: 12862396 dataset_size: 266801 - config_name: ba-en features: - name: translation dtype: translation: languages: - ba - en splits: - name: test num_bytes: 431223 num_examples: 1000 download_size: 12862396 dataset_size: 431223 - config_name: ba-kaa features: - name: translation dtype: translation: languages: - ba - kaa splits: - name: test num_bytes: 168895 num_examples: 300 download_size: 12862396 dataset_size: 168895 - config_name: ba-kk features: - name: translation dtype: translation: languages: - ba - kk splits: - name: test num_bytes: 374756 num_examples: 700 download_size: 12862396 dataset_size: 374756 - config_name: ba-ky features: - name: translation dtype: translation: languages: - ba - ky splits: - name: test num_bytes: 268986 num_examples: 500 download_size: 12862396 dataset_size: 268986 - config_name: ba-ru features: - name: translation dtype: translation: languages: - ba - ru splits: - name: test num_bytes: 568101 num_examples: 1000 download_size: 
12862396 dataset_size: 568101 - config_name: ba-sah features: - name: translation dtype: translation: languages: - ba - sah splits: - name: test num_bytes: 179022 num_examples: 300 download_size: 12862396 dataset_size: 179022 - config_name: ba-tr features: - name: translation dtype: translation: languages: - ba - tr splits: - name: test num_bytes: 309455 num_examples: 700 download_size: 12862396 dataset_size: 309455 - config_name: ba-uz features: - name: translation dtype: translation: languages: - ba - uz splits: - name: test num_bytes: 410874 num_examples: 900 download_size: 12862396 dataset_size: 410874 - config_name: en-az features: - name: translation dtype: translation: languages: - en - az splits: - name: test num_bytes: 181156 num_examples: 600 download_size: 12862396 dataset_size: 181156 - config_name: en-ba features: - name: translation dtype: translation: languages: - en - ba splits: - name: test num_bytes: 431223 num_examples: 1000 download_size: 12862396 dataset_size: 431223 - config_name: en-kaa features: - name: translation dtype: translation: languages: - en - kaa splits: - name: test num_bytes: 126304 num_examples: 300 download_size: 12862396 dataset_size: 126304 - config_name: en-kk features: - name: translation dtype: translation: languages: - en - kk splits: - name: test num_bytes: 274728 num_examples: 700 download_size: 12862396 dataset_size: 274728 - config_name: en-ky features: - name: translation dtype: translation: languages: - en - ky splits: - name: test num_bytes: 198854 num_examples: 500 download_size: 12862396 dataset_size: 198854 - config_name: en-ru features: - name: translation dtype: translation: languages: - en - ru splits: - name: test num_bytes: 422718 num_examples: 1000 download_size: 12862396 dataset_size: 422718 - config_name: en-sah features: - name: translation dtype: translation: languages: - en - sah splits: - name: test num_bytes: 136431 num_examples: 300 download_size: 12862396 dataset_size: 136431 - config_name: en-tr 
features: - name: translation dtype: translation: languages: - en - tr splits: - name: test num_bytes: 210144 num_examples: 700 download_size: 12862396 dataset_size: 210144 - config_name: en-uz features: - name: translation dtype: translation: languages: - en - uz splits: - name: test num_bytes: 278971 num_examples: 900 download_size: 12862396 dataset_size: 278971 - config_name: kaa-az features: - name: translation dtype: translation: languages: - kaa - az splits: - name: test num_bytes: 134071 num_examples: 300 download_size: 12862396 dataset_size: 134071 - config_name: kaa-ba features: - name: translation dtype: translation: languages: - kaa - ba splits: - name: test num_bytes: 168895 num_examples: 300 download_size: 12862396 dataset_size: 168895 - config_name: kaa-en features: - name: translation dtype: translation: languages: - kaa - en splits: - name: test num_bytes: 126304 num_examples: 300 download_size: 12862396 dataset_size: 126304 - config_name: kaa-kk features: - name: translation dtype: translation: languages: - kaa - kk splits: - name: test num_bytes: 160022 num_examples: 300 download_size: 12862396 dataset_size: 160022 - config_name: kaa-ky features: - name: translation dtype: translation: languages: - kaa - ky splits: - name: test num_bytes: 163763 num_examples: 300 download_size: 12862396 dataset_size: 163763 - config_name: kaa-ru features: - name: translation dtype: translation: languages: - kaa - ru splits: - name: test num_bytes: 168349 num_examples: 300 download_size: 12862396 dataset_size: 168349 - config_name: kaa-sah features: - name: translation dtype: translation: languages: - kaa - sah splits: - name: test num_bytes: 177151 num_examples: 300 download_size: 12862396 dataset_size: 177151 - config_name: kaa-tr features: - name: translation dtype: translation: languages: - kaa - tr splits: - name: test num_bytes: 132055 num_examples: 300 download_size: 12862396 dataset_size: 132055 - config_name: kaa-uz features: - name: translation dtype: 
translation: languages: - kaa - uz splits: - name: test num_bytes: 132789 num_examples: 300 download_size: 12862396 dataset_size: 132789 - config_name: kk-az features: - name: translation dtype: translation: languages: - kk - az splits: - name: test num_bytes: 203798 num_examples: 500 download_size: 12862396 dataset_size: 203798 - config_name: kk-ba features: - name: translation dtype: translation: languages: - kk - ba splits: - name: test num_bytes: 374756 num_examples: 700 download_size: 12862396 dataset_size: 374756 - config_name: kk-en features: - name: translation dtype: translation: languages: - kk - en splits: - name: test num_bytes: 274728 num_examples: 700 download_size: 12862396 dataset_size: 274728 - config_name: kk-kaa features: - name: translation dtype: translation: languages: - kk - kaa splits: - name: test num_bytes: 160022 num_examples: 300 download_size: 12862396 dataset_size: 160022 - config_name: kk-ky features: - name: translation dtype: translation: languages: - kk - ky splits: - name: test num_bytes: 253421 num_examples: 500 download_size: 12862396 dataset_size: 253421 - config_name: kk-ru features: - name: translation dtype: translation: languages: - kk - ru splits: - name: test num_bytes: 369633 num_examples: 700 download_size: 12862396 dataset_size: 369633 - config_name: kk-sah features: - name: translation dtype: translation: languages: - kk - sah splits: - name: test num_bytes: 170149 num_examples: 300 download_size: 12862396 dataset_size: 170149 - config_name: kk-tr features: - name: translation dtype: translation: languages: - kk - tr splits: - name: test num_bytes: 204442 num_examples: 500 download_size: 12862396 dataset_size: 204442 - config_name: kk-uz features: - name: translation dtype: translation: languages: - kk - uz splits: - name: test num_bytes: 290325 num_examples: 700 download_size: 12862396 dataset_size: 290325 - config_name: ky-az features: - name: translation dtype: translation: languages: - ky - az splits: - name: test 
num_bytes: 210549 num_examples: 500 download_size: 12862396 dataset_size: 210549 - config_name: ky-ba features: - name: translation dtype: translation: languages: - ky - ba splits: - name: test num_bytes: 268986 num_examples: 500 download_size: 12862396 dataset_size: 268986 - config_name: ky-en features: - name: translation dtype: translation: languages: - ky - en splits: - name: test num_bytes: 198854 num_examples: 500 download_size: 12862396 dataset_size: 198854 - config_name: ky-kaa features: - name: translation dtype: translation: languages: - ky - kaa splits: - name: test num_bytes: 163763 num_examples: 300 download_size: 12862396 dataset_size: 163763 - config_name: ky-kk features: - name: translation dtype: translation: languages: - ky - kk splits: - name: test num_bytes: 253421 num_examples: 500 download_size: 12862396 dataset_size: 253421 - config_name: ky-ru features: - name: translation dtype: translation: languages: - ky - ru splits: - name: test num_bytes: 265803 num_examples: 500 download_size: 12862396 dataset_size: 265803 - config_name: ky-sah features: - name: translation dtype: translation: languages: - ky - sah splits: - name: test num_bytes: 173890 num_examples: 300 download_size: 12862396 dataset_size: 173890 - config_name: ky-tr features: - name: translation dtype: translation: languages: - ky - tr splits: - name: test num_bytes: 168026 num_examples: 400 download_size: 12862396 dataset_size: 168026 - config_name: ky-uz features: - name: translation dtype: translation: languages: - ky - uz splits: - name: test num_bytes: 209619 num_examples: 500 download_size: 12862396 dataset_size: 209619 - config_name: ru-az features: - name: translation dtype: translation: languages: - ru - az splits: - name: test num_bytes: 262739 num_examples: 600 download_size: 12862396 dataset_size: 262739 - config_name: ru-ba features: - name: translation dtype: translation: languages: - ru - ba splits: - name: test num_bytes: 568101 num_examples: 1000 download_size: 
12862396 dataset_size: 568101 - config_name: ru-en features: - name: translation dtype: translation: languages: - ru - en splits: - name: test num_bytes: 422718 num_examples: 1000 download_size: 12862396 dataset_size: 422718 - config_name: ru-kaa features: - name: translation dtype: translation: languages: - ru - kaa splits: - name: test num_bytes: 168349 num_examples: 300 download_size: 12862396 dataset_size: 168349 - config_name: ru-kk features: - name: translation dtype: translation: languages: - ru - kk splits: - name: test num_bytes: 369633 num_examples: 700 download_size: 12862396 dataset_size: 369633 - config_name: ru-ky features: - name: translation dtype: translation: languages: - ru - ky splits: - name: test num_bytes: 265803 num_examples: 500 download_size: 12862396 dataset_size: 265803 - config_name: ru-sah features: - name: translation dtype: translation: languages: - ru - sah splits: - name: test num_bytes: 178476 num_examples: 300 download_size: 12862396 dataset_size: 178476 - config_name: ru-tr features: - name: translation dtype: translation: languages: - ru - tr splits: - name: test num_bytes: 304586 num_examples: 700 download_size: 12862396 dataset_size: 304586 - config_name: ru-uz features: - name: translation dtype: translation: languages: - ru - uz splits: - name: test num_bytes: 403551 num_examples: 900 download_size: 12862396 dataset_size: 403551 - config_name: sah-az features: - name: translation dtype: translation: languages: - sah - az splits: - name: test num_bytes: 144198 num_examples: 300 download_size: 12862396 dataset_size: 144198 - config_name: sah-ba features: - name: translation dtype: translation: languages: - sah - ba splits: - name: test num_bytes: 179022 num_examples: 300 download_size: 12862396 dataset_size: 179022 - config_name: sah-en features: - name: translation dtype: translation: languages: - sah - en splits: - name: test num_bytes: 136431 num_examples: 300 download_size: 12862396 dataset_size: 136431 - config_name: 
sah-kaa features: - name: translation dtype: translation: languages: - sah - kaa splits: - name: test num_bytes: 177151 num_examples: 300 download_size: 12862396 dataset_size: 177151 - config_name: sah-kk features: - name: translation dtype: translation: languages: - sah - kk splits: - name: test num_bytes: 170149 num_examples: 300 download_size: 12862396 dataset_size: 170149 - config_name: sah-ky features: - name: translation dtype: translation: languages: - sah - ky splits: - name: test num_bytes: 173890 num_examples: 300 download_size: 12862396 dataset_size: 173890 - config_name: sah-ru features: - name: translation dtype: translation: languages: - sah - ru splits: - name: test num_bytes: 178476 num_examples: 300 download_size: 12862396 dataset_size: 178476 - config_name: sah-tr features: - name: translation dtype: translation: languages: - sah - tr splits: - name: test num_bytes: 142182 num_examples: 300 download_size: 12862396 dataset_size: 142182 - config_name: sah-uz features: - name: translation dtype: translation: languages: - sah - uz splits: - name: test num_bytes: 142916 num_examples: 300 download_size: 12862396 dataset_size: 142916 - config_name: tr-az features: - name: translation dtype: translation: languages: - tr - az splits: - name: test num_bytes: 162447 num_examples: 500 download_size: 12862396 dataset_size: 162447 - config_name: tr-ba features: - name: translation dtype: translation: languages: - tr - ba splits: - name: test num_bytes: 309455 num_examples: 700 download_size: 12862396 dataset_size: 309455 - config_name: tr-en features: - name: translation dtype: translation: languages: - tr - en splits: - name: test num_bytes: 210144 num_examples: 700 download_size: 12862396 dataset_size: 210144 - config_name: tr-kaa features: - name: translation dtype: translation: languages: - tr - kaa splits: - name: test num_bytes: 132055 num_examples: 300 download_size: 12862396 dataset_size: 132055 - config_name: tr-kk features: - name: translation dtype: 
translation: languages: - tr - kk splits: - name: test num_bytes: 204442 num_examples: 500 download_size: 12862396 dataset_size: 204442 - config_name: tr-ky features: - name: translation dtype: translation: languages: - tr - ky splits: - name: test num_bytes: 168026 num_examples: 400 download_size: 12862396 dataset_size: 168026 - config_name: tr-ru features: - name: translation dtype: translation: languages: - tr - ru splits: - name: test num_bytes: 304586 num_examples: 700 download_size: 12862396 dataset_size: 304586 - config_name: tr-sah features: - name: translation dtype: translation: languages: - tr - sah splits: - name: test num_bytes: 142182 num_examples: 300 download_size: 12862396 dataset_size: 142182 - config_name: tr-uz features: - name: translation dtype: translation: languages: - tr - uz splits: - name: test num_bytes: 194761 num_examples: 600 download_size: 12862396 dataset_size: 194761 - config_name: uz-az features: - name: translation dtype: translation: languages: - uz - az splits: - name: test num_bytes: 194231 num_examples: 600 download_size: 12862396 dataset_size: 194231 - config_name: uz-ba features: - name: translation dtype: translation: languages: - uz - ba splits: - name: test num_bytes: 410874 num_examples: 900 download_size: 12862396 dataset_size: 410874 - config_name: uz-en features: - name: translation dtype: translation: languages: - uz - en splits: - name: test num_bytes: 278971 num_examples: 900 download_size: 12862396 dataset_size: 278971 - config_name: uz-kaa features: - name: translation dtype: translation: languages: - uz - kaa splits: - name: test num_bytes: 132789 num_examples: 300 download_size: 12862396 dataset_size: 132789 - config_name: uz-kk features: - name: translation dtype: translation: languages: - uz - kk splits: - name: test num_bytes: 290325 num_examples: 700 download_size: 12862396 dataset_size: 290325 - config_name: uz-ky features: - name: translation dtype: translation: languages: - uz - ky splits: - name: test 
num_bytes: 209619 num_examples: 500 download_size: 12862396 dataset_size: 209619 - config_name: uz-ru features: - name: translation dtype: translation: languages: - uz - ru splits: - name: test num_bytes: 403551 num_examples: 900 download_size: 12862396 dataset_size: 403551 - config_name: uz-sah features: - name: translation dtype: translation: languages: - uz - sah splits: - name: test num_bytes: 142916 num_examples: 300 download_size: 12862396 dataset_size: 142916 - config_name: uz-tr features: - name: translation dtype: translation: languages: - uz - tr splits: - name: test num_bytes: 194761 num_examples: 600 download_size: 12862396 dataset_size: 194761 --- # Dataset Card for turkic_xwmt ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:**[Github](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt) - **Paper:** [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593) - **Leaderboard:** [More 
Information Needed] - **Point of Contact:** [turkicinterlingua@gmail.com](mailto:turkicinterlingua@gmail.com) ### Dataset Summary To establish a comprehensive and challenging evaluation benchmark for Machine Translation in Turkic languages, we translate a test set originally introduced in the WMT 2020 News Translation Task for English-Russian. The original dataset is professionally translated and consists of sentences from news articles that are both English- and Russian-centric. We adopt this evaluation set (X-WMT) and begin efforts to translate it into several Turkic languages. The current version of X-WMT covers 8 Turkic languages and 88 language directions with a minimum of 300 sentences per language direction. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Currently covered languages are (besides English and Russian): - Azerbaijani (az) - Bashkir (ba) - Karakalpak (kaa) - Kazakh (kk) - Kirghiz (ky) - Turkish (tr) - Sakha (sah) - Uzbek (uz) ## Dataset Structure ### Data Instances A random example from the Russian-Uzbek set: ``` {"translation": {'ru': 'Моника Мутсвангва , министр информации Зимбабве , утверждает , что полиция вмешалась в отъезд Магомбейи из соображений безопасности и вследствие состояния его здоровья .', 'uz': 'Zimbabvening Axborot vaziri , Monika Mutsvanva Magombeyining xavfsizligi va sog'ligi tufayli bo'lgan jo'nab ketishinida politsiya aralashuvini ushlab turadi .'}} ``` ### Data Fields Each example has one field "translation" that contains two subfields: one per language, e.g.
for the Russian-Uzbek set: - **translation**: a dictionary with two subfields: - **ru**: the Russian text - **uz**: the Uzbek text ### Data Splits <details> <summary>Click here to show the number of examples per configuration:</summary> | | test | |:--------|-------:| | az-ba | 600 | | az-en | 600 | | az-kaa | 300 | | az-kk | 500 | | az-ky | 500 | | az-ru | 600 | | az-sah | 300 | | az-tr | 500 | | az-uz | 600 | | ba-az | 600 | | ba-en | 1000 | | ba-kaa | 300 | | ba-kk | 700 | | ba-ky | 500 | | ba-ru | 1000 | | ba-sah | 300 | | ba-tr | 700 | | ba-uz | 900 | | en-az | 600 | | en-ba | 1000 | | en-kaa | 300 | | en-kk | 700 | | en-ky | 500 | | en-ru | 1000 | | en-sah | 300 | | en-tr | 700 | | en-uz | 900 | | kaa-az | 300 | | kaa-ba | 300 | | kaa-en | 300 | | kaa-kk | 300 | | kaa-ky | 300 | | kaa-ru | 300 | | kaa-sah | 300 | | kaa-tr | 300 | | kaa-uz | 300 | | kk-az | 500 | | kk-ba | 700 | | kk-en | 700 | | kk-kaa | 300 | | kk-ky | 500 | | kk-ru | 700 | | kk-sah | 300 | | kk-tr | 500 | | kk-uz | 700 | | ky-az | 500 | | ky-ba | 500 | | ky-en | 500 | | ky-kaa | 300 | | ky-kk | 500 | | ky-ru | 500 | | ky-sah | 300 | | ky-tr | 400 | | ky-uz | 500 | | ru-az | 600 | | ru-ba | 1000 | | ru-en | 1000 | | ru-kaa | 300 | | ru-kk | 700 | | ru-ky | 500 | | ru-sah | 300 | | ru-tr | 700 | | ru-uz | 900 | | sah-az | 300 | | sah-ba | 300 | | sah-en | 300 | | sah-kaa | 300 | | sah-kk | 300 | | sah-ky | 300 | | sah-ru | 300 | | sah-tr | 300 | | sah-uz | 300 | | tr-az | 500 | | tr-ba | 700 | | tr-en | 700 | | tr-kaa | 300 | | tr-kk | 500 | | tr-ky | 400 | | tr-ru | 700 | | tr-sah | 300 | | tr-uz | 600 | | uz-az | 600 | | uz-ba | 900 | | uz-en | 900 | | uz-kaa | 300 | | uz-kk | 700 | | uz-ky | 500 | | uz-ru | 900 | | uz-sah | 300 | | uz-tr | 600 | </details> ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers?
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? **Translators, annotators and dataset contributors** (in alphabetical order) Abilxayr Zholdybai Aigiz Kunafin Akylbek Khamitov Alperen Cantez Aydos Muxammadiyarov Doniyorbek Rafikjonov Erkinbek Vokhabov Ipek Baris Iskander Shakirov Madina Zokirjonova Mohiyaxon Uzoqova Mukhammadbektosh Khaydarov Nurlan Maharramli Petr Popov Rasul Karimov Sariya Kagarmanova Ziyodabonu Qobiljon qizi ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [MIT License](https://github.com/turkic-interlingua/til-mt/blob/master/xwmt/LICENSE) ### Citation Information ``` @inproceedings{mirzakhalov2021large, title={A Large-Scale Study of Machine Translation in Turkic Languages}, author={Mirzakhalov, Jamshidbek and Babu, Anoop and Ataman, Duygu and Kariev, Sherzod and Tyers, Francis and Abduraufov, Otabek and Hajili, Mammad and Ivanova, Sardana and Khaytbaev, Abror and Laverghetta Jr, Antonio and others}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing}, pages={5876--5890}, year={2021} } ``` ### Contributions This project was carried out with the help and contributions from dozens of individuals and organizations. We acknowledge and greatly appreciate each and every one of them: **Authors on the publications** (in alphabetical order) Abror Khaytbaev Ahsan Wahab Aigiz Kunafin Anoop Babu Antonio Laverghetta Jr. Behzodbek Moydinboyev Dr. Duygu Ataman Esra Onal Dr. Francis Tyers Jamshidbek Mirzakhalov Dr. John Licato Dr. Julia Kreutzer Mammad Hajili Mokhiyakhon Uzokova Dr. 
Orhan Firat Otabek Abduraufov Sardana Ivanova Shaxnoza Pulatova Sherzod Kariev Dr. Sriram Chellappan **Translators, annotators and dataset contributors** (in alphabetical order) Abilxayr Zholdybai Aigiz Kunafin Akylbek Khamitov Alperen Cantez Aydos Muxammadiyarov Doniyorbek Rafikjonov Erkinbek Vokhabov Ipek Baris Iskander Shakirov Madina Zokirjonova Mohiyaxon Uzoqova Mukhammadbektosh Khaydarov Nurlan Maharramli Petr Popov Rasul Karimov Sariya Kagarmanova Ziyodabonu Qobiljon qizi **Industry supporters** [Google Cloud](https://cloud.google.com/solutions/education) [Khan Academy Oʻzbek](https://uz.khanacademy.org/) [The Foundation for the Preservation and Development of the Bashkir Language](https://bsfond.ru/) Thanks to [@mirzakhalov](https://github.com/mirzakhalov) for adding this dataset.
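The per-pair record structure documented above (a single `translation` dict keyed by language code) can be handled with a small helper. A minimal sketch; the record below is shaped like the dataset's examples, but its text values are illustrative placeholders, not real dataset content:

```python
# A record shaped like this dataset's examples for the "ru-uz" pair;
# the sentence values are illustrative placeholders, not dataset content.
example = {"translation": {"ru": "Привет, мир!", "uz": "Salom, dunyo!"}}

def translation_pair(record, src, tgt):
    """Extract (source, target) strings from a `translation` record."""
    fields = record["translation"]
    return fields[src], fields[tgt]

src_text, tgt_text = translation_pair(example, "ru", "uz")
print(src_text, "->", tgt_text)
```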
turkish_movie_sentiment
--- annotations_creators: - found language_creators: - found language: - tr license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification - sentiment-scoring paperswithcode_id: null pretty_name: 'TurkishMovieSentiment: This dataset contains turkish movie reviews.' dataset_info: features: - name: point dtype: float32 - name: comment dtype: string - name: film_name dtype: string config_name: turkishmoviesentiment splits: - name: train num_bytes: 33954560 num_examples: 83227 download_size: 0 dataset_size: 33954560 --- # Dataset Card for TurkishMovieSentiment: This dataset contains turkish movie reviews. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks) - **Point of Contact:** [Mustafa 
Keskin](https://www.linkedin.com/in/mustfkeskin/)

### Dataset Summary

This dataset, sourced from Kaggle, consists of Turkish movie reviews scored between 0 and 5.

### Languages

The dataset is in Turkish.

## Dataset Structure

### Data Instances

**Example 1:**

**Comment:** Jean Reno denince zaten leon filmi gelir akla izlemeyen kalmamıştır ama kaldıysada ee ne duruyorsun hemen izle :), **Film_name:** Sevginin Gücü, **Point:** 5,0

**Example 2:**

**Comment:** Bence güzel bi film olmush.İzlenmeli.İnsana şükretmek gerektini hatırlatıyor.Ama cok da poh pohlanacak bi sey yapmamıslar, **Film_name:** Cinderella Man, **Point:** 2,5

### Data Fields

- **comment** (string): Contains a Turkish movie review.
- **film_name** (string): Film name in Turkish.
- **point** (float): A floating-point score in the range [0, 5].

### Data Splits

The dataset is not divided into train and test sets; only a single split is provided.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Discussion of Social Impact and Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/).

### Licensing Information

The data is under the [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/) license.

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
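The `point` values shown in the examples above use a Turkish comma decimal separator (e.g. `2,5`). A minimal sketch for normalizing such scores and deriving a coarse polarity label; the 3.0 threshold is an illustrative choice, not part of the dataset:

```python
def parse_point(raw):
    """Convert a comma-decimal score such as '2,5' into a float."""
    return float(str(raw).replace(",", "."))

def to_polarity(point, threshold=3.0):
    """Map a 0-5 review score to a coarse polarity (threshold is illustrative)."""
    return "positive" if point >= threshold else "negative"

score = parse_point("2,5")
print(score, to_polarity(score))  # 2.5 negative
```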
turkish_ner
--- annotations_creators: - machine-generated language_creators: - expert-generated language: - tr license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: TurkishNer dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: domain dtype: class_label: names: '0': architecture '1': basketball '2': book '3': business '4': education '5': fictional_universe '6': film '7': food '8': geography '9': government '10': law '11': location '12': military '13': music '14': opera '15': organization '16': people '17': religion '18': royalty '19': soccer '20': sports '21': theater '22': time '23': travel '24': tv - name: ner_tags sequence: class_label: names: '0': O '1': B-PERSON '2': I-PERSON '3': B-ORGANIZATION '4': I-ORGANIZATION '5': B-LOCATION '6': I-LOCATION '7': B-MISC '8': I-MISC splits: - name: train num_bytes: 177658278 num_examples: 532629 download_size: 204393976 dataset_size: 177658278 --- # Dataset Card for turkish_ner ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - 
[Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://arxiv.org/abs/1702.02363
- **Repository:** [Needs More Information]
- **Paper:** http://arxiv.org/abs/1702.02363
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** erayyildiz@ktu.edu.tr

### Dataset Summary

Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

Turkish

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

Only a training split is provided.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

H. Bahadir Sahin, Caglar Tirkaz, Eray Yildiz, Mustafa Tolga Eren and Omer Ozan Sonmez

### Licensing Information

Creative Commons Attribution 4.0 International

### Citation Information

```
@article{DBLP:journals/corr/SahinTYES17,
  author    = {H. Bahadir Sahin and Caglar Tirkaz and Eray Yildiz and Mustafa Tolga Eren and Omer Ozan Sonmez},
  title     = {Automatically Annotated Turkish Corpus for Named Entity Recognition and Text Categorization using Large-Scale Gazetteers},
  journal   = {CoRR},
  volume    = {abs/1702.02363},
  year      = {2017},
  url       = {http://arxiv.org/abs/1702.02363},
  archivePrefix = {arXiv},
  eprint    = {1702.02363},
  timestamp = {Mon, 13 Aug 2018 16:46:36 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/SahinTYES17.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@merveenoyan](https://github.com/merveenoyan) for adding this dataset.
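The card's metadata lists BIO tags (`O`, `B-PERSON`, `I-PERSON`, and so on for organization, location and misc). A minimal sketch of decoding a tagged token sequence into entity spans; the token sequence below is a made-up illustration, not a dataset example:

```python
# Tag list exactly as ordered in this card's metadata.
TAGS = ["O", "B-PERSON", "I-PERSON", "B-ORGANIZATION", "I-ORGANIZATION",
        "B-LOCATION", "I-LOCATION", "B-MISC", "I-MISC"]

def decode_entities(tokens, tag_ids):
    """Group BIO tag ids into (entity_type, text) spans."""
    entities, current, ctype = [], [], None
    for tok, tid in zip(tokens, tag_ids):
        tag = TAGS[tid]
        if tag.startswith("B-"):
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)
        else:
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        entities.append((ctype, " ".join(current)))
    return entities

# Illustrative (invented) sentence: "Mustafa Kemal dün Ankara'ya gitti"
tokens = ["Mustafa", "Kemal", "dün", "Ankara", "'ya", "gitti"]
tag_ids = [1, 2, 0, 5, 0, 0]
print(decode_entities(tokens, tag_ids))
```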
turkish_product_reviews
---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Turkish Product Reviews
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: sentiment
    dtype:
      class_label:
        names:
          '0': negative
          '1': positive
  splits:
  - name: train
    num_bytes: 43369710
    num_examples: 235165
  download_size: 13184332
  dataset_size: 43369710
---

# Dataset Card for Turkish Product Reviews

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** [Fatih Barmanbay](https://github.com/fthbrmnby)

### Dataset Summary

This Turkish Product Reviews Dataset contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews.
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Turkish.

## Dataset Structure

### Data Instances

**Example 1:**

**sentence:** beklentimin altında bir ürün kaliteli değil **sentiment:** 0 (negative)

**Example 2:**

**sentence:** fiyat ve performans olarak gayet iyi **sentiment:** 1 (positive)

### Data Fields

- **sentence** (string): Contains a Turkish product review.
- **sentiment** (int): 0 (negative) or 1 (positive).

### Data Splits

The dataset is not divided into train and test sets; only a single split is provided.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by [Fatih Barmanbay](https://github.com/fthbrmnby).

### Licensing Information

The data is under the [CC-BY-SA-4.0 License](https://github.com/fthbrmnby/turkish-text-data/blob/master/LICENCE)

### Citation Information

No citation available for this dataset.

### Contributions

Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset.
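With roughly 94% positive reviews, the class imbalance noted in the summary matters when training a classifier. A minimal sketch of inverse-frequency class weights computed from the counts reported on this card (the weighting scheme itself is a common remedy, not something the dataset prescribes):

```python
# Label counts as reported in the dataset summary
counts = {"positive": 220_284, "negative": 14_881}
total = sum(counts.values())  # 235,165 reviews in all

# Inverse-frequency weights: rarer classes get proportionally larger weights
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights)
```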
turkish_shrinked_ner
--- annotations_creators: - machine-generated language_creators: - expert-generated language: - tr license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|other-turkish_ner task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: TurkishShrinkedNer dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academic '2': I-academic '3': B-academic_person '4': I-academic_person '5': B-aircraft '6': I-aircraft '7': B-album_person '8': I-album_person '9': B-anatomy '10': I-anatomy '11': B-animal '12': I-animal '13': B-architect_person '14': I-architect_person '15': B-capital '16': I-capital '17': B-chemical '18': I-chemical '19': B-clothes '20': I-clothes '21': B-country '22': I-country '23': B-culture '24': I-culture '25': B-currency '26': I-currency '27': B-date '28': I-date '29': B-food '30': I-food '31': B-genre '32': I-genre '33': B-government '34': I-government '35': B-government_person '36': I-government_person '37': B-language '38': I-language '39': B-location '40': I-location '41': B-material '42': I-material '43': B-measure '44': I-measure '45': B-medical '46': I-medical '47': B-military '48': I-military '49': B-military_person '50': I-military_person '51': B-nation '52': I-nation '53': B-newspaper '54': I-newspaper '55': B-organization '56': I-organization '57': B-organization_person '58': I-organization_person '59': B-person '60': I-person '61': B-production_art_music '62': I-production_art_music '63': B-production_art_music_person '64': I-production_art_music_person '65': B-quantity '66': I-quantity '67': B-religion '68': I-religion '69': B-science '70': I-science '71': B-shape '72': I-shape '73': B-ship '74': I-ship '75': B-software '76': I-software '77': B-space '78': I-space '79': B-space_person '80': I-space_person '81': B-sport '82': I-sport '83': B-sport_name '84': I-sport_name '85': 
B-sport_person '86': I-sport_person '87': B-structure '88': I-structure '89': B-subject '90': I-subject '91': B-tech '92': I-tech '93': B-train '94': I-train '95': B-vehicle '96': I-vehicle splits: - name: train num_bytes: 200728389 num_examples: 614515 download_size: 0 dataset_size: 200728389
---

# Dataset Card for turkish_shrinked_ner

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://www.kaggle.com/behcetsenturk

### Dataset Summary

A shrinked, processed version (48 entity types) of turkish_ner. Original turkish_ner dataset: Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers.
The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains.

Shrinked entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

Turkish

## Dataset Structure

### Data Instances

[Needs More Information]

### Data Fields

[Needs More Information]

### Data Splits

Only a training split is provided.

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Behcet Senturk

### Licensing Information

Creative Commons Attribution 4.0 International

### Citation Information

[Needs More Information]

### Contributions

Thanks to [@bhctsntrk](https://github.com/bhctsntrk) for adding this dataset.
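The 97 labels in the card's metadata are exactly `O` plus a `B-`/`I-` pair for each of the 48 entity types listed above, so the full label list can be regenerated programmatically. A minimal sketch:

```python
# The 48 entity types, in the order they appear in the card's metadata.
ENTITY_TYPES = [
    "academic", "academic_person", "aircraft", "album_person", "anatomy",
    "animal", "architect_person", "capital", "chemical", "clothes",
    "country", "culture", "currency", "date", "food", "genre",
    "government", "government_person", "language", "location", "material",
    "measure", "medical", "military", "military_person", "nation",
    "newspaper", "organization", "organization_person", "person",
    "production_art_music", "production_art_music_person", "quantity",
    "religion", "science", "shape", "ship", "software", "space",
    "space_person", "sport", "sport_name", "sport_person", "structure",
    "subject", "tech", "train", "vehicle",
]

# "O" plus a B-/I- tag per type reproduces the 97-label scheme.
LABELS = ["O"] + [f"{prefix}-{t}" for t in ENTITY_TYPES for prefix in ("B", "I")]
print(len(LABELS))  # 97
```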
turku_ner_corpus
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - fi license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Turku NER corpus dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': B-DATE '1': B-EVENT '2': B-LOC '3': B-ORG '4': B-PER '5': B-PRO '6': I-DATE '7': I-EVENT '8': I-LOC '9': I-ORG '10': I-PER '11': I-PRO '12': O splits: - name: train num_bytes: 3257447 num_examples: 12217 - name: validation num_bytes: 364223 num_examples: 1364 - name: test num_bytes: 416644 num_examples: 1555 download_size: 1659911 dataset_size: 4038314 --- # Dataset Card for Turku NER corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://turkunlp.org/fin-ner.html - **Repository:** 
https://github.com/TurkuNLP/turku-ner-corpus/
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.567/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** {jouni.a.luoma,mhtoin,maria.h.pyykonen,mavela,sampo.pyysalo}@utu.fi

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
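One detail worth noting from this card's metadata: the label order puts all `B-*` tags first, then all `I-*` tags, with `O` last at index 12, whereas many NER datasets put `O` at index 0. A minimal sketch of the resulting mappings:

```python
# Label order exactly as in this card's metadata; note "O" is index 12.
TURKU_TAGS = ["B-DATE", "B-EVENT", "B-LOC", "B-ORG", "B-PER", "B-PRO",
              "I-DATE", "I-EVENT", "I-LOC", "I-ORG", "I-PER", "I-PRO", "O"]

tag2id = {tag: i for i, tag in enumerate(TURKU_TAGS)}
id2tag = {i: tag for tag, i in tag2id.items()}
print(tag2id["O"])  # 12
```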
tweet_eval
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M - 10K<n<100K - 1K<n<10K - n<1K source_datasets: - extended|other-tweet-datasets task_categories: - text-classification task_ids: - intent-classification - multi-class-classification - sentiment-classification paperswithcode_id: tweeteval pretty_name: TweetEval configs: - emoji - emotion - hate - irony - offensive - sentiment - stance_abortion - stance_atheism - stance_climate - stance_feminist - stance_hillary dataset_info: - config_name: emoji features: - name: text dtype: string - name: label dtype: class_label: names: '0': ❤ '1': 😍 '2': 😂 '3': 💕 '4': 🔥 '5': 😊 '6': 😎 '7': ✨ '8': 💙 '9': 😘 '10': 📷 '11': 🇺🇸 '12': ☀ '13': 💜 '14': 😉 '15': 💯 '16': 😁 '17': 🎄 '18': 📸 '19': 😜 splits: - name: train num_bytes: 3803187 num_examples: 45000 - name: test num_bytes: 4255921 num_examples: 50000 - name: validation num_bytes: 396083 num_examples: 5000 download_size: 7628721 dataset_size: 8455191 - config_name: emotion features: - name: text dtype: string - name: label dtype: class_label: names: '0': anger '1': joy '2': optimism '3': sadness splits: - name: train num_bytes: 338875 num_examples: 3257 - name: test num_bytes: 146649 num_examples: 1421 - name: validation num_bytes: 38277 num_examples: 374 download_size: 483813 dataset_size: 523801 - config_name: hate features: - name: text dtype: string - name: label dtype: class_label: names: '0': non-hate '1': hate splits: - name: train num_bytes: 1223654 num_examples: 9000 - name: test num_bytes: 428938 num_examples: 2970 - name: validation num_bytes: 154148 num_examples: 1000 download_size: 1703208 dataset_size: 1806740 - config_name: irony features: - name: text dtype: string - name: label dtype: class_label: names: '0': non_irony '1': irony splits: - name: train num_bytes: 259191 num_examples: 2862 - name: test num_bytes: 75901 num_examples: 784 - name: validation num_bytes: 
86021 num_examples: 955 download_size: 385613 dataset_size: 421113 - config_name: offensive features: - name: text dtype: string - name: label dtype: class_label: names: '0': non-offensive '1': offensive splits: - name: train num_bytes: 1648069 num_examples: 11916 - name: test num_bytes: 135477 num_examples: 860 - name: validation num_bytes: 192421 num_examples: 1324 download_size: 1863383 dataset_size: 1975967 - config_name: sentiment features: - name: text dtype: string - name: label dtype: class_label: names: '0': negative '1': neutral '2': positive splits: - name: train num_bytes: 5425142 num_examples: 45615 - name: test num_bytes: 1279548 num_examples: 12284 - name: validation num_bytes: 239088 num_examples: 2000 download_size: 6465841 dataset_size: 6943778 - config_name: stance_abortion features: - name: text dtype: string - name: label dtype: class_label: names: '0': none '1': against '2': favor splits: - name: train num_bytes: 68698 num_examples: 587 - name: test num_bytes: 33175 num_examples: 280 - name: validation num_bytes: 7661 num_examples: 66 download_size: 102062 dataset_size: 109534 - config_name: stance_atheism features: - name: text dtype: string - name: label dtype: class_label: names: '0': none '1': against '2': favor splits: - name: train num_bytes: 54779 num_examples: 461 - name: test num_bytes: 25720 num_examples: 220 - name: validation num_bytes: 6324 num_examples: 52 download_size: 80947 dataset_size: 86823 - config_name: stance_climate features: - name: text dtype: string - name: label dtype: class_label: names: '0': none '1': against '2': favor splits: - name: train num_bytes: 40253 num_examples: 355 - name: test num_bytes: 19929 num_examples: 169 - name: validation num_bytes: 4805 num_examples: 40 download_size: 60463 dataset_size: 64987 - config_name: stance_feminist features: - name: text dtype: string - name: label dtype: class_label: names: '0': none '1': against '2': favor splits: - name: train num_bytes: 70513 num_examples: 597 - 
name: test num_bytes: 33309 num_examples: 285 - name: validation num_bytes: 8039 num_examples: 67 download_size: 104257 dataset_size: 111861 - config_name: stance_hillary features: - name: text dtype: string - name: label dtype: class_label: names: '0': none '1': against '2': favor splits: - name: train num_bytes: 69600 num_examples: 620 - name: test num_bytes: 34491 num_examples: 295 - name: validation num_bytes: 7536 num_examples: 69 download_size: 103745 dataset_size: 111627 train-eval-index: - config: emotion task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted - config: hate task: text-classification task_id: binary_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary args: average: binary - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted - config: irony task: text-classification task_id: binary_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy 
name: Accuracy - type: f1 name: F1 binary args: average: binary - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted - config: offensive task: text-classification task_id: binary_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary args: average: binary - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted - config: sentiment task: text-classification task_id: multi_class_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for tweet_eval ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset 
Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]

### Dataset Summary

TweetEval consists of seven heterogeneous Twitter tasks, all framed as multi-class tweet classification: irony, hate, offensive, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.

### Supported Tasks and Leaderboards

- `text_classification`: The dataset can be used to train a sequence classification model, for example with Hugging Face Transformers.

### Languages

The text in the dataset is in English, as spoken by Twitter users.
## Dataset Structure ### Data Instances An instance from `emoji` config: ``` {'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'} ``` An instance from `emotion` config: ``` {'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"} ``` An instance from `hate` config: ``` {'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'} ``` An instance from `irony` config: ``` {'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'} ``` An instance from `offensive` config: ``` {'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'} ``` An instance from `sentiment` config: ``` {'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'} ``` An instance from `stance_abortion` config: ``` {'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'} ``` An instance from `stance_atheism` config: ``` {'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'} ``` An instance from `stance_climate` config: ``` {'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'} ``` An instance from `stance_feminist` config: ``` {'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"} ``` An instance from `stance_hillary` config: ``` {'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. 
Oink oink #Hillary #SemST"} ``` ### Data Fields For `emoji` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: ❤ `1`: 😍 `2`: 😂 `3`: 💕 `4`: 🔥 `5`: 😊 `6`: 😎 `7`: ✨ `8`: 💙 `9`: 😘 `10`: 📷 `11`: 🇺🇸 `12`: ☀ `13`: 💜 `14`: 😉 `15`: 💯 `16`: 😁 `17`: 🎄 `18`: 📸 `19`: 😜 For `emotion` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: anger `1`: joy `2`: optimism `3`: sadness For `hate` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: non-hate `1`: hate For `irony` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: non_irony `1`: irony For `offensive` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: non-offensive `1`: offensive For `sentiment` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: negative `1`: neutral `2`: positive For `stance_abortion` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: none `1`: against `2`: favor For `stance_atheism` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: none `1`: against `2`: favor For `stance_climate` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: none `1`: against `2`: favor For `stance_feminist` config: - `text`: a `string` feature containing the tweet. 
- `label`: an `int` classification label with the following mapping: `0`: none `1`: against `2`: favor For `stance_hillary` config: - `text`: a `string` feature containing the tweet. - `label`: an `int` classification label with the following mapping: `0`: none `1`: against `2`: favor ### Data Splits | name | train | validation | test | | --------------- | ----- | ---------- | ----- | | emoji | 45000 | 5000 | 50000 | | emotion | 3257 | 374 | 1421 | | hate | 9000 | 1000 | 2970 | | irony | 2862 | 955 | 784 | | offensive | 11916 | 1324 | 860 | | sentiment | 45615 | 2000 | 12284 | | stance_abortion | 587 | 66 | 280 | | stance_atheism | 461 | 52 | 220 | | stance_climate | 355 | 40 | 169 | | stance_feminist | 597 | 67 | 285 | | stance_hillary | 620 | 69 | 295 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP. ### Licensing Information This is not a single dataset; each subset has its own license (the collection itself does not have additional restrictions).
All of the datasets require complying with the Twitter [Terms Of Service](https://twitter.com/tos) and the Twitter API [Terms Of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy). Additionally, the per-dataset licenses are: - emoji: Undefined - emotion (EmoInt): Undefined - hate (HateEval): permission required [here](http://hatespeech.di.unito.it/hateval.html) - irony: Undefined - offensive: Undefined - sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ) - stance: Undefined ### Citation Information ``` @inproceedings{barbieri2020tweeteval, title={{TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification}}, author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo}, booktitle={Proceedings of Findings of EMNLP}, year={2020} } ``` If you use any of the TweetEval datasets, please cite their original publications: #### Emotion Recognition: ``` @inproceedings{mohammad2018semeval, title={Semeval-2018 task 1: Affect in tweets}, author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana}, booktitle={Proceedings of the 12th international workshop on semantic evaluation}, pages={1--17}, year={2018} } ``` #### Emoji Prediction: ``` @inproceedings{barbieri2018semeval, title={Semeval 2018 task 2: Multilingual emoji prediction}, author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio}, booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation}, pages={24--33}, year={2018} } ``` #### Irony Detection: ``` @inproceedings{van2018semeval, title={Semeval-2018 task 3: Irony detection in english tweets}, author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique}, booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50}, year={2018} } ``` #### Hate Speech Detection: ``` @inproceedings{basile-etal-2019-semeval, title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter", author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela", booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation", year = "2019", address = "Minneapolis, Minnesota, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/S19-2007", doi = "10.18653/v1/S19-2007", pages = "54--63" } ``` #### Offensive Language Identification: ``` @inproceedings{zampieri2019semeval, title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)}, author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh}, booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation}, pages={75--86}, year={2019} } ``` #### Sentiment Analysis: ``` @inproceedings{rosenthal2017semeval, title={SemEval-2017 task 4: Sentiment analysis in Twitter}, author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav}, booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)}, pages={502--518}, year={2017} } ``` #### Stance Detection: ``` @inproceedings{mohammad2016semeval, title={Semeval-2016 task 6: Detecting stance in tweets}, author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin}, booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)}, pages={31--41}, year={2016} } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) and [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
tweet_qa
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: tweetqa pretty_name: TweetQA dataset_info: features: - name: Question dtype: string - name: Answer sequence: string - name: Tweet dtype: string - name: qid dtype: string splits: - name: train num_bytes: 2770036 num_examples: 10692 - name: test num_bytes: 473730 num_examples: 1979 - name: validation num_bytes: 295435 num_examples: 1086 download_size: 1573980 dataset_size: 3539201 --- # Dataset Card for TweetQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [TweetQA homepage](https://tweetqa.github.io/) - **Repository:** - **Paper:** [TWEETQA: A Social Media Focused Question Answering Dataset](https://arxiv.org/abs/1907.06292) - **Leaderboard:** [TweetQA Leaderboard](https://tweetqa.github.io/) - 
**Point of Contact:** [Wenhan Xiong](xwhan@cs.ucsb.edu) ### Dataset Summary As social media becomes an increasingly popular platform on which news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, this is the first large-scale dataset for QA over social media data. To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Human annotators are then asked to write questions and answers about these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, the answers are allowed to be abstractive. The task requires a model to read a short tweet and a question and output a text phrase (which does not need to be in the tweet) as the answer. ### Supported Tasks and Leaderboards - `question-answering`: The dataset can be used to train a model for open-domain question answering where the task is to answer the given questions about a tweet. Performance is measured by comparing the model answers to the annotated ground truth and calculating the BLEU-1/METEOR/ROUGE-L scores. This task has an active leaderboard which can be found [here](https://tweetqa.github.io/) and ranks models based on [BLEU-1](https://huggingface.co/metrics/blue), [METEOR](https://huggingface.co/metrics/meteor) and [ROUGE-L](https://huggingface.co/metrics/rouge). ### Languages English. ## Dataset Structure ### Data Instances Sample data: ``` { "Question": "who is the tallest host?", "Answer": ["sam bee","sam bee"], "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. 
#alternativefacts\u2014 Full Frontal (@FullFrontalSamB) January 22, 2017", "qid": "3554ee17d86b678be34c4dc2c04e334f" } ``` The test split doesn't include answers, so the Answer field is an empty list. ### Data Fields - `Question`: a question based on information from a tweet - `Answer`: list of possible answers from the tweet - `Tweet`: source tweet - `qid`: question id ### Data Splits The dataset is split into train, validation and test sets. The train set contains 10,692 examples, the validation set 1,086 and the test set 1,979. ## Dataset Creation ### Curation Rationale As social media becomes an increasingly popular platform on which news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, this is the first large-scale dataset for QA over social media data. To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Human annotators are then asked to write questions and answers about these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, the answers are allowed to be abstractive. The task requires a model to read a short tweet and a question and output a text phrase (which does not need to be in the tweet) as the answer. ### Source Data #### Initial Data Collection and Normalization The authors look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, they first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. 
Then, they filter out the tweets that heavily rely on attached media to convey information. For this, they utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005 (He et al., 2017) to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled arguments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, 22.8% of them were filtered via semantic role labeling. For tweets from NBC, 24.1% of the tweets were filtered. #### Who are the source language producers? Twitter users. ### Annotations #### Annotation process Amazon Mechanical Turk workers were used to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), the authors ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure quality, they require the workers to be located in major English-speaking countries (i.e. Canada, US, and UK) and to have an acceptance rate larger than 95%. Since the authors use tweets as context, a lot of important information is contained in hashtags or even emojis. Instead of only showing the text to the workers, they use JavaScript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and helps them compose better questions. To avoid trivial questions that can be simply answered by superficial text-matching methods, or overly challenging questions that require background knowledge, the authors explicitly state the following items in the HIT instructions for question writing: - No yes-no questions should be asked. - The question should have at least five words. - Videos, images or inserted links should not be considered. - No background knowledge should be required to answer the question. 
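Two of the worker instructions above (no yes-no questions, at least five words) translate naturally into a mechanical check. This is a hypothetical helper for illustration; the authors' actual enforcement was done via post-hoc filtering of the collected pairs:

```python
# Hypothetical sketch of two HIT instructions as a mechanical check:
# reject yes-no questions and questions with fewer than five words.
YES_NO_STARTERS = {"is", "are", "was", "were", "do", "does", "did",
                   "can", "could", "will", "would", "has", "have", "had"}

def follows_instructions(question):
    words = question.strip().rstrip("?").split()
    if len(words) < 5:
        return False  # "The question should have at least five words."
    if words[0].lower() in YES_NO_STARTERS:
        return False  # "No yes-no questions should be asked."
    return True
```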
To help the workers better follow the instructions, they also include a representative example showing both good and bad questions or answers in the instructions. As for the answers, since the context they consider is much shorter than the context of previous datasets, they do not restrict the answers to be in the tweet; otherwise, the task could be simplified into a classification problem. The workers are allowed to write their answers in their own words, but the authors require the answers to be brief and directly inferable from the tweets. After they retrieve the QA pairs from all HITs, they conduct further post-filtering to remove the pairs from workers who obviously did not follow instructions. They remove QA pairs with yes/no answers. Questions with fewer than five words are also filtered out. This process filtered 13% of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. All QA pairs were written by 492 individual workers. #### Who are the annotators? Amazon Mechanical Turk workers. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > It is also worth noting that the data collected from social media can not only capture events and developments in real-time but also capture individual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Table 1. > Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data: - Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets relate to the authors. - Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. 
Our TWEETQA also requires understanding some tweet-specific English, like conversation-style English. - Understanding of user IDs & hashtags: Tweets often contain user IDs and hashtags, which are single special tokens. Understanding these special tokens is important to answer person- or event-related questions. ### Other Known Limitations [More Information Needed] ## Additional Information The annotated answers were validated by the authors as follows: for the purposes of human performance evaluation and inter-annotator agreement checking, the authors launch a different set of HITs to ask workers to answer questions in the test and development sets. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label the questions as “NA” if they think the questions are not answerable. They find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%). Since the answers collected at this step and at the previous step are written by different workers, the answers can be written in different text forms even when they are semantically equivalent. For example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, they manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answer pairs are semantically equivalent, 2% are partially equivalent (one of them is incomplete) and 8% are totally inconsistent. The answers collected at this step are also used to measure human performance; 59 individual workers participated in this process. ### Dataset Curators Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang. ### Licensing Information CC BY-SA 4.0. 
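The leaderboard metrics mentioned under Supported Tasks compare a generated answer against the annotated answers. As a rough illustration only (not the official evaluation script), BLEU-1 reduces to clipped unigram precision against the best-matching reference; text normalization, smoothing and the brevity penalty are omitted here:

```python
# Rough sketch of BLEU-1 as clipped unigram precision against the best
# reference. The official TweetQA evaluation additionally normalizes text
# and applies smoothing/brevity penalty, which are omitted here.
from collections import Counter

def bleu1(candidate, references):
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    best = 0.0
    for ref in references:
        ref_counts = Counter(ref.lower().split())
        clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
        best = max(best, clipped / len(cand))
    return best
```

On the sample instance above, `bleu1("sam bee", ["sam bee", "sam bee"])` yields a perfect score.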
### Citation Information ``` @inproceedings{xiong2019tweetqa, title={TweetQA: A Social Media Focused Question Answering Dataset}, author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang}, booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, year={2019} } ``` ### Contributions Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
tweets_ar_en_parallel
--- annotations_creators: - expert-generated - no-annotation language_creators: - found language: - ar - en license: - apache-2.0 multilinguality: - translation size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: bilingual-corpus-of-arabic-english-parallel pretty_name: Bilingual Corpus of Arabic-English Parallel Tweets tags: - tweets-translation dataset_info: - config_name: parallelTweets features: - name: ArabicTweetID dtype: int64 - name: EnglishTweetID dtype: int64 splits: - name: test num_bytes: 2667296 num_examples: 166706 download_size: 2937626 dataset_size: 2667296 - config_name: accountList features: - name: account dtype: string splits: - name: test num_bytes: 20108 num_examples: 1389 download_size: 2937626 dataset_size: 20108 - config_name: countryTopicAnnotation features: - name: account dtype: string - name: country dtype: class_label: names: '0': QA '1': BH '2': AE '3': OM '4': SA '5': PL '6': JO '7': IQ '8': Other '9': EG '10': KW '11': SY - name: topic dtype: class_label: names: '0': Gov '1': Culture '2': Education '3': Sports '4': Travel '5': Events '6': Business '7': Science '8': Politics '9': Health '10': Governoment '11': Media splits: - name: test num_bytes: 6036 num_examples: 200 download_size: 2937626 dataset_size: 6036 --- # Dataset Card for Bilingual Corpus of Arabic-English Parallel Tweets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the 
Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Bilingual Corpus of Arabic-English Parallel Tweets](https://alt.qcri.org/resources/bilingual_corpus_of_parallel_tweets) - **Repository:** - **Paper:** [ACL Anthology](https://www.aclweb.org/anthology/2020.bucc-1.3/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Twitter users often post parallel tweets: tweets that contain the same content but are written in different languages. Parallel tweets can be an important resource for developing machine translation (MT) systems, among other natural language processing (NLP) tasks. This resource is the result of a generic method for collecting parallel tweets. Using the method, we compiled a bilingual corpus of English-Arabic parallel tweets and a list of Twitter accounts that post English-Arabic tweets regularly. Additionally, we annotated a subset of Twitter accounts with their countries of origin and topics of interest, which provides insights about the population who post parallel tweets. 
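Note that the `parallelTweets` config releases tweet-ID pairs rather than text, so texts must be hydrated separately (e.g. via the Twitter API). Assuming a local id-to-text cache, pairing is then a simple join; this is an illustrative sketch, not part of the released tooling:

```python
# Illustrative sketch: join Arabic/English tweet-ID pairs with a local
# id -> text cache. Hydration itself (via the Twitter API) is out of scope;
# ids missing from the cache (e.g. deleted tweets) are skipped.
def pair_texts(id_pairs, text_by_id):
    """id_pairs: iterable of dicts with ArabicTweetID / EnglishTweetID keys."""
    out = []
    for row in id_pairs:
        ar = text_by_id.get(row["ArabicTweetID"])
        en = text_by_id.get(row["EnglishTweetID"])
        if ar is not None and en is not None:
            out.append((ar, en))
    return out
```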
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances parallelTweets: ``` { "ArabicTweetID": 981111245209243600, "EnglishTweetID": 981111450432401400 } ``` accountList: ``` { 'account': 'HukoomiQatar' } ``` countryTopicAnnotation: ``` { 'account': 'HukoomiQatar', 'country': 'QA', 'topic': 'Gov' } ``` ### Data Fields parallelTweets: - `ArabicTweetID` (int) - `EnglishTweetID` (int) accountList: - `account` (str) countryTopicAnnotation: - `account` (str) - `country` (class label): One of: - "QA" - "BH" - "AE" - "OM" - "SA" - "PL" - "JO" - "IQ" - "Other" - "EG" - "KW" - "SY" - `topic` (class label): One of: - "Gov" - "Culture" - "Education" - "Sports" - "Travel" - "Events" - "Business" - "Science" - "Politics" - "Health" - "Governoment" - "Media" ### Data Splits All configurations have only one split: "test". ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information It is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). 
### Citation Information ``` @inproceedings{Mubarak2020bilingualtweets, title={Constructing a Bilingual Corpus of Parallel Tweets}, author={Mubarak, Hamdy and Hassan, Sabit and Abdelali, Ahmed}, booktitle={Proceedings of 13th Workshop on Building and Using Comparable Corpora (BUCC)}, address={Marseille, France}, year={2020} } ``` ### Contributions Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
tweets_hate_speech_detection
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - gpl-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Tweets Hate Speech Detection dataset_info: features: - name: label dtype: class_label: names: '0': no-hate-speech '1': hate-speech - name: tweet dtype: string splits: - name: train num_bytes: 3191888 num_examples: 31962 - name: test num_bytes: 1711606 num_examples: 17197 download_size: 4738708 dataset_size: 4903494 train-eval-index: - config: default task: text-classification task_id: binary_classification splits: train_split: train col_mapping: tweet: text label: target metrics: - type: accuracy name: Accuracy - type: f1 name: F1 binary args: average: binary - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for Tweets Hate Speech Detection ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of 
Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Home](https://github.com/sharmaroshan/Twitter-Sentiment-Analysis) - **Repository:** [Repo](https://github.com/sharmaroshan/Twitter-Sentiment-Analysis/blob/master/train_tweet.csv) - **Paper:** - **Leaderboard:** - **Point of Contact:** [Darshan Gandhi](darshangandhi1151@gmail.com) ### Dataset Summary The objective of this task is to detect hate speech in tweets. For the sake of simplicity, a tweet is said to contain hate speech if it has a racist or sexist sentiment associated with it. So, the task is to separate racist or sexist tweets from other tweets. Formally, given a training sample of tweets and labels, where label ‘1’ denotes the tweet is racist/sexist and label ‘0’ denotes the tweet is not racist/sexist, the objective is to predict the labels on the given test dataset. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The tweets are primarily in English. ## Dataset Structure ### Data Instances Each instance carries a label denoting whether the tweet is hate speech or not: ``` {'label': 0, # not hate speech 'tweet': ' @user when a father is dysfunctional and is so selfish he drags his kids into his dysfunction. #run'} ``` ### Data Fields * label: 1 - hate speech, 0 - not hate speech. * tweet: content of the tweet as a string. ### Data Splits The training split contains 31,962 entries and the test split 17,197 entries. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Crowdsourced from users' tweets. #### Who are the source language producers? 
Crowdsourced from Twitter. ### Annotations #### Annotation process The data has been preprocessed and a model has been trained to assign the relevant label to each tweet. #### Who are the annotators? The data has been provided by Roshan Sharma. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset With the help of this dataset, one can understand more about human sentiment and analyze situations in which a person resorts to hateful/racist comments. ### Discussion of Biases The data could be cleaned up further for additional purposes, such as applying better feature extraction techniques. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Roshan Sharma ### Licensing Information [Information](https://github.com/sharmaroshan/Twitter-Sentiment-Analysis/blob/master/LICENSE) ### Citation Information [Citation](https://github.com/sharmaroshan/Twitter-Sentiment-Analysis/blob/master/CONTRIBUTING.md) ### Contributions Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset.
twi_text_c3
--- annotations_creators: - expert-generated language_creators: - found language: - tw license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: Twi Text C3 dataset_info: features: - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 71198430 num_examples: 675772 download_size: 69170842 dataset_size: 71198430 --- # Dataset Card for Twi Text C3 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.aclweb.org/anthology/2020.lrec-1.335 - **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding/ - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335 - **Leaderboard:** - **Point of Contact:** [Kwabena Amponsah-Kaakyire](mailto:s8kwampo@stud.uni-saarland.de) ### Dataset Summary Twi Text C3 was collected from various sources from the web 
(Bible, JW300, Wikipedia, etc.) to compare pre-trained word embeddings (fastText) with embeddings trained on curated Twi texts. The dataset consists of clean text (i.e., the Bible) and noisy texts (with incorrect orthography and mixed dialects) from other online sources such as Wikipedia and JW300. ### Supported Tasks and Leaderboards For training word embeddings and language models on Twi texts. ### Languages The language supported is Twi. ## Dataset Structure ### Data Instances A data point is one sentence per line. { 'text': 'mfitiaseɛ no onyankopɔn bɔɔ ɔsoro ne asaase' } ### Data Fields - `text`: a `string` feature; one sentence of text per line ### Data Splits Contains only the training split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Twi. ### Source Data #### Initial Data Collection and Normalization The dataset comes from various sources on the web: the Bible, JW300, and Wikipedia. See Table 1 in the [paper](https://www.aclweb.org/anthology/2020.lrec-1.335/) for a summary of the dataset and its statistics. #### Who are the source language producers? [Jehovah's Witnesses](https://www.jw.org/) (JW300) [Twi Bible](http://www.bible.com/) [Twi Wikipedia](dumps.wikimedia.org/twwiki) ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The dataset is biased toward the religious domain (Christianity) because of the inclusion of JW300 and the Bible. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The datasets were curated by Kwabena Amponsah-Kaakyire, Jesujoba Alabi, and David Adelani, students of Saarland University, Saarbrücken, Germany. 
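Since each instance is a single sentence of plain text, a typical first step before training embeddings is whitespace tokenization and vocabulary counting. A minimal sketch using the documented example instance (the `build_vocab` helper is illustrative, not part of the dataset):

```python
from collections import Counter

def build_vocab(sentences, min_count=1):
    """Whitespace-tokenize sentences and keep tokens seen >= min_count times."""
    counts = Counter()
    for s in sentences:
        counts.update(s.split())
    return {tok: n for tok, n in counts.items() if n >= min_count}

# The documented example instance: one sentence per `text` field.
# With the `datasets` library one would typically replace `sample` with
# (r["text"] for r in load_dataset("twi_text_c3", split="train")).
sample = ["mfitiaseɛ no onyankopɔn bɔɔ ɔsoro ne asaase"]
vocab = build_vocab(sample)
```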
### Licensing Information The data is under the [Creative Commons Attribution-NonCommercial 4.0 ](https://creativecommons.org/licenses/by-nc/4.0/legalcode) ### Citation Information ``` @inproceedings{alabi-etal-2020-massive, title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. 
We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. As output of the work, we provide corpora, embeddings and the test suits for both languages.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
twi_wordsim353
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - en - tw license: - unknown multilinguality: - multilingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - text-scoring - semantic-similarity-scoring paperswithcode_id: null pretty_name: Twi Wordsim-353 dataset_info: features: - name: twi1 dtype: string - name: twi2 dtype: string - name: similarity dtype: float32 splits: - name: test num_bytes: 7285 num_examples: 274 download_size: 6141 dataset_size: 7285 --- # Dataset Card for Twi Wordsim-353 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.aclweb.org/anthology/2020.lrec-1.335/ - **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/ - **Leaderboard:** - - **Point of Contact:** [Kwabena Amponsah-Kaakyire](mailto:s8kwampo@stud.uni-saarland.de) ### Dataset Summary A translation of the word pair 
similarity dataset wordsim-353 to Twi. However, only 274 of the 353 word pairs were translated. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Twi (ISO 639-1: tw) ## Dataset Structure ### Data Instances An instance consists of a pair of words along with their similarity rating. The fields hold the Twi translations of the original English word pairs (from wordsim-353); the similarity ratings are carried over from the English dataset. ### Data Fields - `twi1`: the first word of the pair; translation to Twi - `twi2`: the second word of the pair; translation to Twi - `similarity`: similarity rating according to the English dataset ### Data Splits Only the test data is available. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{alabi-etal-2020-massive, title = "Massive vs. 
Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi", author = "Alabi, Jesujoba and Amponsah-Kaakyire, Kwabena and Adelani, David and Espa{\~n}a-Bonet, Cristina", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.335", pages = "2754--2762", abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. 
We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. As output of the work, we provide corpora, embeddings and the test suits for both languages.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
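A common use of the 274 pairs is to score an embedding model's word-pair similarities against the human `similarity` ratings with Spearman rank correlation, as in the paper's evaluation. A minimal, dependency-free sketch (the sample ratings and scores below are made up; real usage would read `twi1`/`twi2`/`similarity` from the test split and score each pair with the model):

```python
def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up ratings standing in for the `similarity` column, and made-up
# cosine similarities standing in for an embedding model's scores.
gold = [7.5, 3.0, 1.2, 6.1]
model_scores = [0.81, 0.40, 0.05, 0.66]
rho = spearman(gold, model_scores)
```

In practice one would use `scipy.stats.spearmanr`, which also handles ties; the hand-rolled version above just makes the computation explicit.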
tydiqa
--- pretty_name: TyDi QA annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar - bn - en - fi - id - ja - ko - ru - sw - te - th license: - apache-2.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: tydi-qa dataset_info: - config_name: primary_task features: - name: passage_answer_candidates sequence: - name: plaintext_start_byte dtype: int32 - name: plaintext_end_byte dtype: int32 - name: question_text dtype: string - name: document_title dtype: string - name: language dtype: string - name: annotations sequence: - name: passage_answer_candidate_index dtype: int32 - name: minimal_answers_start_byte dtype: int32 - name: minimal_answers_end_byte dtype: int32 - name: yes_no_answer dtype: string - name: document_plaintext dtype: string - name: document_url dtype: string splits: - name: train num_bytes: 5550574617 num_examples: 166916 - name: validation num_bytes: 484380443 num_examples: 18670 download_size: 1953887429 dataset_size: 6034955060 - config_name: secondary_task features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 52948607 num_examples: 49881 - name: validation num_bytes: 5006461 num_examples: 5077 download_size: 1953887429 dataset_size: 57955068 --- # Dataset Card for "tydiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source 
Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.91 GB - **Size of the generated dataset:** 6.10 GB - **Total amount of disk used:** 10.00 GB ### Dataset Summary TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. 
To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don’t know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### primary_task - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 6.04 GB - **Total amount of disk used:** 7.99 GB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "annotations": { "minimal_answers_end_byte": [-1, -1, -1], "minimal_answers_start_byte": [-1, -1, -1], "passage_answer_candidate_index": [-1, -1, -1], "yes_no_answer": ["NONE", "NONE", "NONE"] }, "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...", "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร", "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...", "language": "thai", "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...", "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..." } ``` #### secondary_task - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 58.03 MB - **Total amount of disk used:** 2.01 GB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "answers": { "answer_start": [394], "text": ["بطولتين"] }, "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...", "id": "arabic-2387335860751143628-1", "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...", "title": "قائمة نهائيات كأس العالم" } ``` ### Data Fields The data fields are the same among all splits. #### primary_task - `passage_answer_candidates`: a dictionary feature containing: - `plaintext_start_byte`: a `int32` feature. - `plaintext_end_byte`: a `int32` feature. - `question_text`: a `string` feature. - `document_title`: a `string` feature. - `language`: a `string` feature. - `annotations`: a dictionary feature containing: - `passage_answer_candidate_index`: a `int32` feature. - `minimal_answers_start_byte`: a `int32` feature. - `minimal_answers_end_byte`: a `int32` feature. - `yes_no_answer`: a `string` feature. - `document_plaintext`: a `string` feature. - `document_url`: a `string` feature. #### secondary_task - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name | train | validation | | -------------- | -----: | ---------: | | primary_task | 166916 | 18670 | | secondary_task | 49881 | 5077 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{tydiqa, title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author = {Jonathan H. 
Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year = {2020}, journal = {Transactions of the Association for Computational Linguistics} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
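As the `primary_task` field names indicate, answer positions are given as byte offsets (`plaintext_start_byte`/`plaintext_end_byte`, `minimal_answers_start_byte`/`minimal_answers_end_byte`, with `-1` marking "no minimal answer"), so spans must be sliced from the UTF-8 encoding of `document_plaintext`, not from the Python string, since byte and character offsets diverge for non-Latin scripts. A minimal sketch (the document and offsets below are illustrative, not taken from the dataset):

```python
def extract_span(document_plaintext, start_byte, end_byte):
    """Slice an answer span addressed by UTF-8 byte offsets.

    Character indexing (document_plaintext[start:end]) would be wrong for
    multi-byte scripts such as Thai, where one character spans several bytes.
    """
    if start_byte == -1 or end_byte == -1:  # -1 marks "no minimal answer"
        return ""
    return document_plaintext.encode("utf-8")[start_byte:end_byte].decode("utf-8")

# Illustrative document and hypothetical offsets for the substring "11".
doc = "TyDi QA covers 11 typologically diverse languages."
answer = extract_span(doc, 15, 17)
no_answer = extract_span(doc, -1, -1)
```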
ubuntu_dialogs_corpus
--- annotations_creators: - found language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: UDC (Ubuntu Dialogue Corpus) size_categories: - 1M<n<10M source_datasets: - original task_categories: - conversational task_ids: - dialogue-generation paperswithcode_id: ubuntu-dialogue-corpus dataset_info: - config_name: train features: - name: Context dtype: string - name: Utterance dtype: string - name: Label dtype: int32 splits: - name: train num_bytes: 525126729 num_examples: 1000000 download_size: 0 dataset_size: 525126729 - config_name: dev_test features: - name: Context dtype: string - name: Ground Truth Utterance dtype: string - name: Distractor_0 dtype: string - name: Distractor_1 dtype: string - name: Distractor_2 dtype: string - name: Distractor_3 dtype: string - name: Distractor_4 dtype: string - name: Distractor_5 dtype: string - name: Distractor_6 dtype: string - name: Distractor_7 dtype: string - name: Distractor_8 dtype: string splits: - name: test num_bytes: 27060502 num_examples: 18920 - name: validation num_bytes: 27663181 num_examples: 19560 download_size: 0 dataset_size: 54723683 --- # Dataset Card for "ubuntu_dialogs_corpus" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - 
[Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/rkadlec/ubuntu-ranking-dataset-creator - **Paper:** [The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems](https://arxiv.org/abs/1506.08909) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 65.49 MB - **Total amount of disk used:** 65.49 MB ### Dataset Summary Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### train - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 65.49 MB - **Total amount of disk used:** 65.49 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "Context": "\"i think we could import the old comment via rsync , but from there we need to go via email . 
i think it be easier than cach the...", "Label": 1, "Utterance": "basic each xfree86 upload will not forc user to upgrad 100mb of font for noth __eou__ no someth i do in my spare time . __eou__" } ``` ### Data Fields The data fields are the same among all splits. #### train - `Context`: a `string` feature. - `Utterance`: a `string` feature. - `Label`: a `int32` feature. ### Data Splits |name |train | |-----|-----:| |train|127422| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/LowePSP15, author = {Ryan Lowe and Nissan Pow and Iulian Serban and Joelle Pineau}, title = {The Ubuntu Dialogue Corpus: {A} Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems}, journal = {CoRR}, volume = {abs/1506.08909}, year = {2015}, url = {http://arxiv.org/abs/1506.08909}, archivePrefix = {arXiv}, eprint = {1506.08909}, timestamp = {Mon, 13 Aug 2018 16:48:23 +0200}, biburl = {https://dblp.org/rec/journals/corr/LowePSP15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
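In the `dev_test` configuration each `Context` comes with one `Ground Truth Utterance` and nine distractors (`Distractor_0`..`Distractor_8`), so response-selection models are commonly evaluated with Recall@k over the ten candidates. A minimal sketch of that metric, assuming a model has already produced a score per candidate (higher = better; the toy score lists below use only three candidates):

```python
def recall_at_k(score_lists, k):
    """Fraction of examples whose ground-truth candidate is ranked in the
    top k by score. By convention here, index 0 of each score list is the
    ground-truth utterance and the remaining entries are distractors."""
    hits = 0
    for scores in score_lists:
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        if 0 in ranked[:k]:
            hits += 1
    return hits / len(score_lists)

# Two toy examples: the first ranks the ground truth on top, the second
# ranks a distractor on top. Real dev/test rows have 10 candidates.
r_at_1 = recall_at_k([[0.9, 0.1, 0.2], [0.1, 0.9, 0.2]], k=1)
```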
udhr
--- annotations_creators: - no-annotation language_creators: - found language: - aa - ab - ace - acu - ada - ady - af - agr - aii - ajg - als - alt - am - amc - ame - ami - amr - ar - arl - arn - ast - auc - ay - az - ban - bax - bba - bci - be - bem - bfa - bg - bho - bi - bik - bin - blt - bm - bn - bo - boa - br - bs - buc - bug - bum - ca - cab - cak - cbi - cbr - cbs - cbt - cbu - ccp - ceb - cfm - ch - chj - chk - chr - cic - cjk - cjs - cjy - ckb - cnh - cni - cnr - co - cof - cot - cpu - crh - cri - crs - cs - csa - csw - ctd - cy - da - dag - ddn - de - dga - dip - duu - dv - dyo - dyu - dz - ee - el - en - eo - es - ese - et - eu - eve - evn - fa - fat - fi - fj - fkv - fo - fon - fr - fuf - fur - fuv - fvr - fy - ga - gaa - gag - gan - gd - gjn - gkp - gl - gld - gn - gsw - gu - guc - guu - gv - gyr - ha - hak - haw - he - hi - hil - hlt - hmn - hms - hna - hni - hnj - hns - hr - hsb - hsn - ht - hu - hus - huu - hy - ia - ibb - id - idu - ig - ii - ijs - ilo - io - is - it - iu - ja - jiv - jv - ka - kaa - kbd - kbp - kde - kdh - kea - kek - kg - kha - kjh - kk - kkh - kl - km - kmb - kn - ko - koi - koo - kqn - kqs - kr - kri - krl - ktu - ku - kwi - ky - la - lad - lah - lb - lg - lia - lij - lld - ln - lns - lo - lob - lot - loz - lt - lua - lue - lun - lus - lv - mad - mag - mai - mam - man - maz - mcd - mcf - men - mfq - mg - mh - mi - mic - min - miq - mk - ml - mn - mnw - mor - mos - mr - mt - mto - mxi - mxv - my - mzi - nan - nb - nba - nds - ne - ng - nhn - nio - niu - niv - njo - nku - nl - nn - not - nr - nso - nv - ny - nym - nyn - nzi - oaa - oc - ojb - oki - om - orh - os - ote - pa - pam - pap - pau - pbb - pcd - pcm - pis - piu - pl - pon - pov - ppl - prq - ps - pt - qu - quc - qug - quh - quy - qva - qvc - qvh - qvm - qvn - qwh - qxn - qxu - rar - rgn - rm - rmn - rn - ro - ru - rup - rw - sa - sah - sc - sco - se - sey - sg - shk - shn - shp - si - sk - skr - sl - slr - sm - sn - snk - snn - so - sr - srr - ss - st - su - suk - sus - 
sv - sw - swb - ta - taj - tbz - tca - tdt - te - tem - tet - tg - th - ti - tiv - tk - tl - tly - tn - to - tob - toi - toj - top - tpi - tr - ts - tsz - tt - tw - ty - tyv - tzh - tzm - tzo - udu - ug - uk - umb - und - ur - ura - uz - vai - ve - vec - vep - vi - vmw - wa - war - wo - wuu - wwa - xh - xsm - yad - yao - yap - yi - ykg - yo - yrk - yua - yue - za - zam - zdj - zgh - zh - zlm - zro - ztu - zu language_bcp47: - az-Cyrl - az-Latn - bs-Cyrl - bs-Latn - ckb-Latn - de-1901 - de-1996 - el-monoton - el-polyton - fa-AF - fuf-Adlm - ha-NE - ha-NG - jv-Java - kg-AO - kkh-Lana - mn-Cyrl - pt-BR - pt-PT - rm-puter - rm-rumgr - rm-surmiran - rm-sursilv - rm-sutsilv - rm-vallader - sa-Gran - sr-Cyrl - sr-Latn - ta-LK - tk-Cyrl - tk-Latn - tw-akuapem - tw-asante - ug-Arab - ug-Latn - uz-Cyrl - uz-Latn - vi-Hani - zh-Hant - zlm-Arab - zlm-Latn license: - unknown multilinguality: - multilingual size_categories: - n<1K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: The Universal Declaration of Human Rights (UDHR) dataset_info: features: - name: text dtype: string - name: lang_key dtype: string - name: lang_name dtype: string - name: iso639-3 dtype: string - name: iso15924 dtype: string - name: bcp47 dtype: string splits: - name: train num_bytes: 6753383 num_examples: 488 download_size: 2389690 dataset_size: 6753383 --- # Dataset Card for The Universal Declaration of Human Rights (UDHR) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive 
Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.ohchr.org/en/universal-declaration-of-human-rights, https://unicode.org/udhr/index.html - **Repository:** https://github.com/unicode-org/udhr - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the first time, fundamental human rights to be universally protected. The Declaration was adopted by the UN General Assembly in Paris on 10 December 1948 during its 183rd plenary meeting. © 1996 – 2009 The Office of the High Commissioner for Human Rights This plain text version prepared by the "UDHR in Unicode" project, https://www.unicode.org/udhr. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset includes translations of the document in over 400 languages and dialects. The list of languages can be found [here](https://unicode.org/udhr/translations.html). ## Dataset Structure ### Data Instances Each instance corresponds to a different language and includes information about the language and the full document text. ### Data Fields - `text`: The full document text with each line of text delimited by a newline (`\n`). - `lang_key`: The unique identifier of a given translation. - `lang_name`: The textual description of language/dialect. 
- `iso639-3`: The [ISO 639-3](https://iso639-3.sil.org/) language identifier. - `iso15924`: The [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html) script identifier. - `bcp47`: The [BCP 47](https://www.rfc-editor.org/info/bcp47) language tag. ### Data Splits Only a `train` split is included; it contains the full document in all languages. | | train | |--------------------|------:| | Number of examples | 488 | ## Dataset Creation ### Curation Rationale In addition to its social significance, the document set a world record in 1999 for being the most translated document in the world, and as such can be useful for settings requiring paired text between many languages. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset In addition to the social and political significance of the United Nations' Universal Declaration of Human Rights, the document set a world record in 1999 for being the most translated document in the world, and as such can be useful for settings requiring paired text between many languages, including those that are low-resource and significantly underrepresented in NLP research. ### Discussion of Biases [More Information Needed] ### Other Known Limitations Although the document is translated into a very large number of languages, the text is very short and therefore may have limited usefulness for most types of modeling and evaluation. ## Additional Information ### Dataset Curators The txt/xml data files used here were compiled by The Unicode Consortium, which can be found [here](https://unicode.org/udhr/index.html). 
The original texts can be found on the [United Nations website](https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx). ### Licensing Information Source text © 1996 – 2022 The Office of the High Commissioner for Human Rights The [Unicode license](https://www.unicode.org/license.txt) applies to these translations. ### Citation Information United Nations. (1998). The Universal Declaration of Human Rights, 1948-1998. New York: United Nations Dept. of Public Information. ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. Updated May 2022 [@leondz](https://github.com/leondz).
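As described in the Data Fields section, each record stores the entire translated declaration in `text`, with individual lines delimited by `\n`. A minimal sketch of pulling those lines out of a record (the record below is a made-up stand-in shaped like the documented fields, not actual dataset content):

```python
# A stand-in record shaped like the documented fields; the values are
# illustrative placeholders, not taken from the dataset.
record = {
    "text": "Universal Declaration of Human Rights\nPreamble\nArticle 1",
    "lang_key": "eng",
    "lang_name": "English",
    "iso639-3": "eng",
    "iso15924": "Latn",
    "bcp47": "en",
}

# The `text` field is newline-delimited, so splitting recovers the lines.
lines = record["text"].split("\n")
print(len(lines))   # → 3
print(lines[0])     # → Universal Declaration of Human Rights
```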
um005
--- annotations_creators: - no-annotation language_creators: - other language: - en - ur license: - unknown multilinguality: - multilingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: umc005-english-urdu pretty_name: UMC005 English-Urdu dataset_info: - config_name: bible features: - name: id dtype: string - name: translation dtype: translation: languages: - ur - en splits: - name: train num_bytes: 2350730 num_examples: 7400 - name: validation num_bytes: 113476 num_examples: 300 - name: test num_bytes: 104678 num_examples: 257 download_size: 3683565 dataset_size: 2568884 - config_name: quran features: - name: id dtype: string - name: translation dtype: translation: languages: - ur - en splits: - name: train num_bytes: 2929711 num_examples: 6000 - name: validation num_bytes: 43499 num_examples: 214 - name: test num_bytes: 44413 num_examples: 200 download_size: 3683565 dataset_size: 3017623 - config_name: all features: - name: id dtype: string - name: translation dtype: translation: languages: - ur - en splits: - name: train num_bytes: 5280441 num_examples: 13400 - name: validation num_bytes: 156963 num_examples: 514 - name: test num_bytes: 149079 num_examples: 457 download_size: 3683565 dataset_size: 5586483 --- # Dataset Card for UMC005 English-Urdu ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of 
Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ - **Repository:** None - **Paper:** https://www.researchgate.net/publication/268008206_Word-Order_Issues_in_English-to-Urdu_Statistical_Machine_Translation - **Leaderboard:** - **Point of Contact:** Bushra Jawaid and Daniel Zeman ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
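Per the metadata above, each example in every config carries an `id` and a `translation` dict keyed by the language codes `ur` and `en`. A small sketch of turning such records into parallel source/target lists (the records below are illustrative placeholders, not actual corpus text):

```python
# Illustrative records shaped like the `translation` feature documented in
# the card metadata; the sentence strings are placeholders.
examples = [
    {"id": "1", "translation": {"ur": "urdu sentence one", "en": "english sentence one"}},
    {"id": "2", "translation": {"ur": "urdu sentence two", "en": "english sentence two"}},
]

# A common MT preprocessing step: unzip into aligned source/target lists.
sources = [ex["translation"]["ur"] for ex in examples]
targets = [ex["translation"]["en"] for ex in examples]

print(len(sources), len(targets))  # → 2 2
```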
un_ga
--- annotations_creators: - found language_creators: - found language: - ar - en - es - fr - ru - zh license: - unknown multilinguality: - translation size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: UnGa configs: - ar-to-en - ar-to-es - ar-to-fr - ar-to-ru - ar-to-zh - en-to-es - en-to-fr - en-to-ru - en-to-zh - es-to-fr - es-to-ru - es-to-zh - fr-to-ru - fr-to-zh - ru-to-zh dataset_info: - config_name: ar_to_en features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - en splits: - name: train num_bytes: 53122872 num_examples: 74067 download_size: 10584906 dataset_size: 53122872 - config_name: ar_to_es features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - es splits: - name: train num_bytes: 55728711 num_examples: 74067 download_size: 11084275 dataset_size: 55728711 - config_name: ar_to_fr features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - fr splits: - name: train num_bytes: 55930898 num_examples: 74067 download_size: 11248563 dataset_size: 55930898 - config_name: ar_to_ru features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - ru splits: - name: train num_bytes: 72657721 num_examples: 74067 download_size: 12852834 dataset_size: 72657721 - config_name: ar_to_zh features: - name: id dtype: string - name: translation dtype: translation: languages: - ar - zh splits: - name: train num_bytes: 48217675 num_examples: 74067 download_size: 10254078 dataset_size: 48217675 - config_name: en_to_es features: - name: id dtype: string - name: translation dtype: translation: languages: - en - es splits: - name: train num_bytes: 45358866 num_examples: 74067 download_size: 9850684 dataset_size: 45358866 - config_name: en_to_fr features: - name: id dtype: string - name: translation dtype: translation: languages: - en - fr splits: - name: train 
num_bytes: 45561053 num_examples: 74067 download_size: 10014972 dataset_size: 45561053 - config_name: en_to_ru features: - name: id dtype: string - name: translation dtype: translation: languages: - en - ru splits: - name: train num_bytes: 62287876 num_examples: 74067 download_size: 11619243 dataset_size: 62287876 - config_name: en_to_zh features: - name: id dtype: string - name: translation dtype: translation: languages: - en - zh splits: - name: train num_bytes: 37847830 num_examples: 74067 download_size: 9020487 dataset_size: 37847830 - config_name: es_to_fr features: - name: id dtype: string - name: translation dtype: translation: languages: - es - fr splits: - name: train num_bytes: 48166892 num_examples: 74067 download_size: 10514341 dataset_size: 48166892 - config_name: es_to_ru features: - name: id dtype: string - name: translation dtype: translation: languages: - es - ru splits: - name: train num_bytes: 64893715 num_examples: 74067 download_size: 12118612 dataset_size: 64893715 - config_name: es_to_zh features: - name: id dtype: string - name: translation dtype: translation: languages: - es - zh splits: - name: train num_bytes: 40453669 num_examples: 74067 download_size: 9519856 dataset_size: 40453669 - config_name: fr_to_ru features: - name: id dtype: string - name: translation dtype: translation: languages: - fr - ru splits: - name: train num_bytes: 65095902 num_examples: 74067 download_size: 12282900 dataset_size: 65095902 - config_name: fr_to_zh features: - name: id dtype: string - name: translation dtype: translation: languages: - fr - zh splits: - name: train num_bytes: 40655856 num_examples: 74067 download_size: 9684144 dataset_size: 40655856 - config_name: ru_to_zh features: - name: id dtype: string - name: translation dtype: translation: languages: - ru - zh splits: - name: train num_bytes: 57382679 num_examples: 74067 download_size: 11288415 dataset_size: 57382679 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset 
Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/UN.php - **Repository:** - **Paper:** https://www.researchgate.net/publication/228579662_United_nations_general_assembly_resolutions_A_six-language_parallel_corpus - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a collection of translated documents from the United Nations originally compiled into a translation memory by Alexandre Rafalovitch, Robert Dale (see http://uncorpora.org). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{rafalovitch-dale-2009-united, title = "United Nations General Assembly Resolutions: a six-language parallel corpus", abstract = "In this paper we describe a six-ways parallel public-domain corpus consisting of 2100 United Nations General Assembly Resolutions with translations in the six official languages of the United Nations, with an average of around 3 million tokens per language. The corpus is available in a preprocessed, formatting-normalized TMX format with paragraphs aligned across multiple languages. We describe the background to the corpus and its content, the process of its construction, and some of its interesting properties.", author = "Alexandre Rafalovitch and Robert Dale", year = "2009", language = "English", booktitle = "MT Summit XII proceedings", publisher = "International Association of Machine Translation", } ### Contributions Thanks to [@param087](https://github.com/param087) for adding this dataset.
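The configs listed in the metadata cover every unordered pair of the six official UN languages, which is why there are exactly 15 of them. A quick sketch that reproduces the config names from the language list:

```python
from itertools import combinations

# The six official UN languages covered by this corpus.
langs = ["ar", "en", "es", "fr", "ru", "zh"]

# One config per unordered pair, named `<src>_to_<tgt>` as in the metadata:
# C(6, 2) = 15 bitext configurations in total.
configs = [f"{a}_to_{b}" for a, b in combinations(langs, 2)]

print(len(configs))    # → 15
print(configs[0])      # → ar_to_en
print(configs[-1])     # → ru_to_zh
```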
un_multi
--- annotations_creators: - found language_creators: - found language: - ar - de - en - es - fr - ru - zh license: - unknown multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: multiun pretty_name: Multilingual Corpus from United Nation Documents configs: - ar-de - ar-en - ar-es - ar-fr - ar-ru - ar-zh - de-en - de-es - de-fr - de-ru - de-zh - en-es - en-fr - en-ru - en-zh - es-fr - es-ru - es-zh - fr-ru - fr-zh - ru-zh dataset_info: - config_name: ar-de features: - name: translation dtype: translation: languages: - ar - de splits: - name: train num_bytes: 94466397 num_examples: 165090 download_size: 21869935 dataset_size: 94466397 - config_name: ar-en features: - name: translation dtype: translation: languages: - ar - en splits: - name: train num_bytes: 4189852369 num_examples: 9759125 download_size: 1036296368 dataset_size: 4189852369 - config_name: ar-es features: - name: translation dtype: translation: languages: - ar - es splits: - name: train num_bytes: 4509675284 num_examples: 10119379 download_size: 1101206667 dataset_size: 4509675284 - config_name: ar-fr features: - name: translation dtype: translation: languages: - ar - fr splits: - name: train num_bytes: 4516850009 num_examples: 9929567 download_size: 1109705925 dataset_size: 4516850009 - config_name: ar-ru features: - name: translation dtype: translation: languages: - ar - ru splits: - name: train num_bytes: 5932866867 num_examples: 10206243 download_size: 1261123878 dataset_size: 5932866867 - config_name: ar-zh features: - name: translation dtype: translation: languages: - ar - zh splits: - name: train num_bytes: 3781658413 num_examples: 9832293 download_size: 1009696775 dataset_size: 3781658413 - config_name: de-en features: - name: translation dtype: translation: languages: - de - en splits: - name: train num_bytes: 76684549 num_examples: 162981 download_size: 19468529 dataset_size: 76684549 - 
config_name: de-es features: - name: translation dtype: translation: languages: - de - es splits: - name: train num_bytes: 80936653 num_examples: 162078 download_size: 20266591 dataset_size: 80936653 - config_name: de-fr features: - name: translation dtype: translation: languages: - de - fr splits: - name: train num_bytes: 81888435 num_examples: 164025 download_size: 20692837 dataset_size: 81888435 - config_name: de-ru features: - name: translation dtype: translation: languages: - de - ru splits: - name: train num_bytes: 111517934 num_examples: 164792 download_size: 23507789 dataset_size: 111517934 - config_name: de-zh features: - name: translation dtype: translation: languages: - de - zh splits: - name: train num_bytes: 70534818 num_examples: 176933 download_size: 19927209 dataset_size: 70534818 - config_name: en-es features: - name: translation dtype: translation: languages: - en - es splits: - name: train num_bytes: 4128141663 num_examples: 11350967 download_size: 1123164180 dataset_size: 4128141663 - config_name: en-fr features: - name: translation dtype: translation: languages: - en - fr splits: - name: train num_bytes: 4678055160 num_examples: 13172019 download_size: 1355002731 dataset_size: 4678055160 - config_name: en-ru features: - name: translation dtype: translation: languages: - en - ru splits: - name: train num_bytes: 5632662839 num_examples: 11654416 download_size: 1285801078 dataset_size: 5632662839 - config_name: en-zh features: - name: translation dtype: translation: languages: - en - zh splits: - name: train num_bytes: 2960376046 num_examples: 9564315 download_size: 900076520 dataset_size: 2960376046 - config_name: es-fr features: - name: translation dtype: translation: languages: - es - fr splits: - name: train num_bytes: 4454712498 num_examples: 11441889 download_size: 1195733510 dataset_size: 4454712498 - config_name: es-ru features: - name: translation dtype: translation: languages: - es - ru splits: - name: train num_bytes: 5442655730 
num_examples: 10605056 download_size: 1228045966 dataset_size: 5442655730 - config_name: es-zh features: - name: translation dtype: translation: languages: - es - zh splits: - name: train num_bytes: 3223871198 num_examples: 9847770 download_size: 953250084 dataset_size: 3223871198 - config_name: fr-ru features: - name: translation dtype: translation: languages: - fr - ru splits: - name: train num_bytes: 5979879089 num_examples: 11761738 download_size: 1364307157 dataset_size: 5979879089 - config_name: fr-zh features: - name: translation dtype: translation: languages: - fr - zh splits: - name: train num_bytes: 3241098333 num_examples: 9690914 download_size: 962824881 dataset_size: 3241098333 - config_name: ru-zh features: - name: translation dtype: translation: languages: - ru - zh splits: - name: train num_bytes: 4233875537 num_examples: 9557007 download_size: 1037881127 dataset_size: 4233875537 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** [MultiUN](http://www.euromatrixplus.net/multi-unp) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a collection of translated documents from the United Nations. This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. ### Supported Tasks and Leaderboards The underlying task is machine translation. ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{eisele-chen-2010-multiun, title = "{M}ulti{UN}: A Multilingual Corpus from United Nation Documents", author = "Eisele, Andreas and Chen, Yu", booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)", month = may, year = "2010", address = "Valletta, Malta", publisher = "European Language Resources Association (ELRA)", url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf", abstract = "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). 
This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.", } ``` ``` @InProceedings{TIEDEMANN12.463, author = {Jörg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
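The split statistics in the metadata above support a quick back-of-envelope reading of how heavy each bitext is: dividing `num_bytes` by `num_examples` gives the average serialized size of one sentence pair. A sketch for the `ar-en` config (figures copied from the metadata; the arithmetic is purely illustrative):

```python
# Split statistics for the `ar-en` config, copied from the card metadata.
num_bytes = 4_189_852_369
num_examples = 9_759_125

# Average serialized size of one Arabic-English sentence pair, in bytes.
avg_bytes_per_pair = num_bytes / num_examples
print(round(avg_bytes_per_pair))  # → 429
```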
un_pc
--- annotations_creators: - found language_creators: - found language: - ar - en - es - fr - ru - zh license: - unknown multilinguality: - multilingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: united-nations-parallel-corpus pretty_name: United Nations Parallel Corpus configs: - ar-en - ar-es - ar-fr - ar-ru - ar-zh - en-es - en-fr - en-ru - en-zh - es-fr - es-ru - es-zh - fr-ru - fr-zh - ru-zh dataset_info: - config_name: ar-en features: - name: translation dtype: translation: languages: - ar - en splits: - name: train num_bytes: 8039689939 num_examples: 20044478 download_size: 2025106743 dataset_size: 8039689939 - config_name: ar-es features: - name: translation dtype: translation: languages: - ar - es splits: - name: train num_bytes: 8715754848 num_examples: 20532014 download_size: 2167791297 dataset_size: 8715754848 - config_name: ar-fr features: - name: translation dtype: translation: languages: - ar - fr splits: - name: train num_bytes: 8897848038 num_examples: 20281645 download_size: 2188765415 dataset_size: 8897848038 - config_name: ar-ru features: - name: translation dtype: translation: languages: - ar - ru splits: - name: train num_bytes: 11395923083 num_examples: 20571334 download_size: 2476562835 dataset_size: 11395923083 - config_name: ar-zh features: - name: translation dtype: translation: languages: - ar - zh splits: - name: train num_bytes: 6447658008 num_examples: 17306056 download_size: 1738869755 dataset_size: 6447658008 - config_name: en-es features: - name: translation dtype: translation: languages: - en - es splits: - name: train num_bytes: 8241635322 num_examples: 25227004 download_size: 2300411698 dataset_size: 8241635322 - config_name: en-fr features: - name: translation dtype: translation: languages: - en - fr splits: - name: train num_bytes: 9718522775 num_examples: 30340652 download_size: 2657208676 dataset_size: 9718522775 - config_name: en-ru features: - 
name: translation dtype: translation: languages: - en - ru splits: - name: train num_bytes: 11156164691 num_examples: 25173398 download_size: 2589707636 dataset_size: 11156164691 - config_name: en-zh features: - name: translation dtype: translation: languages: - en - zh splits: - name: train num_bytes: 4988812558 num_examples: 17451549 download_size: 1535707641 dataset_size: 4988812558 - config_name: es-fr features: - name: translation dtype: translation: languages: - es - fr splits: - name: train num_bytes: 9230891207 num_examples: 25887160 download_size: 2492342915 dataset_size: 9230891207 - config_name: es-ru features: - name: translation dtype: translation: languages: - es - ru splits: - name: train num_bytes: 10789780134 num_examples: 22294106 download_size: 2487664520 dataset_size: 10789780134 - config_name: es-zh features: - name: translation dtype: translation: languages: - es - zh splits: - name: train num_bytes: 5475365986 num_examples: 17599223 download_size: 1639717723 dataset_size: 5475365986 - config_name: fr-ru features: - name: translation dtype: translation: languages: - fr - ru splits: - name: train num_bytes: 12099669711 num_examples: 25219973 download_size: 2762585269 dataset_size: 12099669711 - config_name: fr-zh features: - name: translation dtype: translation: languages: - fr - zh splits: - name: train num_bytes: 5679222134 num_examples: 17521170 download_size: 1668823634 dataset_size: 5679222134 - config_name: ru-zh features: - name: translation dtype: translation: languages: - ru - zh splits: - name: train num_bytes: 7905443441 num_examples: 17920922 download_size: 1934425373 dataset_size: 7905443441 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data 
Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [UNPC](http://opus.nlpl.eu/UNPC.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This parallel corpus consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages: Arabic, Chinese, English, French, Russian, and Spanish (6 languages, 15 bitexts). ### Supported Tasks and Leaderboards The underlying task is machine translation. ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{ziemski-etal-2016-united, title = "The {U}nited {N}ations Parallel Corpus v1.0", author = "Ziemski, Micha{\l} and Junczys-Dowmunt, Marcin and Pouliquen, Bruno", booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)", month = may, year = "2016", address = "Portoro{\v{z}}, Slovenia", publisher = "European Language Resources Association (ELRA)", url = "https://www.aclweb.org/anthology/L16-1561", pages = "3530--3534", abstract = "This paper describes the creation process and statistics of the official United Nations Parallel Corpus, the first parallel corpus composed from United Nations documents published by the original data creator. The parallel corpus presented consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish. The corpus is freely available for download under a liberal license. Apart from the pairwise aligned documents, a fully aligned subcorpus for the six official UN languages is distributed. We provide baseline BLEU scores of our Moses-based SMT systems trained with the full data of language pairs involving English and for all possible translation directions of the six-way subcorpus.", } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
universal_dependencies
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - af - aii - ajp - akk - am - apu - aqz - ar - be - bg - bho - bm - br - bxr - ca - ckt - cop - cs - cu - cy - da - de - el - en - es - et - eu - fa - fi - fo - fr - fro - ga - gd - gl - got - grc - gsw - gun - gv - he - hi - hr - hsb - hu - hy - id - is - it - ja - kfm - kk - kmr - ko - koi - kpv - krl - la - lt - lv - lzh - mdf - mr - mt - myu - myv - nl - 'no' - nyq - olo - orv - otk - pcm - pl - pt - ro - ru - sa - sk - sl - sme - sms - soj - sq - sr - sv - swl - ta - te - th - tl - tpn - tr - ug - uk - ur - vi - wbp - wo - yo - yue - zh license: - unknown multilinguality: - multilingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - parsing paperswithcode_id: universal-dependencies pretty_name: Universal Dependencies Treebank configs: - af_afribooms - aii_as - ajp_madar - akk_pisandub - akk_riao - am_att - apu_ufpa - aqz_tudet - ar_nyuad - ar_padt - ar_pud - be_hse - bg_btb - bho_bhtb - bm_crb - br_keb - bxr_bdt - ca_ancora - ckt_hse - cop_scriptorium - cs_cac - cs_cltt - cs_fictree - cs_pdt - cs_pud - cu_proiel - cy_ccg - da_ddt - de_gsd - de_hdt - de_lit - de_pud - el_gdt - en_esl - en_ewt - en_gum - en_gumreddit - en_lines - en_partut - en_pronouns - en_pud - es_ancora - es_gsd - es_pud - et_edt - et_ewt - eu_bdt - fa_perdt - fa_seraji - fi_ftb - fi_ood - fi_pud - fi_tdt - fo_farpahc - fo_oft - fr_fqb - fr_ftb - fr_gsd - fr_partut - fr_pud - fr_sequoia - fr_spoken - fro_srcmf - ga_idt - gd_arcosg - gl_ctg - gl_treegal - got_proiel - grc_perseus - grc_proiel - gsw_uzh - gun_dooley - gun_thomas - gv_cadhan - he_htb - hi_hdtb - hi_pud - hr_set - hsb_ufal - hu_szeged - hy_armtdp - id_csui - id_gsd - id_pud - is_icepahc - is_pud - it_isdt - it_partut - it_postwita - it_pud - it_twittiro - it_vit - ja_bccwj - ja_gsd - ja_modern - ja_pud - kfm_aha - kk_ktb - kmr_mg - ko_gsd - ko_kaist - ko_pud - koi_uh - kpv_ikdp - 
kpv_lattice - krl_kkpp - la_ittb - la_llct - la_perseus - la_proiel - lt_alksnis - lt_hse - lv_lvtb - lzh_kyoto - mdf_jr - mr_ufal - mt_mudt - myu_tudet - myv_jr - nl_alpino - nl_lassysmall - no_bokmaal - no_nynorsk - no_nynorsklia - nyq_aha - olo_kkpp - orv_rnc - orv_torot - otk_tonqq - pcm_nsc - pl_lfg - pl_pdb - pl_pud - pt_bosque - pt_gsd - pt_pud - qhe_hiencs - qtd_sagt - ro_nonstandard - ro_rrt - ro_simonero - ru_gsd - ru_pud - ru_syntagrus - ru_taiga - sa_ufal - sa_vedic - sk_snk - sl_ssj - sl_sst - sme_giella - sms_giellagas - soj_aha - sq_tsa - sr_set - sv_lines - sv_pud - sv_talbanken - swl_sslc - ta_mwtt - ta_ttb - te_mtg - th_pud - tl_trg - tl_ugnayan - tpn_tudet - tr_boun - tr_gb - tr_imst - tr_pud - ug_udt - uk_iu - ur_udtb - vi_vtb - wbp_ufal - wo_wtb - yo_ytb - yue_hk - zh_cfl - zh_gsd - zh_gsdsimp - zh_hk - zh_pud tags: - constituency-parsing - dependency-parsing dataset_info: - config_name: af_afribooms features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 3523113 num_examples: 1315 - name: validation num_bytes: 547285 num_examples: 194 - name: test num_bytes: 1050299 num_examples: 425 download_size: 3088237 dataset_size: 5120697 - config_name: akk_pisandub features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON 
'12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 153470 num_examples: 101 download_size: 101789 dataset_size: 153470 - config_name: akk_riao features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 3374577 num_examples: 1804 download_size: 2022357 dataset_size: 3374577 - config_name: aqz_tudet features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 8286 num_examples: 24 download_size: 5683 dataset_size: 8286 - config_name: sq_tsa features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': 
INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 116034 num_examples: 60 download_size: 68875 dataset_size: 116034 - config_name: am_att features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1554859 num_examples: 1074 download_size: 1019607 dataset_size: 1554859 - config_name: grc_perseus features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22611612 num_examples: 11476 - name: validation num_bytes: 3152233 num_examples: 1137 - name: test num_bytes: 3004502 num_examples: 1306 download_size: 18898313 dataset_size: 28768347 - config_name: grc_proiel features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ 
'6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 30938089 num_examples: 15014 - name: validation num_bytes: 2264551 num_examples: 1019 - name: test num_bytes: 2192289 num_examples: 1047 download_size: 23715831 dataset_size: 35394929 - config_name: apu_ufpa features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 75578 num_examples: 76 download_size: 69565 dataset_size: 75578 - config_name: ar_nyuad features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 79064476 num_examples: 15789 - name: validation num_bytes: 9859912 num_examples: 1986 - name: test num_bytes: 9880240 num_examples: 1963 download_size: 58583673 dataset_size: 98804628 - config_name: ar_padt features: - name: idx dtype: 
string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 58537298 num_examples: 6075 - name: validation num_bytes: 7787253 num_examples: 909 - name: test num_bytes: 7428063 num_examples: 680 download_size: 51208169 dataset_size: 73752614 - config_name: ar_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2816625 num_examples: 1000 download_size: 2084082 dataset_size: 2816625 - config_name: hy_armtdp features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 7697891 num_examples: 1975 - name: 
validation num_bytes: 988849 num_examples: 249 - name: test num_bytes: 947287 num_examples: 278 download_size: 6886567 dataset_size: 9634027 - config_name: aii_as features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 52540 num_examples: 57 download_size: 32639 dataset_size: 52540 - config_name: bm_crb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1502886 num_examples: 1026 download_size: 892924 dataset_size: 1502886 - config_name: eu_bdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: 
string splits: - name: train num_bytes: 8199861 num_examples: 5396 - name: validation num_bytes: 2701073 num_examples: 1798 - name: test num_bytes: 2734601 num_examples: 1799 download_size: 8213576 dataset_size: 13635535 - config_name: be_hse features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 34880663 num_examples: 21555 - name: validation num_bytes: 1745668 num_examples: 1090 - name: test num_bytes: 1818113 num_examples: 889 download_size: 26433402 dataset_size: 38444444 - config_name: bho_bhtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 947740 num_examples: 357 download_size: 614159 dataset_size: 947740 - config_name: br_keb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ 
'16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1026257 num_examples: 888 download_size: 679680 dataset_size: 1026257 - config_name: bg_btb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 18545312 num_examples: 8907 - name: validation num_bytes: 2393174 num_examples: 1115 - name: test num_bytes: 2344136 num_examples: 1116 download_size: 14910603 dataset_size: 23282622 - config_name: bxr_bdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 17364 num_examples: 19 - name: test num_bytes: 1116630 num_examples: 908 download_size: 726053 dataset_size: 1133994 - config_name: yue_hk features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': 
ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1242850 num_examples: 1004 download_size: 710060 dataset_size: 1242850 - config_name: ca_ancora features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 46502842 num_examples: 13123 - name: validation num_bytes: 6282364 num_examples: 1709 - name: test num_bytes: 6441038 num_examples: 1846 download_size: 35924146 dataset_size: 59226244 - config_name: zh_cfl features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 660584 num_examples: 451 download_size: 384725 dataset_size: 660584 - config_name: zh_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas 
sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 9268661 num_examples: 3997 - name: validation num_bytes: 1188371 num_examples: 500 - name: test num_bytes: 1130467 num_examples: 500 download_size: 6828367 dataset_size: 11587499 - config_name: zh_gsdsimp features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 9268663 num_examples: 3997 - name: validation num_bytes: 1188383 num_examples: 500 - name: test num_bytes: 1130459 num_examples: 500 download_size: 6828419 dataset_size: 11587505 - config_name: zh_hk features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 880193 
num_examples: 1004 download_size: 494447 dataset_size: 880193 - config_name: zh_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2425817 num_examples: 1000 download_size: 1606982 dataset_size: 2425817 - config_name: ckt_hse features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 808669 num_examples: 1004 download_size: 771943 dataset_size: 808669 - config_name: lzh_kyoto features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 26615708 num_examples: 38669 - 
name: validation num_bytes: 3770507 num_examples: 5296 - name: test num_bytes: 3155207 num_examples: 4469 download_size: 22658287 dataset_size: 33541422 - config_name: cop_scriptorium features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 3944468 num_examples: 1089 - name: validation num_bytes: 1566786 num_examples: 381 - name: test num_bytes: 1487709 num_examples: 403 download_size: 4502996 dataset_size: 6998963 - config_name: hr_set features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 19104315 num_examples: 6914 - name: validation num_bytes: 2787184 num_examples: 960 - name: test num_bytes: 3035797 num_examples: 1136 download_size: 15103034 dataset_size: 24927296 - config_name: cs_cac features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN 
'11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 81527862 num_examples: 23478 - name: validation num_bytes: 1898678 num_examples: 603 - name: test num_bytes: 1878841 num_examples: 628 download_size: 55990235 dataset_size: 85305381 - config_name: cs_cltt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 4277239 num_examples: 860 - name: validation num_bytes: 752253 num_examples: 129 - name: test num_bytes: 646103 num_examples: 136 download_size: 3745656 dataset_size: 5675595 - config_name: cs_fictree features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 21490020 num_examples: 10160 - name: validation num_bytes: 2677727 num_examples: 1309 - name: test num_bytes: 2679930 num_examples: 1291 download_size: 17464342 dataset_size: 
26847677 - config_name: cs_pdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 201356662 num_examples: 68495 - name: validation num_bytes: 27366981 num_examples: 9270 - name: test num_bytes: 29817339 num_examples: 10148 download_size: 171506068 dataset_size: 258540982 - config_name: cs_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 3195818 num_examples: 1000 download_size: 2231853 dataset_size: 3195818 - config_name: da_ddt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string 
splits: - name: train num_bytes: 8689809 num_examples: 4383 - name: validation num_bytes: 1117939 num_examples: 564 - name: test num_bytes: 1082651 num_examples: 565 download_size: 6425281 dataset_size: 10890399 - config_name: nl_alpino features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22503950 num_examples: 12264 - name: validation num_bytes: 1411253 num_examples: 718 - name: test num_bytes: 1354908 num_examples: 596 download_size: 16858557 dataset_size: 25270111 - config_name: nl_lassysmall features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 9001614 num_examples: 5787 - name: validation num_bytes: 1361552 num_examples: 676 - name: test num_bytes: 1391136 num_examples: 875 download_size: 8034396 dataset_size: 11754302 - config_name: en_esl features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM 
'5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5335977 num_examples: 4124 - name: validation num_bytes: 648562 num_examples: 500 - name: test num_bytes: 651829 num_examples: 500 download_size: 3351548 dataset_size: 6636368 - config_name: en_ewt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22755753 num_examples: 12543 - name: validation num_bytes: 2829889 num_examples: 2002 - name: test num_bytes: 2820398 num_examples: 2077 download_size: 16893922 dataset_size: 28406040 - config_name: en_gum features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 8999554 num_examples: 4287 - name: validation num_bytes: 1704949 num_examples: 784 - name: test num_bytes: 1743317 
num_examples: 890 download_size: 7702761 dataset_size: 12447820 - config_name: en_gumreddit features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1365930 num_examples: 587 - name: validation num_bytes: 317546 num_examples: 150 - name: test num_bytes: 374707 num_examples: 158 download_size: 1195979 dataset_size: 2058183 - config_name: en_lines features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5728898 num_examples: 3176 - name: validation num_bytes: 1911762 num_examples: 1032 - name: test num_bytes: 1766797 num_examples: 1035 download_size: 5522254 dataset_size: 9407457 - config_name: en_partut features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: 
string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 4133445 num_examples: 1781 - name: validation num_bytes: 265039 num_examples: 156 - name: test num_bytes: 326834 num_examples: 153 download_size: 2720286 dataset_size: 4725318 - config_name: en_pronouns features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 207364 num_examples: 285 download_size: 147181 dataset_size: 207364 - config_name: en_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2282027 num_examples: 1000 download_size: 1340563 dataset_size: 2282027 - config_name: myv_jr features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON 
'12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2763297 num_examples: 1690 download_size: 1945981 dataset_size: 2763297 - config_name: et_edt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 42901059 num_examples: 24633 - name: validation num_bytes: 5551620 num_examples: 3125 - name: test num_bytes: 5994421 num_examples: 3214 download_size: 32393618 dataset_size: 54447100 - config_name: et_ewt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 4199896 num_examples: 2837 - name: validation num_bytes: 1089459 num_examples: 743 - name: test num_bytes: 1600116 num_examples: 913 download_size: 4044147 dataset_size: 6889471 - config_name: fo_farpahc features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: 
string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2114958 num_examples: 1020 - name: validation num_bytes: 809707 num_examples: 300 - name: test num_bytes: 798245 num_examples: 301 download_size: 2186706 dataset_size: 3722910 - config_name: fo_oft features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1220792 num_examples: 1208 download_size: 802681 dataset_size: 1220792 - config_name: fi_ftb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 16800109 num_examples: 14981 - name: validation num_bytes: 2074201 num_examples: 1875 - name: test 
num_bytes: 2144908 num_examples: 1867 download_size: 13132466 dataset_size: 21019218 - config_name: fi_ood features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2366923 num_examples: 2122 download_size: 1480506 dataset_size: 2366923 - config_name: fi_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2086421 num_examples: 1000 download_size: 1411514 dataset_size: 2086421 - config_name: fi_tdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22065448 
num_examples: 12217 - name: validation num_bytes: 2483303 num_examples: 1364 - name: test num_bytes: 2855263 num_examples: 1555 download_size: 16692242 dataset_size: 27404014 - config_name: fr_fqb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2674644 num_examples: 2289 download_size: 1556235 dataset_size: 2674644 - config_name: fr_ftb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 44714315 num_examples: 14759 - name: validation num_bytes: 3929428 num_examples: 1235 - name: test num_bytes: 7583038 num_examples: 2541 download_size: 30926802 dataset_size: 56226781 - config_name: fr_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: 
string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 38329902 num_examples: 14449 - name: validation num_bytes: 3861548 num_examples: 1476 - name: test num_bytes: 1086926 num_examples: 416 download_size: 25492044 dataset_size: 43278376 - config_name: fr_partut features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2620477 num_examples: 803 - name: validation num_bytes: 205839 num_examples: 107 - name: test num_bytes: 288829 num_examples: 110 download_size: 1817897 dataset_size: 3115145 - config_name: fr_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2660405 num_examples: 1000 download_size: 1685033 dataset_size: 2660405 - config_name: fr_sequoia features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': 
NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5370647 num_examples: 2231 - name: validation num_bytes: 1065411 num_examples: 412 - name: test num_bytes: 1067676 num_examples: 456 download_size: 4415282 dataset_size: 7503734 - config_name: fr_spoken features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1625626 num_examples: 1167 - name: validation num_bytes: 1091750 num_examples: 909 - name: test num_bytes: 1078438 num_examples: 730 download_size: 2483341 dataset_size: 3795814 - config_name: gl_ctg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 8157432 num_examples: 2272 - name: validation num_bytes: 3057483 
num_examples: 860 - name: test num_bytes: 3053764 num_examples: 861 download_size: 8230649 dataset_size: 14268679 - config_name: gl_treegal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1804389 num_examples: 600 - name: test num_bytes: 1174023 num_examples: 400 download_size: 1741471 dataset_size: 2978412 - config_name: de_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 32297384 num_examples: 13814 - name: validation num_bytes: 1504189 num_examples: 799 - name: test num_bytes: 2000117 num_examples: 977 download_size: 21507364 dataset_size: 35801690 - config_name: de_hdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - 
name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 334214761 num_examples: 153035 - name: validation num_bytes: 39099013 num_examples: 18434 - name: test num_bytes: 39519143 num_examples: 18459 download_size: 249243037 dataset_size: 412832917 - config_name: de_lit features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 3327891 num_examples: 1922 download_size: 2060988 dataset_size: 3327891 - config_name: de_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2684407 num_examples: 1000 download_size: 1731875 dataset_size: 2684407 - config_name: got_proiel features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN 
'11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5175361 num_examples: 3387 - name: validation num_bytes: 1498101 num_examples: 985 - name: test num_bytes: 1518642 num_examples: 1029 download_size: 5225655 dataset_size: 8192104 - config_name: el_gdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 6028077 num_examples: 1662 - name: validation num_bytes: 1492610 num_examples: 403 - name: test num_bytes: 1521094 num_examples: 456 download_size: 5788161 dataset_size: 9041781 - config_name: he_htb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 17324640 num_examples: 5241 - name: validation num_bytes: 1440985 num_examples: 484 - name: test num_bytes: 1550465 num_examples: 491 download_size: 12054025 dataset_size: 20316090 - 
config_name: qhe_hiencs features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1510145 num_examples: 1448 - name: validation num_bytes: 244129 num_examples: 225 - name: test num_bytes: 236291 num_examples: 225 download_size: 914584 dataset_size: 1990565 - config_name: hi_hdtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 61893814 num_examples: 13304 - name: validation num_bytes: 7748544 num_examples: 1659 - name: test num_bytes: 7786343 num_examples: 1684 download_size: 51589681 dataset_size: 77428701 - config_name: hi_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - 
name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 3384789 num_examples: 1000 download_size: 2303495 dataset_size: 3384789 - config_name: hu_szeged features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2822934 num_examples: 910 - name: validation num_bytes: 1584932 num_examples: 441 - name: test num_bytes: 1419130 num_examples: 449 download_size: 3687905 dataset_size: 5826996 - config_name: is_icepahc features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 97197159 num_examples: 34007 - name: validation num_bytes: 18931295 num_examples: 4865 - name: test num_bytes: 19039838 num_examples: 5157 download_size: 85106126 dataset_size: 135168292 - config_name: is_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ 
'7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2304432 num_examples: 1000 download_size: 1525635 dataset_size: 2304432 - config_name: id_csui features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1611334 num_examples: 656 - name: test num_bytes: 888832 num_examples: 374 download_size: 1448601 dataset_size: 2500166 - config_name: id_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 11728948 num_examples: 4477 - name: validation num_bytes: 1513894 num_examples: 559 - name: test num_bytes: 1417208 num_examples: 557 download_size: 9487349 dataset_size: 14660050 - config_name: id_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: 
lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1768596 num_examples: 1000 download_size: 1149692 dataset_size: 1768596 - config_name: ga_idt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 10327215 num_examples: 4005 - name: validation num_bytes: 1057313 num_examples: 451 - name: test num_bytes: 1109028 num_examples: 454 download_size: 7417728 dataset_size: 12493556 - config_name: it_isdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 33510781 num_examples: 13121 - name: validation num_bytes: 1439348 num_examples: 564 - name: test num_bytes: 
1267932 num_examples: 482 download_size: 20998527 dataset_size: 36218061 - config_name: it_partut features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5428686 num_examples: 1781 - name: validation num_bytes: 335085 num_examples: 156 - name: test num_bytes: 413752 num_examples: 153 download_size: 3582155 dataset_size: 6177523 - config_name: it_postwita features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 10523322 num_examples: 5368 - name: validation num_bytes: 1299818 num_examples: 671 - name: test num_bytes: 1344079 num_examples: 674 download_size: 7611319 dataset_size: 13167219 - config_name: it_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos 
sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2612838 num_examples: 1000 download_size: 1641073 dataset_size: 2612838 - config_name: it_twittiro features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2536429 num_examples: 1138 - name: validation num_bytes: 323504 num_examples: 144 - name: test num_bytes: 316211 num_examples: 142 download_size: 1894686 dataset_size: 3176144 - config_name: it_vit features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 24536095 num_examples: 8277 - name: validation num_bytes: 3144507 num_examples: 743 - name: test num_bytes: 2870355 num_examples: 1067 download_size: 17605311 dataset_size: 30550957 - config_name: ja_bccwj features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: 
names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 119164443 num_examples: 40740 - name: validation num_bytes: 23390188 num_examples: 8417 - name: test num_bytes: 21904413 num_examples: 7871 download_size: 87340125 dataset_size: 164459044 - config_name: ja_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 36905139 num_examples: 7027 - name: validation num_bytes: 2662999 num_examples: 501 - name: test num_bytes: 2858141 num_examples: 543 download_size: 30397358 dataset_size: 42426279 - config_name: ja_modern features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 3062149 num_examples: 822 download_size: 2163988 
dataset_size: 3062149 - config_name: ja_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 6322307 num_examples: 1000 download_size: 4661525 dataset_size: 6322307 - config_name: krl_kkpp features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 370378 num_examples: 228 download_size: 226103 dataset_size: 370378 - config_name: kk_ktb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 64737 num_examples: 31 - name: test num_bytes: 1263246 num_examples: 1047 
download_size: 849300 dataset_size: 1327983 - config_name: kfm_aha features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 8464 num_examples: 10 download_size: 6290 dataset_size: 8464 - config_name: koi_uh features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 117629 num_examples: 81 download_size: 91509 dataset_size: 117629 - config_name: kpv_ikdp features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 182189 num_examples: 132 download_size: 121684 dataset_size: 182189 
- config_name: kpv_lattice features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 685683 num_examples: 435 download_size: 467085 dataset_size: 685683 - config_name: ko_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5480313 num_examples: 4400 - name: validation num_bytes: 1156603 num_examples: 950 - name: test num_bytes: 1129555 num_examples: 989 download_size: 4882238 dataset_size: 7766471 - config_name: ko_kaist features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: 
train num_bytes: 29037654 num_examples: 23010 - name: validation num_bytes: 2511880 num_examples: 2066 - name: test num_bytes: 2792215 num_examples: 2287 download_size: 21855177 dataset_size: 34341749 - config_name: ko_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2511856 num_examples: 1000 download_size: 2024810 dataset_size: 2511856 - config_name: kmr_mg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 30374 num_examples: 20 - name: test num_bytes: 1248564 num_examples: 734 download_size: 765158 dataset_size: 1278938 - config_name: la_ittb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - 
name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 54306304 num_examples: 22775 - name: validation num_bytes: 4236222 num_examples: 2101 - name: test num_bytes: 4221459 num_examples: 2101 download_size: 40247546 dataset_size: 62763985 - config_name: la_llct features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 26885433 num_examples: 7289 - name: validation num_bytes: 3363915 num_examples: 850 - name: test num_bytes: 3352500 num_examples: 884 download_size: 21975884 dataset_size: 33601848 - config_name: la_perseus features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2542043 num_examples: 1334 - name: test num_bytes: 1575350 num_examples: 939 download_size: 2573703 dataset_size: 4117393 - config_name: la_proiel features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: 
class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 24956038 num_examples: 15917 - name: validation num_bytes: 2020476 num_examples: 1234 - name: test num_bytes: 2029828 num_examples: 1260 download_size: 18434442 dataset_size: 29006342 - config_name: lv_lvtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 29167529 num_examples: 10156 - name: validation num_bytes: 4501172 num_examples: 1664 - name: test num_bytes: 4565919 num_examples: 1823 download_size: 25227301 dataset_size: 38234620 - config_name: lt_alksnis features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 7272501 num_examples: 2341 - name: 
validation num_bytes: 1763901 num_examples: 617 - name: test num_bytes: 1648521 num_examples: 684 download_size: 7008248 dataset_size: 10684923 - config_name: lt_hse features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 433214 num_examples: 153 - name: validation num_bytes: 433214 num_examples: 153 - name: test num_bytes: 433214 num_examples: 153 download_size: 265619 dataset_size: 1299642 - config_name: olo_kkpp features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 18096 num_examples: 19 - name: test num_bytes: 175355 num_examples: 106 download_size: 121837 dataset_size: 193451 - config_name: mt_mudt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos 
sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1858001 num_examples: 1123 - name: validation num_bytes: 826004 num_examples: 433 - name: test num_bytes: 892629 num_examples: 518 download_size: 2011753 dataset_size: 3576634 - config_name: gv_cadhan features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 483042 num_examples: 291 download_size: 287206 dataset_size: 483042 - config_name: mr_ufal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 420345 num_examples: 373 - name: validation num_bytes: 60791 num_examples: 46 - name: test num_bytes: 56582 num_examples: 47 download_size: 339354 dataset_size: 537718 - config_name: gun_dooley features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN 
'1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 1037858 num_examples: 1046 download_size: 571571 dataset_size: 1037858 - config_name: gun_thomas features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 143111 num_examples: 98 download_size: 92963 dataset_size: 143111 - config_name: mdf_jr features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 234147 num_examples: 167 download_size: 162330 dataset_size: 234147 - config_name: myu_tudet features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM 
'4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 26202 num_examples: 62 download_size: 20315 dataset_size: 26202 - config_name: pcm_nsc features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 16079391 num_examples: 7279 - name: validation num_bytes: 2099571 num_examples: 991 - name: test num_bytes: 2063685 num_examples: 972 download_size: 14907410 dataset_size: 20242647 - config_name: nyq_aha features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 8723 num_examples: 10 download_size: 6387 dataset_size: 8723 - config_name: sme_giella features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: 
upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1987666 num_examples: 2257 - name: test num_bytes: 1142396 num_examples: 865 download_size: 1862302 dataset_size: 3130062 - config_name: no_bokmaal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 25647647 num_examples: 15696 - name: validation num_bytes: 3828310 num_examples: 2409 - name: test num_bytes: 3151638 num_examples: 1939 download_size: 19177350 dataset_size: 32627595 - config_name: no_nynorsk features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 25630539 num_examples: 14174 - name: validation num_bytes: 3277649 num_examples: 
1890 - name: test num_bytes: 2601676 num_examples: 1511 download_size: 18532495 dataset_size: 31509864 - config_name: no_nynorsklia features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 3500907 num_examples: 3412 - name: validation num_bytes: 1003845 num_examples: 881 - name: test num_bytes: 999943 num_examples: 957 download_size: 3349676 dataset_size: 5504695 - config_name: cu_proiel features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 6106144 num_examples: 4124 - name: validation num_bytes: 1639912 num_examples: 1073 - name: test num_bytes: 1648459 num_examples: 1141 download_size: 6239839 dataset_size: 9394515 - config_name: fro_srcmf features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ 
'16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 11959859 num_examples: 13909 - name: validation num_bytes: 1526574 num_examples: 1842 - name: test num_bytes: 1535923 num_examples: 1927 download_size: 9043098 dataset_size: 15022356 - config_name: orv_rnc features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1527306 num_examples: 320 - name: test num_bytes: 2552216 num_examples: 637 download_size: 2627398 dataset_size: 4079522 - config_name: orv_torot features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 18077991 num_examples: 13336 - name: validation num_bytes: 2408313 num_examples: 1852 - name: test num_bytes: 2347934 num_examples: 1756 download_size: 15296362 dataset_size: 22834238 - config_name: otk_tonqq features: - name: idx dtype: string - name: text dtype: string - name: 
tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 22829 num_examples: 18 download_size: 14389 dataset_size: 22829 - config_name: fa_perdt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 48654947 num_examples: 26196 - name: validation num_bytes: 2687750 num_examples: 1456 - name: test num_bytes: 2600303 num_examples: 1455 download_size: 33606395 dataset_size: 53943000 - config_name: fa_seraji features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 12627691 num_examples: 4798 - name: validation num_bytes: 1634327 num_examples: 
599 - name: test num_bytes: 1675134 num_examples: 600 download_size: 9890107 dataset_size: 15937152 - config_name: pl_lfg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 16810910 num_examples: 13774 - name: validation num_bytes: 2093712 num_examples: 1745 - name: test num_bytes: 2100915 num_examples: 1727 download_size: 14865541 dataset_size: 21005537 - config_name: pl_pdb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 44652289 num_examples: 17722 - name: validation num_bytes: 5494883 num_examples: 2215 - name: test num_bytes: 5322608 num_examples: 2215 download_size: 36340919 dataset_size: 55469780 - config_name: pl_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': 
VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2943603 num_examples: 1000 download_size: 1943983 dataset_size: 2943603 - config_name: pt_bosque features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22808617 num_examples: 8328 - name: validation num_bytes: 1201577 num_examples: 560 - name: test num_bytes: 1131511 num_examples: 476 download_size: 15201503 dataset_size: 25141705 - config_name: pt_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 22208385 num_examples: 9664 - name: validation num_bytes: 2805628 num_examples: 1210 - name: test num_bytes: 2732063 num_examples: 1204 download_size: 15300844 dataset_size: 27746076 - config_name: pt_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: 
upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2431942 num_examples: 1000 download_size: 1516883 dataset_size: 2431942 - config_name: ro_nonstandard features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 74489083 num_examples: 24121 - name: validation num_bytes: 2663152 num_examples: 1052 - name: test num_bytes: 3017162 num_examples: 1052 download_size: 50345748 dataset_size: 80169397 - config_name: ro_rrt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 23695399 num_examples: 8043 - name: validation num_bytes: 2190973 num_examples: 752 - name: test num_bytes: 2092520 num_examples: 
729 download_size: 17187956 dataset_size: 27978892 - config_name: ro_simonero features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 15390734 num_examples: 3747 - name: validation num_bytes: 1926639 num_examples: 443 - name: test num_bytes: 1940787 num_examples: 491 download_size: 11409378 dataset_size: 19258160 - config_name: ru_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 10504099 num_examples: 3850 - name: validation num_bytes: 1635884 num_examples: 579 - name: test num_bytes: 1597603 num_examples: 601 download_size: 8830986 dataset_size: 13737586 - config_name: ru_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: 
feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2695958 num_examples: 1000 download_size: 1869304 dataset_size: 2695958 - config_name: ru_syntagrus features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 126305584 num_examples: 48814 - name: validation num_bytes: 17043673 num_examples: 6584 - name: test num_bytes: 16880203 num_examples: 6491 download_size: 102745164 dataset_size: 160229460 - config_name: ru_taiga features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5802733 num_examples: 3138 - name: validation num_bytes: 1382140 num_examples: 945 - name: test num_bytes: 1314084 num_examples: 881 download_size: 5491427 dataset_size: 8498957 - config_name: sa_ufal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': 
NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 431697 num_examples: 230 download_size: 424675 dataset_size: 431697 - config_name: sa_vedic features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2179608 num_examples: 2524 - name: test num_bytes: 1209605 num_examples: 1473 download_size: 2041583 dataset_size: 3389213 - config_name: gd_arcosg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 3952356 num_examples: 1990 - name: validation num_bytes: 1038211 num_examples: 645 - name: test num_bytes: 1034788 num_examples: 538 download_size: 3474087 dataset_size: 6025355 - config_name: sr_set features: - name: idx dtype: string - 
name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 9309552 num_examples: 3328 - name: validation num_bytes: 1503953 num_examples: 536 - name: test num_bytes: 1432672 num_examples: 520 download_size: 7414381 dataset_size: 12246177 - config_name: sms_giellagas features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 174744 num_examples: 104 download_size: 116491 dataset_size: 174744 - config_name: sk_snk features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 12017312 num_examples: 8483 - name: 
validation num_bytes: 1863926 num_examples: 1060 - name: test num_bytes: 1943012 num_examples: 1061 download_size: 10013420 dataset_size: 15824250 - config_name: sl_ssj features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 16713639 num_examples: 6478 - name: validation num_bytes: 2070847 num_examples: 734 - name: test num_bytes: 2083062 num_examples: 788 download_size: 12455962 dataset_size: 20867548 - config_name: sl_sst features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2903675 num_examples: 2078 - name: test num_bytes: 1493885 num_examples: 1110 download_size: 2655777 dataset_size: 4397560 - config_name: soj_aha features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - 
name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 6218 num_examples: 8 download_size: 4577 dataset_size: 6218 - config_name: ajp_madar features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 71956 num_examples: 100 download_size: 43174 dataset_size: 71956 - config_name: es_ancora features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 50101327 num_examples: 14305 - name: validation num_bytes: 5883940 num_examples: 1654 - name: test num_bytes: 5928986 num_examples: 1721 download_size: 37668083 dataset_size: 61914253 - config_name: es_gsd features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': 
PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 39582074 num_examples: 14187 - name: validation num_bytes: 3834443 num_examples: 1400 - name: test num_bytes: 1253720 num_examples: 426 download_size: 26073760 dataset_size: 44670237 - config_name: es_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2595946 num_examples: 1000 download_size: 1628475 dataset_size: 2595946 - config_name: swl_sslc features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 57443 num_examples: 87 - name: validation num_bytes: 59002 num_examples: 82 - name: test num_bytes: 24542 num_examples: 34 download_size: 81699 dataset_size: 140987 - config_name: sv_lines features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: 
string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 6731662 num_examples: 3176 - name: validation num_bytes: 2239951 num_examples: 1032 - name: test num_bytes: 2070626 num_examples: 1035 download_size: 7245283 dataset_size: 11042239 - config_name: sv_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2554725 num_examples: 1000 download_size: 1722516 dataset_size: 2554725 - config_name: sv_talbanken features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 9287256 num_examples: 4303 - name: validation num_bytes: 1361535 num_examples: 504 - name: 
test num_bytes: 2835742 num_examples: 1219 download_size: 8476012 dataset_size: 13484533 - config_name: gsw_uzh features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 111357 num_examples: 100 download_size: 59675 dataset_size: 111357 - config_name: tl_trg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 86696 num_examples: 128 download_size: 61344 dataset_size: 86696 - config_name: tl_ugnayan features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 90863 
num_examples: 94 download_size: 55207 dataset_size: 90863 - config_name: ta_mwtt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 522349 num_examples: 534 download_size: 414263 dataset_size: 522349 - config_name: ta_ttb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1538780 num_examples: 400 - name: validation num_bytes: 305206 num_examples: 80 - name: test num_bytes: 478941 num_examples: 120 download_size: 1753448 dataset_size: 2322927 - config_name: te_mtg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - 
name: misc sequence: string splits: - name: train num_bytes: 703512 num_examples: 1051 - name: validation num_bytes: 91547 num_examples: 131 - name: test num_bytes: 99757 num_examples: 146 download_size: 643764 dataset_size: 894816 - config_name: th_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2341697 num_examples: 1000 download_size: 1606517 dataset_size: 2341697 - config_name: tpn_tudet features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 8089 num_examples: 8 download_size: 5447 dataset_size: 8089 - config_name: qtd_sagt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: 
string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 583697 num_examples: 285 - name: validation num_bytes: 1564765 num_examples: 801 - name: test num_bytes: 1710777 num_examples: 805 download_size: 2299611 dataset_size: 3859239 - config_name: tr_boun features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 12827173 num_examples: 7803 - name: validation num_bytes: 1577760 num_examples: 979 - name: test num_bytes: 1580727 num_examples: 979 download_size: 9742035 dataset_size: 15985660 - config_name: tr_gb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2146729 num_examples: 2880 download_size: 1474083 dataset_size: 2146729 - config_name: tr_imst features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': 
PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 5063905 num_examples: 3664 - name: validation num_bytes: 1342351 num_examples: 988 - name: test num_bytes: 1347524 num_examples: 983 download_size: 4711018 dataset_size: 7753780 - config_name: tr_pud features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 2021772 num_examples: 1000 download_size: 1359487 dataset_size: 2021772 - config_name: uk_iu features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 18886802 num_examples: 5496 - name: validation num_bytes: 2592721 num_examples: 672 - name: test num_bytes: 3561164 num_examples: 892 download_size: 17344586 dataset_size: 25040687 - config_name: hsb_ufal features: - name: idx dtype: string - name: text 
dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 54257 num_examples: 23 - name: test num_bytes: 1246592 num_examples: 623 download_size: 781067 dataset_size: 1300849 - config_name: ur_udtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 19808745 num_examples: 4043 - name: validation num_bytes: 2652349 num_examples: 552 - name: test num_bytes: 2702596 num_examples: 535 download_size: 15901007 dataset_size: 25163690 - config_name: ug_udt features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2570856 
num_examples: 1656 - name: validation num_bytes: 1406032 num_examples: 900 - name: test num_bytes: 1371993 num_examples: 900 download_size: 3455092 dataset_size: 5348881 - config_name: vi_vtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1689772 num_examples: 1400 - name: validation num_bytes: 948019 num_examples: 800 - name: test num_bytes: 987207 num_examples: 800 download_size: 2055529 dataset_size: 3624998 - config_name: wbp_ufal features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 48533 num_examples: 55 download_size: 38326 dataset_size: 48533 - config_name: cy_ccg features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats 
sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 1629465 num_examples: 704 - name: test num_bytes: 1779002 num_examples: 953 download_size: 1984759 dataset_size: 3408467 - config_name: wo_wtb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: train num_bytes: 2781883 num_examples: 1188 - name: validation num_bytes: 1204839 num_examples: 449 - name: test num_bytes: 1227124 num_examples: 470 download_size: 3042699 dataset_size: 5213846 - config_name: yo_ytb features: - name: idx dtype: string - name: text dtype: string - name: tokens sequence: string - name: lemmas sequence: string - name: upos sequence: class_label: names: '0': NOUN '1': PUNCT '2': ADP '3': NUM '4': SYM '5': SCONJ '6': ADJ '7': PART '8': DET '9': CCONJ '10': PROPN '11': PRON '12': X '13': _ '14': ADV '15': INTJ '16': VERB '17': AUX - name: xpos sequence: string - name: feats sequence: string - name: head sequence: string - name: deprel sequence: string - name: deps sequence: string - name: misc sequence: string splits: - name: test num_bytes: 905766 num_examples: 318 download_size: 567955 dataset_size: 905766 --- # Dataset Card for Universal Dependencies Treebank ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset 
Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Universal Dependencies](https://universaldependencies.org/) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@jplu](https://github.com/jplu) for adding this dataset.
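## Usage Sketch

The `upos` feature above is declared as a sequence of integer class labels rather than raw tag strings. A minimal sketch (hypothetical helper, not part of the dataset loader) of decoding those ids back to Universal POS tags, using exactly the mapping declared in the YAML metadata:

```python
# Integer-id -> tag mapping, copied verbatim from the `upos` class_label
# declaration in the dataset metadata above.
UPOS_NAMES = [
    "NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET",
    "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ", "VERB", "AUX",
]


def decode_upos(ids):
    """Map a sequence of integer `upos` class ids to tag strings."""
    return [UPOS_NAMES[i] for i in ids]


# Example: a short tagged sequence (PROPN VERB NOUN PUNCT).
print(decode_upos([10, 16, 0, 1]))  # ['PROPN', 'VERB', 'NOUN', 'PUNCT']
```

When loading through the `datasets` library, the same mapping is available programmatically via the feature's `int2str` method, so hard-coding the list as above is only needed when working with the raw exported ids.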
universal_morphologies
--- annotations_creators: - expert-generated language_creators: - found language: - ady - ang - ar - arn - ast - az - ba - be - bg - bn - bo - br - ca - ckb - crh - cs - csb - cu - cy - da - de - dsb - el - en - es - et - eu - fa - fi - fo - fr - frm - fro - frr - fur - fy - ga - gal - gd - gmh - gml - got - grc - gv - hai - he - hi - hu - hy - is - it - izh - ka - kbd - kjh - kk - kl - klr - kmr - kn - krl - kw - la - liv - lld - lt - lud - lv - mk - mt - mwf - nap - nb - nds - nl - nn - nv - oc - olo - osx - pl - ps - pt - qu - ro - ru - sa - sga - sh - sl - sme - sq - sv - swc - syc - te - tg - tk - tr - tt - uk - ur - uz - vec - vep - vot - xcl - xno - yi - zu license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K - 1K<n<10K - n<1K source_datasets: - original task_categories: - token-classification - text-classification task_ids: - multi-class-classification - multi-label-classification paperswithcode_id: null pretty_name: UniversalMorphologies configs: - ady - ang - ara - arn - ast - aze - bak - bel - ben - bod - bre - bul - cat - ces - chu - ckb - cor - crh - csb - cym - dan - deu - dsb - ell - eng - est - eus - fao - fas - fin - fra - frm - fro - frr - fry - fur - gal - gla - gle - glv - gmh - gml - got - grc - hai - hbs - heb - hin - hun - hye - isl - ita - izh - kal - kan - kat - kaz - kbd - kjh - klr - kmr - krl - lat - lav - lit - liv - lld - lud - mkd - mlt - mwf - nap - nav - nds - nld - nno - nob - oci - olo - osx - pol - por - pus - que - ron - rus - san - sga - slv - sme - spa - sqi - swc - swe - syc - tat - tel - tgk - tuk - tur - ukr - urd - uzb - vec - vep - vot - xcl - xno - yid - zul tags: - morphology dataset_info: - config_name: ady features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 
3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: 
class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 3428235 num_examples: 1666 download_size: 1008487 dataset_size: 3428235 - 
config_name: ang features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 
18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 
7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 6569844 num_examples: 1867 download_size: 1435972 dataset_size: 6569844 - config_name: ara features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: 
INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: 
Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 24388295 num_examples: 4134 download_size: 7155824 dataset_size: 24388295 - config_name: arn features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: 
names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: 
ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 124050 num_examples: 26 download_size: 20823 dataset_size: 124050 - config_name: ast features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: 
BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: 
EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 4913008 num_examples: 436 download_size: 1175901 dataset_size: 4913008 - config_name: aze features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect 
sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: 
INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1248687 num_examples: 340 download_size: 276306 dataset_size: 1248687 - config_name: bak features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: 
ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - 
name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1984657 num_examples: 1084 download_size: 494758 dataset_size: 1984657 - config_name: bel features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: 
ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: 
Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - 
name: train num_bytes: 2626405 num_examples: 1027 download_size: 739537 dataset_size: 2626405 - config_name: ben features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 
9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - 
name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 746181 num_examples: 136 download_size: 251991 dataset_size: 746181 - config_name: bod features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: 
class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 
21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 880074 num_examples: 1335 download_size: 197523 dataset_size: 880074 - config_name: bre features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 
0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 
10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 387583 num_examples: 44 download_size: 82159 dataset_size: 387583 - config_name: bul features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 
13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person 
sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 9589915 num_examples: 2468 download_size: 3074574 dataset_size: 9589915 - config_name: cat features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: 
ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: 
class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 12988492 num_examples: 1547 download_size: 2902458 dataset_size: 12988492 - config_name: ces features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: 
ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 
3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 21056640 num_examples: 5125 download_size: 4875288 dataset_size: 21056640 - config_name: chu features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart 
sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: 
BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 
9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 628237 num_examples: 152 download_size: 149081 dataset_size: 628237 - config_name: ckb features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: 
MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: 
class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 3843267 num_examples: 274 download_size: 914302 dataset_size: 3843267 - config_name: cor features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: 
PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: 
PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 83434 num_examples: 9 download_size: 17408 dataset_size: 83434 - config_name: crh features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 
37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: 
INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1154595 num_examples: 1230 download_size: 186325 dataset_size: 1154595 - config_name: csb features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: 
ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 
11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 82172 num_examples: 37 download_size: 14259 dataset_size: 82172 - config_name: cym features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: 
ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: 
ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1748431 num_examples: 183 download_size: 374501 dataset_size: 1748431 - config_name: dan features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: 
class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: 
Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 4204551 num_examples: 3193 download_size: 845939 dataset_size: 4204551 - config_name: deu features: - name: lemma dtype: 
string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 
22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: 
names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 28436466 num_examples: 15060 download_size: 5966618 dataset_size: 28436466 - config_name: dsb features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: 
class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: 
SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2985168 num_examples: 994 download_size: 536096 dataset_size: 2985168 - config_name: ell features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis 
sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: 
PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 34112450 num_examples: 11906 download_size: 11222248 dataset_size: 34112450 - config_name: eng features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: 
CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity 
sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 18455909 num_examples: 22765 download_size: 3285554 dataset_size: 18455909 - config_name: est features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 
1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: 
class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 6125879 num_examples: 886 download_size: 1397385 dataset_size: 6125879 - config_name: eus features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: 
ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: 
names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2444247 num_examples: 26 download_size: 876480 dataset_size: 2444247 - config_name: fao features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: 
Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: 
class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 7117926 
num_examples: 3077 download_size: 1450065 dataset_size: 7117926 - config_name: fas features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 
12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: 
class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 6382709 num_examples: 273 download_size: 2104724 dataset_size: 6382709 - config_name: fin features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: 
DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: 
PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: '1' num_bytes: 331855860 num_examples: 46152 - name: '2' num_bytes: 81091817 num_examples: 11491 download_size: 109324828 dataset_size: 412947677 - config_name: fra features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: 
Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: 
AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 58747699 num_examples: 7535 download_size: 13404983 dataset_size: 58747699 - config_name: frm features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: 
BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 
15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 6015940 num_examples: 603 download_size: 1441122 dataset_size: 6015940 - config_name: fro features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 
27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 
16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 20260793 num_examples: 1700 download_size: 4945582 dataset_size: 20260793 - config_name: frr features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: 
ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: 
class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 526898 num_examples: 51 download_size: 112236 dataset_size: 526898 - config_name: fry features: - name: lemma dtype: string - name: forms sequence: 
- name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: 
BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 
3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 222067 num_examples: 85 download_size: 38227 dataset_size: 222067 - config_name: fur features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - 
name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: 
SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1282374 num_examples: 168 download_size: 258793 dataset_size: 1282374 - config_name: gal features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 
2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: 
PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 5844604 num_examples: 486 download_size: 1259120 dataset_size: 5844604 - config_name: gla features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 
32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: 
Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 126847 num_examples: 73 download_size: 25025 dataset_size: 126847 - config_name: gle features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case 
sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 
6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 17065939 num_examples: 7464 download_size: 3853188 dataset_size: 17065939 - config_name: glv features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: 
ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 
7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 7523 num_examples: 1 download_size: 401 dataset_size: 7523 - config_name: gmh features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: 
Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 
0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 114677 num_examples: 29 download_size: 20851 dataset_size: 114677 - config_name: gml features: - 
name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: 
BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice 
sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 233831 num_examples: 52 download_size: 47151 dataset_size: 233831 - config_name: got features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: 
Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: 
names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train download_size: 2 dataset_size: 0 - config_name: grc features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 
1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: 
PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 6779867 num_examples: 2431 download_size: 2057514 dataset_size: 6779867 - config_name: hai features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 
31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - 
name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1166240 num_examples: 41 download_size: 329817 dataset_size: 1166240 - config_name: hbs features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - 
name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: 
CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 132933961 num_examples: 24419 download_size: 32194142 dataset_size: 132933961 - config_name: heb features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: 
ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: 
AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2211208 num_examples: 510 download_size: 498065 dataset_size: 2211208 - config_name: hin features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: 
INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity 
sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 10083004 num_examples: 258 download_size: 3994359 dataset_size: 
10083004 - config_name: hun features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 
17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: 
RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 83517327 num_examples: 14892 download_size: 19544319 dataset_size: 83517327 - config_name: hye features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: 
RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - 
name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 56537127 num_examples: 7033 download_size: 17810316 dataset_size: 56537127 - config_name: isl features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: 
class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: 
class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 12120572 num_examples: 4775 download_size: 2472980 dataset_size: 12120572 - config_name: ita features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 
18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 
2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 81905203 num_examples: 10009 download_size: 19801423 dataset_size: 81905203 - config_name: izh features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: 
ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 
3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 170094 num_examples: 50 download_size: 28558 dataset_size: 170094 - config_name: kal features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 
10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: 
LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 60434 num_examples: 23 download_size: 9795 dataset_size: 60434 - config_name: kan features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: 
ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 
33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other 
sequence: string splits: - name: train num_bytes: 1052294 num_examples: 159 download_size: 318512 dataset_size: 1052294 - config_name: kat features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 
6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 
5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 12532540 num_examples: 3782 download_size: 4678979 dataset_size: 12532540 - config_name: kaz features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: 
Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: 
PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 62519 num_examples: 26 download_size: 14228 dataset_size: 62519 - config_name: kbd features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison 
sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: 
LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 511406 num_examples: 250 download_size: 133788 dataset_size: 511406 - config_name: kjh features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: 
GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: 
NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 193741 num_examples: 75 download_size: 44907 dataset_size: 193741 - config_name: klr features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: 
ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 
18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 28909688 num_examples: 591 download_size: 7561829 dataset_size: 28909688 - config_name: kmr features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 
3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: 
LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 35504487 num_examples: 15083 download_size: 8592722 dataset_size: 35504487 - config_name: krl features: - name: lemma dtype: string - name: forms sequence: - name: word 
dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: 
BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: 
DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 106475 num_examples: 20 download_size: 19024 dataset_size: 106475 - config_name: lat features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender 
sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - 
name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 81932667 num_examples: 17214 download_size: 19567252 dataset_size: 81932667 - config_name: lav features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 
3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 
11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 21219584 num_examples: 7548 download_size: 5048680 dataset_size: 21219584 - config_name: lit features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: 
SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: 
Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 5287268 num_examples: 1458 download_size: 1191554 dataset_size: 5287268 - config_name: liv features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: 
Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 
5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 642166 num_examples: 203 download_size: 141467 dataset_size: 642166 - config_name: lld features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: 
ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 
7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1240257 num_examples: 180 download_size: 278592 dataset_size: 1240257 - config_name: lud features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: 
NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: 
class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: mikhailovskoye num_bytes: 11361 num_examples: 2 - name: new_written num_bytes: 35132 
num_examples: 94 - name: southern_ludian_svjatozero num_bytes: 57276 num_examples: 71 download_size: 14697 dataset_size: 103769 - config_name: mkd features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: 
NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: 
HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 27800390 num_examples: 10313 download_size: 8157589 dataset_size: 27800390 - config_name: mlt features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - 
name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 
18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 604577 num_examples: 112 download_size: 124584 dataset_size: 604577 - config_name: mwf features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: 
Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: 
AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 172890 num_examples: 29 download_size: 25077 dataset_size: 172890 - config_name: nap features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: 
PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: 
CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 293699 num_examples: 40 download_size: 64163 dataset_size: 293699 - config_name: nav features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 
28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: 
SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2051393 num_examples: 674 download_size: 523673 dataset_size: 2051393 - config_name: nds features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: 
ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 
0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train download_size: 2 dataset_size: 0 - config_name: nld features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: 
class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: 
BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: 
ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 8813867 num_examples: 4993 download_size: 1874427 dataset_size: 8813867 - config_name: nno features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: 
FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 
0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2704566 num_examples: 4689 download_size: 420695 dataset_size: 2704566 - config_name: nob features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 
9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: 
PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 3359706 num_examples: 5527 download_size: 544432 dataset_size: 3359706 - config_name: oci features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: 
APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: 
ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1327716 num_examples: 174 download_size: 276611 dataset_size: 1327716 - config_name: olo features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 
4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 
13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: kotkozero num_bytes: 7682 num_examples: 5 - name: new_written num_bytes: 11158424 num_examples: 15293 - name: syamozero num_bytes: 6379 num_examples: 2 - name: vedlozero num_bytes: 6120 num_examples: 1 - name: vidlitsa num_bytes: 54363 num_examples: 3 download_size: 2130154 dataset_size: 11232968 - config_name: osx features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 
8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: 
LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 3500590 num_examples: 863 download_size: 759997 dataset_size: 3500590 - config_name: pol features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 
0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 
31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: 
CFOC - name: Other sequence: string splits: - name: train num_bytes: 30855235 num_examples: 10185 download_size: 6666266 dataset_size: 30855235 - config_name: por features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: 
NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 
2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 48530106 num_examples: 4001 download_size: 10982524 dataset_size: 48530106 - config_name: pus features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 
10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: 
PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1176421 num_examples: 395 download_size: 297043 dataset_size: 1176421 - config_name: que features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: 
TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: 
HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 27823298 num_examples: 1006 download_size: 6742890 dataset_size: 27823298 - config_name: ron features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: 
NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 
13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 13187957 num_examples: 4405 download_size: 2990521 dataset_size: 13187957 - config_name: rus features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: 
ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 
14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 77484460 num_examples: 28068 download_size: 25151401 dataset_size: 77484460 - config_name: san features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: 
names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: 
Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 5500001 num_examples: 917 download_size: 1788739 dataset_size: 5500001 - config_name: sga features: - name: lemma dtype: 
string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 
22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: 
names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 190479 num_examples: 49 download_size: 43469 dataset_size: 190479 - config_name: slv features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: 
class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: 
SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 9071547 num_examples: 2535 download_size: 1911039 dataset_size: 9071547 - config_name: sme features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis 
sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: 
PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 9764653 num_examples: 2103 download_size: 2050015 dataset_size: 9764653 - config_name: spa features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 
27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: 
class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 61472202 num_examples: 5460 download_size: 14386131 dataset_size: 61472202 - config_name: sqi features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: 
PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: 
names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 5422400 num_examples: 589 download_size: 1261468 dataset_size: 5422400 - config_name: swc features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: 
ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 
1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1694529 num_examples: 100 download_size: 414624 dataset_size: 1694529 - config_name: swe features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: 
class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 
1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 12897827 num_examples: 10553 
download_size: 2709960 dataset_size: 12897827 - config_name: syc features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: 
BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: 
IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 553392 num_examples: 160 download_size: 130000 dataset_size: 553392 - config_name: tat features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: 
NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: 
PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1203356 num_examples: 1283 download_size: 194277 dataset_size: 1203356 - config_name: tel features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: 
Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: 
Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 285769 num_examples: 127 download_size: 95069 dataset_size: 285769 - config_name: tgk features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: 
PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 
0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 25276 num_examples: 75 download_size: 2366 dataset_size: 25276 - config_name: tuk features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: 
ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 
3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 127712 num_examples: 68 download_size: 20540 dataset_size: 127712 - config_name: tur features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 
10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: 
LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 44723850 num_examples: 3579 download_size: 11552946 dataset_size: 44723850 - config_name: ukr features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 
2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: 
BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other 
sequence: string splits: - name: train num_bytes: 3299187 num_examples: 1493 download_size: 870660 dataset_size: 3299187 - config_name: urd features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 
6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 
5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2197237 num_examples: 182 download_size: 685613 dataset_size: 2197237 - config_name: uzb features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: 
Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: 
PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 196802 num_examples: 15 download_size: 41921 dataset_size: 196802 - config_name: vec features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison 
sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: 
LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 2892987 num_examples: 368 download_size: 615931 dataset_size: 2892987 - config_name: vep features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: 
GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: 
NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: central_eastern num_bytes: 500981 num_examples: 65 - name: central_western num_bytes: 2527618 num_examples: 111 - name: new_written num_bytes: 79899484 num_examples: 9304 - name: northern num_bytes: 175242 num_examples: 21 - name: southern num_bytes: 206289 num_examples: 17 download_size: 20131151 dataset_size: 83309614 - config_name: vot features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: 
ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 
6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 217663 num_examples: 55 download_size: 37179 dataset_size: 217663 - config_name: xcl features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: 
DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 
32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: 
Other sequence: string splits: - name: train num_bytes: 16856327 num_examples: 4300 download_size: 4950513 dataset_size: 16856327 - config_name: xno features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 
5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 
4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 48938 num_examples: 5 download_size: 9641 dataset_size: 48938 - config_name: yid features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: 
Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: 
PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 1409582 num_examples: 803 download_size: 429391 dataset_size: 1409582 - config_name: zul features: - name: lemma dtype: string - name: forms sequence: - name: word dtype: string - name: Aktionsart sequence: class_label: names: 0: STAT 1: DYN 2: TEL 3: ATEL 4: PCT 5: DUR 6: ACH 7: ACCMP 8: SEMEL 9: ACTY - name: Animacy sequence: class_label: names: 0: ANIM 1: INAN 2: HUM 3: NHUM - name: Argument_Marking sequence: class_label: names: 0: ARGNO1S 1: ARGNO2S 2: ARGNO3S 3: ARGNO1P 4: ARGNO2P 5: ARGNO3P 6: ARGAC1S 7: ARGAC2S 8: ARGAC3S 9: ARGAC1P 10: ARGAC2P 11: ARGAC3P 12: ARGAB1S 13: ARGAB2S 14: ARGAB3S 15: ARGAB1P 16: ARGAB2P 17: ARGAB3P 18: ARGER1S 19: ARGER2S 20: ARGER3S 21: ARGER1P 22: ARGER2P 23: ARGER3P 24: ARGDA1S 25: ARGDA2S 26: ARGDA3S 27: ARGDA1P 28: ARGDA2P 29: ARGDA3P 30: ARGBE1S 31: ARGBE2S 32: ARGBE3S 33: ARGBE1P 34: ARGBE2P 35: ARGBE3P - name: Aspect sequence: class_label: names: 0: IPFV 1: PFV 2: PRF 3: PROG 4: PROSP 5: ITER 6: HAB - name: Case sequence: class_label: names: 0: NOM 1: ACC 2: ERG 3: ABS 4: NOMS 5: DAT 6: BEN 7: PRP 8: GEN 9: REL 10: PRT 11: INS 12: COM 13: VOC 14: COMPV 15: EQTV 16: PRIV 17: PROPR 18: AVR 19: FRML 20: TRANS 21: BYWAY 22: INTER 23: AT 24: POST 25: IN 26: CIRC 27: ANTE 28: APUD 29: 'ON' 30: ONHR 31: ONVR 32: SUB 33: REM 34: PROXM 35: ESS 36: ALL 37: ABL 38: APPRX 39: TERM - name: Comparison 
sequence: class_label: names: 0: CMPR 1: SPRL 2: AB 3: RL 4: EQT - name: Definiteness sequence: class_label: names: 0: DEF 1: INDF 2: SPEC 3: NSPEC - name: Deixis sequence: class_label: names: 0: PROX 1: MED 2: REMT 3: REF1 4: REF2 5: NOREF 6: PHOR 7: VIS 8: NVIS 9: ABV 10: EVEN 11: BEL - name: Evidentiality sequence: class_label: names: 0: FH 1: DRCT 2: SEN 3: VISU 4: NVSEN 5: AUD 6: NFH 7: QUOT 8: RPRT 9: HRSY 10: INFER 11: ASSUM - name: Finiteness sequence: class_label: names: 0: FIN 1: NFIN - name: Gender sequence: class_label: names: 0: MASC 1: FEM 2: NEUT 3: NAKH1 4: NAKH2 5: NAKH3 6: NAKH4 7: NAKH5 8: NAKH6 9: NAKH7 10: NAKH8 11: BANTU1 12: BANTU2 13: BANTU3 14: BANTU4 15: BANTU5 16: BANTU6 17: BANTU7 18: BANTU8 19: BANTU9 20: BANTU10 21: BANTU11 22: BANTU12 23: BANTU13 24: BANTU14 25: BANTU15 26: BANTU16 27: BANTU17 28: BANTU18 29: BANTU19 30: BANTU20 31: BANTU21 32: BANTU22 33: BANTU23 - name: Information_Structure sequence: class_label: names: 0: TOP 1: FOC - name: Interrogativity sequence: class_label: names: 0: DECL 1: INT - name: Language_Specific sequence: class_label: names: 0: LGSPEC1 1: LGSPEC2 2: LGSPEC3 3: LGSPEC4 4: LGSPEC5 5: LGSPEC6 6: LGSPEC7 7: LGSPEC8 8: LGSPEC9 9: LGSPEC10 - name: Mood sequence: class_label: names: 0: IND 1: SBJV 2: REAL 3: IRR 4: AUPRP 5: AUNPRP 6: IMP 7: COND 8: PURP 9: INTEN 10: POT 11: LKLY 12: ADM 13: OBLIG 14: DEB 15: PERM 16: DED 17: SIM 18: OPT - name: Number sequence: class_label: names: 0: SG 1: PL 2: GRPL 3: DU 4: TRI 5: PAUC 6: GRPAUC 7: INVN - name: Part_Of_Speech sequence: class_label: names: 0: N 1: PROPN 2: ADJ 3: PRO 4: CLF 5: ART 6: DET 7: V 8: ADV 9: AUX 10: V.PTCP 11: V.MSDR 12: V.CVB 13: ADP 14: COMP 15: CONJ 16: NUM 17: PART 18: INTJ - name: Person sequence: class_label: names: 0: '0' 1: '1' 2: '2' 3: '3' 4: '4' 5: INCL 6: EXCL 7: PRX 8: OBV - name: Polarity sequence: class_label: names: 0: POS 1: NEG - name: Politeness sequence: class_label: names: 0: INFM 1: FORM 2: ELEV 3: HUMB 4: POL 5: AVOID 6: 
LOW 7: HIGH 8: STELEV 9: STSUPR 10: LIT 11: FOREG 12: COL - name: Possession sequence: class_label: names: 0: ALN 1: NALN 2: PSS1S 3: PSS2S 4: PSS2SF 5: PSS2SM 6: PSS2SINFM 7: PSS2SFORM 8: PSS3S 9: PSS3SF 10: PSS3SM 11: PSS1D 12: PSS1DI 13: PSS1DE 14: PSS2D 15: PSS2DM 16: PSS2DF 17: PSS3D 18: PSS3DF 19: PSS3DM 20: PSS1P 21: PSS1PI 22: PSS1PE 23: PSS2P 24: PSS2PF 25: PSS2PM 26: PSS3PF 27: PSS3PM - name: Switch_Reference sequence: class_label: names: 0: SS 1: SSADV 2: DS 3: DSADV 4: OR 5: SIMMA 6: SEQMA 7: LOG - name: Tense sequence: class_label: names: 0: PRS 1: PST 2: FUT 3: IMMED 4: HOD 5: 1DAY 6: RCT 7: RMT - name: Valency sequence: class_label: names: 0: IMPRS 1: INTR 2: TR 3: DITR 4: REFL 5: RECP 6: CAUS 7: APPL - name: Voice sequence: class_label: names: 0: ACT 1: MID 2: PASS 3: ANTIP 4: DIR 5: INV 6: AGFOC 7: PFOC 8: LFOC 9: BFOC 10: ACFOC 11: IFOC 12: CFOC - name: Other sequence: string splits: - name: train num_bytes: 7152507 num_examples: 566 download_size: 1581402 dataset_size: 7152507 --- # Dataset Card for UniMorph ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation
Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [UniMorph Homepage](https://unimorph.github.io/) - **Repository:** [List of UniMorph repositories](https://github.com/unimorph) - **Paper:** [The Composition and Use of the Universal Morphological Feature Schema (UniMorph Schema)](https://unimorph.github.io/doc/unimorph-schema.pdf) - **Point of Contact:** [Arya McCarthy](mailto:arya@jhu.edu) ### Dataset Summary The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages. The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning, typically carried by the lemma, and by a rendering of its inflectional form in terms of a bundle of morphological features from our schema. The specification of the schema is described in Sylak-Glassman (2016). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The current version of the UniMorph dataset covers 110 languages. ## Dataset Structure ### Data Instances Each data instance comprises a lemma and a set of possible realizations with morphological and meaning annotations. For example: ``` {'forms': {'Aktionsart': [[], [], [], [], []], 'Animacy': [[], [], [], [], []], ... 'Finiteness': [[], [], [], [1], []], ... 'Number': [[], [], [0], [], []], 'Other': [[], [], [], [], []], 'Part_Of_Speech': [[7], [10], [7], [7], [10]], ... 'Tense': [[1], [1], [0], [], [0]], ...
'word': ['ablated', 'ablated', 'ablates', 'ablate', 'ablating']}, 'lemma': 'ablate'} ``` ### Data Fields Each instance in the dataset has the following fields: - `lemma`: the common lemma for all forms - `forms`: all annotated forms for this lemma, with: - `word`: the full word form - [`category`]: a categorical variable denoting one or several tags in a category (several to represent composite tags, originally denoted with `A+B`). The full list of categories and possible tags for each can be found [here](https://github.com/unimorph/unimorph.github.io/blob/master/unimorph-schema-json/dimensions-to-features.json) ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
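The integer tags in the example instance decode back to UniMorph feature labels through the class-label name lists declared in the dataset's metadata (the YAML header above). A minimal sketch in plain Python, using the Tense names from the metadata and the `ablate` instance shown above:

```python
# Tense class-label names, in the order declared in the dataset metadata above
TENSE = ["PRS", "PST", "FUT", "IMMED", "HOD", "1DAY", "RCT", "RMT"]

# Word forms and their per-form Tense annotations from the 'ablate' instance
words = ["ablated", "ablated", "ablates", "ablate", "ablating"]
forms_tense = [[1], [1], [0], [], [0]]

# An empty tag list means the form carries no annotation for that category
decoded = [(w, [TENSE[i] for i in ids]) for w, ids in zip(words, forms_tense)]
# e.g. ('ablates', ['PRS']) and ('ablated', ['PST'])
```

When loading through the `datasets` library, the same mapping should be recoverable from `dataset.features` (each category is a sequence of class labels), so hard-coding the name lists is only needed for offline illustration.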
urdu_fake_news
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ur license: - unknown multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - fact-checking - intent-classification pretty_name: Bend the Truth (Urdu Fake News) dataset_info: features: - name: news dtype: string - name: label dtype: class_label: names: '0': Fake '1': Real - name: category dtype: class_label: names: '0': bus '1': hlth '2': sp '3': tch '4': sbz splits: - name: train num_bytes: 1762905 num_examples: 638 - name: test num_bytes: 799587 num_examples: 262 download_size: 1042653 dataset_size: 2562492 --- # Dataset Card for Bend the Truth (Urdu Fake News) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/MaazAmjad/Datasets-for-Urdu-news/) - **Repository:** [Github](https://github.com/MaazAmjad/Datasets-for-Urdu-news/) - **Paper:** - **Leaderboard:** - **Point of Contact:** 
[Maaz Amjad](https://github.com/MaazAmjad) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - news: the news text, a string in Urdu - label: the label indicating whether the provided news is real or fake. - category: the intent of the news being presented. The five available classes are Sports, Health, Technology, Entertainment, and Business. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
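Both `label` and `category` are stored as integer class labels, so decoding a row means indexing into the name lists from the metadata above. A minimal sketch in plain Python; note that the expansion of the category abbreviations is an assumption here (e.g. `sbz` presumably abbreviates showbiz, i.e. Entertainment), matching the five classes named in the Data Fields section:

```python
# Class-label names as declared in the dataset metadata
LABELS = ["Fake", "Real"]
CATEGORIES = ["bus", "hlth", "sp", "tch", "sbz"]

# Assumed expansion of the abbreviations to the five topics named above
CATEGORY_FULL = {
    "bus": "Business",
    "hlth": "Health",
    "sp": "Sports",
    "tch": "Technology",
    "sbz": "Entertainment",  # assumption: 'sbz' = showbiz
}

example = {"news": "...", "label": 1, "category": 2}  # a hypothetical row
decoded_label = LABELS[example["label"]]                           # 'Real'
decoded_category = CATEGORY_FULL[CATEGORIES[example["category"]]]  # 'Sports'
```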
urdu_sentiment_corpus
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - ur license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: urdu-sentiment-corpus pretty_name: Urdu Sentiment Corpus (USC) dataset_info: features: - name: sentence dtype: string - name: sentiment dtype: class_label: names: '0': P '1': N '2': O splits: - name: train num_bytes: 161190 num_examples: 1000 download_size: 51583 dataset_size: 161190 --- # Dataset Card for Urdu Sentiment Corpus (USC) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Repository:** [Github](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus) - **Paper:** [IEEE](https://ieeexplore.ieee.org/abstract/document/9080043) - **Leaderboard:** - **Point of Contact:** [Muhammad Yaseen 
Khan](https://github.com/MuhammadYaseenKhan) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: the Urdu tweet - sentiment: the sentiment exhibited in the tweet, which can be Positive (P), Negative (N), or Objective (O). ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
vctk
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual pretty_name: VCTK size_categories: - 10K<n<100K source_datasets: - original task_categories: - automatic-speech-recognition task_ids: [] paperswithcode_id: vctk train-eval-index: - config: main task: automatic-speech-recognition task_id: speech_recognition splits: train_split: train col_mapping: file: path text: text metrics: - type: wer name: WER - type: cer name: CER dataset_info: features: - name: speaker_id dtype: string - name: audio dtype: audio: sampling_rate: 48000 - name: file dtype: string - name: text dtype: string - name: text_id dtype: string - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: region dtype: string - name: comment dtype: string config_name: main splits: - name: train num_bytes: 40103111 num_examples: 88156 download_size: 11747302977 dataset_size: 40103111 --- # Dataset Card for VCTK ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - 
[Contributions](#contributions) ## Dataset Description - **Homepage:** [Edinburgh DataShare](https://doi.org/10.7488/ds/2645) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the Rainbow Passage, and an elicitation paragraph used for the Speech Accent Archive. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A data point comprises the path to the audio file, called `file`, and its transcription, called `text`. ``` { 'speaker_id': 'p225', 'text_id': '001', 'text': 'Please call Stella.', 'age': '23', 'gender': 'F', 'accent': 'English', 'region': 'Southern England', 'file': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac', 'audio': { 'path': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac', 'array': array([0.00485229, 0.00689697, 0.00619507, ..., 0.00811768, 0.00836182, 0.00854492], dtype=float32), 'sampling_rate': 48000 }, 'comment': '' } ``` Each audio file is a single-channel FLAC with a sample rate of 48000 Hz. ### Data Fields Each row consists of the following fields: - `speaker_id`: Speaker ID - `audio`: Audio recording - `file`: Path to audio file - `text`: Text transcription of corresponding audio - `text_id`: Text ID - `age`: Speaker's age - `gender`: Speaker's gender - `accent`: Speaker's accent - `region`: Speaker's region, if annotation exists - `comment`: Miscellaneous comments, if any ### Data Splits The dataset has no predefined splits.
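Since each `audio` entry carries the decoded waveform together with its sampling rate, a clip's duration follows directly from the array length. A minimal sketch with a stand-in waveform (a real `array` would come from a loaded example such as `dataset[0]['audio']`):

```python
# Stand-in for example["audio"]: 1.5 seconds of silence at the corpus's 48 kHz rate
audio = {
    "array": [0.0] * (48_000 * 3 // 2),
    "sampling_rate": 48_000,
}

# Duration in seconds = number of samples / samples per second
duration_s = len(audio["array"]) / audio["sampling_rate"]
```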
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)) ### Citation Information ```bibtex @inproceedings{Veaux2017CSTRVC, title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit}, author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald}, year = 2017 } ``` ### Contributions Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
vivos
--- pretty_name: VIVOS annotations_creators: - expert-generated language_creators: - crowdsourced - expert-generated language: - vi license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - automatic-speech-recognition task_ids: [] dataset_info: features: - name: speaker_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: sentence dtype: string splits: - name: train num_bytes: 1722002133 num_examples: 11660 - name: test num_bytes: 86120227 num_examples: 760 download_size: 1475540500 dataset_size: 1808122360 --- # Dataset Card for VIVOS ## Table of Contents - [Dataset Card for VIVOS](#dataset-card-for-vivos) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation 
Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://doi.org/10.5281/zenodo.7068130 - **Repository:** [Needs More Information] - **Paper:** [A non-expert Kaldi recipe for Vietnamese Speech Recognition System](https://aclanthology.org/W16-5207/) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [AILAB](mailto:ailab@hcmus.edu.vn) ### Dataset Summary VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for the Vietnamese automatic speech recognition task. The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan. The corpus is published in the hope of attracting more scientists to work on Vietnamese speech recognition problems. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Vietnamese ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`. Some additional information about the speaker and the passage which contains the transcription is provided. ``` {'speaker_id': 'VIVOSSPK01', 'path': '/home/admin/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/vivos/train/waves/VIVOSSPK01/VIVOSSPK01_R001.wav', 'audio': {'path': '/home/admin/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/vivos/train/waves/VIVOSSPK01/VIVOSSPK01_R001.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 16000}, 'sentence': 'KHÁCH SẠN'} ``` ### Data Fields - speaker_id: An id for which speaker (voice) made the recording - path: The path to the audio file - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - sentence: The sentence the user was prompted to speak ### Data Splits The speech material has been subdivided into portions for train and test. Speech was recorded in a quiet environment with a high-quality microphone; speakers were asked to read one sentence at a time.

|                  | Train | Test  |
| ---------------- | ----- | ----- |
| Speakers         | 46    | 19    |
| Utterances       | 11660 | 760   |
| Duration         | 14:55 | 00:45 |
| Unique Syllables | 4617  | 1692  |

## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators The dataset was initially prepared by AILAB, a computer science lab of VNUHCM - University of Science.
### Licensing Information Creative Commons Attribution NonCommercial ShareAlike 4.0 International ([CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)) ### Citation Information ``` @inproceedings{luong-vu-2016-non, title = "A non-expert {K}aldi recipe for {V}ietnamese Speech Recognition System", author = "Luong, Hieu-Thi and Vu, Hai-Quan", booktitle = "Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies ({WLSI}/{OIAF}4{HLT}2016)", month = dec, year = "2016", address = "Osaka, Japan", publisher = "The COLING 2016 Organizing Committee", url = "https://aclanthology.org/W16-5207", pages = "51--55", } ``` ### Contributions Thanks to [@binh234](https://github.com/binh234) for adding this dataset.
web_nlg
--- annotations_creators: - found language_creators: - crowdsourced language: - en - ru license: - cc-by-sa-3.0 - cc-by-nc-sa-4.0 - gfdl multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-db_pedia - original task_categories: - tabular-to-text task_ids: - rdf-to-text paperswithcode_id: webnlg pretty_name: WebNLG configs: - release_v1 - release_v2 - release_v2.1 - release_v2.1_constrained - release_v2_constrained - release_v3.0_en - release_v3.0_ru - webnlg_challenge_2017 dataset_info: - config_name: webnlg_challenge_2017 features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 5594812 num_examples: 6940 - name: dev num_bytes: 706653 num_examples: 872 - name: test num_bytes: 3122533 num_examples: 4615 download_size: 25499351 dataset_size: 9423998 - config_name: release_v1 features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: full num_bytes: 11684308 num_examples: 14237 download_size: 25499351 dataset_size: 11684308 - 
config_name: release_v2 features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 10830413 num_examples: 12876 - name: dev num_bytes: 1360033 num_examples: 1619 - name: test num_bytes: 1324934 num_examples: 1600 download_size: 25499351 dataset_size: 13515380 - config_name: release_v2_constrained features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 10853434 num_examples: 12895 - name: dev num_bytes: 1421590 num_examples: 1594 - name: test num_bytes: 1243182 num_examples: 1606 download_size: 25499351 dataset_size: 13518206 - config_name: release_v2.1 features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text 
dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 10848793 num_examples: 12876 - name: dev num_bytes: 1362072 num_examples: 1619 - name: test num_bytes: 1325860 num_examples: 1600 download_size: 25499351 dataset_size: 13536725 - config_name: release_v2.1_constrained features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 11040016 num_examples: 12895 - name: dev num_bytes: 1284044 num_examples: 1594 - name: test num_bytes: 1212665 num_examples: 1606 download_size: 25499351 dataset_size: 13536725 - config_name: release_v3.0_en features: - name: category dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 11084860 num_examples: 13211 - name: dev num_bytes: 1394243 num_examples: 1667 - name: test num_bytes: 4039282 num_examples: 5713 download_size: 25499351 dataset_size: 16518385 - config_name: release_v3.0_ru features: - name: category 
dtype: string - name: size dtype: int32 - name: eid dtype: string - name: original_triple_sets sequence: - name: otriple_set sequence: string - name: modified_triple_sets sequence: - name: mtriple_set sequence: string - name: shape dtype: string - name: shape_type dtype: string - name: lex sequence: - name: comment dtype: string - name: lid dtype: string - name: text dtype: string - name: lang dtype: string - name: test_category dtype: string - name: dbpedia_links sequence: string - name: links sequence: string splits: - name: train num_bytes: 9550340 num_examples: 5573 - name: dev num_bytes: 1314226 num_examples: 790 - name: test num_bytes: 3656501 num_examples: 3410 download_size: 25499351 dataset_size: 14521067 --- # Dataset Card for WebNLG ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/) - **Repository:** [WebNLG GitLab repository](https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/) - **Paper:** 
[Creating Training Corpora for NLG Micro-Planning](https://www.aclweb.org/anthology/P17-1017.pdf) - **Leaderboard:** [WebNLG leaderboards](https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results) - **Point of Contact:** [anastasia.shimorina@loria.fr](anastasia.shimorina@loria.fr) ### Dataset Summary The WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b). ``` a. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot) b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot ``` As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation (how to chunk the input data into sentences), lexicalisation (of the DBpedia properties), aggregation (how to avoid repetitions) and surface realisation (how to build a syntactically correct and natural sounding text). ### Supported Tasks and Leaderboards The dataset supports a Structured to Text task, which requires a model to take a set of RDF (Resource Description Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language sentence expressing the information contained in the triples. The dataset has supported two challenges: the [WebNLG2017](https://www.aclweb.org/anthology/W17-3518/) and [WebNLG2020](https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results) challenges. Results were ranked by their [METEOR](https://huggingface.co/metrics/meteor) score against the reference, but the leaderboards report a range of other metrics including [BLEU](https://huggingface.co/metrics/bleu), [BERTscore](https://huggingface.co/metrics/bertscore), and [BLEURT](https://huggingface.co/metrics/bleurt).
The v3 release (`release_v3.0_en`, `release_v3.0_ru`) for the WebNLG2020 challenge also supports a semantic parsing task. ### Languages All releases contain English (`en`) data. The v3 release (`release_v3.0_ru`) also contains Russian (`ru`) examples. ## Dataset Structure ### Data Instances A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers, and a set of possible verbalizations for this set of triples: ``` {'2017_test_category': '', 'category': 'Politician', 'eid': 'Id10', 'lex': {'comment': ['good', 'good', 'good'], 'lid': ['Id1', 'Id2', 'Id3'], 'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.', 'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.', 'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']}, 'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | militaryBranch | United_States_Army']]}, 'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'], ['Abner_W._Sibal | militaryBranch | United_States_Army', 'Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek']]}, 'shape': '(X (X) (X (X)))', 'shape_type': 'mixed', 'size': 3} ``` ### Data Fields The following fields can be found in the instances: - `category`: the category of the DBpedia entities present in the RDF triples. - `eid`: an example ID, only unique per split per category. - `size`: number of RDF triples in the set. - `shape`: (since v2) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. 
`shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format)). - `shape_type`: (since v2) the type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present). - `test_category`: (for `webnlg_challenge_2017` and `v3`) tells whether the set of RDF triples was present in the training set or not. Several splits of the test set are available: with and without references, and for RDF-to-text generation / for semantic parsing. - `lex`: the lexicalizations, with: - `text`: the text to be predicted. - `lid`: a lexicalization ID, unique per example. - `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`. - `lang`: (for `release_v3.0_ru`) the language of the lexicalization, recorded because original English texts were kept in the Russian version. Russian data has additional optional fields compared to English: - `dbpedialinks`: RDF triples extracted from DBpedia between English and Russian entities by means of the property `sameAs`. - `links`: RDF triples created manually for some entities to serve as pointers to translators. There are two types of them: * with `sameAs` (`Spaniards | sameAs | испанцы`) * with `includes` (`Tomatoes, guanciale, cheese, olive oil | includes | гуанчиале`). These were mostly created for string literals, to help translate parts of them. 
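Given the field definitions above, the pipe-separated triple strings can be parsed and a shape type inferred directly from the definitions of `chain`, `sibling` and `mixed`. The sketch below is illustrative only; the function names are not part of any official WebNLG tooling:

```python
# Minimal sketch: parse 'Subject | property | Object' strings and infer
# the shape type from the definitions given in the card.

def parse_triple(raw):
    """Split 'Subject | property | Object' into an (s, p, o) tuple."""
    s, p, o = (part.strip() for part in raw.split(" | "))
    return s, p, o

def shape_type(raw_triples):
    triples = [parse_triple(t) for t in raw_triples]
    subjects = {s for s, _, _ in triples}
    # chain link: the object of one triple is the subject of another
    has_chain = any(o in subjects for _, _, o in triples)
    # sibling link: at least two triples share the same subject
    has_sibling = len(subjects) < len(triples)
    if has_chain and has_sibling:
        return "mixed"
    return "chain" if has_chain else "sibling"

mtriples = [
    "Abner_W._Sibal | battle | World_War_II",
    "World_War_II | commander | Chiang_Kai-shek",
    "Abner_W._Sibal | militaryBranch | United_States_Army",
]
print(shape_type(mtriples))  # the card's example instance is labelled 'mixed'
```

On the example instance above, the shared subject `Abner_W._Sibal` plus the `World_War_II` chain link yield `mixed`, matching the instance's `shape_type` field.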
### Data Splits For `v3.0` releases: | English (v3.0) | Train | Dev | Test (data-to-text) | |-----------------|--------|-------|-------| | **triple sets** | 13,211 | 1,667 | 1,779 | | **texts** | 35,426 | 4,464 | 5,150 | |**properties** | 372 | 290 | 220 | | Russian (v3.0) | Train | Dev | Test (data-to-text) | |-----------------|--------|-------|---------------------| | **triple sets** | 5,573 | 790 | 1,102 | | **texts** | 14,239 | 2,026 | 2,780 | |**properties** | 226 | 115 | 192 | ## Dataset Creation ### Curation Rationale The WebNLG dataset was created to promote the development _(i)_ of RDF verbalisers and _(ii)_ of microplanners able to handle a wide range of linguistic constructions. The dataset aims at covering knowledge in different domains ("categories"). The same properties and entities can appear in several categories. ### Source Data The data was compiled from raw DBpedia triples. [This paper](https://www.aclweb.org/anthology/C16-1141/) explains how the triples were selected. #### Initial Data Collection and Normalization Initial triples extracted from DBpedia were modified in several ways. See [official documentation](https://webnlg-challenge.loria.fr/docs/) for the most frequent changes that have been made. An original tripleset and a modified tripleset usually represent a one-to-one mapping. However, there are cases with many-to-one mappings when several original triplesets are mapped to one modified tripleset. Entities that served as roots of RDF trees are listed in [this file](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json). The English WebNLG 2020 dataset (v3.0) for training comprises data-text pairs for 16 distinct DBpedia categories: - The 10 seen categories used in the 2017 version: Airport, Astronaut, Building, City, ComicsCharacter, Food, Monument, SportsTeam, University, and WrittenWork. 
- The 5 unseen categories of 2017, which are now part of the seen data: Athlete, Artist, CelestialBody, MeanOfTransportation, Politician. - 1 new category: Company. The Russian dataset (v3.0) comprises data-text pairs for 9 distinct categories: Airport, Astronaut, Building, CelestialBody, ComicsCharacter, Food, Monument, SportsTeam, and University. #### Who are the source language producers? There are no source texts; all textual material was compiled during the annotation process. ### Annotations #### Annotation process Annotators were first asked to create sentences that verbalise single triples. In a second round, annotators were asked to combine single-triple sentences together into sentences that cover 2 triples. And so on, up to 7 triples. Quality checks were performed to ensure the quality of the annotations. See Section 3.3 in [the dataset paper](https://www.aclweb.org/anthology/P17-1017.pdf). Russian data was translated from English with an MT system and then post-edited by crowdworkers. See Section 2.2 of [this paper](https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf). #### Who are the annotators? All references were collected through crowdsourcing platforms (CrowdFlower/Figure 8 and Amazon Mechanical Turk). For Russian, post-editing was done using the Yandex.Toloka crowdsourcing platform. ### Personal and Sensitive Information Neither the dataset as published nor the annotation process involves the collection or sharing of any kind of personal / demographic information. ## Considerations for Using the Data ### Social Impact of Dataset We do not foresee any negative social impact in particular from this dataset or task. Positive outlooks: Being able to generate good-quality text from RDF data would permit, e.g., making this data more accessible to lay users, enriching existing text with information drawn from knowledge bases such as DBpedia or describing, comparing and relating entities present in these knowledge bases. 
### Discussion of Biases This dataset is created using DBpedia RDF triples, which naturally exhibit biases that have been found to exist in Wikipedia, such as gender bias. The choice of [entities](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json), described by RDF trees, was not controlled. As such, they may contain gender biases; for instance, all the astronauts described by RDF triples are male. Hence, in texts, pronouns _he/him/his_ occur more often. Similarly, entities can be related to the Western culture more often than to other cultures. ### Other Known Limitations The quality of the crowdsourced references is limited, in particular in terms of fluency/naturalness of the collected texts. Russian data was machine-translated and then post-edited by crowdworkers, so some examples may still exhibit issues related to bad translations. ## Additional Information ### Dataset Curators The principal curator of the dataset is Anastasia Shimorina (Université de Lorraine / LORIA, France). Throughout the WebNLG releases, several people contributed to their construction: Claire Gardent (CNRS / LORIA, France), Shashi Narayan (Google, UK), Laura Perez-Beltrachini (University of Edinburgh, UK), Elena Khasanova, and Thiago Castro Ferreira (Federal University of Minas Gerais, Brazil). The dataset construction was funded by the French National Research Agency (ANR). ### Licensing Information The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1` licenses. 
### Citation Information - If you use the WebNLG corpus, cite: ``` @inproceedings{web_nlg, author = {Claire Gardent and Anastasia Shimorina and Shashi Narayan and Laura Perez{-}Beltrachini}, editor = {Regina Barzilay and Min{-}Yen Kan}, title = {Creating Training Corpora for {NLG} Micro-Planners}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers}, pages = {179--188}, publisher = {Association for Computational Linguistics}, year = {2017}, url = {https://doi.org/10.18653/v1/P17-1017}, doi = {10.18653/v1/P17-1017} } ``` - If you use `release_v2_constrained` in particular, cite: ``` @InProceedings{shimorina2018handling, author = "Shimorina, Anastasia and Gardent, Claire", title = "Handling Rare Items in Data-to-Text Generation", booktitle = "Proceedings of the 11th International Conference on Natural Language Generation", year = "2018", publisher = "Association for Computational Linguistics", pages = "360--370", location = "Tilburg University, The Netherlands", url = "http://aclweb.org/anthology/W18-6543" } ``` ### Contributions Thanks to [@Shimorina](https://github.com/Shimorina), [@yjernite](https://github.com/yjernite) for adding this dataset.
web_of_science
--- language: - en paperswithcode_id: web-of-science-dataset pretty_name: Web of Science Dataset dataset_info: - config_name: WOS5736 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 8051533 num_examples: 5736 download_size: 60222421 dataset_size: 8051533 - config_name: WOS11967 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 16248391 num_examples: 11967 download_size: 60222421 dataset_size: 16248391 - config_name: WOS46985 features: - name: input_data dtype: string - name: label dtype: int32 - name: label_level_1 dtype: int32 - name: label_level_2 dtype: int32 splits: - name: train num_bytes: 65471726 num_examples: 46985 download_size: 60222421 dataset_size: 65471726 --- # Dataset Card for "web_of_science" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset 
Description - **Homepage:** [https://data.mendeley.com/datasets/9rw3vkcfy4/6](https://data.mendeley.com/datasets/9rw3vkcfy4/6) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 180.67 MB - **Size of the generated dataset:** 89.81 MB - **Total amount of disk used:** 270.48 MB ### Dataset Summary Copyright (c) 2017 Kamran Kowsari Permission is hereby granted, free of charge, to any person obtaining a copy of this dataset and associated documentation files (the "Dataset"), to deal in the dataset without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Dataset, and to permit persons to whom the dataset is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Dataset. If you use this dataset please cite: Referenced paper: HDLTex: Hierarchical Deep Learning for Text Classification Description of Dataset: There are three datasets: WOS-11967, WOS-46985, and WOS-5736. Each folder contains: -X.txt -Y.txt -YL1.txt -YL2.txt X is the input data, consisting of text sequences. Y is the target value. YL1 is the target value of level one (parent label). YL2 is the target value of level two (child label). Web of Science Dataset WOS-5736 - This dataset contains 5,736 documents with 11 categories, which include 3 parent categories. 
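The folder layout described above (parallel `X.txt`, `Y.txt`, `YL1.txt`, `YL2.txt` files, one document per line) can be assembled into records with a few lines of Python. This is a hypothetical sketch, not an official loader; the field names mirror this card's schema and the folder path is a placeholder:

```python
# Sketch of assembling records from the parallel files in one WOS folder.
# Line i of each file describes the same document.

def load_wos(folder):
    def lines(name):
        with open(f"{folder}/{name}", encoding="utf-8") as f:
            return [line.rstrip("\n") for line in f]

    records = []
    for text, y, yl1, yl2 in zip(lines("X.txt"), lines("Y.txt"),
                                 lines("YL1.txt"), lines("YL2.txt")):
        records.append({
            "input_data": text,
            "label": int(y),
            "label_level_1": int(yl1),  # parent category
            "label_level_2": int(yl2),  # child category within the parent
        })
    return records
```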
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### WOS11967 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 16.25 MB - **Total amount of disk used:** 76.48 MB An example of 'train' looks as follows. ``` ``` #### WOS46985 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 65.50 MB - **Total amount of disk used:** 125.72 MB An example of 'train' looks as follows. ``` ``` #### WOS5736 - **Size of downloaded dataset files:** 60.22 MB - **Size of the generated dataset:** 8.05 MB - **Total amount of disk used:** 68.27 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### WOS11967 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. #### WOS46985 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. #### WOS5736 - `input_data`: a `string` feature. - `label`: a `int32` feature. - `label_level_1`: a `int32` feature. - `label_level_2`: a `int32` feature. 
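The card does not document how the flat `label` relates to the two level fields, so a quick sanity check can verify, on loaded examples, that each flat label maps to exactly one (`label_level_1`, `label_level_2`) pair. A sketch, not official tooling; the example records are invented for illustration:

```python
from collections import defaultdict

def check_label_consistency(examples):
    """Return any flat labels that map to more than one (parent, child) pair."""
    seen = defaultdict(set)
    for ex in examples:
        seen[ex["label"]].add((ex["label_level_1"], ex["label_level_2"]))
    return {label: pairs for label, pairs in seen.items() if len(pairs) > 1}

examples = [
    {"label": 3, "label_level_1": 0, "label_level_2": 3},
    {"label": 3, "label_level_1": 0, "label_level_2": 3},
    {"label": 7, "label_level_1": 1, "label_level_2": 0},
]
print(check_label_consistency(examples))  # {} means no conflicts found
```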
### Data Splits | name |train| |--------|----:| |WOS11967|11967| |WOS46985|46985| |WOS5736 | 5736| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{kowsari2017HDLTex, title={HDLTex: Hierarchical Deep Learning for Text Classification}, author={Kowsari, Kamran and Brown, Donald E and Heidarysafa, Mojtaba and Jafari Meimandi, Kiana and Gerber, Matthew S and Barnes, Laura E}, booktitle={Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on}, year={2017}, organization={IEEE} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
web_questions
--- annotations_creators: - crowdsourced language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: WebQuestions size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: webquestions dataset_info: features: - name: url dtype: string - name: question dtype: string - name: answers sequence: string splits: - name: train num_bytes: 533736 num_examples: 3778 - name: test num_bytes: 289824 num_examples: 2032 download_size: 1272965 dataset_size: 823560 --- # Dataset Card for "web_questions" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a](https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** 
[Semantic Parsing on Freebase from Question-Answer Pairs](https://aclanthology.org/D13-1160/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.27 MB - **Size of the generated dataset:** 0.83 MB - **Total amount of disk used:** 2.10 MB ### Dataset Summary This dataset consists of 6,642 question/answer pairs. The questions are supposed to be answerable by Freebase, a large knowledge graph. The questions are mostly centered around a single named entity. The questions are popular ones asked on the web (at least in 2013). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.27 MB - **Size of the generated dataset:** 0.83 MB - **Total amount of disk used:** 2.10 MB An example of 'train' looks as follows. ``` { "answers": ["Jamaican Creole English Language", "Jamaican English"], "question": "what does jamaican people speak?", "url": "http://www.freebase.com/view/en/jamaica" } ``` ### Data Fields The data fields are the same among all splits. #### default - `url`: a `string` feature. - `question`: a `string` feature. - `answers`: a `list` of `string` features. 
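Since each question carries a list of acceptable answer strings, one common way to score a prediction is exact match against any normalized gold answer. This is only an illustrative sketch, not the evaluation protocol of the original paper:

```python
# Sketch: normalized exact-match scoring against the `answers` list.

def normalize(s):
    """Lowercase and collapse whitespace."""
    return " ".join(s.lower().split())

def exact_match(prediction, answers):
    """True if the prediction matches any gold answer after normalization."""
    return normalize(prediction) in {normalize(a) for a in answers}

gold = ["Jamaican Creole English Language", "Jamaican English"]
print(exact_match("jamaican english", gold))  # True
print(exact_match("English", gold))           # False
```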
### Data Splits | name |train|test| |-------|----:|---:| |default| 3778|2032| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{berant-etal-2013-semantic, title = "Semantic Parsing on {F}reebase from Question-Answer Pairs", author = "Berant, Jonathan and Chou, Andrew and Frostig, Roy and Liang, Percy", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1160", pages = "1533--1544", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
weibo_ner
--- annotations_creators: - expert-generated language_creators: - found language: - zh license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: weibo-ner pretty_name: Weibo NER dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': B-GPE.NAM '1': B-GPE.NOM '2': B-LOC.NAM '3': B-LOC.NOM '4': B-ORG.NAM '5': B-ORG.NOM '6': B-PER.NAM '7': B-PER.NOM '8': I-GPE.NAM '9': I-GPE.NOM '10': I-LOC.NAM '11': I-LOC.NOM '12': I-ORG.NAM '13': I-ORG.NOM '14': I-PER.NAM '15': I-PER.NOM '16': O splits: - name: train num_bytes: 1179589 num_examples: 1350 - name: validation num_bytes: 232380 num_examples: 270 - name: test num_bytes: 237407 num_examples: 270 download_size: 750687 dataset_size: 1649376 train-eval-index: - config: default task: token-classification task_id: entity_extraction splits: train_split: train eval_split: test col_mapping: tokens: tokens ner_tags: tags metrics: - type: seqeval name: seqeval --- # Dataset Card for "Weibo NER" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/Weibo - **Paper:** [More Information Needed] - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
wi_locness
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en license: - other multilinguality: - monolingual - other-language-learner size_categories: - 1K<n<10K source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: locness-corpus pretty_name: Cambridge English Write & Improve + LOCNESS configs: - locness - wi tags: - grammatical-error-correction dataset_info: - config_name: default features: - name: id dtype: string - name: userid dtype: string - name: cefr dtype: string - name: text dtype: string - name: edits sequence: - name: start dtype: int32 - name: end dtype: int32 - name: text dtype: string splits: - name: train num_bytes: 4375795 num_examples: 3000 - name: validation num_bytes: 447055 num_examples: 300 download_size: 6120469 dataset_size: 4822850 - config_name: wi features: - name: id dtype: string - name: userid dtype: string - name: cefr dtype: string - name: text dtype: string - name: edits sequence: - name: start dtype: int32 - name: end dtype: int32 - name: text dtype: string splits: - name: train num_bytes: 4375795 num_examples: 3000 - name: validation num_bytes: 447055 num_examples: 300 download_size: 6120469 dataset_size: 4822850 - config_name: locness features: - name: id dtype: string - name: cefr dtype: string - name: text dtype: string - name: edits sequence: - name: start dtype: int32 - name: end dtype: int32 - name: text dtype: string splits: - name: validation num_bytes: 138176 num_examples: 50 download_size: 6120469 dataset_size: 138176 --- # Dataset Card for Cambridge English Write & Improve + LOCNESS Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset 
Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data - **Repository:** - **Paper:** https://www.aclweb.org/anthology/W19-4406/ - **Leaderboard:** https://competitions.codalab.org/competitions/20228#results - **Point of Contact:** ### Dataset Summary Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level. The LOCNESS corpus (Granger, 1998) consists of essays written by native English students. It was originally compiled by researchers at the Centre for English Corpus Linguistics at the University of Louvain. Since native English students also sometimes make mistakes, we asked the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness of their systems on the full range of English levels and abilities. ### Supported Tasks and Leaderboards Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in text; e.g. 
[I follows his advices -> I followed his advice]. It can be used to not only help language learners improve their writing skills, but also alert native speakers to accidental mistakes or typos. The aim of the task of this dataset is to correct all types of errors in written text. This includes grammatical, lexical and orthographical errors. The following Codalab competition contains the latest leaderboard, along with information on how to submit to the withheld W&I+LOCNESS test set: https://competitions.codalab.org/competitions/20228 ### Languages The dataset is in English. ## Dataset Structure ### Data Instances An example from the `wi` configuration: ``` { 'id': '1-140178', 'userid': '21251', 'cefr': 'A2.i', 'text': 'My town is a medium size city with eighty thousand inhabitants. It has a high density population because its small territory. Despite of it is an industrial city, there are many shops and department stores. I recommend visiting the artificial lake in the certer of the city which is surrounded by a park. Pasteries are very common and most of them offer the special dessert from the city. There are a comercial zone along the widest street of the city where you can find all kind of establishments: banks, bars, chemists, cinemas, pet shops, restaurants, fast food restaurants, groceries, travel agencies, supermarkets and others. Most of the shops have sales and offers at least three months of the year: January, June and August. 
The quality of the products and services are quite good, because there are a huge competition, however I suggest you taking care about some fakes or cheats.', 'edits': { 'start': [13, 77, 104, 126, 134, 256, 306, 375, 396, 402, 476, 484, 579, 671, 774, 804, 808, 826, 838, 850, 857, 862, 868], 'end': [24, 78, 104, 133, 136, 262, 315, 379, 399, 411, 480, 498, 588, 671, 777, 807, 810, 835, 845, 856, 861, 867, 873], 'text': ['medium-sized', '-', ' of', 'Although', '', 'center', None, 'of', 'is', 'commercial', 'kinds', 'businesses', 'grocers', ' in', 'is', 'is', '', '. However,', 'recommend', 'be', 'careful', 'of', ''] } } ``` An example from the `locness` configuration: ``` { 'id': '7-5819177', 'cefr': 'N', 'text': 'Boxing is a common, well known and well loved sport amongst most countries in the world however it is also punishing, dangerous and disliked to the extent that many people want it banned, possibly with good reason.\nBoxing is a dangerous sport, there are relatively common deaths, tragic injuries and even disease. All professional boxers are at risk from being killed in his next fight. If not killed then more likely paralysed. There have been a number of cases in the last ten years of the top few boxers having tragic losses throughout their ranks. This is just from the elite few, and theres more from those below them.\nMore deaths would occur through boxing if it were banned. The sport would go underground, there would be no safety measures like gloves, a doctor, paramedics or early stopping of the fight if someone looked unable to continue. With this going on the people taking part will be dangerous, and on the streets. 
Dangerous dogs who were trained to kill and maim in similar underound dog fights have already proved deadly to innocent people, the new boxers could be even more at risk.\nOnce boxing is banned and no-one grows up knowing it as acceptable there will be no interest in boxing and hopefully less all round interest in violence making towns and cities much safer places to live in, there will be less fighting outside pubs and clubs and less violent attacks with little or no reason.\nchange the rules of boxing slightly would much improve the safety risks of the sport and not detract form the entertainment. There are all sorts of proposals, lighter and more cushioning gloves could be worn, ban punches to the head, headguards worn or make fights shorter, as most of the serious injuries occur in the latter rounds, these would all show off the boxers skill and tallent and still be entertaining to watch.\nEven if a boxer is a success and manages not to be seriously hurt he still faces serious consequences in later life diseases that attack the brains have been known to set in as a direct result of boxing, even Muhamed Ali, who was infamous(?) both for his boxing and his quick-witted intelligence now has Alzheimer disease and can no longer do many everyday acts.\nMany other sports are more dangerous than boxing, motor sports and even mountaineering has risks that are real. Boxers chose to box, just as racing drivers drive.', 'edits': { 'start': [24, 39, 52, 87, 242, 371, 400, 528, 589, 713, 869, 992, 1058, 1169, 1209, 1219, 1255, 1308, 1386, 1412, 1513, 1569, 1661, 1731, 1744, 1781, 1792, 1901, 1951, 2038, 2131, 2149, 2247, 2286], 'end': [25, 40, 59, 95, 249, 374, 400, 538, 595, 713, 869, 1001, 1063, 1169, 1209, 1219, 1255, 1315, 1390, 1418, 1517, 1570, 1661, 1737, 1751, 1781, 1799, 1901, 1960, 2044, 2131, 2149, 2248, 2289], 'text': ['-', '-', 'in', '. However,', '. There', 'their', ',', 'among', "there's", ' and', ',', 'underground', '. The', ',', ',', ',', ',', '. 
There', 'for', 'Changing', 'from', ';', ',', 'later', '. These', "'", 'talent', ',', '. Diseases', '. Even', ',', "'s", ';', 'have'] } } ``` ### Data Fields The fields of the dataset are: - `id`: the id of the text as a string - `cefr`: the [CEFR level](https://www.cambridgeenglish.org/exams-and-tests/cefr/) of the text as a string - `userid`: id of the user - `text`: the text of the submission as a string - `edits`: the edits from W&I: - `start`: start indexes of each edit as a list of integers - `end`: end indexes of each edit as a list of integers - `text`: the text content of each edit as a list of strings - `from`: the original text of each edit as a list of strings ### Data Splits | name |train|validation| |----------|----:|---------:| | wi | 3000| 300| | locness | N/A| 50| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Write & Improve License: ``` Cambridge English Write & Improve (CEWI) Dataset Licence Agreement 1. By downloading this dataset and licence, this licence agreement is entered into, effective this date, between you, the Licensee, and the University of Cambridge, the Licensor. 2. Copyright of the entire licensed dataset is held by the Licensor. No ownership or interest in the dataset is transferred to the Licensee. 3. 
The Licensor hereby grants the Licensee a non-exclusive non-transferable right to use the licensed dataset for non-commercial research and educational purposes. 4. Non-commercial purposes exclude without limitation any use of the licensed dataset or information derived from the dataset for or as part of a product or service which is sold, offered for sale, licensed, leased or rented. 5. The Licensee shall acknowledge use of the licensed dataset in all publications of research based on it, in whole or in part, through citation of the following publication: Helen Yannakoudakis, Øistein E. Andersen, Ardeshir Geranpayeh, Ted Briscoe and Diane Nicholls. 2018. Developing an automated writing placement system for ESL learners. Applied Measurement in Education. 6. The Licensee may publish excerpts of less than 100 words from the licensed dataset pursuant to clause 3. 7. The Licensor grants the Licensee this right to use the licensed dataset "as is". Licensor does not make, and expressly disclaims, any express or implied warranties, representations or endorsements of any kind whatsoever. 8. This Agreement shall be governed by and construed in accordance with the laws of England and the English courts shall have exclusive jurisdiction. ``` LOCNESS License: ``` LOCNESS Dataset Licence Agreement 1. The corpus is to be used for non-commercial purposes only 2. All publications on research partly or wholly based on the corpus should give credit to the Centre for English Corpus Linguistics (CECL), Université catholique de Louvain, Belgium. A scanned copy or offprint of the publication should also be sent to <sylviane.granger@uclouvain.be>. 3. No part of the corpus is to be distributed to a third party without specific authorization from CECL. 
The corpus can only be used by the person agreeing to the licence terms and researchers working in close collaboration with him/her or students under his/her supervision, attached to the same institution, within the framework of the research project. ``` ### Citation Information ``` @inproceedings{bryant-etal-2019-bea, title = "The {BEA}-2019 Shared Task on Grammatical Error Correction", author = "Bryant, Christopher and Felice, Mariano and Andersen, {\O}istein E. and Briscoe, Ted", booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", month = aug, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W19-4406", doi = "10.18653/v1/W19-4406", pages = "52--75", abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.", } ``` ### Contributions Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
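As a practical note, the character-offset `edits` shown in the data instances above can be used to reconstruct a corrected version of each text. A minimal sketch (the `apply_edits` helper is illustrative, not part of the dataset loader): edits are applied right-to-left so that earlier offsets stay valid, and a replacement of `None` marks a span that was detected but left uncorrected.

```python
def apply_edits(text, edits):
    """Apply span edits to a passage.

    `edits` is a dict of parallel lists as in the dataset: character
    offsets `start`/`end` and the replacement `text`. Edits whose
    replacement is None (detected but uncorrected) are skipped.
    Applying from the rightmost edit backwards keeps the offsets of
    earlier edits valid.
    """
    spans = sorted(
        zip(edits["start"], edits["end"], edits["text"]),
        key=lambda span: span[0],
        reverse=True,
    )
    for start, end, replacement in spans:
        if replacement is None:
            continue
        text = text[:start] + replacement + text[end:]
    return text


# First edit from the `wi` example above: [13, 24] -> 'medium-sized'
original = "My town is a medium size city."
edits = {"start": [13], "end": [24], "text": ["medium-sized"]}
print(apply_edits(original, edits))  # -> "My town is a medium-sized city."
```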
wider_face
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-nc-nd-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-wider task_categories: - object-detection task_ids: - face-detection paperswithcode_id: wider-face-1 pretty_name: WIDER FACE dataset_info: features: - name: image dtype: image - name: faces sequence: - name: bbox sequence: float32 length: 4 - name: blur dtype: class_label: names: '0': clear '1': normal '2': heavy - name: expression dtype: class_label: names: '0': typical '1': exaggerate - name: illumination dtype: class_label: names: '0': normal '1': 'exaggerate ' - name: occlusion dtype: class_label: names: '0': 'no' '1': partial '2': heavy - name: pose dtype: class_label: names: '0': typical '1': atypical - name: invalid dtype: bool splits: - name: train num_bytes: 12049881 num_examples: 12880 - name: test num_bytes: 3761103 num_examples: 16097 - name: validation num_bytes: 2998735 num_examples: 3226 download_size: 3676086479 dataset_size: 18809719 --- # Dataset Card for WIDER FACE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - 
[Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://shuoyang1213.me/WIDERFACE/index.html - **Repository:** - **Paper:** [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523) - **Leaderboard:** http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html - **Point of Contact:** shuoyang.1213@gmail.com ### Dataset Summary WIDER FACE dataset is a face detection benchmark dataset, of which images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion as depicted in the sample images. WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we shall proceed to evaluate. ### Supported Tasks and Leaderboards - `face-detection`: The dataset can be used to train a model for Face Detection. More information on evaluating the model's performance can be found [here](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html). ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its face annotations. 
``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1024x755 at 0x19FA12186D8>, 'faces': { 'bbox': [ [178.0, 238.0, 55.0, 73.0], [248.0, 235.0, 59.0, 73.0], [363.0, 157.0, 59.0, 73.0], [468.0, 153.0, 53.0, 72.0], [629.0, 110.0, 56.0, 81.0], [745.0, 138.0, 55.0, 77.0] ], 'blur': [2, 2, 2, 2, 2, 2], 'expression': [0, 0, 0, 0, 0, 0], 'illumination': [0, 0, 0, 0, 0, 0], 'occlusion': [1, 2, 1, 2, 1, 2], 'pose': [0, 0, 0, 0, 0, 0], 'invalid': [False, False, False, False, False, False] } } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `faces`: a dictionary of face attributes for the faces present in the image - `bbox`: the bounding box of each face (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `blur`: the blur level of each face, with possible values including `clear` (0), `normal` (1) and `heavy` (2) - `expression`: the facial expression of each face, with possible values including `typical` (0) and `exaggerate` (1) - `illumination`: the lighting condition of each face, with possible values including `normal` (0) and `exaggerate` (1) - `occlusion`: the level of occlusion of each face, with possible values including `no` (0), `partial` (1) and `heavy` (2) - `pose`: the pose of each face, with possible values including `typical` (0) and `atypical` (1) - `invalid`: whether the face annotation is marked invalid. ### Data Splits The data is split into training, validation and testing sets. WIDER FACE dataset is organized based on 61 event classes.
For each event class, 40%/10%/50% of the data is randomly selected as training, validation and testing sets. The training set contains 12880 images, the validation set 3226 images and the test set 16097 images. ## Dataset Creation ### Curation Rationale The curators state that current face detection datasets typically contain a few thousand faces, with limited variations in pose, scale, facial expression, occlusion, and background clutter, making it difficult to assess real-world performance. They argue that these limitations have partially contributed to the failure of some algorithms in coping with heavy occlusion, small scale, and atypical pose. ### Source Data #### Initial Data Collection and Normalization The WIDER FACE dataset is a subset of the WIDER dataset. The images in WIDER were collected in the following three steps: 1) Event categories were defined and chosen following the Large Scale Ontology for Multimedia (LSCOM) [22], which provides around 1000 concepts relevant to video event analysis. 2) Images were retrieved using search engines like Google and Bing. For each category, 1000-3000 images were collected. 3) The data were cleaned by manually examining all the images and filtering out images without human faces. Then, similar images in each event category were removed to ensure large diversity in face appearance. A total of 32203 images were eventually included in the WIDER FACE dataset. #### Who are the source language producers? The images are selected from the publicly available WIDER dataset. ### Annotations #### Annotation process The curators label the bounding boxes for all the recognizable faces in the WIDER FACE dataset. The bounding box is required to tightly contain the forehead, chin, and cheek. If a face is occluded, they still label it with a bounding box but with an estimate of the scale of occlusion.
Similar to the PASCAL VOC dataset [6], they assign an 'Ignore' flag to faces that are very difficult to recognize due to low resolution and small scale (10 pixels or less). After annotating the face bounding boxes, they further annotate the following attributes: pose (typical, atypical) and occlusion level (partial, heavy). Each annotation is labeled by one annotator and cross-checked by two different people. #### Who are the annotators? Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang ### Licensing Information [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/). ### Citation Information ``` @inproceedings{yang2016wider, Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou}, Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, Title = {WIDER FACE: A Face Detection Benchmark}, Year = {2016}} ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
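The `bbox` field described above uses the COCO `[x_min, y_min, width, height]` convention. A minimal sketch of converting such a box to corner coordinates, e.g. for drawing with PIL's `ImageDraw.rectangle`, which expects `(x_min, y_min, x_max, y_max)` (the `coco_to_corners` helper name is ours, not part of the dataset):

```python
def coco_to_corners(bbox):
    """Convert a COCO-style box [x_min, y_min, width, height]
    to corner coordinates (x_min, y_min, x_max, y_max)."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)


# First face from the sample instance in the card:
print(coco_to_corners([178.0, 238.0, 55.0, 73.0]))  # (178.0, 238.0, 233.0, 311.0)
```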
wiki40b
--- language: - en paperswithcode_id: wiki-40b pretty_name: Wiki-40B dataset_info: features: - name: wikidata_id dtype: string - name: text dtype: string - name: version_id dtype: string config_name: en splits: - name: train num_bytes: 9423623904 num_examples: 2926536 - name: validation num_bytes: 527383016 num_examples: 163597 - name: test num_bytes: 522219464 num_examples: 162274 download_size: 0 dataset_size: 10473226384 --- # Dataset Card for "wiki40b" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://research.google/pubs/pub49029/](https://research.google/pubs/pub49029/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 10.47 GB - **Total amount of disk used:** 10.47 GB ### Dataset Summary Cleaned-up text from 40+ Wikipedia language editions, covering pages that correspond to entities. The datasets have train/dev/test splits per language. The dataset is cleaned up by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the wikidata id of the entity, and the full Wikipedia article after page processing that removes non-content sections and structured objects. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### en - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 10.47 GB - **Total amount of disk used:** 10.47 GB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### en - `wikidata_id`: a `string` feature. - `text`: a `string` feature. - `version_id`: a `string` feature. ### Data Splits |name| train |validation| test | |----|------:|---------:|-----:| |en |2926536| 163597|162274| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` ``` ### Contributions Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
wiki_asp
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: wikiasp pretty_name: WikiAsp tags: - aspect-based-summarization dataset_info: - config_name: album features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1907323642 num_examples: 24434 - name: test num_bytes: 232999001 num_examples: 3038 - name: validation num_bytes: 234990092 num_examples: 3104 download_size: 644173065 dataset_size: 2375312735 - config_name: animal features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 497474133 num_examples: 16540 - name: test num_bytes: 61315970 num_examples: 2007 - name: validation num_bytes: 57943532 num_examples: 2005 download_size: 150974930 dataset_size: 616733635 - config_name: artist features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1876134255 num_examples: 26754 - name: test num_bytes: 237751553 num_examples: 3329 - name: validation num_bytes: 223240910 num_examples: 3194 download_size: 626686303 dataset_size: 2337126718 - config_name: building features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1100057273 num_examples: 20449 - name: test num_bytes: 134357678 num_examples: 2482 - name: validation num_bytes: 139387376 num_examples: 2607 download_size: 346224042 dataset_size: 1373802327 - config_name: company features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1606057076 num_examples: 24353 - name: test 
num_bytes: 199282041 num_examples: 3029 - name: validation num_bytes: 200498778 num_examples: 2946 download_size: 504194353 dataset_size: 2005837895 - config_name: educational_institution features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1623000534 num_examples: 17634 - name: test num_bytes: 200476681 num_examples: 2267 - name: validation num_bytes: 203262430 num_examples: 2141 download_size: 471033992 dataset_size: 2026739645 - config_name: event features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 748201660 num_examples: 6475 - name: test num_bytes: 96212295 num_examples: 828 - name: validation num_bytes: 97431395 num_examples: 807 download_size: 240072903 dataset_size: 941845350 - config_name: film features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 2370068027 num_examples: 32129 - name: test num_bytes: 294918370 num_examples: 3981 - name: validation num_bytes: 290240851 num_examples: 4014 download_size: 808231638 dataset_size: 2955227248 - config_name: group features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1025166800 num_examples: 11966 - name: test num_bytes: 114239405 num_examples: 1444 - name: validation num_bytes: 120863870 num_examples: 1462 download_size: 344498865 dataset_size: 1260270075 - config_name: historic_place features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 256158020 num_examples: 4919 - name: test num_bytes: 31201154 num_examples: 600 - name: validation num_bytes: 29058067 num_examples: 601 download_size: 77289509 dataset_size: 316417241 - config_name: infrastructure features: - 
name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1124486451 num_examples: 17226 - name: test num_bytes: 134820330 num_examples: 2091 - name: validation num_bytes: 125193140 num_examples: 1984 download_size: 328804337 dataset_size: 1384499921 - config_name: mean_of_transportation features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 650424738 num_examples: 9277 - name: test num_bytes: 89759392 num_examples: 1170 - name: validation num_bytes: 88440901 num_examples: 1215 download_size: 210234418 dataset_size: 828625031 - config_name: office_holder features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1643899203 num_examples: 18177 - name: test num_bytes: 207433317 num_examples: 2333 - name: validation num_bytes: 202624275 num_examples: 2218 download_size: 524721727 dataset_size: 2053956795 - config_name: plant features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 239150885 num_examples: 6107 - name: test num_bytes: 31340125 num_examples: 774 - name: validation num_bytes: 28752150 num_examples: 786 download_size: 77890632 dataset_size: 299243160 - config_name: single features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1277277277 num_examples: 14217 - name: test num_bytes: 152328537 num_examples: 1712 - name: validation num_bytes: 160312594 num_examples: 1734 download_size: 429214401 dataset_size: 1589918408 - config_name: soccer_player features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 604502541 num_examples: 17599 - name: test 
num_bytes: 72820378 num_examples: 2280 - name: validation num_bytes: 76705685 num_examples: 2150 download_size: 193347234 dataset_size: 754028604 - config_name: software features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1122906186 num_examples: 13516 - name: test num_bytes: 133717992 num_examples: 1638 - name: validation num_bytes: 134578157 num_examples: 1637 download_size: 356764908 dataset_size: 1391202335 - config_name: television_show features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 893325347 num_examples: 8717 - name: test num_bytes: 115155155 num_examples: 1072 - name: validation num_bytes: 119461892 num_examples: 1128 download_size: 302093407 dataset_size: 1127942394 - config_name: town features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 772504751 num_examples: 14818 - name: test num_bytes: 100975827 num_examples: 1831 - name: validation num_bytes: 101522638 num_examples: 1911 download_size: 243261734 dataset_size: 975003216 - config_name: written_work features: - name: exid dtype: string - name: inputs sequence: string - name: targets sequence: sequence: string splits: - name: train num_bytes: 1491395960 num_examples: 15065 - name: test num_bytes: 189537205 num_examples: 1931 - name: validation num_bytes: 185707567 num_examples: 1843 download_size: 498307235 dataset_size: 1866640732 --- # Dataset Card for WikiAsp ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset 
Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Wiki Asp](https://github.com/neulab/wikiasp) - **Repository:** [GitHub](https://github.com/neulab/wikiasp) - **Paper:** [WikiAsp: A Dataset for Multi-domain Aspect-based Summarization](https://arxiv.org/abs/2011.07832) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances An example from the "plant" configuration: ``` { 'exid': 'train-78-8', 'inputs': ['< EOT > calcareous rocks and barrens , wooded cliff edges .', 'plant an erect short - lived perennial ( or biennial ) herb whose slender leafy stems radiate from the base , and are 3 - 5 dm tall , giving it a bushy appearance .', 'leaves densely hairy , grayish - green , simple and alternate on the stem .', 'flowers are bright yellow to yellow - orange , cross - shaped , each having 4 spatula - shaped petals about 5 mm long .', 'fruit is a nearly globe - shaped capsule , about 3 mm in diameter , with 1 or 2 seeds in each cell .', 'flowering period : early april to late may .', 'even though there are many members of the mustard family in the range of this species , no other plant shares this combination of characters : bright yellow flowers , grayish - green stems and foliage , globe - shaped 
fruits with a long style , perennial habit , and the habitat of limestone rocky cliffs .', 'timber removal may be beneficial and even needed to maintain the open character of the habitat for this species .', 'hand removal of trees in the vicinity of the population is necessary to avoid impacts from timber operations .', 'southwest indiana , north central kentucky , and north central tennessee .', 'email : naturepreserves @ ky . gov feedback naturepreserves @ ky . gov | about the agency | about this site copyright © 2003 - 2013 commonwealth of kentucky .', 'all rights reserved .', '<EOS>' ], 'targets': [ ['description', 'physaria globosa is a small plant covered with dense hairs giving it a grayish appearance . it produces yellow flowers in the spring , and its fruit is globe - shaped . its preferred habitat is dry limestone cliffs , barrens , cedar glades , steep wooded slopes , and talus areas . some have also been found in areas of deeper soil and roadsides .' ], ['conservation', 'the population fluctuates year to year , but on average there are about 2000 living plants at any one time , divided among 33 known locations . threats include forms of habitat degradation and destruction , including road construction and grading , mowing , dumping , herbicides , alteration of waterways , livestock damage , and invasive species of plants such as japanese honeysuckle , garlic mustard , alsike clover , sweet clover , meadow fescue , and multiflora rose . all populations are considered vulnerable to extirpation .' 
] ] } ``` ### Data Fields - `exid`: a unique identifier - `inputs`: the cited reference texts, given as sentences tokenized with NLTK - `targets`: a list of aspect-based summaries, where each element is a pair of a) the target aspect and b) the aspect-based summary ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset.
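The `exid`/`inputs`/`targets` structure described under Data Fields can be consumed as follows. This is a minimal sketch using a hand-made miniature instance that only mirrors the documented schema; the sentence values are illustrative, not real data.

```python
# Miniature WikiAsp-shaped instance (illustrative values, schema from the card):
# `exid` is a unique identifier, `inputs` is a list of tokenized sentences,
# and each element of `targets` is an (aspect, summary) pair.
instance = {
    "exid": "train-78-8",
    "inputs": ["sentence one .", "sentence two .", "<EOS>"],
    "targets": [
        ["description", "an aspect-based summary ."],
        ["conservation", "another aspect-based summary ."],
    ],
}

# Collect the aspect-based summaries into a mapping keyed by aspect.
aspect_summaries = {aspect: summary for aspect, summary in instance["targets"]}

# The source document is simply the concatenation of the tokenized sentences.
reference_text = " ".join(instance["inputs"])
```

A model for this task would read `reference_text` (plus a requested aspect) and be scored against the matching entry in `aspect_summaries`.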
wiki_atomic_edits
--- annotations_creators: - found language_creators: - found language: - de - en - es - fr - it - ja - ru - zh license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 100K<n<1M - 10M<n<100M - 1M<n<10M source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: wikiatomicedits pretty_name: WikiAtomicEdits configs: - chinese_deletions - chinese_insertions - english_deletions - english_insertions - french_deletions - french_insertions - german_deletions - german_insertions - italian_deletions - italian_insertions - japanese_deletions - japanese_insertions - russian_deletions - russian_insertions - spanish_deletions - spanish_insertions dataset_info: - config_name: german_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 1072443082 num_examples: 3343403 download_size: 274280387 dataset_size: 1072443082 - config_name: german_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 624070402 num_examples: 1994329 download_size: 160133549 dataset_size: 624070402 - config_name: english_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 4258411914 num_examples: 13737796 download_size: 1090652177 dataset_size: 4258411914 - config_name: english_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 2865754626 num_examples: 9352389 download_size: 736560902 dataset_size: 2865754626 - config_name: spanish_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: 
edited_sentence dtype: string splits: - name: train num_bytes: 481145004 num_examples: 1380934 download_size: 118837934 dataset_size: 481145004 - config_name: spanish_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 317253196 num_examples: 908276 download_size: 78485695 dataset_size: 317253196 - config_name: french_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 651525210 num_examples: 2038305 download_size: 160442894 dataset_size: 651525210 - config_name: french_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 626323354 num_examples: 2060242 download_size: 155263358 dataset_size: 626323354 - config_name: italian_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 372950256 num_examples: 1078814 download_size: 92302006 dataset_size: 372950256 - config_name: italian_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 198598618 num_examples: 583316 download_size: 49048596 dataset_size: 198598618 - config_name: japanese_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 765754162 num_examples: 2249527 download_size: 185766012 dataset_size: 765754162 - config_name: japanese_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: 
edited_sentence dtype: string splits: - name: train num_bytes: 459683880 num_examples: 1352162 download_size: 110513593 dataset_size: 459683880 - config_name: russian_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 790822192 num_examples: 1471638 download_size: 152985812 dataset_size: 790822192 - config_name: russian_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 514750186 num_examples: 960976 download_size: 100033230 dataset_size: 514750186 - config_name: chinese_insertions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 233367646 num_examples: 746509 download_size: 66124094 dataset_size: 233367646 - config_name: chinese_deletions features: - name: id dtype: int32 - name: base_sentence dtype: string - name: phrase dtype: string - name: edited_sentence dtype: string splits: - name: train num_bytes: 144269112 num_examples: 467271 download_size: 40898651 dataset_size: 144269112 --- # Dataset Card for WikiAtomicEdits ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of 
Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/google-research-datasets/wiki-atomic-edits - **Paper:** https://www.aclweb.org/anthology/D18-1028/ - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The languages in the dataset are: - de - en - es - fr - it - jp: Japanese (`ja`) - ru - zh ## Dataset Structure ### Data Instances Each instance consists of an `id`, a `base_sentence`, the inserted or deleted `phrase`, and the resulting `edited_sentence`. ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
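A minimal sketch of working with an `*_insertions` instance of this dataset. The field names come from the config schema above; the sentence values are illustrative inventions. The sketch assumes the atomic-edit property that `edited_sentence` is `base_sentence` with `phrase` inserted at a single position.

```python
# Illustrative WikiAtomicEdits-shaped insertion instance (not real data).
example = {
    "id": 0,
    "base_sentence": "The film was released in 2001 .",
    "phrase": "in the United States",
    "edited_sentence": "The film was released in 2001 in the United States .",
}

# Basic consistency checks for an atomic insertion: the phrase appears
# verbatim in the edited sentence, and the token counts add up.
tokens_base = example["base_sentence"].split()
tokens_edit = example["edited_sentence"].split()
tokens_phrase = example["phrase"].split()

assert example["phrase"] in example["edited_sentence"]
assert len(tokens_edit) == len(tokens_base) + len(tokens_phrase)
```

For a `*_deletions` config the roles are reversed: the phrase is removed from the base sentence rather than inserted.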
wiki_auto
--- annotations_creators: - crowdsourced - machine-generated language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|other-wikipedia task_categories: - text2text-generation task_ids: - text-simplification pretty_name: WikiAuto configs: - auto - auto_acl - auto_full_no_split - auto_full_with_split - manual dataset_info: - config_name: manual features: - name: alignment_label dtype: class_label: names: '0': notAligned '1': aligned '2': partialAligned - name: normal_sentence_id dtype: string - name: simple_sentence_id dtype: string - name: normal_sentence dtype: string - name: simple_sentence dtype: string - name: gleu_score dtype: float32 splits: - name: train num_bytes: 110838475 num_examples: 373801 - name: dev num_bytes: 21112775 num_examples: 73249 - name: test num_bytes: 33851634 num_examples: 118074 download_size: 168957430 dataset_size: 165802884 - config_name: auto_acl features: - name: normal_sentence dtype: string - name: simple_sentence dtype: string splits: - name: full num_bytes: 121975414 num_examples: 488332 download_size: 118068366 dataset_size: 121975414 - config_name: auto features: - name: example_id dtype: string - name: normal struct: - name: normal_article_id dtype: int32 - name: normal_article_title dtype: string - name: normal_article_url dtype: string - name: normal_article_content sequence: - name: normal_sentence_id dtype: string - name: normal_sentence dtype: string - name: simple struct: - name: simple_article_id dtype: int32 - name: simple_article_title dtype: string - name: simple_article_url dtype: string - name: simple_article_content sequence: - name: simple_sentence_id dtype: string - name: simple_sentence dtype: string - name: paragraph_alignment sequence: - name: normal_paragraph_id dtype: string - name: simple_paragraph_id dtype: string - name: sentence_alignment sequence: - name: normal_sentence_id dtype: string - name: 
simple_sentence_id dtype: string splits: - name: part_1 num_bytes: 1773240295 num_examples: 125059 - name: part_2 num_bytes: 80417651 num_examples: 13036 download_size: 2160638921 dataset_size: 1853657946 - config_name: auto_full_no_split features: - name: normal_sentence dtype: string - name: simple_sentence dtype: string splits: - name: full num_bytes: 146310611 num_examples: 591994 download_size: 141574179 dataset_size: 146310611 - config_name: auto_full_with_split features: - name: normal_sentence dtype: string - name: simple_sentence dtype: string splits: - name: full num_bytes: 124549115 num_examples: 483801 download_size: 120678315 dataset_size: 124549115 --- # Dataset Card for WikiAuto ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [WikiAuto github repository](https://github.com/chaojiang06/wiki-auto) - **Paper:** [Neural CRF Model for Sentence Alignment in Text Simplification](https://arxiv.org/abs/2005.02324) - **Point of Contact:** [Chao 
Jiang](jiang.1530@osu.edu) ### Dataset Summary WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments. The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here). ### Supported Tasks and Leaderboards The dataset was created to support a `text-simplification` task. Success in these tasks is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf). ### Languages While both the input and output of the proposed task are in English (`en`), it should be noted that it is presented as a translation task where Wikipedia Simple English is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English). ## Dataset Structure ### Data Instances The data looks a little different in each configuration. A `manual` config instance consists of a sentence from the Simple English Wikipedia article, one from the linked English Wikipedia article, IDs for each of them, and a label indicating whether they are aligned.
Sentences on either side can be repeated so that the aligned sentences are in the same instances. For example: ``` {'alignment_label': 1, 'normal_sentence_id': '0_66252-1-0-0', 'simple_sentence_id': '0_66252-0-0-0', 'normal_sentence': 'The Local Government Act 1985 is an Act of Parliament in the United Kingdom.', 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom', 'gleu_score': 0.800000011920929} ``` Is followed by ``` {'alignment_label': 0, 'normal_sentence_id': '0_66252-1-0-1', 'simple_sentence_id': '0_66252-0-0-0', 'normal_sentence': 'Its main effect was to abolish the six county councils of the metropolitan counties that had been set up in 1974, 11 years earlier, by the Local Government Act 1972, along with the Greater London Council that had been established in 1965.', 'simple_sentence': 'The Local Government Act 1985 was an Act of Parliament in the United Kingdom', 'gleu_score': 0.08641975373029709} ``` The `auto` config shows a pair of an English and corresponding Simple English Wikipedia as an instance, with an alignment at the paragraph and sentence level: ``` {'example_id': '0', 'normal': {'normal_article_content': {'normal_sentence': ["Lata Mondal ( ; born: 16 January 1993, Dhaka) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.", 'She is a right handed batter.', 'Mondal was born on January 16, 1993 in Dhaka, Bangladesh.', "Mondal made her ODI career against the Ireland women's cricket team on November 26, 2011.", "Mondal made her T20I career against the Ireland women's cricket team on August 28, 2012.", "In October 2018, she was named in Bangladesh's squad for the 2018 ICC Women's World Twenty20 tournament in the West Indies.", "Mondal was a member of the team that won a silver medal in cricket against the China national women's cricket team at the 2010 Asian Games in Guangzhou, China."], 'normal_sentence_id': ['normal-41918715-0-0', 'normal-41918715-0-1', 
'normal-41918715-1-0', 'normal-41918715-2-0', 'normal-41918715-3-0', 'normal-41918715-3-1', 'normal-41918715-4-0']}, 'normal_article_id': 41918715, 'normal_article_title': 'Lata Mondal', 'normal_article_url': 'https://en.wikipedia.org/wiki?curid=41918715'}, 'paragraph_alignment': {'normal_paragraph_id': ['normal-41918715-0'], 'simple_paragraph_id': ['simple-702227-0']}, 'sentence_alignment': {'normal_sentence_id': ['normal-41918715-0-0', 'normal-41918715-0-1'], 'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']}, 'simple': {'simple_article_content': {'simple_sentence': ["Lata Mondal (born: 16 January 1993) is a Bangladeshi cricketer who plays for the Bangladesh national women's cricket team.", 'She is a right handed bat.'], 'simple_sentence_id': ['simple-702227-0-0', 'simple-702227-0-1']}, 'simple_article_id': 702227, 'simple_article_title': 'Lata Mondal', 'simple_article_url': 'https://simple.wikipedia.org/wiki?curid=702227'}} ``` Finally, the `auto_acl`, the `auto_full_no_split`, and the `auto_full_with_split` configs were obtained by selecting the aligned pairs of sentences from `auto` to provide a ready-to-go aligned dataset to train a sequence-to-sequence system. While `auto_acl` corresponds to the filtered version of the data used to train the systems in the paper, `auto_full_no_split` and `auto_full_with_split` correspond to the unfiltered versions with and without sentence splits respectively. In the `auto_full_with_split` config, we join the sentences in the simple article mapped to the same sentence in the complex article to capture sentence splitting. Split sentences are separated by a `<SEP>` token. In the `auto_full_no_split` config, we do not join the splits and treat them as separate pairs. 
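The `<SEP>` joining used in `auto_full_with_split` can be undone with a simple split. The sentence pair below is an illustrative invention that only follows the documented format, not a pair taken from the dataset.

```python
# Illustrative `auto_full_with_split`-style pair: one complex sentence
# aligned to two Simple English sentences joined with the <SEP> token.
pair = {
    "normal_sentence": "A long complex sentence that was split in the simple article .",
    "simple_sentence": "A short sentence . <SEP> Another short sentence .",
}

# Each <SEP>-separated piece is one Simple English sentence aligned to
# the same complex sentence (a sentence-splitting example).
simple_sentences = [s.strip() for s in pair["simple_sentence"].split("<SEP>")]
```

In `auto_full_no_split`, the same material would instead appear as two separate pairs, each with one of these simple sentences.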
An instance is a single pair of sentences: ``` {'normal_sentence': 'In early work , Rutherford discovered the concept of radioactive half-life , the radioactive element radon , and differentiated and named alpha and beta radiation .\n', 'simple_sentence': 'Rutherford discovered the radioactive half-life , and the three parts of radiation which he named Alpha , Beta , and Gamma .\n'} ``` ### Data Fields The data has the following fields: - `normal_sentence`: a sentence from English Wikipedia. - `normal_sentence_id`: a unique ID for each English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph. - `simple_sentence`: a sentence from Simple English Wikipedia. - `simple_sentence_id`: a unique ID for each Simple English Wikipedia sentence. The last two dash-separated numbers correspond to the paragraph number in the article and the sentence number in the paragraph. - `alignment_label`: signifies whether a pair of sentences is aligned: labels are `2:partialAligned`, `1:aligned` and `0:notAligned` - `paragraph_alignment`: a first step of alignment mapping English and Simple English paragraphs from linked articles - `sentence_alignment`: the full alignment mapping English and Simple English sentences from linked articles - `gleu_score`: the sentence level GLEU (Google-BLEU) score for each pair. ### Data Splits In `auto`, the `part_2` split corresponds to the articles used in `manual`, and `part_1` has the rest of Wikipedia. The `manual` config is provided with a `train`/`dev`/`test` split with the following amounts of data: | | train | validation | test | |------------------------|--------:|-----------:|--------:| | Total sentence pairs | 373801 | 73249 | 118074 | | Aligned sentence pairs | 1889 | 346 | 677 | ## Dataset Creation ### Curation Rationale Simple English Wikipedia provides a ready source of training data for text simplification systems, as 1.
articles in different languages are linked, making it easier to find parallel data and 2. the Simple English data is written by users for users rather than by professional translators. However, even though articles are aligned, finding a good sentence-level alignment can remain challenging. This work aims to provide a solution for this problem. By manually annotating a subset of the articles, they manage to achieve an F1 score of over 88% on predicting alignment, which makes it possible to create a good-quality, sentence-level aligned corpus using all of Simple English Wikipedia. ### Source Data #### Initial Data Collection and Normalization The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump [...] using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting. #### Who are the source language producers? The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F). ### Annotations #### Annotation process Sentence alignment labels were obtained for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs. #### Who are the annotators? No demographic annotation is provided for the crowd workers.
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu working at Ohio State University. ### Licensing Information The dataset is not licensed by itself, but the source Wikipedia data is under a `cc-by-sa-3.0` license. ### Citation Information You can cite the paper presenting the dataset as: ``` @inproceedings{acl/JiangMLZX20, author = {Chao Jiang and Mounica Maddela and Wuwei Lan and Yang Zhong and Wei Xu}, editor = {Dan Jurafsky and Joyce Chai and Natalie Schluter and Joel R. Tetreault}, title = {Neural {CRF} Model for Sentence Alignment in Text Simplification}, booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, {ACL} 2020, Online, July 5-10, 2020}, pages = {7943--7960}, publisher = {Association for Computational Linguistics}, year = {2020}, url = {https://www.aclweb.org/anthology/2020.acl-main.709/} } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite), [@mounicam](https://github.com/mounicam) for adding this dataset.
wiki_bio
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - table-to-text task_ids: [] paperswithcode_id: wikibio pretty_name: WikiBio dataset_info: features: - name: input_text struct: - name: table sequence: - name: column_header dtype: string - name: row_number dtype: int16 - name: content dtype: string - name: context dtype: string - name: target_text dtype: string splits: - name: train num_bytes: 619269257 num_examples: 582659 - name: test num_bytes: 77264695 num_examples: 72831 - name: val num_bytes: 77335069 num_examples: 72831 download_size: 333998704 dataset_size: 773869021 --- # Dataset Card for WikiBio ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/DavidGrangier/wikipedia-biography-dataset - **Paper:** https://arxiv.org/pdf/1603.07771.pdf - **GitHub:**
https://github.com/DavidGrangier/wikipedia-biography-dataset ### Dataset Summary This dataset contains 728321 biographies extracted from Wikipedia, each consisting of the first paragraph of the biography and the tabular infobox. ### Supported Tasks and Leaderboards The main purpose of this dataset is developing text generation models. ### Languages English. ## Dataset Structure ### Data Instances More Information Needed ### Data Fields The structure of a single sample is the following: ```json { "input_text":{ "context":"pope michael iii of alexandria\n", "table":{ "column_header":[ "type", "ended", "death_date", "title", "enthroned", "name", "buried", "religion", "predecessor", "nationality", "article_title", "feast_day", "birth_place", "residence", "successor" ], "content":[ "pope", "16 march 907", "16 march 907", "56th of st. mark pope of alexandria & patriarch of the see", "25 april 880", "michael iii of alexandria", "monastery of saint macarius the great", "coptic orthodox christian", "shenouda i", "egyptian", "pope michael iii of alexandria\n", "16 -rrb- march -lrb- 20 baramhat in the coptic calendar", "egypt", "saint mark 's church", "gabriel i" ], "row_number":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] } }, "target_text":"pope michael iii of alexandria -lrb- also known as khail iii -rrb- was the coptic pope of alexandria and patriarch of the see of st. mark -lrb- 880 -- 907 -rrb- .\nin 882 , the governor of egypt , ahmad ibn tulun , forced khail to pay heavy contributions , forcing him to sell a church and some attached properties to the local jewish community .\nthis building was at one time believed to have later become the site of the cairo geniza .\n" } ``` where the `"table"` field stores all the information from the Wikipedia infobox (the infobox field names in `"column_header"` and their values in `"content"`). ### Data Splits - Train: 582659 samples. - Test: 72831 samples. - Validation: 72831 samples.
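The parallel `column_header`/`content`/`row_number` arrays of the infobox can be zipped into a conventional mapping. This sketch uses a shortened version of the sample above (three of its fifteen fields).

```python
# Shortened WikiBio instance, following the documented schema: the infobox
# is stored as three parallel arrays under `input_text.table`.
sample = {
    "input_text": {
        "context": "pope michael iii of alexandria\n",
        "table": {
            "column_header": ["type", "name", "religion"],
            "content": ["pope", "michael iii of alexandria", "coptic orthodox christian"],
            "row_number": [1, 1, 1],
        },
    },
    "target_text": "pope michael iii of alexandria ...",
}

# Pair each header with its value to recover a dict-shaped infobox.
table = sample["input_text"]["table"]
infobox = dict(zip(table["column_header"], table["content"]))
```

A table-to-text model would then be trained to generate `target_text` from `infobox` (plus the `context` title).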
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data This dataset was announced in the paper <em>Neural Text Generation from Structured Data with Application to the Biography Domain</em> [(arxiv link)](https://arxiv.org/pdf/1603.07771.pdf) and is stored in [this](https://github.com/DavidGrangier/wikipedia-biography-dataset) repo (owned by DavidGrangier). #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This dataset is distributed under the Creative Commons CC BY-SA 3.0 License. ### Citation Information To cite the original paper in BibTeX format: ``` @article{DBLP:journals/corr/LebretGA16, author = {R{\'{e}}mi Lebret and David Grangier and Michael Auli}, title = {Generating Text from Structured Data with Application to the Biography Domain}, journal = {CoRR}, volume = {abs/1603.07771}, year = {2016}, url = {http://arxiv.org/abs/1603.07771}, archivePrefix = {arXiv}, eprint = {1603.07771}, timestamp = {Mon, 13 Aug 2018 16:48:30 +0200}, biburl = {https://dblp.org/rec/journals/corr/LebretGA16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@alejandrocros](https://github.com/alejandrocros) for adding this dataset.
wiki_dpr
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gfdl multilinguality: - multilingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - fill-mask - text-generation task_ids: - language-modeling - masked-language-modeling pretty_name: Wiki-DPR tags: - text-search dataset_info: - config_name: psgs_w100.nq.exact features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 - config_name: psgs_w100.nq.compressed features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 - config_name: psgs_w100.nq.no_index features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 - config_name: psgs_w100.multiset.exact features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 - config_name: psgs_w100.multiset.compressed features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 - config_name: psgs_w100.multiset.no_index features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: embeddings sequence: float32 splits: - name: train 
num_bytes: 78419281788 num_examples: 21015300 download_size: 70965697456 dataset_size: 78419281788 --- # Dataset Card for "wiki_dpr" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/DPR](https://github.com/facebookresearch/DPR) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 425.79 GB - **Size of the generated dataset:** 470.52 GB - **Total amount of disk used:** 978.05 GB ### Dataset Summary This is the wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model. 
It contains 21M passages from Wikipedia along with their DPR embeddings. The Wikipedia articles were split into multiple, disjoint text blocks of 100 words as passages. The Wikipedia dump is the one from Dec. 20, 2018. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances Each instance contains a paragraph of at most 100 words, as well as the title of the Wikipedia page it comes from, and the DPR embedding (a 768-d vector). #### psgs_w100.multiset.compressed - **Size of downloaded dataset files:** 70.97 GB - **Size of the generated dataset:** 78.42 GB - **Total amount of disk used:** 152.26 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '1', 'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'], 'title': 'Aaron', 'embeddings': [-0.07233893871307373, 0.48035329580307007, 0.18650995194911957, -0.5287084579467773, -0.37329429388046265, 0.37622880935668945, 0.25524479150772095, ...
-0.336689829826355, 0.6313082575798035, -0.7025573253631592]} ``` #### psgs_w100.multiset.exact - **Size of downloaded dataset files:** 70.97 GB - **Size of the generated dataset:** 78.42 GB - **Total amount of disk used:** 187.38 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '1', 'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'], 'title': 'Aaron', 'embeddings': [-0.07233893871307373, 0.48035329580307007, 0.18650995194911957, -0.5287084579467773, -0.37329429388046265, 0.37622880935668945, 0.25524479150772095, ... -0.336689829826355, 0.6313082575798035, -0.7025573253631592]} ``` #### psgs_w100.multiset.no_index - **Size of downloaded dataset files:** 70.97 GB - **Size of the generated dataset:** 78.42 GB - **Total amount of disk used:** 149.38 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '1', 'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). 
When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'], 'title': 'Aaron', 'embeddings': [-0.07233893871307373, 0.48035329580307007, 0.18650995194911957, -0.5287084579467773, -0.37329429388046265, 0.37622880935668945, 0.25524479150772095, ... -0.336689829826355, 0.6313082575798035, -0.7025573253631592]} ``` #### psgs_w100.nq.compressed - **Size of downloaded dataset files:** 70.97 GB - **Size of the generated dataset:** 78.42 GB - **Total amount of disk used:** 152.26 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '1', 'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'], 'title': 'Aaron', 'embeddings': [0.013342111371457577, 0.582173764705658, -0.31309744715690613, -0.6991612911224365, -0.5583199858665466, 0.5187504887580872, 0.7152731418609619, ... -0.5385938286781311, 0.8093984127044678, -0.4741983711719513]} ``` #### psgs_w100.nq.exact - **Size of downloaded dataset files:** 70.97 GB - **Size of the generated dataset:** 78.42 GB - **Total amount of disk used:** 187.38 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '1', 'text': 'Aaron Aaron ( or ; "Ahärôn") is a prophet, high priest, and the brother of Moses in the Abrahamic religions. 
Knowledge of Aaron, along with his brother Moses, comes exclusively from religious texts, such as the Bible and Quran. The Hebrew Bible relates that, unlike Moses, who grew up in the Egyptian royal court, Aaron and his elder sister Miriam remained with their kinsmen in the eastern border-land of Egypt (Goshen). When Moses first confronted the Egyptian king about the Israelites, Aaron served as his brother\'s spokesman ("prophet") to the Pharaoh. Part of the Law (Torah) that Moses received from'], 'title': 'Aaron', 'embeddings': [0.013342111371457577, 0.582173764705658, -0.31309744715690613, -0.6991612911224365, -0.5583199858665466, 0.5187504887580872, 0.7152731418609619, ... -0.5385938286781311, 0.8093984127044678, -0.4741983711719513]} ``` ### Data Fields The data fields are the same among all splits. #### psgs_w100.multiset.compressed - `id`: a `string` feature. - `text`: a `string` feature. - `title`: a `string` feature. - `embeddings`: a `list` of `float32` features. #### psgs_w100.multiset.exact - `id`: a `string` feature. - `text`: a `string` feature. - `title`: a `string` feature. - `embeddings`: a `list` of `float32` features. #### psgs_w100.multiset.no_index - `id`: a `string` feature. - `text`: a `string` feature. - `title`: a `string` feature. - `embeddings`: a `list` of `float32` features. #### psgs_w100.nq.compressed - `id`: a `string` feature. - `text`: a `string` feature. - `title`: a `string` feature. - `embeddings`: a `list` of `float32` features. #### psgs_w100.nq.exact - `id`: a `string` feature. - `text`: a `string` feature. - `title`: a `string` feature. - `embeddings`: a `list` of `float32` features. 
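DPR ranks passages by the inner product between a query embedding and the 768-d passage embeddings stored in the `embeddings` field. A minimal NumPy sketch of that scoring step, using random vectors in place of real DPR embeddings:

```python
import numpy as np

# Sketch of DPR-style scoring: random vectors stand in for the real 768-d
# query/passage embeddings; only the inner-product scoring step is shown.
rng = np.random.default_rng(seed=0)
passage_embeddings = rng.standard_normal((5, 768)).astype(np.float32)
query_embedding = rng.standard_normal(768).astype(np.float32)

scores = passage_embeddings @ query_embedding  # one inner product per passage
best = int(np.argmax(scores))                  # index of the top-scoring passage

print(scores.shape, best)
```

In practice the `exact` and `compressed` configurations ship FAISS indexes that perform this maximum-inner-product search at scale instead of a dense matrix product.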
### Data Splits | name | train | |-----------------------------|-------:| |psgs_w100.multiset.compressed|21015300| |psgs_w100.multiset.exact |21015300| |psgs_w100.multiset.no_index |21015300| |psgs_w100.nq.compressed |21015300| |psgs_w100.nq.exact |21015300| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{karpukhin2020dense, title={Dense Passage Retrieval for Open-Domain Question Answering}, author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih}, year={2020}, eprint={2004.04906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
wiki_hop
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: wikihop pretty_name: WikiHop tags: - multi-hop dataset_info: - config_name: original features: - name: id dtype: string - name: query dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string - name: annotations sequence: sequence: string splits: - name: train num_bytes: 325952974 num_examples: 43738 - name: validation num_bytes: 41246536 num_examples: 5129 download_size: 339843061 dataset_size: 367199510 - config_name: masked features: - name: id dtype: string - name: question dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string - name: annotations sequence: sequence: string splits: - name: train num_bytes: 348249138 num_examples: 43738 - name: validation num_bytes: 44066862 num_examples: 5129 download_size: 339843061 dataset_size: 392316000 --- # Dataset Card for WikiHop ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [QAngaroo](http://qangaroo.cs.ucl.ac.uk/) - **Repository:** [More Information Needed] - **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481) - **Leaderboard:** [Leaderboard](http://qangaroo.cs.ucl.ac.uk/leaderboard.html) - **Point of Contact:** [Johannes Welbl](mailto:j.welbl@cs.ucl.ac.uk) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
wiki_lingua
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar - cs - de - en - es - fr - hi - id - it - ja - ko - nl - pt - ru - th - tr - vi - zh license: - cc-by-3.0 multilinguality: - multilingual size_categories: - 10K<n<100K - 1K<n<10K source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: wikilingua pretty_name: WikiLingua configs: - arabic - chinese - czech - dutch - english - french - german - hindi - indonesian - italian - japanese - korean - portuguese - russian - spanish - thai - turkish - vietnamese dataset_info: - config_name: arabic features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 119116119 num_examples: 9995 download_size: 119358890 dataset_size: 119116119 - config_name: chinese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 41170689 num_examples: 6541 download_size: 41345464 dataset_size: 41170689 - config_name: czech features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 20816390 num_examples: 2520 download_size: 20894511 dataset_size: 20816390 - config_name: dutch features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 87258040 
num_examples: 10862 download_size: 87533442 dataset_size: 87258040 - config_name: english features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string splits: - name: train num_bytes: 333700114 num_examples: 57945 download_size: 338036185 dataset_size: 333700114 - config_name: french features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 197550376 num_examples: 21690 download_size: 198114157 dataset_size: 197550376 - config_name: german features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 168674340 num_examples: 20103 download_size: 169195050 dataset_size: 168674340 - config_name: hindi features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 63785051 num_examples: 3402 download_size: 63874759 dataset_size: 63785051 - config_name: indonesian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 136408861 num_examples: 16308 download_size: 136833587 dataset_size: 136408861 - config_name: italian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: 
summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 138119527 num_examples: 17673 download_size: 138578956 dataset_size: 138119527 - config_name: japanese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 40145031 num_examples: 4372 download_size: 40259570 dataset_size: 40145031 - config_name: korean features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 38647614 num_examples: 4111 download_size: 38748961 dataset_size: 38647614 - config_name: portuguese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 204270845 num_examples: 28143 download_size: 204997686 dataset_size: 204270845 - config_name: russian features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 241924032 num_examples: 18143 download_size: 242377242 dataset_size: 241924032 - config_name: spanish features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 314618618 
num_examples: 38795 download_size: 315609530 dataset_size: 314618618 - config_name: thai features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 86982851 num_examples: 5093 download_size: 87104200 dataset_size: 86982851 - config_name: turkish features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 11371821 num_examples: 1512 download_size: 11405793 dataset_size: 11371821 - config_name: vietnamese features: - name: url dtype: string - name: article sequence: - name: section_name dtype: string - name: document dtype: string - name: summary dtype: string - name: english_url dtype: string - name: english_section_name dtype: string splits: - name: train num_bytes: 69868788 num_examples: 6616 download_size: 70024093 dataset_size: 69868788 --- # Dataset Card for "wiki_lingua" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - 
[Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [URL](https://github.com/esdurmus/Wikilingua) - **Paper:** [WikiLingua: A Multilingual Abstractive Summarization Dataset](https://arxiv.org/abs/2010.03093) ### Dataset Summary We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high-quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The table below shows the number of article-summary pairs with a parallel article-summary pair in English. | Language | Num. parallel | | ----------- | --------------| | English | 141,457 | | Spanish | 113,215 | | Portuguese | 81,695 | | French | 63,692 | | German | 58,375 | | Russian | 52,928 | | Italian | 50,968 | | Indonesian | 47,511 | | Dutch | 31,270 | | Arabic | 29,229 | | Vietnamese | 19,600 | | Chinese | 18,887 | | Thai | 14,770 | | Japanese | 12,669 | | Korean | 12,189 | | Hindi | 9,929 | | Czech | 7,200 | | Turkish | 4,503 | ## Dataset Structure ### Data Instances ``` { 'article': { 'document': ['make sure that the area is a safe place, especially if you plan on walking home at night. It’s always a good idea to practice the buddy system. Have a friend meet up and walk with you. Research the bus, train, or streetcar routes available in your area to find safe and affordable travel to your destination. Make sure you check the schedule for your outgoing and return travel.
Some public transportation will cease to run late at night. Be sure if you take public transportation to the venue that you will also be able to get home late at night. Check the routes. Even if some public transit is still running late at night, the routing may change. Some may run express past many of the stops, or not travel all the way to the ends. Be sure that your stop will still be available when you need it for your return trip. If you are taking public transit in a vulnerable state after drinking, it is always a good idea to travel in groups. Having friends available is a good way to stay safe and make sure that you reach your destination. This is more expensive option than a taxi or ride share service, but could be a fun and fancy way to stay safe and ensure that you will have a ride home. Plan this service in advance with a scheduled time to pick you up from your home and the venue. You want to be sure that the service will still be available when you need to get home. This may be easy in a large city, but taxis may be less frequent in smaller towns. This is especially true late at night, so this is a less reliable option than scheduling a ride in advance. Have a friend accompany you and help you flag a cab to make sure you are able to get one. Set up a plan to call a friend when you get home to make sure that you made it safely to your destination. If there are no taxis readily available call a local service to send a car to pick you up. You can share a ride with your friends, or other people using the app at the same moment. If you are in a vulnerable state it is best to share the ride with your friends to make sure you get home safe. You can request the car to yourself rather than sharing rides with strangers. If you travel home on your own or are the last of your group to be dropped off, make plans to call a friend when you get home so they know you made it safely to your destination. 
There may be a designated driver service in your area which can chauffeur your group. Make reservations with them in advance and keep their contact information handy while you are drinking.', "Designating a driver is a very popular tactic to avoid drinking and driving. It is important to plan in advance, because your brain function will slow down and your decision making skills will be impaired once you start drinking. Decide before you begin drinking that you will not drive. Figure out who will be getting you home before you leave. Make sure this person is responsible and keep them in your sight while you are drinking. Have their contact information handy in case you can’t find them when you are ready to leave. Choose a friend who doesn’t drink alcohol. You likely have someone in your friend group who doesn’t drink. This person is the most likely to remain sober. Decide on one person who will remain sober. You can take turns within your friend group, alternating who will be the designated driver on each occasion. Be sure that the designated driver actually remains sober. The person who has drank the least is still not sober. If you don’t have your car with you, you can guarantee that you won’t make the choice to drive it home. If you are drinking at your home. Give your keys to a responsible friend to ensure that you don't choose to drive somewhere after you have been drinking. It may be tempting to stay longer or leave with someone else. Stick to the plan you made in advance and only leave with your sober, designated driver. Keep the phone number of your driver handy in case you can't find them when you are ready to leave. If your designated driver drinks alcohol, find alternate transportation to get home.", 'If you have been drinking at all you are at least on the spectrum of drunkenness. You could be showing signs of impairment and slower brain function including lack of motor skills and slower reaction time, leading to the inability to operate a motor vehicle. 
Some of these signs could be: Poor balance or stumbling. Difficulty speaking clearly and slurred words. Abnormal behavior leading to you doing things you wouldn’t normally do if you were sober. As soon as you notice that you are showing signs of impairment, give your keys to a friend, the host or the bartender to ensure that you won’t drive until you are sober. Make sure to only give them your car key. Hold onto your house keys. If your friend, the host or the bartender are advising you not to drive, you are likely too drunk. Listen to their advice and acknowledge that they are trying to help you. Bystander intervention is common when it comes to drinking and driving. Many people will be willing to step in, take your keys and help you get home safely. If no one if offering to help, you may need to ask. Take a ride from a sober friend. It is best to get in a car with someone you trust when you are in this vulnerable state. Allow the host or bartender to call a cab or car service to take you home. If you are having a difficult time finding a safe way to get home, find a place to stay which does not involve you driving. Ask the host of the party if there is a place you can sleep. Give them your keys and ask that they keep them in a safe place until the morning. Stay with a friend if they live nearby and are on their way home. Find a hotel within walking distance. Call them to book a room, or have a friend help you secure one. Ask the friend if they will walk you to the hotel and make sure you get checked in safely. There are people in your life who care about you and want to be sure that you are safe. It may seem scary or embarrassing to call your parents or your siblings if you are too drunk to drive, but they will be glad you did. Your safety is the most important. You may need your phone to call someone for a ride or get help from a friend. Be sure to charge your phone before you leave the house. 
It is also a good idea to bring a charger with you in case your battery dies before the end of the night or you end up staying where you are and need to get home the next morning. You may also want to invest in a portable battery charger for your phone should there not be a power outlet available. Make sure it is fully charged before you leave your house. Keep it handy in your pocket or your bag throughout the night.' ], 'section_name': ['Finding Other Transportation', 'Designating a Driver', 'Staying Safe' ], 'summary': ['Walk to the venue where you will be drinking if it is close enough. Take public transit. Show up in style by hiring a limo or black car service. Flag a taxi cab for a convenient option to get where you’re going. Request a rideshare service like Uber or Lyft using an app on your phone. Reserve a designated driver service.', 'Plan in advance. Assign a designated driver. Leave your car at home. Leave the venue with your designated driver.', 'Pay attention to your body. Give up your keys. Listen to other people. Accept help. Stay where you are. Have an emergency back-up plan. Make sure that your phone is charged.' 
] }, 'url': 'https://www.wikihow.com/Avoid-Drinking-and-Driving' } ``` ### Data Fields - `url`: WikiHow URL of the article - `article`: A dictionary containing `section_name`, `document` and `summary` - `section_name`: List of section headings in an article - `document`: List of documents, one for each section in the `section_name` list - `summary`: List of summaries, one for each document in the `document` list ### Data Splits | | train | |:-----------|--------:| | arabic | 9995 | | chinese | 6541 | | czech | 2520 | | dutch | 10862 | | english | 57945 | | french | 21690 | | german | 20103 | | hindi | 3402 | | indonesian | 16308 | | italian | 17673 | | japanese | 4372 | | korean | 4111 | | portuguese | 28143 | | russian | 18143 | | spanish | 6616 | | thai | 5093 | | turkish | 1512 | | vietnamese | 6616 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information - Article provided by wikiHow https://www.wikihow.com/Main-Page, a wiki building the world's largest, highest quality how-to manual. Please edit this article and find author credits at wikiHow.com. Content on wikiHow can be shared under a [Creative Commons license](http://creativecommons.org/licenses/by-nc-sa/3.0/). - Refer to [this webpage](https://www.wikihow.com/wikiHow:Attribution) for the specific attribution guidelines.
- Also see https://gem-benchmark.com/data_cards/WikiLingua ### Citation Information ```bibtex @article{ladhak-wiki-2020, title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization}, author = {Faisal Ladhak and Esin Durmus and Claire Cardie and Kathleen McKeown}, journal = {arXiv preprint arXiv:2010.03093}, year = {2020}, url = {https://arxiv.org/abs/2010.03093} } ``` ### Contributions Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset.
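Since `section_name`, `document`, and `summary` are parallel lists inside the nested `article` field, per-section (document, summary) training pairs have to be unpacked by index. A minimal sketch in plain Python, using a hand-made record shaped like the example instance above (the strings are abridged stand-ins, not real dataset content):

```python
# Flatten a WikiLingua-style record into one (section, document, summary)
# triple per section. The record below is a shortened, hand-made stand-in
# shaped like the example instance in this card.
record = {
    "url": "https://www.wikihow.com/Avoid-Drinking-and-Driving",
    "article": {
        "section_name": ["Finding Other Transportation", "Designating a Driver"],
        "document": ["There may be a designated driver service ...",
                     "Designating a driver is a very popular tactic ..."],
        "summary": ["Walk to the venue ... Reserve a designated driver service.",
                    "Plan in advance. Assign a designated driver. ..."],
    },
}

def flatten(rec):
    """Zip the three parallel lists into per-section triples."""
    art = rec["article"]
    return list(zip(art["section_name"], art["document"], art["summary"]))

triples = flatten(record)
# triples[0] pairs the first section heading with its document and summary.
```

Because the three lists are index-aligned, `zip` is all that is needed; a length mismatch between them would indicate a malformed record.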
wiki_movies
--- pretty_name: WikiMovies annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - closed-domain-qa paperswithcode_id: wikimovies dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 7274490 num_examples: 96185 - name: test num_bytes: 755258 num_examples: 9952 - name: validation num_bytes: 754755 num_examples: 10000 download_size: 57070041 dataset_size: 8784503 --- # Dataset Card for WikiMovies ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WikiMovies Homepage](https://research.fb.com/downloads/babi/) - **Repository:** - **Paper:** [Key-Value Memory Networks for Directly Reading Documents](https://arxiv.org/pdf/1606.03126.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The WikiMovies dataset consists of 
roughly 100k (templated) questions over 75k entities, based on questions with answers in the open movie database (OMDb). It is the QA part of the Movie Dialog dataset. ### Supported Tasks and Leaderboards - Question Answering ### Languages The text in the dataset is written in English. ## Dataset Structure ### Data Instances The raw data consists of question-answer pairs separated by a tab. Here are 3 examples: ```buildoutcfg 1 what does Grégoire Colin appear in? Before the Rain 1 Joe Thomas appears in which movies? The Inbetweeners Movie, The Inbetweeners 2 1 what films did Michelle Trachtenberg star in? Inspector Gadget, Black Christmas, Ice Princess, Harriet the Spy, The Scribbler ``` It is unclear what the `1` is for at the beginning of each line, but it has been removed in the `Dataset` object. ### Data Fields Here is an example of the raw data ingested by `Datasets`: ```buildoutcfg { 'answer': 'Before the Rain', 'question': 'what does Grégoire Colin appear in?' } ``` `answer`: a string containing the answer to a corresponding question. `question`: a string containing the relevant question. ### Data Splits The data is split into train, test, and dev sets. The split sizes are as follows: | wiki-entities_qa_* | n examples| | ----- | ---- | | train.txt | 96185 | | dev.txt | 10000 | | test.txt | 9952 | ## Dataset Creation ### Curation Rationale WikiMovies was built with the following goals in mind: (i) machine learning techniques should have ample training examples for learning; and (ii) one can easily analyze the performance of different representations of knowledge and break down the results by question type. The dataset can be downloaded from http://fb.ai/babi ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{miller2016keyvalue, title={Key-Value Memory Networks for Directly Reading Documents}, author={Alexander Miller and Adam Fisch and Jesse Dodge and Amir-Hossein Karimi and Antoine Bordes and Jason Weston}, year={2016}, eprint={1606.03126}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset.
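For anyone working from the raw files rather than the loaded `Dataset` (which keeps `answer` as a single comma-joined string), the tab-separated format shown above can be parsed with a few lines of plain Python. A sketch, assuming individual titles never themselves contain a comma-plus-space:

```python
# Sketch of parsing a raw WikiMovies line: "<n> <question>\t<a1>, <a2>, ...",
# where the leading number (always 1 in the examples) is dropped and
# multiple answers are comma-separated.
def parse_line(line: str):
    prefix, answers = line.rstrip("\n").split("\t")
    # Split the leading line number off the question text.
    _, question = prefix.split(" ", 1)
    return question, [a.strip() for a in answers.split(", ")]

raw = "1 Joe Thomas appears in which movies?\tThe Inbetweeners Movie, The Inbetweeners 2"
question, answers = parse_line(raw)
# question -> "Joe Thomas appears in which movies?"
# answers  -> ["The Inbetweeners Movie", "The Inbetweeners 2"]
```

Splitting on `", "` is only a heuristic; a title containing a comma (rare but possible in movie names) would be split incorrectly.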
wiki_qa
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: wikiqa pretty_name: WikiQA dataset_info: features: - name: question_id dtype: string - name: question dtype: string - name: document_title dtype: string - name: answer dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' splits: - name: test num_bytes: 1337903 num_examples: 6165 - name: train num_bytes: 4469148 num_examples: 20360 - name: validation num_bytes: 591833 num_examples: 2733 download_size: 7094233 dataset_size: 6398884 --- # Dataset Card for "wiki_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.microsoft.com/en-us/download/details.aspx?id=52419](https://www.microsoft.com/en-us/download/details.aspx?id=52419) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [WikiQA: A Challenge Dataset for Open-Domain Question Answering](https://aclanthology.org/D15-1237/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 7.10 MB - **Size of the generated dataset:** 6.40 MB - **Total amount of disk used:** 13.50 MB ### Dataset Summary Wiki Question Answering corpus from Microsoft. The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 7.10 MB - **Size of the generated dataset:** 6.40 MB - **Total amount of disk used:** 13.50 MB An example of 'train' looks as follows. ``` { "answer": "Glacier caves are often called ice caves , but this term is properly used to describe bedrock caves that contain year-round ice.", "document_title": "Glacier cave", "label": 0, "question": "how are glacier caves formed?", "question_id": "Q1" } ``` ### Data Fields The data fields are the same among all splits. #### default - `question_id`: a `string` feature. - `question`: a `string` feature. - `document_title`: a `string` feature. - `answer`: a `string` feature. - `label`: a classification label, with possible values including `0` (0), `1` (1). 
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|20360| 2733|6165| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information MICROSOFT RESEARCH DATA LICENSE AGREEMENT FOR MICROSOFT RESEARCH WIKIQA CORPUS These license terms are 
an agreement between Microsoft Corporation (or based on where you live, one of its affiliates) and you. Please read them. They apply to the data associated with this license above, which includes the media on which you received it, if any. The terms also apply to any Microsoft: - updates, - supplements, - Internet-based services, and - support services for this data, unless other terms accompany those items. If so, those terms apply. BY USING THE DATA, YOU ACCEPT THESE TERMS. IF YOU DO NOT ACCEPT THEM, DO NOT USE THE DATA. If you comply with these license terms, you have the rights below. 1. SCOPE OF LICENSE. a. You may use, copy, modify, create derivative works, and distribute the Dataset: i. for research and technology development purposes only. Examples of research and technology development uses are teaching, academic research, public demonstrations and experimentation ; and ii. to publish (or present papers or articles) on your results from using such Dataset. b. The data is licensed, not sold. This agreement only gives you some rights to use the data. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the data only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the data that only allow you to use it in certain ways. You may not - work around any technical limitations in the data; - reverse engineer, decompile or disassemble the data, except and only to the extent that applicable law expressly permits, despite this limitation; - rent, lease or lend the data; - transfer the data or this agreement to any third party; or - use the data directly in a commercial product without Microsoft’s permission. 2. DISTRIBUTION REQUIREMENTS: a. 
If you distribute the Dataset or any derivative works of the Dataset, you will distribute them under the same terms and conditions as in this Agreement, and you will not grant other rights to the Dataset or derivative works that are different from those provided by this Agreement. b. If you have created derivative works of the Dataset, and distribute such derivative works, you will cause the modified files to carry prominent notices so that recipients know that they are not receiving the original Dataset. Such notices must state: (i) that you have changed the Dataset; and (ii) the date of any changes. 3. DISTRIBUTION RESTRICTIONS. You may not: (a) alter any copyright, trademark or patent notice in the Dataset; (b) use Microsoft’s trademarks in a way that suggests your derivative works or modifications come from or are endorsed by Microsoft; (c) include the Dataset in malicious, deceptive or unlawful programs. 4. OWNERSHIP. Microsoft retains all right, title, and interest in and to any Dataset provided to you under this Agreement. You acquire no interest in the Dataset you may receive under the terms of this Agreement. 5. LICENSE TO MICROSOFT. Microsoft is granted back, without any restrictions or limitations, a non-exclusive, perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or display, use, modify, post, distribute, make and have made, sell and transfer your modifications to and/or derivative works of the Dataset, for any purpose. 6. FEEDBACK. If you give feedback about the Dataset to Microsoft, you give to Microsoft, without charge, the right to use, share and commercialize your feedback in any way and for any purpose. You also give to third parties, without charge, any patent rights needed for their products, technologies and services to use or interface with any specific parts of a Microsoft dataset or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its Dataset or documentation to third parties because we include your feedback in them. These rights survive this Agreement. 7. EXPORT RESTRICTIONS. The Dataset is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to the Dataset. These laws include restrictions on destinations, end users and end use. For additional information, see www.microsoft.com/exporting. 8. ENTIRE AGREEMENT. This Agreement, and the terms for supplements, updates, Internet-based services and support services that you use, are the entire agreement for the Dataset. 9. SUPPORT SERVICES. Because this data is “as is,” we may not provide support services for it. 10. APPLICABLE LAW. a. United States. If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort. b. Outside the United States. If you acquired the software in any other country, the laws of that country apply. 11. LEGAL EFFECT. This Agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the Dataset. This Agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so. 12. DISCLAIMER OF WARRANTY. The Dataset is licensed “as-is.” You bear the risk of using it. Microsoft gives no express warranties, guarantees or conditions. You may have additional consumer rights or statutory guarantees under your local laws which this agreement cannot change. 
To the extent permitted under your local laws, Microsoft excludes the implied warranties of merchantability, fitness for a particular purpose and non-infringement. 13. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES. This limitation applies to - anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and - claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law. It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages. ### Citation Information ``` @inproceedings{yang-etal-2015-wikiqa, title = "{W}iki{QA}: A Challenge Dataset for Open-Domain Question Answering", author = "Yang, Yi and Yih, Wen-tau and Meek, Christopher", booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", month = sep, year = "2015", address = "Lisbon, Portugal", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D15-1237", doi = "10.18653/v1/D15-1237", pages = "2013--2018", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
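In WikiQA, `label` marks whether a candidate sentence answers its question (`1` = correct), and a question typically has several candidate sentences, not all of which (and sometimes none of which) are correct. A common preprocessing step is therefore grouping the flat rows by `question_id` and keeping only answerable questions. A sketch over hand-made rows shaped like the fields above (these are illustrative rows, not real dataset content):

```python
from collections import defaultdict

# Hand-made rows shaped like the wiki_qa fields (question_id, question,
# answer, label); label 1 marks a sentence annotated as a correct answer.
rows = [
    {"question_id": "Q1", "question": "how are glacier caves formed?",
     "answer": "Glacier caves are often called ice caves ...", "label": 0},
    {"question_id": "Q1", "question": "how are glacier caves formed?",
     "answer": "A glacier cave is a cave formed within the ice of a glacier.",
     "label": 1},
    {"question_id": "Q2", "question": "a toy question with no answer?",
     "answer": "a candidate sentence that does not answer it", "label": 0},
]

# Group candidate sentences by question, preserving row order.
by_question = defaultdict(list)
for row in rows:
    by_question[row["question_id"]].append(row)

# Keep only questions with at least one sentence labeled correct.
answerable = {qid: cands for qid, cands in by_question.items()
              if any(c["label"] == 1 for c in cands)}
```

For answer-sentence-selection training, the per-question groups in `answerable` are what a ranking loss is usually computed over.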
wiki_qa_ar
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - ar license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa paperswithcode_id: wikiqaar pretty_name: English-Arabic Wikipedia Question-Answering dataset_info: features: - name: question_id dtype: string - name: question dtype: string - name: document_id dtype: string - name: answer_id dtype: string - name: answer dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' config_name: plain_text splits: - name: test num_bytes: 7563127 num_examples: 20632 - name: validation num_bytes: 3740721 num_examples: 10387 - name: train num_bytes: 26009979 num_examples: 70264 download_size: 35226436 dataset_size: 37313827 --- # Dataset Card for WikiQAar ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
[WikiQaAr](https://github.com/qcri/WikiQAar) - **Repository:** [WikiQaAr](https://github.com/qcri/WikiQAar) - **Paper:** - **Point of Contact:** [Ines Abbes](abbes.ines@yahoo.com) ### Dataset Summary Arabic version of WikiQA, produced by automatic machine translation, with crowdsourced selection of the best translation to be incorporated into the corpus. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in Arabic. ## Dataset Structure ### Data Instances Each data point contains a question, a candidate answer, and a label indicating whether the answer is valid or not. ### Data Fields - `question_id`: the question id. - `question`: the question text. - `document_id`: the wikipedia document id. - `answer_id`: the answer id. - `answer`: a candidate answer to the question. - `label`: 1 if the `answer` is correct or 0 otherwise. ### Data Splits The dataset is split into train, validation, and test sets: | | train | validation | test | |------------|-------:|-----------:|-------:| | Data split | 70,264 | 10,387 | 20,632 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Translation of WikiQA. #### Who are the source language producers? Translation of WikiQA. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{YangYihMeek:EMNLP2015:WikiQA, author = {Yang, Yi and Yih, Wen-tau and Meek, Christopher}, title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}", booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing}, year = {2015}, doi = {10.18653/v1/D15-1237}, pages = {2013--2018}, } ``` ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
wiki_snippets
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - en license: - unknown multilinguality: - multilingual pretty_name: WikiSnippets size_categories: - 10M<n<100M source_datasets: - extended|wiki40b - extended|wikipedia task_categories: - text-generation - other task_ids: - language-modeling paperswithcode_id: null tags: - text-search dataset_info: - config_name: wiki40b_en_100_0 features: - name: _id dtype: string - name: datasets_id dtype: int32 - name: wiki_id dtype: string - name: start_paragraph dtype: int32 - name: start_character dtype: int32 - name: end_paragraph dtype: int32 - name: end_character dtype: int32 - name: article_title dtype: string - name: section_title dtype: string - name: passage_text dtype: string splits: - name: train num_bytes: 12938641686 num_examples: 17553713 download_size: 0 dataset_size: 12938641686 - config_name: wikipedia_en_100_0 features: - name: _id dtype: string - name: datasets_id dtype: int32 - name: wiki_id dtype: string - name: start_paragraph dtype: int32 - name: start_character dtype: int32 - name: end_paragraph dtype: int32 - name: end_character dtype: int32 - name: article_title dtype: string - name: section_title dtype: string - name: passage_text dtype: string splits: - name: train num_bytes: 26407884393 num_examples: 33849898 download_size: 0 dataset_size: 26407884393 --- # Dataset Card for "wiki_snippets" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for 
Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary Wikipedia version split into plain text snippets for dense semantic indexing. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure We show detailed information for 2 configurations of the dataset (with 100 snippet passage length and 0 overlap) in English: - wiki40b_en_100_0: Wiki-40B - wikipedia_en_100_0: Wikipedia ### Data Instances #### wiki40b_en_100_0 - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 12.94 GB - **Total amount of disk used:** 12.94 GB An example of 'train' looks as follows: ``` {'_id': '{"datasets_id": 0, "wiki_id": "Q1294448", "sp": 2, "sc": 0, "ep": 6, "ec": 610}', 'datasets_id': 0, 'wiki_id': 'Q1294448', 'start_paragraph': 2, 'start_character': 0, 'end_paragraph': 6, 'end_character': 610, 'article_title': 'Ági Szalóki', 'section_title': 'Life', 'passage_text': "Ági Szalóki Life She started singing as a toddler, considering Márta Sebestyén a role model. Her musical background is traditional folk music; she first won recognition for singing with Ökrös in a traditional folk style, and Besh o droM, a Balkan gypsy brass band. 
With these ensembles she toured around the world from the Montreal Jazz Festival, through Glastonbury Festival to the Théatre de la Ville in Paris, from New York to Beijing.\nSince 2005, she began to pursue her solo career and explore various genres, such as jazz, thirties ballads, or children's songs.\nUntil now, three of her six released albums"} ``` #### wikipedia_en_100_0 - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 26.41 GB - **Total amount of disk used:** 26.41 GB An example of 'train' looks as follows: ``` {'_id': '{"datasets_id": 0, "wiki_id": "Anarchism", "sp": 0, "sc": 0, "ep": 2, "ec": 129}', 'datasets_id': 0, 'wiki_id': 'Anarchism', 'start_paragraph': 0, 'start_character': 0, 'end_paragraph': 2, 'end_character': 129, 'article_title': 'Anarchism', 'section_title': 'Start', 'passage_text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and has a strong historical association with anti-capitalism and socialism. Humans lived in societies without formal hierarchies long before the establishment of formal states, realms, or empires. With the'} ``` ### Data Fields The data fields are the same for all configurations: - `_id`: a `string` feature. - `datasets_id`: a `int32` feature. - `wiki_id`: a `string` feature. - `start_paragraph`: a `int32` feature. - `start_character`: a `int32` feature. - `end_paragraph`: a `int32` feature. - `end_character`: a `int32` feature. - `article_title`: a `string` feature. - `section_title`: a `string` feature. - `passage_text`: a `string` feature. 
### Data Splits | name | train | |:-------------------|---------:| | wiki40b_en_100_0 | 17553713 | | wikipedia_en_100_0 | 33849898 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information See licensing information of source datasets. 
### Citation Information Cite source datasets: - Wiki-40B: ``` @inproceedings{49029, title = {Wiki-40B: Multilingual Language Model Dataset}, author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou}, year = {2020}, booktitle = {LREC 2020} } ``` - Wikipedia: ``` @ONLINE{wikidump, author = "Wikimedia Foundation", title = "Wikimedia Downloads", url = "https://dumps.wikimedia.org" } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@yjernite](https://github.com/yjernite) for adding this dataset.
wiki_source
--- annotations_creators: - found language_creators: - found language: - en - sv license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: WikiSource dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - en - sv config_name: en-sv splits: - name: train num_bytes: 8153542 num_examples: 33283 download_size: 2375052 dataset_size: 8153542 --- # Dataset Card for WikiSource ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/WikiSource.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset 
Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
wiki_split
--- annotations_creators: - machine-generated language: - en language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: WikiSplit size_categories: - 100K<n<1M source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: wikisplit tags: - split-and-rephrase dataset_info: features: - name: complex_sentence dtype: string - name: simple_sentence_1 dtype: string - name: simple_sentence_2 dtype: string splits: - name: test num_bytes: 1949294 num_examples: 5000 - name: train num_bytes: 384513073 num_examples: 989944 - name: validation num_bytes: 1935459 num_examples: 5000 download_size: 100279164 dataset_size: 388397826 --- # Dataset Card for "wiki_split" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://dataset-homepage/](https://dataset-homepage/) - **Repository:** https://github.com/google-research-datasets/wiki-split - **Paper:** [Learning To Split and Rephrase From Wikipedia Edit 
History](https://arxiv.org/abs/1808.09468) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 100.28 MB - **Size of the generated dataset:** 388.40 MB - **Total amount of disk used:** 488.68 MB ### Dataset Summary One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences. ### Supported Tasks and Leaderboards - Split and Rephrase ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 100.28 MB - **Size of the generated dataset:** 388.40 MB - **Total amount of disk used:** 488.68 MB An example of 'train' looks as follows. ``` { "complex_sentence": " '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish her poems and stories started to have differing meanings in their respective languages .", "simple_sentence_1": "' '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish . ", "simple_sentence_2": " Ergo , her poems and stories started to have differing meanings in their respective languages ." } ``` ### Data Fields The data fields are the same among all splits. #### default - `complex_sentence`: a `string` feature. - `simple_sentence_1`: a `string` feature. - `simple_sentence_2`: a `string` feature.
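A split-and-rephrase pair can be framed as a sequence-to-sequence training example by using the complex sentence as the source and the two simple sentences, concatenated, as the target. A minimal sketch — the record below is invented for brevity (real entries look like the example above), and this pairing is one common convention rather than something the card prescribes:

```python
# Illustrative record (invented for brevity, tokenized in the dataset's
# space-separated style); real entries look like the 'train' example above.
record = {
    "complex_sentence": "She moved to Paris in 1901 , where she began painting .",
    "simple_sentence_1": "She moved to Paris in 1901 .",
    "simple_sentence_2": "There she began painting .",
}

# One common seq2seq framing: complex sentence -> simple_1 + simple_2.
source = record["complex_sentence"].strip()
target = " ".join(
    record[k].strip() for k in ("simple_sentence_1", "simple_sentence_2")
)
print(source)  # She moved to Paris in 1901 , where she began painting .
print(target)  # She moved to Paris in 1901 . There she began painting .
```

Note that, as the card warns, real pairs carry revision-history noise (stray quote characters, inserted connectives such as "Ergo"), so light cleaning of the strings may be worthwhile before training.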
### Data Splits | name |train |validation|test| |-------|-----:|---------:|---:| |default|989944| 5000|5000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The WikiSplit dataset is a verbatim copy of certain content from the publicly available Wikipedia 
revision history. The dataset is therefore licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/). Any third party content or data is provided "As Is" without any warranty, express or implied. ### Citation Information ``` @inproceedings{botha-etal-2018-learning, title = "Learning To Split and Rephrase From {W}ikipedia Edit History", author = "Botha, Jan A. and Faruqui, Manaal and Alex, John and Baldridge, Jason and Das, Dipanjan", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D18-1080", doi = "10.18653/v1/D18-1080", pages = "732--737", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.
wiki_summary
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - fa license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation - translation - question-answering - summarization task_ids: - abstractive-qa - explanation-generation - extractive-qa - open-domain-qa - open-domain-abstractive-qa - text-simplification pretty_name: WikiSummary dataset_info: features: - name: id dtype: string - name: link dtype: string - name: title dtype: string - name: article dtype: string - name: highlights dtype: string splits: - name: train num_bytes: 207186608 num_examples: 45654 - name: test num_bytes: 25693509 num_examples: 5638 - name: validation num_bytes: 23130954 num_examples: 5074 download_size: 255168504 dataset_size: 256011071 --- # Dataset Card for [Needs More Information] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/m3hrdadfi/wiki-summary - 
**Repository:** https://github.com/m3hrdadfi/wiki-summary - **Paper:** [More Information Needed] - **Leaderboard:** [More Information Needed] - **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadphi@gmail.com) ### Dataset Summary The dataset was extracted from Persian Wikipedia as pairs of articles and highlights. The data was cleaned, and the articles' length (in version 1.0.0 only) and the highlights' length were reduced to a maximum of 512 and 128 tokens, respectively, to suit ParsBERT. This dataset was created to achieve state-of-the-art results on NLP tasks such as text summarization. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in Persian. ## Dataset Structure ### Data Instances ``` { 'id': '0598cfd2ac491a928615945054ab7602034a8f4f', 'link': 'https://fa.wikipedia.org/wiki/انقلاب_1917_روسیه', 'title': 'انقلاب 1917 روسیه', 'article': 'نخست انقلاب فوریه ۱۹۱۷ رخ داد . در این انقلاب پس از یک‌سری اعتصابات ، تظاهرات و درگیری‌ها ، نیکولای دوم ، آخرین تزار روسیه از سلطنت خلع شد و یک دولت موقت به قدرت رسید . دولت موقت زیر نظر گئورگی لووف و الکساندر کرنسکی تشکیل شد . اکثر اعضای دولت موقت ، از شاخه منشویک حزب سوسیال دموکرات کارگری روسیه بودند . دومین مرحله ، انقلاب اکتبر ۱۹۱۷ بود . انقلاب اکتبر ، تحت نظارت حزب بلشویک (شاخه رادیکال از حزب سوسیال دموکرات کارگری روسیه) و به رهبری ولادیمیر لنین به پیش رفت و طی یک یورش نظامی همه‌جانبه به کاخ زمستانی سن پترزبورگ و سایر اماکن مهم ، قدرت را از دولت موقت گرفت . در این انقلاب افراد بسیار کمی کشته شدند . از زمان شکست روسیه در جنگ ۱۹۰۵ با ژاپن ، اوضاع بد اقتصادی ، گرسنگی ، عقب‌ماندگی و سرمایه‌داری و نارضایتی‌های گوناگون در بین مردم ، سربازان ، کارگران ، کشاورزان و نخبگان روسیه به‌وجود آمده‌بود . سرکوبهای تزار و ایجاد مجلس دوما نظام مشروطه حاصل آن دوران است . حزب سوسیال دموکرات ، اصلی‌ترین معترض به سیاست‌های نیکلای دوم بود که به‌طور گسترده بین دهقانان کشاورزان و کارگران کارخانجات صنعتی علیه سیاست‌های سیستم تزار فعالیت داشت .
در اوت ۱۹۱۴ میلادی ، امپراتوری روسیه به دستور تزار وقت و به منظور حمایت از اسلاوهای صربستان وارد جنگ جهانی اول در برابر امپراتوری آلمان و امپراتوری اتریش-مجارستان شد . نخست فقط بلشویک‌ها ، مخالف ورود روسیه به این جنگ بودند و می‌گفتند که این جنگ ، سبب بدتر شدن اوضاع نابسامان اقتصادی و اجتماعی روسیه خواهد شد . در سال ۱۹۱۴ میلادی ، یعنی در آغاز جنگ جهانی اول ، روسیه بزرگترین ارتش جهان را داشت ، حدود ۱۲ میلیون سرباز و ۶ میلیون سرباز ذخیره ؛ ولی در پایان سال ۱۹۱۶ میلادی ، پنج میلیون نفر از سربازان روسیه کشته ، زخمی یا اسیر شده بودند . حدود دو میلیون سرباز نیز محل خدمت خود را ترک کرده و غالبا با اسلحه به شهر و دیار خود بازگشته بودند . در میان ۱۰ یا ۱۱ میلیون سرباز باقی‌مانده نیز ، اعتبار تزار و سلسله مراتب ارتش و اتوریته افسران بالا دست از بین رفته بود . عوامل نابسامان داخلی اعم از اجتماعی کشاورزی و فرماندهی نظامی در شکستهای روسیه بسیار مؤثر بود . شکست‌های روسیه در جنگ جهانی اول ، حامیان نیکلای دوم در روسیه را به حداقل خود رساند . در اوایل فوریه ۱۹۱۷ میلادی اکثر کارگران صنعتی در پتروگراد و مسکو دست به اعتصاب زدند . سپس شورش به پادگان‌ها و سربازان رسید . اعتراضات دهقانان نیز گسترش یافت . سوسیال دموکرات‌ها هدایت اعتراضات را در دست گرفتند . در ۱۱ مارس ۱۹۱۷ میلادی ، تزار وقت روسیه ، نیکلای دوم ، فرمان انحلال مجلس روسیه را صادر کرد ، اما اکثر نمایندگان مجلس متفرق نشدند و با تصمیمات نیکلای دوم مخالفت کردند . سرانجام در پی تظاهرات گسترده کارگران و سپس نافرمانی سربازان در سرکوب تظاهرکنندگان در پتروگراد ، نیکلای دوم از مقام خود استعفا داد . بدین ترتیب حکم‌رانی دودمان رومانوف‌ها بر روسیه پس از حدود سیصد سال پایان یافت .', 'highlights': 'انقلاب ۱۹۱۷ روسیه ، جنبشی اعتراضی ، ضد امپراتوری روسیه بود که در سال ۱۹۱۷ رخ داد و به سرنگونی حکومت تزارها و برپایی اتحاد جماهیر شوروی انجامید . مبانی انقلاب بر پایه صلح-نان-زمین استوار بود . این انقلاب در دو مرحله صورت گرفت : در طول این انقلاب در شهرهای اصلی روسیه همانند مسکو و سن پترزبورگ رویدادهای تاریخی برجسته‌ای رخ داد . 
انقلاب در مناطق روستایی و رعیتی نیز پا به پای مناطق شهری در حال پیشروی بود و دهقانان زمین‌ها را تصرف کرده و در حال بازتوزیع آن در میان خود بودند .' } ``` ### Data Fields - `id`: Article ID - `link`: Link to the article - `title`: Title of the article - `article`: Full text of the article - `highlights`: Summary of the article ### Data Splits | Train | Test | Validation | |-------------|-------------|-------------| | 45,654 | 5,638 | 5,074 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process The dataset contains no manual annotations. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was created by Mehrdad Farahani. ### Licensing Information [Apache License 2.0](https://github.com/m3hrdadfi/wiki-summary/blob/master/LICENSE) ### Citation Information ``` @misc{Bert2BertWikiSummaryPersian, author = {Mehrdad Farahani}, title = {Summarization using Bert2Bert model on WikiSummary dataset}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/wiki-summary}, } ``` ### Contributions Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
wikiann
--- annotations_creators: - machine-generated language_creators: - crowdsourced language: - ace - af - als - am - an - ang - ar - arc - arz - as - ast - ay - az - ba - bar - be - bg - bh - bn - bo - br - bs - ca - cbk - cdo - ce - ceb - ckb - co - crh - cs - csb - cv - cy - da - de - diq - dv - el - eml - en - eo - es - et - eu - ext - fa - fi - fo - fr - frr - fur - fy - ga - gan - gd - gl - gn - gu - hak - he - hi - hr - hsb - hu - hy - ia - id - ig - ilo - io - is - it - ja - jbo - jv - ka - kk - km - kn - ko - ksh - ku - ky - la - lb - li - lij - lmo - ln - lt - lv - lzh - mg - mhr - mi - min - mk - ml - mn - mr - ms - mt - mwl - my - mzn - nan - nap - nds - ne - nl - nn - 'no' - nov - oc - or - os - pa - pdc - pl - pms - pnb - ps - pt - qu - rm - ro - ru - rw - sa - sah - scn - sco - sd - sgs - sh - si - sk - sl - so - sq - sr - su - sv - sw - szl - ta - te - tg - th - tk - tl - tr - tt - ug - uk - ur - uz - vec - vep - vi - vls - vo - vro - wa - war - wuu - xmf - yi - yo - yue - zea - zh license: - unknown multilinguality: - multilingual size_categories: - n<1K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: wikiann-1 pretty_name: WikiANN configs: - 'no' - ace - af - als - am - an - ang - ar - arc - arz - as - ast - ay - az - ba - bar - be - bg - bh - bn - bo - br - bs - ca - cdo - ce - ceb - ckb - co - crh - cs - csb - cv - cy - da - de - diq - dv - el - en - eo - es - et - eu - ext - fa - fi - fo - fr - frr - fur - fy - ga - gan - gd - gl - gn - gu - hak - he - hi - hr - hsb - hu - hy - ia - id - ig - ilo - io - is - it - ja - jbo - jv - ka - kk - km - kn - ko - ksh - ku - ky - la - lb - li - lij - lmo - ln - lt - lv - mg - mhr - mi - min - mk - ml - mn - mr - ms - mt - mwl - my - mzn - nap - nds - ne - nl - nn - nov - oc - or - os - other-bat-smg - other-be-x-old - other-cbk-zam - other-eml - other-fiu-vro - other-map-bms - other-simple - other-zh-classical - other-zh-min-nan - 
other-zh-yue - pa - pdc - pl - pms - pnb - ps - pt - qu - rm - ro - ru - rw - sa - sah - scn - sco - sd - sh - si - sk - sl - so - sq - sr - su - sv - sw - szl - ta - te - tg - th - tk - tl - tr - tt - ug - uk - ur - uz - vec - vep - vi - vls - vo - wa - war - wuu - xmf - yi - yo - zea - zh language_bcp47: - be-tarask - en-basiceng - jv-x-bms dataset_info: - config_name: ace features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22453 num_examples: 100 - name: test num_bytes: 25752 num_examples: 100 - name: train num_bytes: 23231 num_examples: 100 download_size: 234008884 dataset_size: 71436 - config_name: af features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 299137 num_examples: 1000 - name: test num_bytes: 295849 num_examples: 1000 - name: train num_bytes: 1521604 num_examples: 5000 download_size: 234008884 dataset_size: 2116590 - config_name: als features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 34318 num_examples: 100 - name: test num_bytes: 36345 num_examples: 100 - name: train num_bytes: 34968 num_examples: 100 download_size: 234008884 dataset_size: 105631 - config_name: am features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 21429 
num_examples: 100 - name: test num_bytes: 23811 num_examples: 100 - name: train num_bytes: 22214 num_examples: 100 download_size: 234008884 dataset_size: 67454 - config_name: an features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 180609 num_examples: 1000 - name: test num_bytes: 174992 num_examples: 1000 - name: train num_bytes: 180967 num_examples: 1000 download_size: 234008884 dataset_size: 536568 - config_name: ang features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 21925 num_examples: 100 - name: test num_bytes: 24523 num_examples: 100 - name: train num_bytes: 23296 num_examples: 100 download_size: 234008884 dataset_size: 69744 - config_name: ar features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2325688 num_examples: 10000 - name: test num_bytes: 2334664 num_examples: 10000 - name: train num_bytes: 4671669 num_examples: 20000 download_size: 234008884 dataset_size: 9332021 - config_name: arc features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 15726 num_examples: 100 - name: test num_bytes: 16641 num_examples: 100 - name: train num_bytes: 18536 num_examples: 100 download_size: 234008884 dataset_size: 50903 - config_name: arz features: - 
name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 26609 num_examples: 100 - name: test num_bytes: 25663 num_examples: 100 - name: train num_bytes: 26375 num_examples: 100 download_size: 234008884 dataset_size: 78647 - config_name: as features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 25736 num_examples: 100 - name: test num_bytes: 23350 num_examples: 100 - name: train num_bytes: 24984 num_examples: 100 download_size: 234008884 dataset_size: 74070 - config_name: ast features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 217477 num_examples: 1000 - name: test num_bytes: 220874 num_examples: 1000 - name: train num_bytes: 228238 num_examples: 1000 download_size: 234008884 dataset_size: 666589 - config_name: ay features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 11684 num_examples: 100 - name: test num_bytes: 13379 num_examples: 100 - name: train num_bytes: 12596 num_examples: 100 download_size: 234008884 dataset_size: 37659 - config_name: az features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: 
string splits: - name: validation num_bytes: 272066 num_examples: 1000 - name: test num_bytes: 267935 num_examples: 1000 - name: train num_bytes: 2645552 num_examples: 10000 download_size: 234008884 dataset_size: 3185553 - config_name: ba features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 29262 num_examples: 100 - name: test num_bytes: 30502 num_examples: 100 - name: train num_bytes: 31123 num_examples: 100 download_size: 234008884 dataset_size: 90887 - config_name: bar features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 17374 num_examples: 100 - name: test num_bytes: 17839 num_examples: 100 - name: train num_bytes: 16796 num_examples: 100 download_size: 234008884 dataset_size: 52009 - config_name: bat-smg features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 26496 num_examples: 100 - name: test num_bytes: 26093 num_examples: 100 - name: train num_bytes: 24677 num_examples: 100 download_size: 234008884 dataset_size: 77266 - config_name: be features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 262042 num_examples: 1000 - name: test num_bytes: 266104 num_examples: 1000 - name: train num_bytes: 3983322 num_examples: 15000 download_size: 234008884 
  dataset_size: 4511468
- config_name: be-x-old
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 342654
    num_examples: 1000
  - name: test
    num_bytes: 337599
    num_examples: 1000
  - name: train
    num_bytes: 1704256
    num_examples: 5000
  download_size: 234008884
  dataset_size: 2384509
- config_name: bg
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2840907
    num_examples: 10000
  - name: test
    num_bytes: 2830213
    num_examples: 10000
  - name: train
    num_bytes: 5665063
    num_examples: 20000
  download_size: 234008884
  dataset_size: 11336183
- config_name: bh
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 33682
    num_examples: 100
  - name: test
    num_bytes: 30692
    num_examples: 100
  - name: train
    num_bytes: 36374
    num_examples: 100
  download_size: 234008884
  dataset_size: 100748
- config_name: bn
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 238446
    num_examples: 1000
  - name: test
    num_bytes: 237218
    num_examples: 1000
  - name: train
    num_bytes: 2351591
    num_examples: 10000
  download_size: 234008884
  dataset_size: 2827255
- config_name: bo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 22688
    num_examples: 100
  - name: test
    num_bytes: 15437
    num_examples: 100
  - name: train
    num_bytes: 14085
    num_examples: 100
  download_size: 234008884
  dataset_size: 52210
- config_name: br
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 206839
    num_examples: 1000
  - name: test
    num_bytes: 222083
    num_examples: 1000
  - name: train
    num_bytes: 221495
    num_examples: 1000
  download_size: 234008884
  dataset_size: 650417
- config_name: bs
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 246378
    num_examples: 1000
  - name: test
    num_bytes: 247331
    num_examples: 1000
  - name: train
    num_bytes: 3669346
    num_examples: 15000
  download_size: 234008884
  dataset_size: 4163055
- config_name: ca
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 1836319
    num_examples: 10000
  - name: test
    num_bytes: 1847746
    num_examples: 10000
  - name: train
    num_bytes: 3689342
    num_examples: 20000
  download_size: 234008884
  dataset_size: 7373407
- config_name: cbk-zam
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 47060
    num_examples: 100
  - name: test
    num_bytes: 47277
    num_examples: 100
  - name: train
    num_bytes: 52545
    num_examples: 100
  download_size: 234008884
  dataset_size: 146882
- config_name: cdo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 37479
    num_examples: 100
  - name: test
    num_bytes: 34319
    num_examples: 100
  - name: train
    num_bytes: 36204
    num_examples: 100
  download_size: 234008884
  dataset_size: 108002
- config_name: ce
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 40303
    num_examples: 100
  - name: test
    num_bytes: 38640
    num_examples: 100
  - name: train
    num_bytes: 38284
    num_examples: 100
  download_size: 234008884
  dataset_size: 117227
- config_name: ceb
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 22789
    num_examples: 100
  - name: test
    num_bytes: 23950
    num_examples: 100
  - name: train
    num_bytes: 21365
    num_examples: 100
  download_size: 234008884
  dataset_size: 68104
- config_name: ckb
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 214231
    num_examples: 1000
  - name: test
    num_bytes: 211988
    num_examples: 1000
  - name: train
    num_bytes: 217066
    num_examples: 1000
  download_size: 234008884
  dataset_size: 643285
- config_name: co
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 15968
    num_examples: 100
  - name: test
    num_bytes: 15880
    num_examples: 100
  - name: train
    num_bytes: 18032
    num_examples: 100
  download_size: 234008884
  dataset_size: 49880
- config_name: crh
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 20230
    num_examples: 100
  - name: test
    num_bytes: 23879
    num_examples: 100
  - name: train
    num_bytes: 23336
    num_examples: 100
  download_size: 234008884
  dataset_size: 67445
- config_name: cs
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2456654
    num_examples: 10000
  - name: test
    num_bytes: 2458155
    num_examples: 10000
  - name: train
    num_bytes: 4944758
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9859567
- config_name: csb
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 28841
    num_examples: 100
  - name: test
    num_bytes: 27840
    num_examples: 100
  - name: train
    num_bytes: 31640
    num_examples: 100
  download_size: 234008884
  dataset_size: 88321
- config_name: cv
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 24787
    num_examples: 100
  - name: test
    num_bytes: 26403
    num_examples: 100
  - name: train
    num_bytes: 26956
    num_examples: 100
  download_size: 234008884
  dataset_size: 78146
- config_name: cy
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 228586
    num_examples: 1000
  - name: test
    num_bytes: 233869
    num_examples: 1000
  - name: train
    num_bytes: 2337116
    num_examples: 10000
  download_size: 234008884
  dataset_size: 2799571
- config_name: da
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2422976
    num_examples: 10000
  - name: test
    num_bytes: 2432324
    num_examples: 10000
  - name: train
    num_bytes: 4882222
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9737522
- config_name: de
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2754550
    num_examples: 10000
  - name: test
    num_bytes: 2750996
    num_examples: 10000
  - name: train
    num_bytes: 5510641
    num_examples: 20000
  download_size: 234008884
  dataset_size: 11016187
- config_name: diq
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 24147
    num_examples: 100
  - name: test
    num_bytes: 22476
    num_examples: 100
  - name: train
    num_bytes: 24131
    num_examples: 100
  download_size: 234008884
  dataset_size: 70754
- config_name: dv
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 30322
    num_examples: 100
  - name: test
    num_bytes: 27279
    num_examples: 100
  - name: train
    num_bytes: 31033
    num_examples: 100
  download_size: 234008884
  dataset_size: 88634
- config_name: el
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 3027962
    num_examples: 10000
  - name: test
    num_bytes: 3034329
    num_examples: 10000
  - name: train
    num_bytes: 6046638
    num_examples: 20000
  download_size: 234008884
  dataset_size: 12108929
- config_name: eml
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 30050
    num_examples: 100
  - name: test
    num_bytes: 35880
    num_examples: 100
  - name: train
    num_bytes: 30792
    num_examples: 100
  download_size: 234008884
  dataset_size: 96722
- config_name: en
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2336353
    num_examples: 10000
  - name: test
    num_bytes: 2330245
    num_examples: 10000
  - name: train
    num_bytes: 4649601
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9316199
- config_name: eo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 1968690
    num_examples: 10000
  - name: test
    num_bytes: 1961486
    num_examples: 10000
  - name: train
    num_bytes: 2952610
    num_examples: 15000
  download_size: 234008884
  dataset_size: 6882786
- config_name: es
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 1976935
    num_examples: 10000
  - name: test
    num_bytes: 1986664
    num_examples: 10000
  - name: train
    num_bytes: 3972292
    num_examples: 20000
  download_size: 234008884
  dataset_size: 7935891
- config_name: et
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2403361
    num_examples: 10000
  - name: test
    num_bytes: 2392424
    num_examples: 10000
  - name: train
    num_bytes: 3579264
    num_examples: 15000
  download_size: 234008884
  dataset_size: 8375049
- config_name: eu
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2677036
    num_examples: 10000
  - name: test
    num_bytes: 2628951
    num_examples: 10000
  - name: train
    num_bytes: 2672353
    num_examples: 10000
  download_size: 234008884
  dataset_size: 7978340
- config_name: ext
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 30821
    num_examples: 100
  - name: test
    num_bytes: 29483
    num_examples: 100
  - name: train
    num_bytes: 23110
    num_examples: 100
  download_size: 234008884
  dataset_size: 83414
- config_name: fa
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2328640
    num_examples: 10000
  - name: test
    num_bytes: 2314687
    num_examples: 10000
  - name: train
    num_bytes: 4618098
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9261425
- config_name: fi
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2500586
    num_examples: 10000
  - name: test
    num_bytes: 2505161
    num_examples: 10000
  - name: train
    num_bytes: 5020655
    num_examples: 20000
  download_size: 234008884
  dataset_size: 10026402
- config_name: fiu-vro
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 27672
    num_examples: 100
  - name: test
    num_bytes: 27728
    num_examples: 100
  - name: train
    num_bytes: 28689
    num_examples: 100
  download_size: 234008884
  dataset_size: 84089
- config_name: fo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 26094
    num_examples: 100
  - name: test
    num_bytes: 23531
    num_examples: 100
  - name: train
    num_bytes: 26178
    num_examples: 100
  download_size: 234008884
  dataset_size: 75803
- config_name: fr
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2058004
    num_examples: 10000
  - name: test
    num_bytes: 2073593
    num_examples: 10000
  - name: train
    num_bytes: 4123995
    num_examples: 20000
  download_size: 234008884
  dataset_size: 8255592
- config_name: frr
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 15883
    num_examples: 100
  - name: test
    num_bytes: 15736
    num_examples: 100
  - name: train
    num_bytes: 16654
    num_examples: 100
  download_size: 234008884
  dataset_size: 48273
- config_name: fur
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 25264
    num_examples: 100
  - name: test
    num_bytes: 30562
    num_examples: 100
  - name: train
    num_bytes: 33654
    num_examples: 100
  download_size: 234008884
  dataset_size: 89480
- config_name: fy
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 226436
    num_examples: 1000
  - name: test
    num_bytes: 229700
    num_examples: 1000
  - name: train
    num_bytes: 223013
    num_examples: 1000
  download_size: 234008884
  dataset_size: 679149
- config_name: ga
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 234092
    num_examples: 1000
  - name: test
    num_bytes: 235083
    num_examples: 1000
  - name: train
    num_bytes: 238047
    num_examples: 1000
  download_size: 234008884
  dataset_size: 707222
- config_name: gan
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 17533
    num_examples: 100
  - name: test
    num_bytes: 13879
    num_examples: 100
  - name: train
    num_bytes: 14398
    num_examples: 100
  download_size: 234008884
  dataset_size: 45810
- config_name: gd
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 23230
    num_examples: 100
  - name: test
    num_bytes: 20308
    num_examples: 100
  - name: train
    num_bytes: 20154
    num_examples: 100
  download_size: 234008884
  dataset_size: 63692
- config_name: gl
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2029683
    num_examples: 10000
  - name: test
    num_bytes: 2031150
    num_examples: 10000
  - name: train
    num_bytes: 3030993
    num_examples: 15000
  download_size: 234008884
  dataset_size: 7091826
- config_name: gn
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 29132
    num_examples: 100
  - name: test
    num_bytes: 24263
    num_examples: 100
  - name: train
    num_bytes: 28220
    num_examples: 100
  download_size: 234008884
  dataset_size: 81615
- config_name: gu
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 48009
    num_examples: 100
  - name: test
    num_bytes: 45417
    num_examples: 100
  - name: train
    num_bytes: 42625
    num_examples: 100
  download_size: 234008884
  dataset_size: 136051
- config_name: hak
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 17977
    num_examples: 100
  - name: test
    num_bytes: 18155
    num_examples: 100
  - name: train
    num_bytes: 16208
    num_examples: 100
  download_size: 234008884
  dataset_size: 52340
- config_name: he
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2801392
    num_examples: 10000
  - name: test
    num_bytes: 2785474
    num_examples: 10000
  - name: train
    num_bytes: 5600488
    num_examples: 20000
  download_size: 234008884
  dataset_size: 11187354
- config_name: hi
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 261207
    num_examples: 1000
  - name: test
    num_bytes: 267255
    num_examples: 1000
  - name: train
    num_bytes: 1315829
    num_examples: 5000
  download_size: 234008884
  dataset_size: 1844291
- config_name: hr
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2417450
    num_examples: 10000
  - name: test
    num_bytes: 2430440
    num_examples: 10000
  - name: train
    num_bytes: 4877331
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9725221
- config_name: hsb
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 24695
    num_examples: 100
  - name: test
    num_bytes: 24348
    num_examples: 100
  - name: train
    num_bytes: 24228
    num_examples: 100
  download_size: 234008884
  dataset_size: 73271
- config_name: hu
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2590116
    num_examples: 10000
  - name: test
    num_bytes: 2626771
    num_examples: 10000
  - name: train
    num_bytes: 5263122
    num_examples: 20000
  download_size: 234008884
  dataset_size: 10480009
- config_name: hy
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 237560
    num_examples: 1000
  - name: test
    num_bytes: 237121
    num_examples: 1000
  - name: train
    num_bytes: 3634065
    num_examples: 15000
  download_size: 234008884
  dataset_size: 4108746
- config_name: ia
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 32064
    num_examples: 100
  - name: test
    num_bytes: 37617
    num_examples: 100
  - name: train
    num_bytes: 32928
    num_examples: 100
  download_size: 234008884
  dataset_size: 102609
- config_name: id
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 1901625
    num_examples: 10000
  - name: test
    num_bytes: 1902732
    num_examples: 10000
  - name: train
    num_bytes: 3814047
    num_examples: 20000
  download_size: 234008884
  dataset_size: 7618404
- config_name: ig
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 17721
    num_examples: 100
  - name: test
    num_bytes: 18432
    num_examples: 100
  - name: train
    num_bytes: 15988
    num_examples: 100
  download_size: 234008884
  dataset_size: 52141
- config_name: ilo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 16675
    num_examples: 100
  - name: test
    num_bytes: 17245
    num_examples: 100
  - name: train
    num_bytes: 17152
    num_examples: 100
  download_size: 234008884
  dataset_size: 51072
- config_name: io
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 19026
    num_examples: 100
  - name: test
    num_bytes: 17231
    num_examples: 100
  - name: train
    num_bytes: 20781
    num_examples: 100
  download_size: 234008884
  dataset_size: 57038
- config_name: is
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 243667
    num_examples: 1000
  - name: test
    num_bytes: 235946
    num_examples: 1000
  - name: train
    num_bytes: 243465
    num_examples: 1000
  download_size: 234008884
  dataset_size: 723078
- config_name: it
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2282947
    num_examples: 10000
  - name: test
    num_bytes: 2307618
    num_examples: 10000
  - name: train
    num_bytes: 4633575
    num_examples: 20000
  download_size: 234008884
  dataset_size: 9224140
- config_name: ja
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 6775608
    num_examples: 10000
  - name: test
    num_bytes: 6898538
    num_examples: 10000
  - name: train
    num_bytes: 13578325
    num_examples: 20000
  download_size: 234008884
  dataset_size: 27252471
- config_name: jbo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 15618
    num_examples: 100
  - name: test
    num_bytes: 19586
    num_examples: 100
  - name: train
    num_bytes: 15070
    num_examples: 100
  download_size: 234008884
  dataset_size: 50274
- config_name: jv
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 17691
    num_examples: 100
  - name: test
    num_bytes: 20203
    num_examples: 100
  - name: train
    num_bytes: 19409
    num_examples: 100
  download_size: 234008884
  dataset_size: 57303
- config_name: ka
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 3454381
    num_examples: 10000
  - name: test
    num_bytes: 3480870
    num_examples: 10000
  - name: train
    num_bytes: 3428008
    num_examples: 10000
  download_size: 234008884
  dataset_size: 10363259
- config_name: kk
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 286502
    num_examples: 1000
  - name: test
    num_bytes: 284503
    num_examples: 1000
  - name: train
    num_bytes: 287952
    num_examples: 1000
  download_size: 234008884
  dataset_size: 858957
- config_name: km
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 29310
    num_examples: 100
  - name: test
    num_bytes: 36101
    num_examples: 100
  - name: train
    num_bytes: 31938
    num_examples: 100
  download_size: 234008884
  dataset_size: 97349
- config_name: kn
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 36853
    num_examples: 100
  - name: test
    num_bytes: 32278
    num_examples: 100
  - name: train
    num_bytes: 34346
    num_examples: 100
  download_size: 234008884
  dataset_size: 103477
- config_name: ko
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2553068
    num_examples: 10000
  - name: test
    num_bytes: 2547800
    num_examples: 10000
  - name: train
    num_bytes: 5107090
    num_examples: 20000
  download_size: 234008884
  dataset_size: 10207958
- config_name: ksh
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 26338
    num_examples: 100
  - name: test
    num_bytes: 25249
    num_examples: 100
  - name: train
    num_bytes: 25941
    num_examples: 100
  download_size: 234008884
  dataset_size: 77528
- config_name: ku
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 22597
    num_examples: 100
  - name: test
    num_bytes: 20795
    num_examples: 100
  - name: train
    num_bytes: 22669
    num_examples: 100
  download_size: 234008884
  dataset_size: 66061
- config_name: ky
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 31010
    num_examples: 100
  - name: test
    num_bytes: 31896
    num_examples: 100
  - name: train
    num_bytes: 32768
    num_examples: 100
  download_size: 234008884
  dataset_size: 95674
- config_name: la
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 207205
    num_examples: 1000
  - name: test
    num_bytes: 198910
    num_examples: 1000
  - name: train
    num_bytes: 999050
    num_examples: 5000
  download_size: 234008884
  dataset_size: 1405165
- config_name: lb
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 253774
    num_examples: 1000
  - name: test
    num_bytes: 249989
    num_examples: 1000
  - name: train
    num_bytes: 1260939
    num_examples: 5000
  download_size: 234008884
  dataset_size: 1764702
- config_name: li
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 20201
    num_examples: 100
  - name: test
    num_bytes: 18817
    num_examples: 100
  - name: train
    num_bytes: 20211
    num_examples: 100
  download_size: 234008884
  dataset_size: 59229
- config_name: lij
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 28005
    num_examples: 100
  - name: test
    num_bytes: 27882
    num_examples: 100
  - name: train
    num_bytes: 30581
    num_examples: 100
  download_size: 234008884
  dataset_size: 86468
- config_name: lmo
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 26575
    num_examples: 100
  - name: test
    num_bytes: 29453
    num_examples: 100
  - name: train
    num_bytes: 24161
    num_examples: 100
  download_size: 234008884
  dataset_size: 80189
- config_name: ln
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 21709
    num_examples: 100
  - name: test
    num_bytes: 27003
    num_examples: 100
  - name: train
    num_bytes: 22227
    num_examples: 100
  download_size: 234008884
  dataset_size: 70939
- config_name: lt
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2192874
    num_examples: 10000
  - name: test
    num_bytes: 2191269
    num_examples: 10000
  - name: train
    num_bytes: 2199946
    num_examples: 10000
  download_size: 234008884
  dataset_size: 6584089
- config_name: lv
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 2173420
    num_examples: 10000
  - name: test
    num_bytes: 2190458
    num_examples: 10000
  - name: train
    num_bytes: 2206943
    num_examples: 10000
  download_size: 234008884
  dataset_size: 6570821
- config_name: map-bms
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 19780
    num_examples: 100
  - name: test
    num_bytes: 20558
    num_examples: 100
  - name: train
    num_bytes: 21639
    num_examples: 100
  download_size: 234008884
  dataset_size: 61977
- config_name: mg
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 24861
    num_examples: 100
  - name: test
    num_bytes: 22570
    num_examples: 100
  - name: train
    num_bytes: 25739
    num_examples: 100
  download_size: 234008884
  dataset_size: 73170
- config_name: mhr
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 23263
    num_examples: 100
  - name: test
    num_bytes: 23639
    num_examples: 100
  - name: train
    num_bytes: 18648
    num_examples: 100
  download_size: 234008884
  dataset_size: 65550
- config_name: mi
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 39399
    num_examples: 100
  - name: test
    num_bytes: 40147
    num_examples: 100
  - name: train
    num_bytes: 37896
    num_examples: 100
  download_size: 234008884
  dataset_size: 117442
- config_name: min
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 28719
    num_examples: 100
  - name: test
    num_bytes: 24741
    num_examples: 100
  - name: train
    num_bytes: 26620
    num_examples: 100
  download_size: 234008884
  dataset_size: 80080
- config_name: mk
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: validation
    num_bytes: 333193
    num_examples: 1000
  - name: test
    num_bytes: 337757
    num_examples: 1000
  - name:
train num_bytes: 3355936 num_examples: 10000 download_size: 234008884 dataset_size: 4026886 - config_name: ml features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 363008 num_examples: 1000 - name: test num_bytes: 349383 num_examples: 1000 - name: train num_bytes: 3582066 num_examples: 10000 download_size: 234008884 dataset_size: 4294457 - config_name: mn features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22006 num_examples: 100 - name: test num_bytes: 23538 num_examples: 100 - name: train num_bytes: 23244 num_examples: 100 download_size: 234008884 dataset_size: 68788 - config_name: mr features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 314858 num_examples: 1000 - name: test num_bytes: 326290 num_examples: 1000 - name: train num_bytes: 1598804 num_examples: 5000 download_size: 234008884 dataset_size: 2239952 - config_name: ms features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 183944 num_examples: 1000 - name: test num_bytes: 183539 num_examples: 1000 - name: train num_bytes: 3699238 num_examples: 20000 download_size: 234008884 dataset_size: 4066721 - config_name: mt features: - name: tokens sequence: string - name: ner_tags sequence: 
class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 24571 num_examples: 100 - name: test num_bytes: 24662 num_examples: 100 - name: train num_bytes: 24956 num_examples: 100 download_size: 234008884 dataset_size: 74189 - config_name: mwl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 51987 num_examples: 100 - name: test num_bytes: 43008 num_examples: 100 - name: train num_bytes: 44605 num_examples: 100 download_size: 234008884 dataset_size: 139600 - config_name: my features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 48953 num_examples: 100 - name: test num_bytes: 45956 num_examples: 100 - name: train num_bytes: 41371 num_examples: 100 download_size: 234008884 dataset_size: 136280 - config_name: mzn features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 25304 num_examples: 100 - name: test num_bytes: 25947 num_examples: 100 - name: train num_bytes: 24841 num_examples: 100 download_size: 234008884 dataset_size: 76092 - config_name: nap features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 21546 
num_examples: 100 - name: test num_bytes: 24194 num_examples: 100 - name: train num_bytes: 26596 num_examples: 100 download_size: 234008884 dataset_size: 72336 - config_name: nds features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 28388 num_examples: 100 - name: test num_bytes: 26571 num_examples: 100 - name: train num_bytes: 24679 num_examples: 100 download_size: 234008884 dataset_size: 79638 - config_name: ne features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 33932 num_examples: 100 - name: test num_bytes: 33227 num_examples: 100 - name: train num_bytes: 36173 num_examples: 100 download_size: 234008884 dataset_size: 103332 - config_name: nl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2378080 num_examples: 10000 - name: test num_bytes: 2403076 num_examples: 10000 - name: train num_bytes: 4784289 num_examples: 20000 download_size: 234008884 dataset_size: 9565445 - config_name: nn features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 274140 num_examples: 1000 - name: test num_bytes: 269631 num_examples: 1000 - name: train num_bytes: 5436185 num_examples: 20000 download_size: 234008884 dataset_size: 5979956 - config_name: 'no' features: 
- name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2576669 num_examples: 10000 - name: test num_bytes: 2563559 num_examples: 10000 - name: train num_bytes: 5139548 num_examples: 20000 download_size: 234008884 dataset_size: 10279776 - config_name: nov features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 14856 num_examples: 100 - name: test num_bytes: 14830 num_examples: 100 - name: train num_bytes: 17270 num_examples: 100 download_size: 234008884 dataset_size: 46956 - config_name: oc features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 20428 num_examples: 100 - name: test num_bytes: 18600 num_examples: 100 - name: train num_bytes: 19319 num_examples: 100 download_size: 234008884 dataset_size: 58347 - config_name: or features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 32131 num_examples: 100 - name: test num_bytes: 29508 num_examples: 100 - name: train num_bytes: 27822 num_examples: 100 download_size: 234008884 dataset_size: 89461 - config_name: os features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans 
sequence: string splits: - name: validation num_bytes: 26779 num_examples: 100 - name: test num_bytes: 25995 num_examples: 100 - name: train num_bytes: 26033 num_examples: 100 download_size: 234008884 dataset_size: 78807 - config_name: pa features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 25230 num_examples: 100 - name: test num_bytes: 23708 num_examples: 100 - name: train num_bytes: 24171 num_examples: 100 download_size: 234008884 dataset_size: 73109 - config_name: pdc features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 24419 num_examples: 100 - name: test num_bytes: 24674 num_examples: 100 - name: train num_bytes: 23991 num_examples: 100 download_size: 234008884 dataset_size: 73084 - config_name: pl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2448324 num_examples: 10000 - name: test num_bytes: 2463783 num_examples: 10000 - name: train num_bytes: 4851527 num_examples: 20000 download_size: 234008884 dataset_size: 9763634 - config_name: pms features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 28369 num_examples: 100 - name: test num_bytes: 24015 num_examples: 100 - name: train num_bytes: 27429 num_examples: 100 download_size: 
234008884 dataset_size: 79813 - config_name: pnb features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 19070 num_examples: 100 - name: test num_bytes: 21206 num_examples: 100 - name: train num_bytes: 19504 num_examples: 100 download_size: 234008884 dataset_size: 59780 - config_name: ps features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 49901 num_examples: 100 - name: test num_bytes: 43621 num_examples: 100 - name: train num_bytes: 63501 num_examples: 100 download_size: 234008884 dataset_size: 157023 - config_name: pt features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 1962145 num_examples: 10000 - name: test num_bytes: 1946729 num_examples: 10000 - name: train num_bytes: 3917453 num_examples: 20000 download_size: 234008884 dataset_size: 7826327 - config_name: qu features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 18231 num_examples: 100 - name: test num_bytes: 17675 num_examples: 100 - name: train num_bytes: 16989 num_examples: 100 download_size: 234008884 dataset_size: 52895 - config_name: rm features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC 
'6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 32776 num_examples: 100 - name: test num_bytes: 35880 num_examples: 100 - name: train num_bytes: 30489 num_examples: 100 download_size: 234008884 dataset_size: 99145 - config_name: ro features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2063860 num_examples: 10000 - name: test num_bytes: 2060933 num_examples: 10000 - name: train num_bytes: 4179869 num_examples: 20000 download_size: 234008884 dataset_size: 8304662 - config_name: ru features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2574546 num_examples: 10000 - name: test num_bytes: 2597248 num_examples: 10000 - name: train num_bytes: 5175665 num_examples: 20000 download_size: 234008884 dataset_size: 10347459 - config_name: rw features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 17999 num_examples: 100 - name: test num_bytes: 14445 num_examples: 100 - name: train num_bytes: 16778 num_examples: 100 download_size: 234008884 dataset_size: 49222 - config_name: sa features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 45721 num_examples: 100 - name: test num_bytes: 49209 num_examples: 100 - 
name: train num_bytes: 52504 num_examples: 100 download_size: 234008884 dataset_size: 147434 - config_name: sah features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 27875 num_examples: 100 - name: test num_bytes: 26853 num_examples: 100 - name: train num_bytes: 27041 num_examples: 100 download_size: 234008884 dataset_size: 81769 - config_name: scn features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 20105 num_examples: 100 - name: test num_bytes: 17384 num_examples: 100 - name: train num_bytes: 21032 num_examples: 100 download_size: 234008884 dataset_size: 58521 - config_name: sco features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22215 num_examples: 100 - name: test num_bytes: 21589 num_examples: 100 - name: train num_bytes: 20308 num_examples: 100 download_size: 234008884 dataset_size: 64112 - config_name: sd features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 51555 num_examples: 100 - name: test num_bytes: 38534 num_examples: 100 - name: train num_bytes: 56925 num_examples: 100 download_size: 234008884 dataset_size: 147014 - config_name: sh features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': 
B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 1789918 num_examples: 10000 - name: test num_bytes: 1791491 num_examples: 10000 - name: train num_bytes: 3583633 num_examples: 20000 download_size: 234008884 dataset_size: 7165042 - config_name: si features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 30845 num_examples: 100 - name: test num_bytes: 29341 num_examples: 100 - name: train num_bytes: 31255 num_examples: 100 download_size: 234008884 dataset_size: 91441 - config_name: simple features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 247147 num_examples: 1000 - name: test num_bytes: 245358 num_examples: 1000 - name: train num_bytes: 4921916 num_examples: 20000 download_size: 234008884 dataset_size: 5414421 - config_name: sk features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2342061 num_examples: 10000 - name: test num_bytes: 2335009 num_examples: 10000 - name: train num_bytes: 4701553 num_examples: 20000 download_size: 234008884 dataset_size: 9378623 - config_name: sl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2090247 
num_examples: 10000 - name: test num_bytes: 2133491 num_examples: 10000 - name: train num_bytes: 3158676 num_examples: 15000 download_size: 234008884 dataset_size: 7382414 - config_name: so features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 21864 num_examples: 100 - name: test num_bytes: 17219 num_examples: 100 - name: train num_bytes: 23780 num_examples: 100 download_size: 234008884 dataset_size: 62863 - config_name: sq features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 210888 num_examples: 1000 - name: test num_bytes: 209824 num_examples: 1000 - name: train num_bytes: 1052387 num_examples: 5000 download_size: 234008884 dataset_size: 1473099 - config_name: sr features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2548390 num_examples: 10000 - name: test num_bytes: 2564831 num_examples: 10000 - name: train num_bytes: 5105569 num_examples: 20000 download_size: 234008884 dataset_size: 10218790 - config_name: su features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22605 num_examples: 100 - name: test num_bytes: 21861 num_examples: 100 - name: train num_bytes: 20839 num_examples: 100 download_size: 234008884 dataset_size: 65305 - config_name: sv 
features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2678672 num_examples: 10000 - name: test num_bytes: 2719077 num_examples: 10000 - name: train num_bytes: 5395722 num_examples: 20000 download_size: 234008884 dataset_size: 10793471 - config_name: sw features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 168819 num_examples: 1000 - name: test num_bytes: 172693 num_examples: 1000 - name: train num_bytes: 168749 num_examples: 1000 download_size: 234008884 dataset_size: 510261 - config_name: szl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 19397 num_examples: 100 - name: test num_bytes: 18967 num_examples: 100 - name: train num_bytes: 17646 num_examples: 100 download_size: 234008884 dataset_size: 56010 - config_name: ta features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 354957 num_examples: 1000 - name: test num_bytes: 357667 num_examples: 1000 - name: train num_bytes: 5275759 num_examples: 15000 download_size: 234008884 dataset_size: 5988383 - config_name: te features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs 
sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 356189 num_examples: 1000 - name: test num_bytes: 359780 num_examples: 1000 - name: train num_bytes: 358792 num_examples: 1000 download_size: 234008884 dataset_size: 1074761 - config_name: tg features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 27130 num_examples: 100 - name: test num_bytes: 28821 num_examples: 100 - name: train num_bytes: 27200 num_examples: 100 download_size: 234008884 dataset_size: 83151 - config_name: th features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 14189743 num_examples: 10000 - name: test num_bytes: 14505054 num_examples: 10000 - name: train num_bytes: 28968916 num_examples: 20000 download_size: 234008884 dataset_size: 57663713 - config_name: tk features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 21611 num_examples: 100 - name: test num_bytes: 20302 num_examples: 100 - name: train num_bytes: 19521 num_examples: 100 download_size: 234008884 dataset_size: 61434 - config_name: tl features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 148682 num_examples: 1000 - name: test num_bytes: 152964 num_examples: 1000 - name: train num_bytes: 
1518784 num_examples: 10000 download_size: 234008884 dataset_size: 1820430 - config_name: tr features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2280517 num_examples: 10000 - name: test num_bytes: 2276920 num_examples: 10000 - name: train num_bytes: 4501912 num_examples: 20000 download_size: 234008884 dataset_size: 9059349 - config_name: tt features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 282535 num_examples: 1000 - name: test num_bytes: 282691 num_examples: 1000 - name: train num_bytes: 283392 num_examples: 1000 download_size: 234008884 dataset_size: 848618 - config_name: ug features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 35219 num_examples: 100 - name: test num_bytes: 31129 num_examples: 100 - name: train num_bytes: 26620 num_examples: 100 download_size: 234008884 dataset_size: 92968 - config_name: uk features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 2934897 num_examples: 10000 - name: test num_bytes: 2928200 num_examples: 10000 - name: train num_bytes: 5928026 num_examples: 20000 download_size: 234008884 dataset_size: 11791123 - config_name: ur features: - name: tokens sequence: string - name: ner_tags sequence: class_label: 
names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 203747 num_examples: 1000 - name: test num_bytes: 203138 num_examples: 1000 - name: train num_bytes: 4108707 num_examples: 20000 download_size: 234008884 dataset_size: 4515592 - config_name: uz features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 184625 num_examples: 1000 - name: test num_bytes: 184713 num_examples: 1000 - name: train num_bytes: 186105 num_examples: 1000 download_size: 234008884 dataset_size: 555443 - config_name: vec features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 19335 num_examples: 100 - name: test num_bytes: 20254 num_examples: 100 - name: train num_bytes: 20437 num_examples: 100 download_size: 234008884 dataset_size: 60026 - config_name: vep features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22306 num_examples: 100 - name: test num_bytes: 21371 num_examples: 100 - name: train num_bytes: 21387 num_examples: 100 download_size: 234008884 dataset_size: 65064 - config_name: vi features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 1944856 
num_examples: 10000 - name: test num_bytes: 1960024 num_examples: 10000 - name: train num_bytes: 3915944 num_examples: 20000 download_size: 234008884 dataset_size: 7820824 - config_name: vls features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 27895 num_examples: 100 - name: test num_bytes: 26778 num_examples: 100 - name: train num_bytes: 26183 num_examples: 100 download_size: 234008884 dataset_size: 80856 - config_name: vo features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 14385 num_examples: 100 - name: test num_bytes: 14001 num_examples: 100 - name: train num_bytes: 14442 num_examples: 100 download_size: 234008884 dataset_size: 42828 - config_name: wa features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 22493 num_examples: 100 - name: test num_bytes: 21581 num_examples: 100 - name: train num_bytes: 23072 num_examples: 100 download_size: 234008884 dataset_size: 67146 - config_name: war features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 16834 num_examples: 100 - name: test num_bytes: 19912 num_examples: 100 - name: train num_bytes: 18829 num_examples: 100 download_size: 234008884 dataset_size: 55575 - config_name: wuu features: - name: 
tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 15123 num_examples: 100 - name: test num_bytes: 15067 num_examples: 100 - name: train num_bytes: 17016 num_examples: 100 download_size: 234008884 dataset_size: 47206 - config_name: xmf features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 39979 num_examples: 100 - name: test num_bytes: 36081 num_examples: 100 - name: train num_bytes: 31796 num_examples: 100 download_size: 234008884 dataset_size: 107856 - config_name: yi features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 25269 num_examples: 100 - name: test num_bytes: 25005 num_examples: 100 - name: train num_bytes: 27303 num_examples: 100 download_size: 234008884 dataset_size: 77577 - config_name: yo features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 17738 num_examples: 100 - name: test num_bytes: 17996 num_examples: 100 - name: train num_bytes: 18984 num_examples: 100 download_size: 234008884 dataset_size: 54718 - config_name: zea features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string 
splits: - name: validation num_bytes: 24916 num_examples: 100 - name: test num_bytes: 22997 num_examples: 100 - name: train num_bytes: 21252 num_examples: 100 download_size: 234008884 dataset_size: 69165 - config_name: zh features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 4839728 num_examples: 10000 - name: test num_bytes: 4709458 num_examples: 10000 - name: train num_bytes: 9524981 num_examples: 20000 download_size: 234008884 dataset_size: 19074167 - config_name: zh-classical features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 59980 num_examples: 100 - name: test num_bytes: 65885 num_examples: 100 - name: train num_bytes: 56238 num_examples: 100 download_size: 234008884 dataset_size: 182103 - config_name: zh-min-nan features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 24533 num_examples: 100 - name: test num_bytes: 24326 num_examples: 100 - name: train num_bytes: 19358 num_examples: 100 download_size: 234008884 dataset_size: 68217 - config_name: zh-yue features: - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-PER '2': I-PER '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC - name: langs sequence: string - name: spans sequence: string splits: - name: validation num_bytes: 4934158 num_examples: 10000 - name: test num_bytes: 4964029 num_examples: 10000 - name: train num_bytes: 9950629 num_examples: 20000 
download_size: 234008884 dataset_size: 19848816 --- # Dataset Card for WikiANN ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Massively Multilingual Transfer for NER](https://github.com/afshinrahimi/mmner) - **Repository:** [Massively Multilingual Transfer for NER](https://github.com/afshinrahimi/mmner) - **Paper:** The original datasets come from the _Cross-lingual name tagging and linking for 282 languages_ [paper](https://www.aclweb.org/anthology/P17-1178/) by Xiaoman Pan et al. (2017). This version corresponds to the balanced train, dev, and test splits of the original data from the _Massively Multilingual Transfer for NER_ [paper](https://arxiv.org/abs/1902.00193) by Afshin Rahimi et al. (2019). 
- **Leaderboard:** - **Point of Contact:** [Afshin Rahimi](mailto:afshinrahimi@gmail.com) or [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com) or [Albert Villanova del Moral](albert@huggingface.co) ### Dataset Summary WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 176 of the 282 languages from the original WikiANN corpus. ### Supported Tasks and Leaderboards - `named-entity-recognition`: The dataset can be used to train a model for named entity recognition in many languages, or evaluate the zero-shot cross-lingual capabilities of multilingual models. ### Languages The dataset contains 176 languages, one in each of the configuration subsets. The corresponding BCP 47 language tags are: | | Language tag | |:-------------------|:---------------| | ace | ace | | af | af | | als | als | | am | am | | an | an | | ang | ang | | ar | ar | | arc | arc | | arz | arz | | as | as | | ast | ast | | ay | ay | | az | az | | ba | ba | | bar | bar | | be | be | | bg | bg | | bh | bh | | bn | bn | | bo | bo | | br | br | | bs | bs | | ca | ca | | cdo | cdo | | ce | ce | | ceb | ceb | | ckb | ckb | | co | co | | crh | crh | | cs | cs | | csb | csb | | cv | cv | | cy | cy | | da | da | | de | de | | diq | diq | | dv | dv | | el | el | | en | en | | eo | eo | | es | es | | et | et | | eu | eu | | ext | ext | | fa | fa | | fi | fi | | fo | fo | | fr | fr | | frr | frr | | fur | fur | | fy | fy | | ga | ga | | gan | gan | | gd | gd | | gl | gl | | gn | gn | | gu | gu | | hak | hak | | he | he | | hi | hi | | hr | hr | | hsb | hsb | | hu | hu | | hy | hy | | ia | ia | | id | id | | ig | ig | | ilo | ilo | | io | io | | is | is | | it | it | | ja | ja | | jbo | jbo | | jv | jv | | ka | ka | | kk | kk | | km | km | | kn | 
kn | | ko | ko | | ksh | ksh | | ku | ku | | ky | ky | | la | la | | lb | lb | | li | li | | lij | lij | | lmo | lmo | | ln | ln | | lt | lt | | lv | lv | | mg | mg | | mhr | mhr | | mi | mi | | min | min | | mk | mk | | ml | ml | | mn | mn | | mr | mr | | ms | ms | | mt | mt | | mwl | mwl | | my | my | | mzn | mzn | | nap | nap | | nds | nds | | ne | ne | | nl | nl | | nn | nn | | no | no | | nov | nov | | oc | oc | | or | or | | os | os | | other-bat-smg | sgs | | other-be-x-old | be-tarask | | other-cbk-zam | cbk | | other-eml | eml | | other-fiu-vro | vro | | other-map-bms | jv-x-bms | | other-simple | en-basiceng | | other-zh-classical | lzh | | other-zh-min-nan | nan | | other-zh-yue | yue | | pa | pa | | pdc | pdc | | pl | pl | | pms | pms | | pnb | pnb | | ps | ps | | pt | pt | | qu | qu | | rm | rm | | ro | ro | | ru | ru | | rw | rw | | sa | sa | | sah | sah | | scn | scn | | sco | sco | | sd | sd | | sh | sh | | si | si | | sk | sk | | sl | sl | | so | so | | sq | sq | | sr | sr | | su | su | | sv | sv | | sw | sw | | szl | szl | | ta | ta | | te | te | | tg | tg | | th | th | | tk | tk | | tl | tl | | tr | tr | | tt | tt | | ug | ug | | uk | uk | | ur | ur | | uz | uz | | vec | vec | | vep | vep | | vi | vi | | vls | vls | | vo | vo | | wa | wa | | war | war | | wuu | wuu | | xmf | xmf | | yi | yi | | yo | yo | | zea | zea | | zh | zh | ## Dataset Structure ### Data Instances This is an example in the "train" split of the "af" (Afrikaans language) configuration subset: ```python { 'tokens': ['Sy', 'ander', 'seun', ',', 'Swjatopolk', ',', 'was', 'die', 'resultaat', 'van', '’n', 'buite-egtelike', 'verhouding', '.'], 'ner_tags': [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'langs': ['af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af'], 'spans': ['PER: Swjatopolk'] } ``` ### Data Fields - `tokens`: a `list` of `string` features. - `langs`: a `list` of `string` features that correspond to the language of each token. 
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6). - `spans`: a `list` of `string` features, that is the list of named entities in the input text formatted as ``<TAG>: <mention>`` ### Data Splits For each configuration subset, the data is split into "train", "validation" and "test" sets, each containing the following number of examples: | | Train | Validation | Test | |:-------------|--------:|-------------:|-------:| | ace | 100 | 100 | 100 | | af | 5000 | 1000 | 1000 | | als | 100 | 100 | 100 | | am | 100 | 100 | 100 | | an | 1000 | 1000 | 1000 | | ang | 100 | 100 | 100 | | ar | 20000 | 10000 | 10000 | | arc | 100 | 100 | 100 | | arz | 100 | 100 | 100 | | as | 100 | 100 | 100 | | ast | 1000 | 1000 | 1000 | | ay | 100 | 100 | 100 | | az | 10000 | 1000 | 1000 | | ba | 100 | 100 | 100 | | bar | 100 | 100 | 100 | | bat-smg | 100 | 100 | 100 | | be | 15000 | 1000 | 1000 | | be-x-old | 5000 | 1000 | 1000 | | bg | 20000 | 10000 | 10000 | | bh | 100 | 100 | 100 | | bn | 10000 | 1000 | 1000 | | bo | 100 | 100 | 100 | | br | 1000 | 1000 | 1000 | | bs | 15000 | 1000 | 1000 | | ca | 20000 | 10000 | 10000 | | cbk-zam | 100 | 100 | 100 | | cdo | 100 | 100 | 100 | | ce | 100 | 100 | 100 | | ceb | 100 | 100 | 100 | | ckb | 1000 | 1000 | 1000 | | co | 100 | 100 | 100 | | crh | 100 | 100 | 100 | | cs | 20000 | 10000 | 10000 | | csb | 100 | 100 | 100 | | cv | 100 | 100 | 100 | | cy | 10000 | 1000 | 1000 | | da | 20000 | 10000 | 10000 | | de | 20000 | 10000 | 10000 | | diq | 100 | 100 | 100 | | dv | 100 | 100 | 100 | | el | 20000 | 10000 | 10000 | | eml | 100 | 100 | 100 | | en | 20000 | 10000 | 10000 | | eo | 15000 | 10000 | 10000 | | es | 20000 | 10000 | 10000 | | et | 15000 | 10000 | 10000 | | eu | 10000 | 10000 | 10000 | | ext | 100 | 100 | 100 | | fa | 20000 | 10000 | 10000 | | fi | 20000 | 10000 | 10000 | | fiu-vro | 100 | 100 | 100 | | fo | 100 | 100 | 100 | | 
fr | 20000 | 10000 | 10000 | | frr | 100 | 100 | 100 | | fur | 100 | 100 | 100 | | fy | 1000 | 1000 | 1000 | | ga | 1000 | 1000 | 1000 | | gan | 100 | 100 | 100 | | gd | 100 | 100 | 100 | | gl | 15000 | 10000 | 10000 | | gn | 100 | 100 | 100 | | gu | 100 | 100 | 100 | | hak | 100 | 100 | 100 | | he | 20000 | 10000 | 10000 | | hi | 5000 | 1000 | 1000 | | hr | 20000 | 10000 | 10000 | | hsb | 100 | 100 | 100 | | hu | 20000 | 10000 | 10000 | | hy | 15000 | 1000 | 1000 | | ia | 100 | 100 | 100 | | id | 20000 | 10000 | 10000 | | ig | 100 | 100 | 100 | | ilo | 100 | 100 | 100 | | io | 100 | 100 | 100 | | is | 1000 | 1000 | 1000 | | it | 20000 | 10000 | 10000 | | ja | 20000 | 10000 | 10000 | | jbo | 100 | 100 | 100 | | jv | 100 | 100 | 100 | | ka | 10000 | 10000 | 10000 | | kk | 1000 | 1000 | 1000 | | km | 100 | 100 | 100 | | kn | 100 | 100 | 100 | | ko | 20000 | 10000 | 10000 | | ksh | 100 | 100 | 100 | | ku | 100 | 100 | 100 | | ky | 100 | 100 | 100 | | la | 5000 | 1000 | 1000 | | lb | 5000 | 1000 | 1000 | | li | 100 | 100 | 100 | | lij | 100 | 100 | 100 | | lmo | 100 | 100 | 100 | | ln | 100 | 100 | 100 | | lt | 10000 | 10000 | 10000 | | lv | 10000 | 10000 | 10000 | | map-bms | 100 | 100 | 100 | | mg | 100 | 100 | 100 | | mhr | 100 | 100 | 100 | | mi | 100 | 100 | 100 | | min | 100 | 100 | 100 | | mk | 10000 | 1000 | 1000 | | ml | 10000 | 1000 | 1000 | | mn | 100 | 100 | 100 | | mr | 5000 | 1000 | 1000 | | ms | 20000 | 1000 | 1000 | | mt | 100 | 100 | 100 | | mwl | 100 | 100 | 100 | | my | 100 | 100 | 100 | | mzn | 100 | 100 | 100 | | nap | 100 | 100 | 100 | | nds | 100 | 100 | 100 | | ne | 100 | 100 | 100 | | nl | 20000 | 10000 | 10000 | | nn | 20000 | 1000 | 1000 | | no | 20000 | 10000 | 10000 | | nov | 100 | 100 | 100 | | oc | 100 | 100 | 100 | | or | 100 | 100 | 100 | | os | 100 | 100 | 100 | | pa | 100 | 100 | 100 | | pdc | 100 | 100 | 100 | | pl | 20000 | 10000 | 10000 | | pms | 100 | 100 | 100 | | pnb | 100 | 100 | 100 | | ps | 100 | 100 | 100 | | pt | 20000 | 
10000 | 10000 | | qu | 100 | 100 | 100 | | rm | 100 | 100 | 100 | | ro | 20000 | 10000 | 10000 | | ru | 20000 | 10000 | 10000 | | rw | 100 | 100 | 100 | | sa | 100 | 100 | 100 | | sah | 100 | 100 | 100 | | scn | 100 | 100 | 100 | | sco | 100 | 100 | 100 | | sd | 100 | 100 | 100 | | sh | 20000 | 10000 | 10000 | | si | 100 | 100 | 100 | | simple | 20000 | 1000 | 1000 | | sk | 20000 | 10000 | 10000 | | sl | 15000 | 10000 | 10000 | | so | 100 | 100 | 100 | | sq | 5000 | 1000 | 1000 | | sr | 20000 | 10000 | 10000 | | su | 100 | 100 | 100 | | sv | 20000 | 10000 | 10000 | | sw | 1000 | 1000 | 1000 | | szl | 100 | 100 | 100 | | ta | 15000 | 1000 | 1000 | | te | 1000 | 1000 | 1000 | | tg | 100 | 100 | 100 | | th | 20000 | 10000 | 10000 | | tk | 100 | 100 | 100 | | tl | 10000 | 1000 | 1000 | | tr | 20000 | 10000 | 10000 | | tt | 1000 | 1000 | 1000 | | ug | 100 | 100 | 100 | | uk | 20000 | 10000 | 10000 | | ur | 20000 | 1000 | 1000 | | uz | 1000 | 1000 | 1000 | | vec | 100 | 100 | 100 | | vep | 100 | 100 | 100 | | vi | 20000 | 10000 | 10000 | | vls | 100 | 100 | 100 | | vo | 100 | 100 | 100 | | wa | 100 | 100 | 100 | | war | 100 | 100 | 100 | | wuu | 100 | 100 | 100 | | xmf | 100 | 100 | 100 | | yi | 100 | 100 | 100 | | yo | 100 | 100 | 100 | | zea | 100 | 100 | 100 | | zh | 20000 | 10000 | 10000 | | zh-classical | 100 | 100 | 100 | | zh-min-nan | 100 | 100 | 100 | | zh-yue | 20000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information The original 282 datasets are associated with this article ``` @inproceedings{pan-etal-2017-cross, title = "Cross-lingual Name Tagging and Linking for 282 Languages", author = "Pan, Xiaoman and Zhang, Boliang and May, Jonathan and Nothman, Joel and Knight, Kevin and Ji, Heng", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1178", doi = "10.18653/v1/P17-1178", pages = "1946--1958", abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {``}silver-standard{''} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. 
Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.", } ``` while the 176 languages supported in this version are associated with the following article ``` @inproceedings{rahimi-etal-2019-massively, title = "Massively Multilingual Transfer for {NER}", author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1015", pages = "151--164", } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun) and [@rabeehk](https://github.com/rabeehk) for adding this dataset.
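As a usage sketch, the `spans` strings shown in the data instance can be rebuilt from `tokens` and the integer `ner_tags`. The label list below mirrors the `class_label` names in the dataset configs (the `datasets` library exposes the same mapping via the feature's `int2str`); the decoder itself is illustrative and not part of the dataset loader:

```python
# IOB2 label list, matching the class_label names in the dataset configs.
LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def extract_spans(tokens, ner_tags):
    """Decode IOB2 tag ids into '<TAG>: <mention>' strings like the `spans` field."""
    spans, current_type, current_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tags):
        label = LABELS[tag_id]
        if label.startswith("B-"):
            if current_type:  # flush the previous entity
                spans.append(f"{current_type}: {' '.join(current_tokens)}")
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # 'O', or a stray I- tag without a matching B-, treated as O here
            if current_type:
                spans.append(f"{current_type}: {' '.join(current_tokens)}")
            current_type, current_tokens = None, []
    if current_type:  # flush a trailing entity
        spans.append(f"{current_type}: {' '.join(current_tokens)}")
    return spans

# First tokens of the Afrikaans example above:
tokens = ["Sy", "ander", "seun", ",", "Swjatopolk", ",", "was"]
print(extract_spans(tokens, [0, 0, 0, 0, 1, 0, 0]))  # ['PER: Swjatopolk']
```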
wikicorpus
--- pretty_name: Wikicorpus annotations_creators: - machine-generated - no-annotation language_creators: - found language: - ca - en - es license: - gfdl multilinguality: - monolingual size_categories: - 100K<n<1M - 10M<n<100M - 1M<n<10M source_datasets: - original task_categories: - fill-mask - text-classification - text-generation - token-classification task_ids: - language-modeling - masked-language-modeling - part-of-speech paperswithcode_id: null configs: - raw_ca - raw_en - raw_es - tagged_ca - tagged_en - tagged_es tags: - word-sense-disambiguation - lemmatization dataset_info: - config_name: raw_ca features: - name: id dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 263170192 num_examples: 143883 download_size: 96437841 dataset_size: 263170192 - config_name: raw_es features: - name: id dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 671295359 num_examples: 259409 download_size: 252926918 dataset_size: 671295359 - config_name: raw_en features: - name: id dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 3388801074 num_examples: 1359146 download_size: 1346378932 dataset_size: 3388801074 - config_name: tagged_ca features: - name: id dtype: string - name: title dtype: string - name: sentence sequence: string - name: lemmas sequence: string - name: pos_tags sequence: string - name: wordnet_senses sequence: string splits: - name: train num_bytes: 1666129919 num_examples: 2016221 download_size: 226390380 dataset_size: 1666129919 - config_name: tagged_es features: - name: id dtype: string - name: title dtype: string - name: sentence sequence: string - name: lemmas sequence: string - name: pos_tags sequence: string - name: wordnet_senses sequence: string splits: - name: train num_bytes: 4100040390 num_examples: 5039367 download_size: 604910899 dataset_size: 4100040390 - config_name: tagged_en features: - 
name: id dtype: string - name: title dtype: string - name: sentence sequence: string - name: lemmas sequence: string - name: pos_tags sequence: string - name: wordnet_senses sequence: string splits: - name: train num_bytes: 18077275300 num_examples: 26350272 download_size: 2477450893 dataset_size: 18077275300 --- # Dataset Card for Wikicorpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.cs.upc.edu/~nlp/wikicorpus/ - **Repository:** - **Paper:** https://www.cs.upc.edu/~nlp/papers/reese10.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. 
Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Each sub-dataset is monolingual in the languages: - ca: Catalan - en: English - es: Spanish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The WikiCorpus is licensed under the same license as Wikipedia, that is, the [GNU Free Documentation License](http://www.fsf.org/licensing/licenses/fdl.html) ### Citation Information ``` @inproceedings{reese-etal-2010-wikicorpus, title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus", author = "Reese, Samuel and Boleda, Gemma and Cuadros, Montse and Padr{\'o}, Llu{\'i}s and Rigau, German", booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)", month = may, year = "2010", address = "Valletta, Malta", publisher = "European Language Resources Association 
(ELRA)", url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/222_Paper.pdf", abstract = "This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
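For the `tagged_*` configs, each example carries four parallel token-level sequences (`sentence`, `lemmas`, `pos_tags`, `wordnet_senses`). A minimal sketch of zipping them into per-token tuples — the example record below is invented for illustration; only the field names come from the config schema above:

```python
# Hypothetical tagged_ca-style record; values are made up, field names are real.
example = {
    "sentence": ["Les", "platges", "són", "boniques"],
    "lemmas": ["el", "platja", "ser", "bonic"],
    "pos_tags": ["DA", "NC", "VS", "AQ"],
    "wordnet_senses": ["-", "09285254-n", "-", "-"],
}

def align_tokens(example):
    """Zip the parallel sequences into per-token (word, lemma, pos, sense) tuples."""
    return list(zip(example["sentence"], example["lemmas"],
                    example["pos_tags"], example["wordnet_senses"]))

for word, lemma, pos, sense in align_tokens(example):
    print(f"{word}\t{lemma}\t{pos}\t{sense}")
```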
wikihow
--- paperswithcode_id: wikihow pretty_name: WikiHow dataset_info: - config_name: all features: - name: text dtype: string - name: headline dtype: string - name: title dtype: string splits: - name: train num_bytes: 513238309 num_examples: 157252 - name: validation num_bytes: 18246897 num_examples: 5599 - name: test num_bytes: 18276023 num_examples: 5577 download_size: 5460385 dataset_size: 549761229 - config_name: sep features: - name: text dtype: string - name: headline dtype: string - name: title dtype: string - name: overview dtype: string - name: sectionLabel dtype: string splits: - name: train num_bytes: 990499776 num_examples: 1060732 - name: validation num_bytes: 35173966 num_examples: 37932 - name: test num_bytes: 35271826 num_examples: 37800 download_size: 5460385 dataset_size: 1060945568 --- ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
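WikiHow is commonly used for abstractive summarization, pairing `text` (the article body) with `headline` (the concatenated summary sentences). A minimal sketch of building such pairs — the record below is invented; only the field names come from the schema above:

```python
def to_summarization_pair(example):
    # Treat the article body as the source document and the headline
    # as the reference summary.
    return {"document": example["text"].strip(),
            "summary": example["headline"].strip()}

record = {  # hypothetical record using the schema's field names
    "title": "How to Boil Water",
    "text": "Fill a pot with cold water. Place it on the stove and turn the heat to high.",
    "headline": "Fill a pot. Heat until bubbling.",
}
pair = to_summarization_pair(record)
print(pair["summary"])  # Fill a pot. Heat until bubbling.
```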
wikipedia
--- annotations_creators: - no-annotation language_creators: - crowdsourced pretty_name: Wikipedia paperswithcode_id: null license: - cc-by-sa-3.0 - gfdl task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling source_datasets: - original multilinguality: - multilingual size_categories: - n<1K - 1K<n<10K - 10K<n<100K - 100K<n<1M - 1M<n<10M language: - aa - ab - ace - af - ak - als - am - an - ang - ar - arc - arz - as - ast - atj - av - ay - az - azb - ba - bar - bcl - be - bg - bh - bi - bjn - bm - bn - bo - bpy - br - bs - bug - bxr - ca - cbk - cdo - ce - ceb - ch - cho - chr - chy - ckb - co - cr - crh - cs - csb - cu - cv - cy - da - de - din - diq - dsb - dty - dv - dz - ee - el - eml - en - eo - es - et - eu - ext - fa - ff - fi - fj - fo - fr - frp - frr - fur - fy - ga - gag - gan - gd - gl - glk - gn - gom - gor - got - gu - gv - ha - hak - haw - he - hi - hif - ho - hr - hsb - ht - hu - hy - ia - id - ie - ig - ii - ik - ilo - inh - io - is - it - iu - ja - jam - jbo - jv - ka - kaa - kab - kbd - kbp - kg - ki - kj - kk - kl - km - kn - ko - koi - krc - ks - ksh - ku - kv - kw - ky - la - lad - lb - lbe - lez - lfn - lg - li - lij - lmo - ln - lo - lrc - lt - ltg - lv - lzh - mai - mdf - mg - mh - mhr - mi - min - mk - ml - mn - mr - mrj - ms - mt - mus - mwl - my - myv - mzn - na - nah - nan - nap - nds - ne - new - ng - nl - nn - 'no' - nov - nrf - nso - nv - ny - oc - olo - om - or - os - pa - pag - pam - pap - pcd - pdc - pfl - pi - pih - pl - pms - pnb - pnt - ps - pt - qu - rm - rmy - rn - ro - ru - rue - rup - rw - sa - sah - sat - sc - scn - sco - sd - se - sg - sgs - sh - si - sk - sl - sm - sn - so - sq - sr - srn - ss - st - stq - su - sv - sw - szl - ta - tcy - tdt - te - tg - th - ti - tk - tl - tn - to - tpi - tr - ts - tt - tum - tw - ty - tyv - udm - ug - uk - ur - uz - ve - vec - vep - vi - vls - vo - vro - wa - war - wo - wuu - xal - xh - xmf - yi - yo - yue - za - zea - zh - zu 
language_bcp47: - nds-nl configs: - 20220301.aa - 20220301.ab - 20220301.ace - 20220301.ady - 20220301.af - 20220301.ak - 20220301.als - 20220301.am - 20220301.an - 20220301.ang - 20220301.ar - 20220301.arc - 20220301.arz - 20220301.as - 20220301.ast - 20220301.atj - 20220301.av - 20220301.ay - 20220301.az - 20220301.azb - 20220301.ba - 20220301.bar - 20220301.bat-smg - 20220301.bcl - 20220301.be - 20220301.be-x-old - 20220301.bg - 20220301.bh - 20220301.bi - 20220301.bjn - 20220301.bm - 20220301.bn - 20220301.bo - 20220301.bpy - 20220301.br - 20220301.bs - 20220301.bug - 20220301.bxr - 20220301.ca - 20220301.cbk-zam - 20220301.cdo - 20220301.ce - 20220301.ceb - 20220301.ch - 20220301.cho - 20220301.chr - 20220301.chy - 20220301.ckb - 20220301.co - 20220301.cr - 20220301.crh - 20220301.cs - 20220301.csb - 20220301.cu - 20220301.cv - 20220301.cy - 20220301.da - 20220301.de - 20220301.din - 20220301.diq - 20220301.dsb - 20220301.dty - 20220301.dv - 20220301.dz - 20220301.ee - 20220301.el - 20220301.eml - 20220301.en - 20220301.eo - 20220301.es - 20220301.et - 20220301.eu - 20220301.ext - 20220301.fa - 20220301.ff - 20220301.fi - 20220301.fiu-vro - 20220301.fj - 20220301.fo - 20220301.fr - 20220301.frp - 20220301.frr - 20220301.fur - 20220301.fy - 20220301.ga - 20220301.gag - 20220301.gan - 20220301.gd - 20220301.gl - 20220301.glk - 20220301.gn - 20220301.gom - 20220301.gor - 20220301.got - 20220301.gu - 20220301.gv - 20220301.ha - 20220301.hak - 20220301.haw - 20220301.he - 20220301.hi - 20220301.hif - 20220301.ho - 20220301.hr - 20220301.hsb - 20220301.ht - 20220301.hu - 20220301.hy - 20220301.ia - 20220301.id - 20220301.ie - 20220301.ig - 20220301.ii - 20220301.ik - 20220301.ilo - 20220301.inh - 20220301.io - 20220301.is - 20220301.it - 20220301.iu - 20220301.ja - 20220301.jam - 20220301.jbo - 20220301.jv - 20220301.ka - 20220301.kaa - 20220301.kab - 20220301.kbd - 20220301.kbp - 20220301.kg - 20220301.ki - 20220301.kj - 20220301.kk - 20220301.kl - 20220301.km - 
20220301.kn - 20220301.ko - 20220301.koi - 20220301.krc - 20220301.ks - 20220301.ksh - 20220301.ku - 20220301.kv - 20220301.kw - 20220301.ky - 20220301.la - 20220301.lad - 20220301.lb - 20220301.lbe - 20220301.lez - 20220301.lfn - 20220301.lg - 20220301.li - 20220301.lij - 20220301.lmo - 20220301.ln - 20220301.lo - 20220301.lrc - 20220301.lt - 20220301.ltg - 20220301.lv - 20220301.mai - 20220301.map-bms - 20220301.mdf - 20220301.mg - 20220301.mh - 20220301.mhr - 20220301.mi - 20220301.min - 20220301.mk - 20220301.ml - 20220301.mn - 20220301.mr - 20220301.mrj - 20220301.ms - 20220301.mt - 20220301.mus - 20220301.mwl - 20220301.my - 20220301.myv - 20220301.mzn - 20220301.na - 20220301.nah - 20220301.nap - 20220301.nds - 20220301.nds-nl - 20220301.ne - 20220301.new - 20220301.ng - 20220301.nl - 20220301.nn - 20220301.no - 20220301.nov - 20220301.nrm - 20220301.nso - 20220301.nv - 20220301.ny - 20220301.oc - 20220301.olo - 20220301.om - 20220301.or - 20220301.os - 20220301.pa - 20220301.pag - 20220301.pam - 20220301.pap - 20220301.pcd - 20220301.pdc - 20220301.pfl - 20220301.pi - 20220301.pih - 20220301.pl - 20220301.pms - 20220301.pnb - 20220301.pnt - 20220301.ps - 20220301.pt - 20220301.qu - 20220301.rm - 20220301.rmy - 20220301.rn - 20220301.ro - 20220301.roa-rup - 20220301.roa-tara - 20220301.ru - 20220301.rue - 20220301.rw - 20220301.sa - 20220301.sah - 20220301.sat - 20220301.sc - 20220301.scn - 20220301.sco - 20220301.sd - 20220301.se - 20220301.sg - 20220301.sh - 20220301.si - 20220301.simple - 20220301.sk - 20220301.sl - 20220301.sm - 20220301.sn - 20220301.so - 20220301.sq - 20220301.sr - 20220301.srn - 20220301.ss - 20220301.st - 20220301.stq - 20220301.su - 20220301.sv - 20220301.sw - 20220301.szl - 20220301.ta - 20220301.tcy - 20220301.te - 20220301.tet - 20220301.tg - 20220301.th - 20220301.ti - 20220301.tk - 20220301.tl - 20220301.tn - 20220301.to - 20220301.tpi - 20220301.tr - 20220301.ts - 20220301.tt - 20220301.tum - 20220301.tw - 20220301.ty - 
20220301.tyv - 20220301.udm - 20220301.ug - 20220301.uk - 20220301.ur - 20220301.uz - 20220301.ve - 20220301.vec - 20220301.vep - 20220301.vi - 20220301.vls - 20220301.vo - 20220301.wa - 20220301.war - 20220301.wo - 20220301.wuu - 20220301.xal - 20220301.xh - 20220301.xmf - 20220301.yi - 20220301.yo - 20220301.za - 20220301.zea - 20220301.zh - 20220301.zh-classical - 20220301.zh-min-nan - 20220301.zh-yue - 20220301.zu dataset_info: - config_name: 20220301.de features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 8905282792 num_examples: 2665357 download_size: 6523215105 dataset_size: 8905282792 - config_name: 20220301.en features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 20275516160 num_examples: 6458670 download_size: 20598313936 dataset_size: 20275516160 - config_name: 20220301.fr features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 7375920768 num_examples: 2402095 download_size: 5602565274 dataset_size: 7375920768 - config_name: 20220301.frr features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 9129760 num_examples: 15199 download_size: 12438017 dataset_size: 9129760 - config_name: 20220301.it features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 4539944448 num_examples: 1743035 download_size: 3516441239 dataset_size: 4539944448 - config_name: 20220301.simple features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 235072360 num_examples: 205328 download_size: 239682796 dataset_size: 235072360 --- # 
Dataset Card for Wikipedia ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip wiki markup and unwanted sections (references, etc.). The articles are parsed using the ``mwparserfromhell`` tool.
To load this dataset you need to install Apache Beam and ``mwparserfromhell`` first: ``` pip install apache_beam mwparserfromhell ``` Then, you can load any subset of Wikipedia per language and per date this way: ```python from datasets import load_dataset load_dataset("wikipedia", language="sw", date="20220120", beam_runner=...) ``` where you can pass as `beam_runner` any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)). Pass "DirectRunner" to run it on your machine. You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html). Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with: ```python from datasets import load_dataset load_dataset("wikipedia", "20220301.en") ``` The list of pre-processed subsets is: - "20220301.de" - "20220301.en" - "20220301.fr" - "20220301.frr" - "20220301.it" - "20220301.simple" ### Supported Tasks and Leaderboards The dataset is generally used for Language Modeling. ### Languages You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias). ## Dataset Structure ### Data Instances An example looks as follows: ``` {'id': '1', 'url': 'https://simple.wikipedia.org/wiki/April', 'title': 'April', 'text': 'April is the fourth month...' 
} ``` Some subsets of Wikipedia have already been processed by HuggingFace, as you can see below: #### 20220301.de - **Size of downloaded dataset files:** 6.84 GB - **Size of the generated dataset:** 9.34 GB - **Total amount of disk used:** 16.18 GB #### 20220301.en - **Size of downloaded dataset files:** 21.60 GB - **Size of the generated dataset:** 21.26 GB - **Total amount of disk used:** 42.86 GB #### 20220301.fr - **Size of downloaded dataset files:** 5.87 GB - **Size of the generated dataset:** 7.73 GB - **Total amount of disk used:** 13.61 GB #### 20220301.frr - **Size of downloaded dataset files:** 13.04 MB - **Size of the generated dataset:** 9.57 MB - **Total amount of disk used:** 22.62 MB #### 20220301.it - **Size of downloaded dataset files:** 3.69 GB - **Size of the generated dataset:** 4.76 GB - **Total amount of disk used:** 8.45 GB #### 20220301.simple - **Size of downloaded dataset files:** 251.32 MB - **Size of the generated dataset:** 246.49 MB - **Total amount of disk used:** 497.82 MB ### Data Fields The data fields are the same among all configurations: - `id` (`str`): ID of the article. - `url` (`str`): URL of the article. - `title` (`str`): Title of the article. - `text` (`str`): Text content of the article. ### Data Splits Here are the number of examples for several configurations: | name | train | |-----------------|--------:| | 20220301.de | 2665357 | | 20220301.en | 6458670 | | 20220301.fr | 2402095 | | 20220301.frr | 15199 | | 20220301.it | 1743035 | | 20220301.simple | 205328 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Most of Wikipedia's text and many of its images are co-licensed under the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License) (CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License) (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts). 
Some text has been imported only under CC BY-SA and CC BY-SA-compatible licenses and cannot be reused under GFDL; such text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes the text. ### Citation Information ``` @ONLINE{wikidump, author = "Wikimedia Foundation", title = "Wikimedia Downloads", url = "https://dumps.wikimedia.org" } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
wikisql
--- annotations_creators: - crowdsourced language: - en language_creators: - found - machine-generated license: - unknown multilinguality: - monolingual pretty_name: WikiSQL size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: [] paperswithcode_id: wikisql tags: - text-to-sql dataset_info: features: - name: phase dtype: int32 - name: question dtype: string - name: table struct: - name: header sequence: string - name: page_title dtype: string - name: page_id dtype: string - name: types sequence: string - name: id dtype: string - name: section_title dtype: string - name: caption dtype: string - name: rows sequence: sequence: string - name: name dtype: string - name: sql struct: - name: human_readable dtype: string - name: sel dtype: int32 - name: agg dtype: int32 - name: conds sequence: - name: column_index dtype: int32 - name: operator_index dtype: int32 - name: condition dtype: string splits: - name: test num_bytes: 32234761 num_examples: 15878 - name: validation num_bytes: 15159314 num_examples: 8421 - name: train num_bytes: 107345917 num_examples: 56355 download_size: 26164664 dataset_size: 154739992 --- # Dataset Card for "wikisql" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/salesforce/WikiSQL - **Paper:** [Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning](https://arxiv.org/abs/1709.00103) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 26.16 MB - **Size of the generated dataset:** 154.74 MB - **Total amount of disk used:** 180.90 MB ### Dataset Summary A large crowd-sourced dataset for developing natural language interfaces for relational databases. WikiSQL is a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 26.16 MB - **Size of the generated dataset:** 154.74 MB - **Total amount of disk used:** 180.90 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "phase": 1, "question": "How would you answer a second test question?", "sql": { "agg": 0, "conds": { "column_index": [2], "condition": ["Some Entity"], "operator_index": [0] }, "human_readable": "SELECT Header1 FROM table WHERE Another Header = Some Entity", "sel": 0 }, "table": "{\"caption\": \"L\", \"header\": [\"Header1\", \"Header 2\", \"Another Header\"], \"id\": \"1-10015132-9\", \"name\": \"table_10015132_11\", \"page_i..." } ``` ### Data Fields The data fields are the same among all splits. #### default - `phase`: a `int32` feature. - `question`: a `string` feature. - `header`: a `list` of `string` features. - `page_title`: a `string` feature. - `page_id`: a `string` feature. - `types`: a `list` of `string` features. - `id`: a `string` feature. - `section_title`: a `string` feature. - `caption`: a `string` feature. - `rows`: a dictionary feature containing: - `feature`: a `string` feature. - `name`: a `string` feature. - `human_readable`: a `string` feature. - `sel`: a `int32` feature. - `agg`: a `int32` feature. - `conds`: a dictionary feature containing: - `column_index`: a `int32` feature. - `operator_index`: a `int32` feature. - `condition`: a `string` feature. ### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|56355| 8421|15878| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{zhongSeq2SQL2017, author = {Victor Zhong and Caiming Xiong and Richard Socher}, title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, journal = {CoRR}, volume = {abs/1709.00103}, year = {2017} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
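A note on the `sql` field described above: the `sel`, `agg`, and `conds` indices can be expanded back into the `human_readable` string. The sketch below assumes the aggregation and condition operator tables used by the WikiSQL repository (`lib/query.py`); those tables are an assumption here, not part of this card.

```python
# Sketch: expand WikiSQL's index-based `sql` encoding into a readable query.
# AGG_OPS / COND_OPS follow the conventions of the WikiSQL repository
# (lib/query.py) and are assumptions here, not defined by this card.
AGG_OPS = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]
COND_OPS = ["=", ">", "<", "OP"]

def to_human_readable(sql: dict, header: list) -> str:
    """Render the structured `sql` dict against the table's `header` columns."""
    select = header[sql["sel"]]
    if AGG_OPS[sql["agg"]]:
        select = f"{AGG_OPS[sql['agg']]}({select})"
    query = f"SELECT {select} FROM table"
    conds = sql["conds"]
    clauses = [
        f"{header[col]} {COND_OPS[op]} {val}"
        for col, op, val in zip(
            conds["column_index"], conds["operator_index"], conds["condition"]
        )
    ]
    if clauses:
        query += " WHERE " + " AND ".join(clauses)
    return query

# The cropped 'validation' example above, reassembled:
sql = {
    "agg": 0,
    "sel": 0,
    "conds": {
        "column_index": [2],
        "operator_index": [0],
        "condition": ["Some Entity"],
    },
}
header = ["Header1", "Header 2", "Another Header"]
print(to_human_readable(sql, header))
# SELECT Header1 FROM table WHERE Another Header = Some Entity
```

Note that the generated string uses the literal table name `table` and unquoted values, matching the dataset's own `human_readable` field; it is meant for inspection, not for execution against a real database.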
wikitext
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gfdl multilinguality: - monolingual paperswithcode_id: wikitext-2 pretty_name: WikiText size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: - config_name: wikitext-103-v1 features: - name: text dtype: string splits: - name: test num_bytes: 1295579 num_examples: 4358 - name: train num_bytes: 545142639 num_examples: 1801350 - name: validation num_bytes: 1154755 num_examples: 3760 download_size: 190229076 dataset_size: 547592973 - config_name: wikitext-2-v1 features: - name: text dtype: string splits: - name: test num_bytes: 1270951 num_examples: 4358 - name: train num_bytes: 10918134 num_examples: 36718 - name: validation num_bytes: 1134127 num_examples: 3760 download_size: 4475746 dataset_size: 13323212 - config_name: wikitext-103-raw-v1 features: - name: text dtype: string splits: - name: test num_bytes: 1305092 num_examples: 4358 - name: train num_bytes: 546501673 num_examples: 1801350 - name: validation num_bytes: 1159292 num_examples: 3760 download_size: 191984949 dataset_size: 548966057 - config_name: wikitext-2-raw-v1 features: - name: text dtype: string splits: - name: test num_bytes: 1305092 num_examples: 4358 - name: train num_bytes: 11061733 num_examples: 36718 - name: validation num_bytes: 1159292 num_examples: 3760 download_size: 4721645 dataset_size: 13526117 --- # Dataset Card for "wikitext" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source 
Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843) - **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com) - **Size of downloaded dataset files:** 391.41 MB - **Size of the generated dataset:** 1.12 GB - **Total amount of disk used:** 1.52 GB ### Dataset Summary The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies. 
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### wikitext-103-raw-v1 - **Size of downloaded dataset files:** 191.98 MB - **Size of the generated dataset:** 549.42 MB - **Total amount of disk used:** 741.41 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..." } ``` #### wikitext-103-v1 - **Size of downloaded dataset files:** 190.23 MB - **Size of the generated dataset:** 548.05 MB - **Total amount of disk used:** 738.27 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..." } ``` #### wikitext-2-raw-v1 - **Size of downloaded dataset files:** 4.72 MB - **Size of the generated dataset:** 13.54 MB - **Total amount of disk used:** 18.26 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..." } ``` #### wikitext-2-v1 - **Size of downloaded dataset files:** 4.48 MB - **Size of the generated dataset:** 13.34 MB - **Total amount of disk used:** 17.82 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..." 
} ``` ### Data Fields The data fields are the same among all splits. #### wikitext-103-raw-v1 - `text`: a `string` feature. #### wikitext-103-v1 - `text`: a `string` feature. #### wikitext-2-raw-v1 - `text`: a `string` feature. #### wikitext-2-v1 - `text`: a `string` feature. ### Data Splits | name | train |validation|test| |-------------------|------:|---------:|---:| |wikitext-103-raw-v1|1801350| 3760|4358| |wikitext-103-v1 |1801350| 3760|4358| |wikitext-2-raw-v1 | 36718| 3760|4358| |wikitext-2-v1 | 36718| 3760|4358| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset is available under the [Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/). ### Citation Information ``` @misc{merity2016pointer, title={Pointer Sentinel Mixture Models}, author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher}, year={2016}, eprint={1609.07843}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
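A practical note on the `text` field described above: each example is one raw line of a WikiText article (section headings, blank lines, and the ` @-@ ` tokenization artifacts included), and the `*-raw-*` configurations keep the original tokens while the non-raw ones replace rare words with `<unk>`, as the examples above show. Language-modeling pipelines therefore typically concatenate the lines into a single stream before tokenizing. A minimal sketch, with a hypothetical in-memory slice standing in for `dataset["train"]["text"]`:

```python
# Sketch: collapse WikiText's line-per-example layout into one LM stream.
# `sample` is a hypothetical slice standing in for dataset["train"]["text"].
sample = [
    " = Valkyria Chronicles III = \n",
    "",
    " Senjō no Valkyria 3 is a tactical role @-@ playing game . \n",
]

def to_stream(lines):
    # Drop blank lines and the padding spaces WikiText puts around each line.
    return "\n".join(line.strip() for line in lines if line.strip())

text = to_stream(sample)
print(text.splitlines()[0])  # the stripped section heading
```

Whether to keep or drop the blank lines and ` = heading = ` markers depends on the downstream tokenizer; the sketch drops blanks only.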
wikitext_tl39
--- annotations_creators: - no-annotation language_creators: - found language: - fil - tl license: - gpl-3.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: wikitext-tl-39 pretty_name: WikiText-TL-39 dataset_info: features: - name: text dtype: string config_name: wikitext-tl-39 splits: - name: test num_bytes: 46182996 num_examples: 376737 - name: train num_bytes: 217182748 num_examples: 1766072 - name: validation num_bytes: 46256674 num_examples: 381763 download_size: 116335234 dataset_size: 309622418 --- # Dataset Card for WikiText-TL-39 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** - **Paper:** [Evaluating language model finetuning techniques for low-resource languages](https://arxiv.org/abs/1907.00409) - 
**Leaderboard:** - **Point of Contact:** Jan Christian Blaise Cruz (jan_christian_cruz@dlsu.edu.ph) ### Dataset Summary A large-scale, unlabeled text dataset with 39 million tokens in the training set, inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). "TL" stands for Tagalog. Published in Cruz & Cheng (2019). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Filipino/Tagalog ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text` (`str`): The dataset is plaintext and has only this one field, as it is compiled for language modeling. ### Data Splits Split | Documents | Tokens ------|-----------|------- Train | 120,975 | 39M Valid | 25,919 | 8M Test | 25,921 | 8M Please see the paper for more details on the dataset splits. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Tagalog Wikipedia #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset.
wili_2018
--- annotations_creators: - no-annotation language_creators: - found language: - ace - af - als - am - an - ang - ar - arz - as - ast - av - ay - az - azb - ba - bar - bcl - be - bg - bho - bjn - bn - bo - bpy - br - bs - bxr - ca - cbk - cdo - ce - ceb - chr - ckb - co - crh - cs - csb - cv - cy - da - de - diq - dsb - dty - dv - egl - el - en - eo - es - et - eu - ext - fa - fi - fo - fr - frp - fur - fy - ga - gag - gd - gl - glk - gn - gu - gv - ha - hak - he - hi - hif - hr - hsb - ht - hu - hy - ia - id - ie - ig - ilo - io - is - it - ja - jam - jbo - jv - ka - kaa - kab - kbd - kk - km - kn - ko - koi - kok - krc - ksh - ku - kv - kw - ky - la - lad - lb - lez - lg - li - lij - lmo - ln - lo - lrc - lt - ltg - lv - lzh - mai - map - mdf - mg - mhr - mi - min - mk - ml - mn - mr - mrj - ms - mt - mwl - my - myv - mzn - nan - nap - nb - nci - nds - ne - new - nl - nn - nrm - nso - nv - oc - olo - om - or - os - pa - pag - pam - pap - pcd - pdc - pfl - pl - pnb - ps - pt - qu - rm - ro - roa - ru - rue - rup - rw - sa - sah - sc - scn - sco - sd - sgs - sh - si - sk - sl - sme - sn - so - sq - sr - srn - stq - su - sv - sw - szl - ta - tcy - te - tet - tg - th - tk - tl - tn - to - tr - tt - tyv - udm - ug - uk - ur - uz - vec - vep - vi - vls - vo - vro - wa - war - wo - wuu - xh - xmf - yi - yo - zea - zh license: - odbl multilinguality: - multilingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: [] paperswithcode_id: wili-2018 pretty_name: Wili2018 language_bcp47: - be-tarask - map-bms - nds-nl - roa-tara - zh-yue tags: - language-identification dataset_info: features: - name: sentence dtype: string - name: label dtype: class_label: names: '0': cdo '1': glk '2': jam '3': lug '4': san '5': rue '6': wol '7': new '8': mwl '9': bre '10': ara '11': hye '12': xmf '13': ext '14': cor '15': yor '16': div '17': asm '18': lat '19': cym '20': hif '21': ace '22': kbd '23': tgk '24': rus '25': nso '26': mya 
'27': msa '28': ava '29': cbk '30': urd '31': deu '32': swa '33': pus '34': bxr '35': udm '36': csb '37': yid '38': vro '39': por '40': pdc '41': eng '42': tha '43': hat '44': lmo '45': pag '46': jav '47': chv '48': nan '49': sco '50': kat '51': bho '52': bos '53': kok '54': oss '55': mri '56': fry '57': cat '58': azb '59': kin '60': hin '61': sna '62': dan '63': egl '64': mkd '65': ron '66': bul '67': hrv '68': som '69': pam '70': nav '71': ksh '72': nci '73': khm '74': sgs '75': srn '76': bar '77': cos '78': ckb '79': pfl '80': arz '81': roa-tara '82': fra '83': mai '84': zh-yue '85': guj '86': fin '87': kir '88': vol '89': hau '90': afr '91': uig '92': lao '93': swe '94': slv '95': kor '96': szl '97': srp '98': dty '99': nrm '100': dsb '101': ind '102': wln '103': pnb '104': ukr '105': bpy '106': vie '107': tur '108': aym '109': lit '110': zea '111': pol '112': est '113': scn '114': vls '115': stq '116': gag '117': grn '118': kaz '119': ben '120': pcd '121': bjn '122': krc '123': amh '124': diq '125': ltz '126': ita '127': kab '128': bel '129': ang '130': mhr '131': che '132': koi '133': glv '134': ido '135': fao '136': bak '137': isl '138': bcl '139': tet '140': jpn '141': kur '142': map-bms '143': tyv '144': olo '145': arg '146': ori '147': lim '148': tel '149': lin '150': roh '151': sqi '152': xho '153': mlg '154': fas '155': hbs '156': tam '157': aze '158': lad '159': nob '160': sin '161': gla '162': nap '163': snd '164': ast '165': mal '166': mdf '167': tsn '168': nds '169': tgl '170': nno '171': sun '172': lzh '173': jbo '174': crh '175': pap '176': oci '177': hak '178': uzb '179': zho '180': hsb '181': sme '182': mlt '183': vep '184': lez '185': nld '186': nds-nl '187': mrj '188': spa '189': ceb '190': ina '191': heb '192': hun '193': que '194': kaa '195': mar '196': vec '197': frp '198': ell '199': sah '200': eus '201': ces '202': slk '203': chr '204': lij '205': nep '206': srd '207': ilo '208': be-tarask '209': bod '210': orm '211': war '212': glg 
'213': mon '214': gle '215': min '216': ibo '217': ile '218': epo '219': lav '220': lrc '221': als '222': mzn '223': rup '224': fur '225': tat '226': myv '227': pan '228': ton '229': kom '230': wuu '231': tcy '232': tuk '233': kan '234': ltg config_name: WiLI-2018 dataset splits: - name: train num_bytes: 65408201 num_examples: 117500 - name: test num_bytes: 66491260 num_examples: 117500 download_size: 130516351 dataset_size: 131899461 --- # Dataset Card for wili_2018 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/841984 - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/pdf/1801.07779 - **Leaderboard:** [Needs More Information] - **Point of Contact:** Thoma, Martin (Email: info@martin-thoma.de) ### Dataset Summary WiLI-2018, the Wikipedia language identification benchmark dataset, contains 235000 paragraphs of 235 languages. The dataset is balanced and a train-test split is provided. 
### Supported Tasks and Leaderboards [Needs More Information] ### Languages 235 different languages ## Dataset Structure ### Data Instances ``` { 'label': 207, 'sentence': 'Ti Turkia ket maysa a demokrata, sekular, unitario, batay-linteg a republika nga addaan ti taga-ugma a tinawtawid a kultura. Ti Turkia ket umadadu a naipatipon iti Laud babaen ti panagkameng kadagiti organisasion a kas ti Konsilo iti Europa, NATO, OECD, OSCE ken ti G-20 a dagiti kangrunaan nga ekonomia. Ti Turkia ket nangrugi a nakitulag ti napno a panagkameng iti Kappon ti Europa idi 2005, nga isu ket maysa idin a kumaduaan a kameng iti Europeano a Komunidad ti Ekonomia manipud idi 1963 ken nakadanon ti maysa a tulagan ti kappon ti aduana idi 1995. Ti Turkia ket nagtaraken iti asideg a kultural, politikal, ekonomiko ken industria a panakibiang iti Tengnga a Daya, dagiti Turko nga estado iti Tengnga nga Asia ken dagiti pagilian ti Aprika babaen ti panagkameng kadagiti organisasion a kas ti Turko a Konsilo, Nagsaupan nga Administrasion iti Turko nga Arte ken Kultura, Organisasion iti Islamiko a Panagtitinnulong ken ti Organisasion ti Ekonomiko a Panagtitinnulong.' } ``` ### Data Fields [Needs More Information] ### Data Splits 117,500 paragraphs each for the train and test splits. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Martin Thoma. ### Licensing Information ODC Open Database License v1.0 ### Citation Information ``` @dataset{thoma_martin_2018_841984, author = {Thoma, Martin}, title = {{WiLI-2018 - Wikipedia Language Identification database}}, month = jan, year = 2018, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.841984}, url = {https://doi.org/10.5281/zenodo.841984} } ``` ### Contributions Thanks to [@Shubhambindal2017](https://github.com/Shubhambindal2017) for adding this dataset.
wino_bias
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - coreference-resolution paperswithcode_id: winobias pretty_name: WinoBias dataset_info: - config_name: wino_bias features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' - name: verbal_predicates sequence: string splits: - name: train num_bytes: 173899234 num_examples: 150335 download_size: 268725744 dataset_size: 
173899234 - config_name: type1_pro features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 379380 num_examples: 396 - name: test num_bytes: 402041 num_examples: 396 download_size: 846198 dataset_size: 781421 - config_name: type1_anti features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: 
names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . '8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 380846 num_examples: 396 - name: test num_bytes: 403229 num_examples: 396 download_size: 894311 dataset_size: 784075 - config_name: type2_pro features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . 
'8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 367293 num_examples: 396 - name: test num_bytes: 375480 num_examples: 396 download_size: 802425 dataset_size: 742773 - config_name: type2_anti features: - name: document_id dtype: string - name: part_number dtype: string - name: word_number sequence: int32 - name: tokens sequence: string - name: pos_tags sequence: class_label: names: '0': '"' '1': '''''' '2': '#' '3': $ '4': ( '5': ) '6': ',' '7': . 
'8': ':' '9': '``' '10': CC '11': CD '12': DT '13': EX '14': FW '15': IN '16': JJ '17': JJR '18': JJS '19': LS '20': MD '21': NN '22': NNP '23': NNPS '24': NNS '25': NN|SYM '26': PDT '27': POS '28': PRP '29': PRP$ '30': RB '31': RBR '32': RBS '33': RP '34': SYM '35': TO '36': UH '37': VB '38': VBD '39': VBG '40': VBN '41': VBP '42': VBZ '43': WDT '44': WP '45': WP$ '46': WRB '47': HYPH '48': XX '49': NFP '50': AFX '51': ADD '52': -LRB- '53': -RRB- '54': '-' - name: parse_bit sequence: string - name: predicate_lemma sequence: string - name: predicate_framenet_id sequence: string - name: word_sense sequence: string - name: speaker sequence: string - name: ner_tags sequence: class_label: names: '0': B-PERSON '1': I-PERSON '2': B-NORP '3': I-NORP '4': B-FAC '5': I-FAC '6': B-ORG '7': I-ORG '8': B-GPE '9': I-GPE '10': B-LOC '11': I-LOC '12': B-PRODUCT '13': I-PRODUCT '14': B-EVENT '15': I-EVENT '16': B-WORK_OF_ART '17': I-WORK_OF_ART '18': B-LAW '19': I-LAW '20': B-LANGUAGE '21': I-LANGUAGE '22': B-DATE '23': I-DATE '24': B-TIME '25': I-TIME '26': B-PERCENT '27': I-PERCENT '28': B-MONEY '29': I-MONEY '30': B-QUANTITY '31': I-QUANTITY '32': B-ORDINAL '33': I-ORDINAL '34': B-CARDINAL '35': I-CARDINAL '36': '*' '37': '0' '38': '-' - name: verbal_predicates sequence: string - name: coreference_clusters sequence: string splits: - name: validation num_bytes: 368757 num_examples: 396 - name: test num_bytes: 377262 num_examples: 396 download_size: 848804 dataset_size: 746019 --- # Dataset Card for Wino_Bias dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - 
[Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WinoBias](https://uclanlp.github.io/corefBias/overview) - **Repository:** - **Paper:** [Arxiv](https://arxiv.org/abs/1804.06876) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary WinoBias is a Winograd-schema-style dataset for coreference resolution focused on gender bias. The corpus contains Winograd-schema-style sentences with entities corresponding to people referred to by their occupation (e.g., the nurse, the doctor, the carpenter). ### Supported Tasks and Leaderboards The underlying task is coreference resolution. ### Languages English ## Dataset Structure ### Data Instances The dataset has 4 subsets: `type1_pro`, `type1_anti`, `type2_pro` and `type2_anti`. The `*_pro` subsets contain sentences that reinforce gender stereotypes (e.g. mechanics are male, nurses are female), whereas the `*_anti` subsets contain "anti-stereotypical" sentences (e.g. mechanics are female, nurses are male). The `type1` (*WB-Knowledge*) subsets contain sentences for which world knowledge is necessary to resolve the co-references, and the `type2` (*WB-Syntax*) subsets require only the syntactic information present in the sentence to resolve them. ### Data Fields - document_id = This is a variation on the document filename - part_number = Some files are divided into multiple parts numbered as 000, 001, 002, ... etc. - word_num = This is the word index of the word in that sentence. 
- tokens = This is the token as segmented/tokenized in the Treebank. - pos_tags = This is the Penn Treebank style part of speech. When parse information is missing, all parts of speech except those for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag. - parse_bit = This is the bracketed structure broken before the first open parenthesis in the parse, and the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterisk with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column. When the parse information is missing, the first word of a sentence is tagged as "(TOP*" and the last word is tagged as "*)" and all intermediate words are tagged with a "*". - predicate_lemma = The predicate lemma is mentioned for the rows for which we have semantic role information or word sense information. All other rows are marked with a "-". - predicate_framenet_id = This is the PropBank frameset ID of the predicate in predicate_lemma. - word_sense = This is the word sense of the word in Column tokens. - speaker = This is the speaker or author name where available. - ner_tags = This column identifies the spans representing various named entities. For documents which do not have named entity annotation, each line is represented with an "*". - verbal_predicates = There is one column each of predicate argument structure information for the predicate mentioned in predicate_lemma. If there are no predicates tagged in a sentence this is a single column with all rows marked with an "*". ### Data Splits Dev and test splits are available. ## Dataset Creation ### Curation Rationale The WinoBias dataset was introduced in 2018 (see [paper](https://arxiv.org/abs/1804.06876)), with its original task being *coreference resolution*, a task that aims to identify mentions that refer to the same entity or person. 
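The `parse_bit` reconstruction described under Data Fields can be sketched in a few lines of Python. This is an illustrative sketch only; the tokens, POS tags, and parse bits below are a made-up example, not an actual instance from the dataset:

```python
def reconstruct_parse(tokens, pos_tags, parse_bits):
    # Rebuild the full bracketed parse: substitute each "*" leaf with
    # "([pos] [word])" and concatenate the per-token pieces.
    return "".join(
        bit.replace("*", f"({pos} {word})", 1)
        for word, pos, bit in zip(tokens, pos_tags, parse_bits)
    )

# Made-up example sentence (not taken from the dataset):
tokens = ["The", "physician", "hired", "the", "secretary"]
pos_tags = ["DT", "NN", "VBD", "DT", "NN"]
parse_bits = ["(TOP(S(NP*", "*)", "(VP*", "(NP*", "*))))"]

parse = reconstruct_parse(tokens, pos_tags, parse_bits)
# → "(TOP(S(NP(DT The)(NN physician))(VP(VBD hired)(NP(DT the)(NN secretary)))))"
```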
### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset was created by researchers familiar with the WinoBias project, based on two prototypical templates provided by the authors, in which entities interact in plausible ways. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? "Researchers familiar with the [WinoBias] project" ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [Recent work](https://www.microsoft.com/en-us/research/uploads/prod/2021/06/The_Salmon_paper.pdf) has shown that this dataset contains grammatical issues, incorrect or ambiguous labels, and stereotype conflation, among other limitations. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez and Kai-Wei Chang ### Licensing Information MIT License ### Citation Information ``` @article{DBLP:journals/corr/abs-1804-06876, author = {Jieyu Zhao and Tianlu Wang and Mark Yatskar and Vicente Ordonez and Kai{-}Wei Chang}, title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods}, journal = {CoRR}, volume = {abs/1804.06876}, year = {2018}, url = {http://arxiv.org/abs/1804.06876}, archivePrefix = {arXiv}, eprint = {1804.06876}, timestamp = {Mon, 13 Aug 2018 16:47:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1804-06876.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. Updated by [@JieyuZhao](https://github.com/JieyuZhao).
winograd_wsc
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - multiple-choice task_ids: - multiple-choice-coreference-resolution paperswithcode_id: wsc pretty_name: Winograd Schema Challenge dataset_info: - config_name: wsc285 features: - name: text dtype: string - name: pronoun dtype: string - name: pronoun_loc dtype: int32 - name: quote dtype: string - name: quote_loc dtype: int32 - name: options sequence: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: source dtype: string splits: - name: test num_bytes: 52281 num_examples: 285 download_size: 113235 dataset_size: 52281 - config_name: wsc273 features: - name: text dtype: string - name: pronoun dtype: string - name: pronoun_loc dtype: int32 - name: quote dtype: string - name: quote_loc dtype: int32 - name: options sequence: string - name: label dtype: class_label: names: '0': '0' '1': '1' - name: source dtype: string splits: - name: test num_bytes: 49674 num_examples: 273 download_size: 113235 dataset_size: 49674 --- # Dataset Card for The Winograd Schema Challenge ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html - **Repository:** - **Paper:** https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd: > The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. If the word is "feared", then "they" presumably refers to the city council; if it is "advocated" then "they" presumably refers to the demonstrators. ### Supported Tasks and Leaderboards From the official webpage: > A contest, entitled the Winograd Schema Challenge was run once, in 2016. At that time, there was a cash prize offered for achieving human-level performance in the contest. Since then, the sponsor has withdrawn; therefore NO CASH PRIZES CAN BE OFFERED OR WILL BE AWARDED FOR ANY KIND OF PERFORMANCE OR ACHIEVEMENT ON THIS CHALLENGE. ### Languages The dataset is in English. [Translation of 12 WSs into Chinese](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSChinese.html) (translated by Wei Xu). 
Translations into Japanese by Soichiro Tanaka, Rafal Rzepka, and Shiho Katajima: **translation changing English names to Japanese** ([PDF](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_ja.pdf), [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_ja.html)); **translation preserving English names** ([PDF](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_katakana.pdf), [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_katakana.html)). [Translation into French](http://www.llf.cnrs.fr/winograd-fr) by Pascal Amsili and Olga Seminck. [Winograd Schemas in Portuguese](https://sol.sbc.org.br/index.php/eniac/article/view/9334) by Gabriela Melo, Vinicius Imaizumi, and Fábio Cozman. [Mandarinograd: A Chinese Collection of Winograd Schemas](https://www.aclweb.org/anthology/2020.lrec-1.3) by Timothée Bernard and Ting Han, LREC 2020. ## Dataset Structure ### Data Instances Each instance contains a text passage with a designated pronoun and two possible answers indicating which entity in the passage the pronoun represents. An example instance looks like the following: ```python { 'label': 0, 'options': ['The city councilmen', 'The demonstrators'], 'pronoun': 'they', 'pronoun_loc': 63, 'quote': 'they feared violence', 'quote_loc': 63, 'source': '(Winograd 1972)', 'text': 'The city councilmen refused the demonstrators a permit because they feared violence.' 
} ``` ### Data Fields - `text` (str): The text sequence - `options` (list[str]): The two entity options that the pronoun may be referring to - `label` (int): The index of the correct option in the `options` field - `pronoun` (str): The pronoun in the sequence to be resolved - `pronoun_loc` (int): The starting position of the pronoun in the sequence - `quote` (str): The substring containing the key action or context surrounding the pronoun - `quote_loc` (int): The starting position of the quote in the sequence - `source` (str): A description of the source who contributed the example ### Data Splits Only a test split is included. ## Dataset Creation ### Curation Rationale The Winograd Schema Challenge was proposed as an automated evaluation of an AI system's commonsense linguistic understanding. From the webpage: > The strengths of the challenge are that it is clear-cut, in that the answer to each schema is a binary choice; vivid, in that it is obvious to non-experts that a program that fails to get the right answers clearly has serious gaps in its understanding; and difficult, in that it is far beyond the current state of the art. ### Source Data #### Initial Data Collection and Normalization This data was manually written by experts such that the schemas are: - easily disambiguated by the human reader (ideally, so easily that the reader does not even notice that there is an ambiguity); - not solvable by simple techniques such as selectional restrictions; - Google-proof; that is, there is no obvious statistical test over text corpora that will reliably disambiguate these correctly. #### Who are the source language producers? This dataset has grown over time, and so was produced by a variety of linguistic and AI researchers. See the `source` field for the source of each instance. ### Annotations #### Annotation process Annotations are produced by the experts who construct the examples. #### Who are the annotators? See above. 
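The fields above support the standard substitution reading of a schema: splice each candidate from `options` into the text in place of the pronoun and judge which completed sentence is plausible. A minimal sketch using the sample instance shown under Data Instances (the `substitute` helper is our own, not part of the dataset):

```python
def substitute(example, option_index):
    # Splice one candidate option into the text in place of the pronoun,
    # using the character offset stored in `pronoun_loc`.
    text = example["text"]
    start = example["pronoun_loc"]
    end = start + len(example["pronoun"])
    return text[:start] + example["options"][option_index] + text[end:]

example = {
    "label": 0,
    "options": ["The city councilmen", "The demonstrators"],
    "pronoun": "they",
    "pronoun_loc": 63,
    "text": ("The city councilmen refused the demonstrators a permit "
             "because they feared violence."),
}

# The gold reading substitutes the labeled option:
resolved = substitute(example, example["label"])
# → "The city councilmen refused the demonstrators a permit because The city councilmen feared violence."
```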
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset has grown over time, and so was produced by a variety of linguistic and AI researchers. See the `source` field for the source of each instance. ### Licensing Information This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Citation Information The Winograd Schema Challenge, including many of the examples here, was proposed by [Levesque et al. (2012)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf): ``` @inproceedings{levesque2012winograd, title={The winograd schema challenge}, author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora}, booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning}, year={2012}, organization={Citeseer} } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
winogrande
--- language: - en paperswithcode_id: winogrande pretty_name: WinoGrande dataset_info: - config_name: winogrande_xs features: - name: sentence dtype: string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 20704 num_examples: 160 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 412552 - config_name: winogrande_s features: - name: sentence dtype: string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 82308 num_examples: 640 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 474156 - config_name: winogrande_m features: - name: sentence dtype: string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 329001 num_examples: 2558 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 720849 - config_name: winogrande_l features: - name: sentence dtype: string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 1319576 num_examples: 10234 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 1711424 - config_name: winogrande_xl features: - name: sentence dtype: string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 5185832 num_examples: 40398 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 5577680 - config_name: winogrande_debiased features: - name: sentence dtype: 
string - name: option1 dtype: string - name: option2 dtype: string - name: answer dtype: string splits: - name: train num_bytes: 1203420 num_examples: 9248 - name: test num_bytes: 227649 num_examples: 1767 - name: validation num_bytes: 164199 num_examples: 1267 download_size: 3395492 dataset_size: 1595268 --- # Dataset Card for "winogrande" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://leaderboard.allenai.org/winogrande/submissions/get-started](https://leaderboard.allenai.org/winogrande/submissions/get-started) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - 
**Size of downloaded dataset files:** 20.37 MB - **Size of the generated dataset:** 10.50 MB - **Total amount of disk used:** 30.87 MB ### Dataset Summary WinoGrande is a new collection of 44k problems, inspired by the Winograd Schema Challenge (Levesque, Davis, and Morgenstern 2011), but adjusted to improve both scale and robustness against dataset-specific bias. Formulated as a fill-in-the-blank task with binary options, the goal is to choose the right option for a given sentence, a choice that requires commonsense reasoning. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### winogrande_debiased - **Size of downloaded dataset files:** 3.40 MB - **Size of the generated dataset:** 1.59 MB - **Total amount of disk used:** 4.99 MB An example of 'train' looks as follows. ``` ``` #### winogrande_l - **Size of downloaded dataset files:** 3.40 MB - **Size of the generated dataset:** 1.71 MB - **Total amount of disk used:** 5.11 MB An example of 'validation' looks as follows. ``` ``` #### winogrande_m - **Size of downloaded dataset files:** 3.40 MB - **Size of the generated dataset:** 0.72 MB - **Total amount of disk used:** 4.12 MB An example of 'validation' looks as follows. ``` ``` #### winogrande_s - **Size of downloaded dataset files:** 3.40 MB - **Size of the generated dataset:** 0.47 MB - **Total amount of disk used:** 3.87 MB An example of 'validation' looks as follows. ``` ``` #### winogrande_xl - **Size of downloaded dataset files:** 3.40 MB - **Size of the generated dataset:** 5.58 MB - **Total amount of disk used:** 8.98 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. 
#### winogrande_debiased - `sentence`: a `string` feature. - `option1`: a `string` feature. - `option2`: a `string` feature. - `answer`: a `string` feature. #### winogrande_l - `sentence`: a `string` feature. - `option1`: a `string` feature. - `option2`: a `string` feature. - `answer`: a `string` feature. #### winogrande_m - `sentence`: a `string` feature. - `option1`: a `string` feature. - `option2`: a `string` feature. - `answer`: a `string` feature. #### winogrande_s - `sentence`: a `string` feature. - `option1`: a `string` feature. - `option2`: a `string` feature. - `answer`: a `string` feature. #### winogrande_xl - `sentence`: a `string` feature. - `option1`: a `string` feature. - `option2`: a `string` feature. - `answer`: a `string` feature. ### Data Splits | name |train|validation|test| |-------------------|----:|---------:|---:| |winogrande_debiased| 9248| 1267|1767| |winogrande_l |10234| 1267|1767| |winogrande_m | 2558| 1267|1767| |winogrande_s | 640| 1267|1767| |winogrande_xl |40398| 1267|1767| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{ai2:winogrande, title = {WinoGrande: An Adversarial Winograd Schema Challenge at Scale}, author = {Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin}, year = {2019} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@TevenLeScao](https://github.com/TevenLeScao), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
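As a task sketch: each WinoGrande row pairs a sentence containing a blank with two candidate fillers, and `answer` selects the correct one. The snippet below assumes details not stated in this card (the blank is written as `_`, and `answer` is the string `"1"` or `"2"` indexing `option1`/`option2`); the row shown is the classic trophy/suitcase Winograd schema used as illustrative stand-in data, not an actual dataset example.

```python
# Illustrative WinoGrande-style row (hypothetical, not from the dataset).
# Assumptions: the blank is "_", and `answer` is "1" or "2".
row = {
    "sentence": "The trophy doesn't fit into the brown suitcase because _ is too large.",
    "option1": "trophy",
    "option2": "suitcase",
    "answer": "1",
}

def candidates(row):
    """Return both completed sentences, one per option."""
    return [row["sentence"].replace("_", opt, 1)
            for opt in (row["option1"], row["option2"])]

def gold_sentence(row):
    """Pick the completion selected by `answer` ("1" -> option1)."""
    return candidates(row)[int(row["answer"]) - 1]

print(gold_sentence(row))
# The trophy doesn't fit into the brown suitcase because trophy is too large.
```

A model is scored on whether it prefers the gold completion over the distractor, which is what makes the task a binary-choice commonsense benchmark.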
wiqa
--- language: - en paperswithcode_id: wiqa pretty_name: What-If Question Answering dataset_info: features: - name: question_stem dtype: string - name: question_para_step sequence: string - name: answer_label dtype: string - name: answer_label_as_choice dtype: string - name: choices sequence: - name: text dtype: string - name: label dtype: string - name: metadata_question_id dtype: string - name: metadata_graph_id dtype: string - name: metadata_para_id dtype: string - name: metadata_question_type dtype: string - name: metadata_path_len dtype: int32 splits: - name: train num_bytes: 17089298 num_examples: 29808 - name: test num_bytes: 1532223 num_examples: 3003 - name: validation num_bytes: 3779584 num_examples: 6894 download_size: 5247733 dataset_size: 22401105 --- # Dataset Card for "wiqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://allenai.org/data/wiqa](https://allenai.org/data/wiqa) - **Repository:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 5.24 MB - **Size of the generated dataset:** 22.40 MB - **Total amount of disk used:** 27.65 MB ### Dataset Summary The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 5.24 MB - **Size of the generated dataset:** 22.40 MB - **Total amount of disk used:** 27.65 MB An example of 'validation' looks as follows. 
``` { "answer_label": "more", "answer_label_as_choice": "A", "choices": { "label": ["A", "B", "C"], "text": ["more", "less", "no effect"] }, "metadata_graph_id": "481", "metadata_para_id": "528", "metadata_path_len": 3, "metadata_question_id": "influence_graph:528:481:77#0", "metadata_question_type": "INPARA_EFFECT", "question_para_step": ["A male and female rabbit mate", "The female rabbit becomes pregnant", "Baby rabbits form inside of the mother rabbit", "The female rabbit gives birth to a litter", "The newborn rabbits grow up to become adults", "The adult rabbits find mates."], "question_stem": "suppose the female is sterile happens, how will it affect LESS rabbits." } ``` ### Data Fields The data fields are the same among all splits. #### default - `question_stem`: a `string` feature. - `question_para_step`: a `list` of `string` features. - `answer_label`: a `string` feature. - `answer_label_as_choice`: a `string` feature. - `choices`: a dictionary feature containing: - `text`: a `string` feature. - `label`: a `string` feature. - `metadata_question_id`: a `string` feature. - `metadata_graph_id`: a `string` feature. - `metadata_para_id`: a `string` feature. - `metadata_question_type`: a `string` feature. - `metadata_path_len`: an `int32` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|29808| 6894|3003| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{wiqa, author = {Niket Tandon and Bhavana Dalvi Mishra and Keisuke Sakaguchi and Antoine Bosselut and Peter Clark}, title = {WIQA: A dataset for "What if..." 
reasoning over procedural text}, journal = {arXiv:1909.04739v1}, year = {2019}, } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
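The `choices` feature described above stores parallel `label` and `text` lists, so mapping `answer_label_as_choice` back to its answer text is an index lookup. A minimal sketch, using the validation instance shown earlier in this card (trimmed to the relevant fields):

```python
# WIQA instance trimmed from the example in this card: `choices`
# holds parallel "label"/"text" lists, and `answer_label_as_choice`
# names one of the labels.
instance = {
    "answer_label": "more",
    "answer_label_as_choice": "A",
    "choices": {"label": ["A", "B", "C"], "text": ["more", "less", "no effect"]},
    "question_stem": "suppose the female is sterile happens, how will it affect LESS rabbits.",
}

def resolve_answer(inst):
    """Map `answer_label_as_choice` to its text via the parallel lists."""
    idx = inst["choices"]["label"].index(inst["answer_label_as_choice"])
    return inst["choices"]["text"][idx]

# The resolved choice text should agree with `answer_label`.
assert resolve_answer(instance) == instance["answer_label"]
print(resolve_answer(instance))
# more
```

The same three-way label set ("more", "less", "no effect") applies across the dataset, which is why `answer_label` and `answer_label_as_choice` are redundant encodings of one answer.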