Schema (34 fields per record):

Name: string (length 3–106)
Subsets: list
HF Link: string (length 3–69)
Link: string (length 24–135)
License: string (28 classes)
Year: int32 (range 2k–2.02k)
Language: string (2 classes)
Dialect: string (18 classes)
Domain: string (10 classes)
Form: string (3 classes)
Collection Style: string (7 classes)
Description: string (length 16–1.64k)
Volume: string (length 1–13)
Unit: string (5 classes)
Ethical Risks: string (4 classes)
Provider: string (length 1–136)
Derived From: string (length 2–307)
Paper Title: string (length 3–143)
Paper Link: string (length 3–285)
Script: string (4 classes)
Tokenized: string (2 classes)
Host: string (21 classes)
Access: string (3 classes)
Cost: string (length 3–11)
Test Split: string (3 classes)
Tasks: string (length 8–181)
Venue Title: string (length 2–46)
Citations: string (length 3–6)
Venue Type: string (5 classes)
Venue Name: string (length 3–113)
Authors: string (length 3–923)
Affiliations: string (length 1–470)
Abstract: string (length 3–2.15k)
Added By: string (length 3–25)

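Each record below follows this flat schema, so a local export of the catalog can be sliced with ordinary dataframe operations. A minimal sketch, assuming the records have been exported to a hypothetical arabic_catalog.csv whose header matches the field names above:

```python
import pandas as pd

# Hypothetical local export of the catalog; the filename and the existence
# of a CSV dump are assumptions, not something stated in the records.
catalog = pd.read_csv("arabic_catalog.csv")

# Example query: freely accessible Algerian-dialect entries.
algerian = catalog[
    catalog["Dialect"].str.contains("ar-DZ", na=False)
    & (catalog["Access"] == "Free")
]

# The fields most useful for locating the data itself.
print(algerian[["Name", "Link", "License", "Volume", "Unit", "Tasks"]])
```
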
Name: Twifil
Subsets: []
HF Link: nan
Link: https://github.com/kinmokusu/oea_algd
License: unknown
Year: 2020
Language: ar
Dialect: ar-DZ: (Arabic (Algeria))
Domain: social media
Form: text
Collection Style: crawling and annotation (other)
Description: An Algerian dialect dataset annotated for sentiment (9,000 tweets), emotion (about 5,000 tweets), and extra-linguistic information including author profiling (age and gender).
Volume: 14,000
Unit: sentences
Ethical Risks: Low
Provider: nan
Derived From: nan
Paper Title: An Algerian Corpus and an Annotation Platform for Opinion and Emotion Analysis
Paper Link: https://aclanthology.org/2020.lrec-1.151.pdf
Script: Arab-Latn
Tokenized: No
Host: GitHub
Access: Free
Cost: nan
Test Split: No
Tasks: sentiment analysis, emotion detection
Venue Title: LREC
Citations: 14.0
Venue Type: conference
Venue Name: Language Resources and Evaluation Conference
Authors: Leila Moudjari, Karima Akli-Astouati, Farah Benamara
Affiliations: nan
Abstract: In this paper, we address the lack of resources for opinion and emotion analysis related to North African dialects, targeting Algerian dialect. We present TWIFIL (TWItter proFILing), a collaborative annotation platform for crowdsourcing annotation of tweets at different levels of granularity. The platform allowed the creation of the largest Algerian dialect dataset annotated for sentiment (9,000 tweets), emotion (about 5,000 tweets), and extra-linguistic information including author profiling (age and gender). The annotation also resulted in the creation of the largest Algerian dialect subjectivity lexicon, of about 9,000 entries, which can constitute a valuable resource for the development of future NLP applications for Algerian dialect. To test the validity of the dataset, a set of deep learning experiments was conducted to classify a given tweet as positive, negative or neutral. We discuss our results and provide an error analysis to better identify classification errors.
Added By: Abderrahmane Issam

Name: MADAR Lexicon
Subsets: []
HF Link: nan
Link: https://docs.google.com/forms/d/e/1FAIpQLSe2LHYmHsxdkHPYHgcZDz25dTNbnygPkmClIaLd_fwud-XnTQ/viewform
License: custom
Year: 2022
Language: ar
Dialect: mixed
Domain: other
Form: text
Collection Style: manual curation
Description: The MADAR Lexicon is a collection of 1,042 concepts expressed in 25 city dialects, totaling 47K entries (an average of 45 words per concept, or about 2 words per dialect). Concepts were selected from the BTEC parallel corpora. The lexicon is centered around concept keys, which are triplets of English, French, and Modern Standard Arabic (MSA); annotators had to provide words that overlap in word sense with all three languages. Each dialectal word is presented in its CODA orthography and its CAPHI phonology (Bouamor et al., 2018; Habash et al., 2018). The MADAR Lexicon was created as part of the Multi-Arabic Dialect Applications and Resources (MADAR) project, funded by NPRP 7-290-1-047 from the Qatar National Research Fund (a member of the Qatar Foundation). Website: http://madar.camel-lab.com
Volume: 47,000
Unit: tokens
Ethical Risks: Low
Provider: NYU Abu Dhabi
Derived From: nan
Paper Title: The MADAR Arabic Dialect Corpus and Lexicon
Paper Link: http://www.lrec-conf.org/proceedings/lrec2018/pdf/351.pdf
Script: Arab
Tokenized: Yes
Host: CAMeL Resources
Access: Free
Cost: nan
Test Split: No
Tasks: dialect identification, transliteration
Venue Title: LREC
Citations: 127.0
Venue Type: conference
Venue Name: The International Conference on Language Resources and Evaluation
Authors: Houda Bouamor, Nizar Habash, Mohammad Salameh, Wajdi Zaghouani, Owen Rambow, Dana Abdulrahim, Ossama Obeid, Salam Khalifa, Fadhl Eryani, Alexander Erdmann, Kemal Oflazer
Affiliations: Carnegie Mellon University in Qatar, Qatar; Hamad Bin Khalifa University, Qatar; New York University Abu Dhabi, UAE; Columbia University, USA; University of Bahrain, Bahrain
Abstract: In this paper, we present two resources that were created as part of the Multi Arabic Dialect Applications and Resources (MADAR) project. The first is a large parallel corpus of 25 Arabic city dialects in the travel domain. The second is a lexicon of 1,045 concepts with an average of 45 words from 25 cities per concept. These resources are the first of their kind in terms of the breadth of their coverage and the fine location granularity. The focus on cities, as opposed to regions, in studying Arabic dialects opens new avenues to many areas of research from dialectology to dialect identification and machine translation.
Added By: Fadhl Al-Eryani

Name: xSID - (X) Slot and Intent Detection
Subsets: []
HF Link: nan
Link: https://bitbucket.org/robvanderg/xsid/src/master/
License: CC BY-SA 4.0
Year: 2021
Language: multilingual
Dialect: ar-MSA: (Arabic (Modern Standard Arabic))
Domain: other
Form: text
Collection Style: human translation
Description: An evaluation dataset for intent classification and slot detection.
Volume: 800
Unit: tokens
Ethical Risks: Low
Provider: nan
Derived From: nan
Paper Title: From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding
Paper Link: https://aclanthology.org/2021.naacl-main.197.pdf
Script: Arab
Tokenized: No
Host: other
Access: Free
Cost: nan
Test Split: Yes
Tasks: intent classification, slot detection
Venue Title: NAACL
Citations: nan
Venue Type: conference
Venue Name: North American Chapter of the Association for Computational Linguistics
Authors: nan
Affiliations: nan
Abstract: The lack of publicly available evaluation data for low-resource languages limits progress in Spoken Language Understanding (SLU). As key tasks like intent classification and slot filling require abundant training data, it is desirable to reuse existing data in high-resource languages to develop models for low-resource scenarios. We introduce xSID, a new benchmark for cross-lingual (X) Slot and Intent Detection in 13 languages from 6 language families, including a very low-resource dialect. To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer. We study two setups which differ by type and language coverage of the pre-trained embeddings. Our results show that jointly learning the main tasks with masked language modeling is effective for slots, while machine translation transfer works best for intent classification.
Added By: Amr Keleg

Name: Annotated Shami Corpus
Subsets: []
HF Link: nan
Link: https://github.com/christios/annotated-shami-corpus
License: CC BY 4.0
Year: 2021
Language: ar
Dialect: ar-LB: (Arabic (Lebanon))
Domain: social media
Form: text
Collection Style: crawling and annotation (other)
Description: Subsection of the Lebanese portion of the Shami Corpus annotated for spelling standardization (CODA), morphological segmentation and tagging, and spontaneous orthography taxonomy tagging.
Volume: 10,000
Unit: tokens
Ethical Risks: Medium
Provider: nan
Derived From: Shami Corpus
Paper Title: Orthography Standardization in Arabic Dialects
Paper Link: https://dspace.cuni.cz/handle/20.500.11956/147949
Script: Arab
Tokenized: Yes
Host: GitHub
Access: Free
Cost: nan
Test Split: No
Tasks: part of speech tagging, morphological analysis, error class taxonomy tagging, CODA
Venue Title: nan
Citations: nan
Venue Type: nan
Venue Name: nan
Authors: Christian Khairallah
Affiliations: Charles University in Prague, Saarland University
Abstract: Spontaneous orthography in Arabic dialects poses one of the biggest obstacles in the way of Dialectal Arabic NLP applications. As the Arab world enjoys a wide array of these widely spoken and recently written, non-standard, low-resource varieties, this thesis presents a detailed account of this relatively overlooked phenomenon. It sets out to show that continuously creating additional noise-free, manually standardized corpora of Dialectal Arabic does not free us from the shackles of non-standard (spontaneous) orthography. Because real-world data will most often come in a noisy format, it also investigates ways to ease the amount of noise in textual data. As a proof of concept, we restrict ourselves to one of the dialectal varieties, namely, Lebanese Arabic. It also strives to gain a better understanding of the nature of the noise and its distribution. All of this is done by leveraging various spelling correction and morphological tagging neural architectures in a multi-task setting, and by annotating a Lebanese Arabic corpus for spontaneous orthography standardization, and morphological segmentation and tagging, among other features. Additionally, a detailed taxonomy of spelling inconsistencies for Lebanese Arabic is presented and is used to tag the corpus. This constitutes the first attempt in Dialectal Arabic research to try and categorize spontaneous orthography in a detailed manner.
Added By: Christian Khairallah

Name: Maknuune: A Large Open Palestinian Arabic Lexicon
Subsets: []
HF Link: nan
Link: http://www.palestine-lexicon.org/
License: CC BY-SA 4.0
Year: 2022
Language: ar
Dialect: ar-PS: (Arabic (Palestine))
Domain: other
Form: spoken
Collection Style: manual curation
Description: Palestinian Arabic lexicon collected through manual curation and field surveys.
Volume: 36,302
Unit: tokens
Ethical Risks: Low
Provider: Shahd Dibas and NYU Abu Dhabi
Derived From: nan
Paper Title: Maknuune: A Large Open Palestinian Arabic Lexicon
Paper Link: https://arxiv.org/pdf/2210.12985.pdf
Script: Arab
Tokenized: No
Host: Gdrive
Access: Free
Cost: nan
Test Split: No
Tasks: Lexicon (linguistic analysis)
Venue Title: WANLP
Citations: nan
Venue Type: workshop
Venue Name: Arabic Natural Language Processing Workshop
Authors: Shahd Dibas, Christian Khairallah, Nizar Habash, Omar Fayez Sadi, Tariq Sairafy, Karmel Sarabta, Abrar Ardah
Affiliations: NYUAD, University of Oxford, UNRWA
Abstract: We present Maknuune, a large open lexicon for the Palestinian Arabic dialect. Maknuune has over 36K entries from 17K lemmas, and 3.7K roots. All entries include diacritized Arabic orthography, phonological transcription and English glosses. Some entries are enriched with additional information such as broken plurals and templatic feminine forms, associated phrases and collocations, Standard Arabic glosses, and examples or notes on grammar, usage, or location of collected entry.
Added By: Christian Khairallah

Name: Baladi Lebanese dialect corpora
Subsets: []
HF Link: nan
Link: https://portal.sina.birzeit.edu/curras
License: CC BY-NC-SA 4.0
Year: 2022
Language: ar
Dialect: ar-LB: (Arabic (Lebanon))
Domain: other
Form: text
Collection Style: crawling
Description: The corpus consists of about 9.6K words/tokens collected from Facebook, blog posts and traditional poems. It was annotated as an extension to Curras, following the same annotation methodology, to form a Levantine corpus.
Volume: 10,000
Unit: tokens
Ethical Risks: Low
Provider: Birzeit University
Derived From: nan
Paper Title: Curras + Baladi: Towards a Levantine Corpus
Paper Link: https://arxiv.org/pdf/2212.06468.pdf
Script: Arab
Tokenized: Yes
Host: Dropbox
Access: Upon-Request
Cost: nan
Test Split: No
Tasks: machine translation, speech recognition, dialect identification, named entity recognition, part of speech tagging, language identification, morphological analysis
Venue Title: LREC
Citations: nan
Venue Type: conference
Venue Name: LREC
Authors: Karim El Haff, Mustafa Jarrar, Tymaa Hammouda, Fadi Zaraket
Affiliations: nan
Abstract: The processing of the Arabic language is a complex field of research. This is due to many factors, including the complex and rich morphology of Arabic, its high degree of ambiguity, and the presence of several regional varieties that need to be processed while taking into account their unique characteristics. When its dialects are taken into account, this language pushes the limits of NLP to find solutions to problems posed by its inherent nature. It is a diglossic language; the standard language is used in formal settings and in education and is quite different from the vernacular languages spoken in the different regions and influenced by older languages that were historically spoken in those regions. This should encourage NLP specialists to create dialect-specific corpora such as the Palestinian morphologically annotated Curras corpus of Birzeit University. In this work, we present the Lebanese Corpus Baladi that consists of around 9.6K morphologically annotated tokens. Since Lebanese and Palestinian dialects are part of the same Levantine dialectal continuum, and thus highly mutually intelligible, our proposed corpus was constructed to be used to (1) enrich Curras and transform it into a more general Levantine corpus and (2) improve Curras by solving detected errors.
Added By: Mustafa Jarrar

Name: Mawqif
Subsets: []
HF Link: https://huggingface.co/datasets/NoraAlt/Mawqif_Stance-Detection
Link: https://github.com/NoraAlt/Mawqif-Arabic-Stance
License: unknown
Year: 2022
Language: ar
Dialect: mixed
Domain: social media
Form: text
Collection Style: crawling and annotation (other)
Description: Mawqif is the first Arabic dataset that can be used for target-specific stance detection. This is a multi-label dataset where each data point is annotated for stance, sentiment, and sarcasm.
Volume: 4,121
Unit: sentences
Ethical Risks: Medium
Provider: nan
Derived From: nan
Paper Title: Mawqif: A Multi-label Arabic Dataset for Target-specific Stance Detection
Paper Link: https://aclanthology.org/2022.wanlp-1.16/
Script: Arab
Tokenized: No
Host: GitHub
Access: Free
Cost: nan
Test Split: Yes
Tasks: sentiment analysis, topic classification, irony detection, stance detection
Venue Title: WANLP
Citations: nan
Venue Type: workshop
Venue Name: Arabic Natural Language Processing Workshop
Authors: nan
Affiliations: nan
Abstract: Social media platforms are becoming inherent parts of people’s daily life to express opinions and stances toward topics of varying polarities. Stance detection determines the viewpoint expressed in a text toward a target. While communication on social media (e.g., Twitter) takes place in more than 40 languages, the majority of stance detection research has been focused on English. Although some efforts have recently been made to develop stance detection datasets in other languages, no similar efforts seem to have considered the Arabic language. In this paper, we present Mawqif, the first Arabic dataset for target-specific stance detection, composed of 4,121 tweets annotated with stance, sentiment, and sarcasm polarities. Mawqif, as a multi-label dataset, can provide more opportunities for studying the interaction between different opinion dimensions and evaluating a multi-task model. We provide a detailed description of the dataset, present an analysis of the produced annotation, and evaluate four BERT-based models on it. Our best model achieves a macro-F1 of 78.89%, which shows that there is ample room for improvement on this challenging task. We publicly release our dataset, the annotation guidelines, and the code of the experiments.
Added By: Nora Saleh Alturayeif

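Since this record includes a Hugging Face link, the Mawqif data can be pulled directly with the datasets library. A minimal sketch, assuming the NoraAlt/Mawqif_Stance-Detection repository loads with its default configuration and no extra arguments:

```python
from datasets import load_dataset

# Repository id taken from the record's HF Link; whether it loads without a
# named configuration or extra arguments is an assumption.
mawqif = load_dataset("NoraAlt/Mawqif_Stance-Detection")

# Inspect the available splits and annotated fields (the description above
# mentions stance, sentiment, and sarcasm labels) before doing anything else.
print(mawqif)
for split_name, split in mawqif.items():
    print(split_name, split.features)
```
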
Name: Tatoeba
Subsets: []
HF Link: nan
Link: https://tatoeba.org/en/downloads
License: CC BY 2.0
Year: 2006
Language: multilingual
Dialect: mixed
Domain: other
Form: text
Collection Style: manual curation
Description: A crowd-sourced dataset of parallel sentences.
Volume: nan
Unit: sentences
Ethical Risks: nan
Provider: nan
Derived From: nan
Paper Title: nan
Paper Link: nan
Script: Arab
Tokenized: No
Host: other
Access: Free
Cost: nan
Test Split: No
Tasks: machine translation, language modelling, dialect identification, language identification
Venue Title: nan
Citations: nan
Venue Type: nan
Venue Name: nan
Authors: nan
Affiliations: nan
Abstract: nan
Added By: Amr Keleg

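Tatoeba's exports are plain files downloadable from the Link above, so extracting the Arabic portion is a one-pass filter. A minimal sketch, assuming the sentence export is a tab-separated file named sentences.csv with columns (id, ISO 639-3 language code, text); verify the exact filename and layout on the downloads page:

```python
import csv

# Assumed export layout: one sentence per line, tab-separated as
# (sentence id, ISO 639-3 code, text). "ara" is the code for Arabic.
arabic_sentences = []
with open("sentences.csv", encoding="utf-8", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if len(row) >= 3 and row[1] == "ara":
            arabic_sentences.append((row[0], row[2]))

print(f"{len(arabic_sentences)} Arabic sentences extracted")
```
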
Name: QA4MRE
Subsets: []
HF Link: https://huggingface.co/datasets/qa4mre
Link: http://nlp.uned.es/clef-qa/repository/qa4mre.php
License: unknown
Year: 2013
Language: multilingual
Dialect: ar-MSA: (Arabic (Modern Standard Arabic))
Domain: other
Form: text
Collection Style: crawling and annotation (other)
Description: The QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in question answering and reading comprehension. Each example contains a supporting passage and a set of questions about that passage; multiple answer options are provided for each question, of which only one is correct. Training and test sets are available for the main track, and additional gold-standard documents are available for two pilot studies: one on Alzheimer's data and the other on entrance exams data.
Volume: 160
Unit: documents
Ethical Risks: Low
Provider: nan
Derived From: nan
Paper Title: QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation
Paper Link: https://link.springer.com/chapter/10.1007/978-3-642-40802-1_29
Script: Arab
Tokenized: No
Host: other
Access: Free
Cost: nan
Test Split: No
Tasks: multiple choice
Venue Title: CLEF
Citations: nan
Venue Type: conference
Venue Name: Conference and Labs of the Evaluation Forum
Authors: Anselmo Peñas, Eduard Hovy, Pamela Forner, Álvaro Rodrigo, Richard Sutcliffe, Roser Morante
Affiliations: nan
Abstract: This paper describes the methodology for testing the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. This was the aim of the QA4MRE challenge, which was run as a Lab at CLEF 2011–2013. The traditional QA task was replaced by a new Machine Reading task, whose intention was to ask questions that required a deep knowledge of individual short texts and in which systems were required to choose one answer, by analysing the corresponding test document in conjunction with background text collections provided by the organization. Four different tasks have been organized during these years: Main Task, Processing Modality and Negation for Machine Reading, Machine Reading of Biomedical Texts about Alzheimer’s disease, and Entrance Exams. This paper describes their motivation, their goals, their methodology for preparing the data sets, their background collections, the metrics used for evaluation, and the lessons learned over these three years.
Added By: Zaid Alyafeai

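The QA4MRE record also carries a Hugging Face link; on the Hub the dataset is organized into per-year, per-track, per-language configurations, so it is easier to discover the configuration names than to hard-code them. A minimal sketch, assuming the qa4mre repository marks its Arabic tracks with "AR" in the configuration name:

```python
from datasets import get_dataset_config_names, load_dataset

# Discover the per-year / per-track / per-language configurations rather than
# hard-coding a name; assumes Arabic configurations contain "AR".
configs = get_dataset_config_names("qa4mre")
arabic_configs = [name for name in configs if "AR" in name]
print(arabic_configs)

if arabic_configs:
    # Script-based Hub datasets may additionally need trust_remote_code=True
    # on recent versions of the datasets library.
    qa4mre_ar = load_dataset("qa4mre", arabic_configs[0])
    split_name = next(iter(qa4mre_ar))           # e.g. a single "train" split
    print(split_name, qa4mre_ar[split_name][0])  # one passage/question record
```
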