Schema (34 columns; each record below lists these fields in order, one value per line; string length ranges and class counts as reported by the dataset viewer):

Name: string (length 3–106)
Subsets: list
HF Link: string (length 3–69)
Link: string (length 24–135)
License: string (28 classes)
Year: int32 (2000–2020)
Language: string (2 classes)
Dialect: string (18 classes)
Domain: string (10 classes)
Form: string (3 classes)
Collection Style: string (7 classes)
Description: string (length 16–1,640)
Volume: string (length 1–13)
Unit: string (5 classes)
Ethical Risks: string (4 classes)
Provider: string (length 1–136)
Derived From: string (length 2–307)
Paper Title: string (length 3–143)
Paper Link: string (length 3–285)
Script: string (4 classes)
Tokenized: string (2 classes)
Host: string (21 classes)
Access: string (3 classes)
Cost: string (length 3–11)
Test Split: string (3 classes)
Tasks: string (length 8–181)
Venue Title: string (length 2–46)
Citations: string (length 3–6)
Venue Type: string (5 classes)
Venue Name: string (length 3–113)
Authors: string (length 3–923)
Affiliations: string (length 1–470)
Abstract: string (length 3–2,150)
Added By: string (length 3–25)
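A minimal parsing sketch in Python follows; the `values` list and the query at the end are hypothetical illustrations, not an official loader for this catalogue:

```python
# Minimal sketch: group the flat value lines of this dump into
# per-dataset dicts keyed by the 34 schema fields above. `values` is a
# hypothetical list with one field value per element, records
# concatenated in schema order; a trailing partial record is skipped.
FIELDS = [
    "Name", "Subsets", "HF Link", "Link", "License", "Year", "Language",
    "Dialect", "Domain", "Form", "Collection Style", "Description",
    "Volume", "Unit", "Ethical Risks", "Provider", "Derived From",
    "Paper Title", "Paper Link", "Script", "Tokenized", "Host", "Access",
    "Cost", "Test Split", "Tasks", "Venue Title", "Citations",
    "Venue Type", "Venue Name", "Authors", "Affiliations", "Abstract",
    "Added By",
]

def parse_records(values):
    """Yield one dict per complete 34-field record."""
    n = len(FIELDS)
    for i in range(0, len(values) - n + 1, n):
        yield dict(zip(FIELDS, values[i:i + n]))

values = []  # hypothetical: one field value per line of this dump
records = list(parse_records(values))

# Example query: names of free, GitHub-hosted NER corpora.
ner_free = [r["Name"] for r in records
            if "named entity recognition" in r["Tasks"]
            and r["Access"] == "Free" and r["Host"] == "GitHub"]
```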
AQMAR
[]
https://huggingface.co/datasets/arbml/AQMAR
https://www.cs.cmu.edu/~ark/ArabicNER/
CC BY-SA 3.0
2012
ar
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
This is a 74,000-token corpus of 28 Arabic Wikipedia articles hand-annotated for named entities.
74,000
tokens
Low
CMU
nan
Recall-Oriented Learning of Named Entities in Arabic Wikipedia
https://aclanthology.org/E12-1017.pdf
Arab
No
other
Free
nan
No
named entity recognition
EACL
64.0
conference
European Chapter of the Association for Computational Linguistics
B. Mohit,Nathan Schneider,Rishav Bhowmick,Kemal Oflazer,Noah A. Smith
,,,,
We consider the problem of NER in Arabic Wikipedia, a semisupervised domain adaptation setting for which we have no labeled training data in the target domain. To facilitate evaluation, we obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard categories. Standard supervised learning on newswire text leads to poor target-domain recall. We train a sequence model and show that a simple modification to the online learner---a loss function encouraging it to "arrogantly" favor recall over precision---substantially improves recall and F1. We then adapt our model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the self-training stage yields marginal gains.
Zaid Alyafeai
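Records that carry an HF Link, like AQMAR above, can usually be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming the `arbml/AQMAR` mirror loads with the standard API (split and feature names vary per dataset, so inspect the returned object first):

```python
# Minimal sketch, assuming the Hub mirror at the record's HF Link loads
# with the standard `datasets` API; split/feature names are whatever the
# hosted copy defines, so print the DatasetDict to inspect them.
from datasets import load_dataset

ds = load_dataset("arbml/AQMAR")  # HF Link from the AQMAR record above
print(ds)                  # shows available splits and their features
first_split = next(iter(ds))
print(ds[first_split][0])  # first example of the first split
```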
MASC
[]
https://huggingface.co/datasets/abdusahmbzuai/masc_dev
https://github.com/almoslmi/masc
custom
2018
ar
mixed
reviews
text
crawling and annotation(other)
Multi-domain Arabic Sentiment Corpus (MASC) with a size of 8860 positive and negative reviews from different domains
8,860
sentences
Low
Multiple institutions
nan
Arabic senti-lexicon: Constructing publicly available language resources for Arabic sentiment analysis
https://journals.sagepub.com/doi/full/10.1177/0165551516683908
Arab
No
GitHub
Free
nan
No
sentiment analysis, dialect identification, topic classification
JIS
54.0
journal
Journal of Information Science
Tareq Al-Moslmi,M. Albared,Adel Al-Shabi,N. Omar,S. Abdullah
,,,,
Sentiment analysis is held to be one of the highly dynamic recent research fields in Natural Language Processing, facilitated by the quickly growing volume of Web opinion data. Most of the approaches in this field are focused on English due to the lack of sentiment resources in other languages such as the Arabic language and its large variety of dialects. In most sentiment analysis applications, good sentiment resources play a critical role. Based on that, in this article, several publicly available sentiment analysis resources for Arabic are introduced. This article introduces the Arabic senti-lexicon, a list of 3880 positive and negative synsets annotated with their part of speech, polarity scores, dialects synsets and inflected forms. This article also presents a Multi-domain Arabic Sentiment Corpus (MASC) with a size of 8860 positive and negative reviews from different domains. In this article, an in-depth study has been conducted on five types of feature sets for exploiting effective features and investigating their effect on performance of Arabic sentiment analysis. The aim is to assess the quality of the developed language resources and to integrate different feature sets and classification algorithms to synthesise a more accurate sentiment analysis method. The Arabic senti-lexicon is used for generating feature vectors. Five well-known machine learning algorithms: naïve Bayes, k-nearest neighbours, support vector machines (SVMs), logistic linear regression and neural network are employed as base-classifiers for each of the feature sets. A wide range of comparative experiments on standard Arabic data sets were conducted, discussion is presented and conclusions are drawn. The experimental results show that the Arabic senti-lexicon is a very useful resource for Arabic sentiment analysis. Moreover, results show that classifiers which are trained on feature vectors derived from the corpus using the Arabic sentiment lexicon are more accurate than classifiers trained using the raw corpus.
Zaid Alyafeai
Arabic senti-lexicon
[]
https://huggingface.co/datasets/arbml/Senti_Lexicon
https://github.com/almoslmi/masc
custom
2018
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
a list of 3880 positive and negative synsets annotated with their part of speech, polarity scores, dialects synsets and inflected forms
3,880
tokens
Low
Multiple institutions
nan
Arabic senti-lexicon: Constructing publicly available language resources for Arabic sentiment analysis
https://journals.sagepub.com/doi/full/10.1177/0165551516683908
Arab
No
GitHub
Free
nan
No
part of speech tagging, sentiment analysis
JIS
54.0
journal
Journal of Information Science
Tareq Al-Moslmi,M. Albared,Adel Al-Shabi,N. Omar,S. Abdullah
,,,,
Sentiment analysis is held to be one of the highly dynamic recent research fields in Natural Language Processing, facilitated by the quickly growing volume of Web opinion data. Most of the approaches in this field are focused on English due to the lack of sentiment resources in other languages such as the Arabic language and its large variety of dialects. In most sentiment analysis applications, good sentiment resources play a critical role. Based on that, in this article, several publicly available sentiment analysis resources for Arabic are introduced. This article introduces the Arabic senti-lexicon, a list of 3880 positive and negative synsets annotated with their part of speech, polarity scores, dialects synsets and inflected forms. This article also presents a Multi-domain Arabic Sentiment Corpus (MASC) with a size of 8860 positive and negative reviews from different domains. In this article, an in-depth study has been conducted on five types of feature sets for exploiting effective features and investigating their effect on performance of Arabic sentiment analysis. The aim is to assess the quality of the developed language resources and to integrate different feature sets and classification algorithms to synthesise a more accurate sentiment analysis method. The Arabic senti-lexicon is used for generating feature vectors. Five well-known machine learning algorithms: naïve Bayes, k-nearest neighbours, support vector machines (SVMs), logistic linear regression and neural network are employed as base-classifiers for each of the feature sets. A wide range of comparative experiments on standard Arabic data sets were conducted, discussion is presented and conclusions are drawn. The experimental results show that the Arabic senti-lexicon is a very useful resource for Arabic sentiment analysis. Moreover, results show that classifiers which are trained on feature vectors derived from the corpus using the Arabic sentiment lexicon are more accurate than classifiers trained using the raw corpus.
Zaid Alyafeai
CANERCorpus
[]
https://huggingface.co/datasets/caner
https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus
unknown
2018
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
It is freely available and manually annotated by human experts, containing more than 7,000 Hadiths
72,108
tokens
Low
Universiti Kebangsaan
nan
Building the Classical Arabic Named Entity Recognition Corpus (CANERCorpus)
https://ieeexplore.ieee.org/document/8464820/authors#authors
Arab
No
GitHub
Free
nan
No
named entity recognition
CAMP
0.0
conference
International Conference on Information Retrieval and Knowledge Management
Ramzi Salah,Lailatul Qadri Binti Zakaria
,
The past decade has witnessed construction of background information resources to overcome several challenges in text mining tasks. For non-English languages with poor knowledge sources such as Arabic, these challenges have become more salient, especially for natural language processing applications that require human annotation. In the Named Entity Recognition (NER) task, several studies have been introduced to address the complexity of Arabic in terms of morphological and syntactical variations. However, only a small number of studies deal with Classical Arabic (CA), the language of the Quran and Hadith. CA was also used for archiving Islamic topics that contain a lot of useful information which could be of great value if extracted. Therefore, in this paper, we introduce the Classical Arabic Named Entity Recognition corpus as a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities. It is freely available and manually annotated by human experts, containing more than 7,000 Hadiths. Based on Islamic topics, we classify named entities into 20 types, which include specific-domain entities that have not been handled before, such as Allah, Prophet, Paradise, Hell, and Religion. The differences between Standard and Classical Arabic are described in detail in this work. Moreover, a comprehensive statistical analysis is introduced to measure the factors that play an important role in manual human annotation.
Zaid Alyafeai
SAD
[]
https://huggingface.co/datasets/arbml/SAD
http://www.cs.stir.ac.uk/~lss/arabic/
unknown
2014
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
manual curation
The Arabic speech corpus for isolated words contains 9992 utterances of 20 words spoken by 50 native male Arabic speakers.
6
hours
Low
University of Stirling
nan
On Improving the Classification Capability of Reservoir Computing for Arabic Speech Recognition
https://link.springer.com/content/pdf/10.1007%2F978-3-319-11179-7_29.pdf
Arab
No
other
Free
nan
Yes
speech recognition
ICANN
13.0
conference
International Conference on Artificial Neural Networks
A. Alalshekmubarak,Leslie S. Smith
,
Designing noise-resilient systems is a major challenge in the field of automated speech recognition (ASR). These systems are crucial for real-world applications where high levels of noise tend to be present. We introduce a noise robust system based on Echo State Networks and Extreme Kernel machines which we call ESNEKM. To evaluate the performance of the proposed system, we used our recently released public Arabic speech dataset and the well-known spoken Arabic digits (SAD) dataset. Different feature extraction methods considered in this study include mel-frequency cepstral coefficients (MFCCs), perceptual linear prediction (PLP) and RASTA- perceptual linear prediction. These extracted features were fed to the ESNEKM and the result compared with a baseline hidden Markov model (HMM), so that nine models were compared in total. ESNEKM models outperformed HMM models under all the feature extraction methods, noise levels, and noise types. The best performance was obtained by the model that combined RASTA-PLP with ESNEKM.
Zaid Alyafeai
ANTCORPUS
[]
https://huggingface.co/datasets/arbml/antcorpus
https://github.com/antcorpus/antcorpus.data
custom
2017
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
The ANT Corpus is an Arabic news text collection crawled from RSS feeds.
6,005
documents
Low
Multiple institutions
nan
ANT Corpus: An Arabic News Text Collection for Textual Classification
https://ieeexplore.ieee.org/abstract/document/8308275/authors#authors
Arab
No
GitHub
Free
nan
No
topic classification
AICCSA
17.0
conference
International Conference on Computer Systems and Applications
Amina Chouigui,Oussama Ben Khiroun,Bilel Elayeb
,,
We propose in this paper a new online Arabic corpus of news articles, named ANT Corpus, which is collected from RSS feeds. Each document represents an article structured in the standard XML TREC format. We use the ANT Corpus for Text Classification (TC) by applying the SVM and Naive Bayes (NB) classifiers to assign to each article its accurate predefined category. We also study in this work the contribution of term weighting, stop-word removal and light stemming to Arabic TC. The experimental results prove that text length considerably affects TC accuracy and that title words are not sufficiently significant to achieve good classification rates. In conclusion, the SVM method gives the best classification results on both the title and text parts.
Zaid Alyafeai
Religious Hate Speech
[]
https://huggingface.co/datasets/arbml/Religious_Hate_Speech
https://github.com/nuhaalbadi/Arabic_hatespeech
unknown
2018
ar
mixed
social media
text
crawling and annotation(other)
The training set contains 5,569 examples and the test set 567 examples, all collected from Twitter
6,136
sentences
High
Multiple institutions
nan
Are they Our Brothers? Analysis and Detection of Religious Hate Speech in the Arabic Twittersphere
https://ieeexplore.ieee.org/document/8508247/authors#authors
Arab
No
GitHub
Free
nan
Yes
hate speech detection
ASONAM
72.0
conference
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Nuha Albadi,Maram Kurdi,Shivakant Mishra
,,
Religious hate speech in the Arabic Twittersphere is a notable problem that requires developing automated tools to detect messages that use inflammatory sectarian language to promote hatred and violence against people on the basis of religious affiliation. Distinguishing hate speech from other profane and vulgar language is quite a challenging task that requires deep linguistic analysis. The richness of the Arabic morphology and the limited available resources for the Arabic language make this task even more challenging. To the best of our knowledge, this paper is the first to address the problem of identifying speech promoting religious hatred in the Arabic Twitter. In this work, we describe how we created the first publicly available Arabic dataset annotated for the task of religious hate speech detection and the first Arabic lexicon consisting of terms commonly found in religious discussions along with scores representing their polarity and strength. We then developed various classification models using lexicon-based, n-gram-based, and deep-learning-based approaches. A detailed comparison of the performance of different models on a completely new unseen dataset is then presented. We find that a simple Recurrent Neural Network (RNN) architecture with Gated Recurrent Units (GRU) and pre-trained word embeddings can adequately detect religious hate speech with 0.84 Area Under the Receiver Operating Characteristic curve (AUROC).
Zaid Alyafeai
omcca
[ { "Name": "Saudi ", "Dialect": "ar-SA: (Arabic (Saudi Arabia))", "Volume": "18,282", "Unit": "sentences" }, { "Name": "Jordan", "Dialect": "ar-JO: (Arabic (Jordan))", "Volume": "10,294", "Unit": "sentences" } ]
nan
https://github.com/AhmedObaidi/omcca
unknown
2016
ar
mixed
reviews
text
crawling and annotation(other)
An opinion mining corpus for the colloquial variety of the Arabic language, covering Saudi and Jordanian reviews
28,576
sentences
Low
Multiple institutions
nan
Opinion Mining: Analysis of Comments Written in Arabic Colloquial
http://www.iaeng.org/publication/WCECS2016/WCECS2016_pp470-475.pdf
Arab
No
GitHub
Free
nan
No
dialect identification, sentiment analysis
WCECS
8.0
conference
World Congress on Engineering and Computer Science
Ahmed Y. Al-Obaidi,V. Samawi
,
In Arab nations, people tend to express their opinions using colloquial dialects depending on the country to which they belong. Analyzing reviews written in various Arabic dialects is a challenging problem, because some words can have different meanings in different dialects. Furthermore, dialects may contain words that do not belong to Classical Arabic. This research tackles the problem of sentiment analysis of reviews and comments written in colloquial dialects of Arabic, in which the ability of different machine learning algorithms and features is examined for polarity determination. In this work, people's reviews (written in different dialects) are classified into positive or negative opinions. Each dialect comes with its own stop-word list; consequently, a list of stop-words that suits different dialects in addition to Modern Standard Arabic (MSA) is suggested. In this paper, a light stemmer that suits dialects is developed. Two feature sets are utilized (bag of words (BoW) and word N-grams) to investigate their effectiveness in sentiment analysis. Finally, Naïve Bayes, Support Vector Machine (SVM), and Maximum Entropy machine learning algorithms are applied to study their performance in opinion mining, with F1-measure used to evaluate their performance. To train and test the suggested system, we built a corpus of reviews written in two dialects (Saudi and Jordanian). The testing results show that Maximum Entropy outperforms the other two machine learning algorithms, and that using N-grams (with N=3) as the feature set improves the performance of all three.
Zaid Alyafeai
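The Subsets field, when non-empty (as in the omcca record above), is a JSON list of per-dialect splits. A minimal sketch of reading it, assuming the comma-formatted Volume strings used throughout this catalogue:

```python
# Minimal sketch: parse the omcca record's JSON Subsets field and check
# that the per-dialect volumes sum to the record-level Volume (28,576).
import json

subsets_json = """[
  {"Name": "Saudi ", "Dialect": "ar-SA: (Arabic (Saudi Arabia))",
   "Volume": "18,282", "Unit": "sentences"},
  {"Name": "Jordan", "Dialect": "ar-JO: (Arabic (Jordan))",
   "Volume": "10,294", "Unit": "sentences"}
]"""

subsets = json.loads(subsets_json)
total = sum(int(s["Volume"].replace(",", "")) for s in subsets)
print(total)  # 28576, matching the omcca Volume field
```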
Corpus of Offensive Language in Arabic
[]
nan
https://onedrive.live.com/?authkey=!ACDXj_ZNcZPqzy0&id=6EF6951FBF8217F9!105&cid=6EF6951FBF8217F9
unknown
2018
ar
mixed
social media
text
crawling and annotation(other)
a dataset of YouTube comments in Arabic, specifically designed to be used for the detection of offensive language in a machine learning scenario
15,050
sentences
High
University of Limerick
nan
Dataset Construction for the Detection of Anti-Social Behaviour in Online Communication in Arabic
https://core.ac.uk/download/pdf/211161742.pdf
Arab
No
OneDrive
Free
nan
No
offensive language detection, hate speech detection
ACLING
33.0
conference
International Conference on AI in Computational Linguistics
Azalden Alakrot,Liam Murray,Nikola S. Nikolov
,,
Warning: this paper contains a range of words which may cause offence. In recent years, many studies target anti-social behaviour such as offensive language and cyberbullying in online communication. Typically, these studies collect data from various reachable sources, the majority of the datasets being in English. However, to the best of our knowledge, there is no dataset collected from the YouTube platform targeting Arabic text and overall there are only a few datasets of Arabic text, collected from other social platforms for the purpose of offensive language detection. Therefore, in this paper we contribute to this field by presenting a dataset of YouTube comments in Arabic, specifically designed to be used for the detection of offensive language in a machine learning scenario. Our dataset contains a range of offensive language and flaming in the form of YouTube comments. We document the labelling process we have conducted, taking into account the difference in the Arab dialects and the diversity of perception of offensive language throughout the Arab world. Furthermore, statistical analysis of the dataset is presented, in order to make it ready for use as a training dataset for predictive modelling.
Zaid Alyafeai
Qatari heritage corpus
[]
nan
https://data.world/saraalmulla/qatari-heritage-expressions
CDLA-Permissive-1.0
2020
ar
ar-QA: (Arabic (Qatar))
other
text
crawling and annotation(other)
Qatari heritage expressions dataset with translations
1,000
sentences
Low
Hamad Bin Khalifa University
nan
Building a Corpus of Qatari Arabic Expressions
https://aclanthology.org/2020.osact-1.4.pdf
Arab
No
data.world
Upon-Request
nan
No
machine translation
OSACT
1.0
workshop
Workshop on Open-Source Arabic Corpora and Processing Tools
Sara Al-Mulla,W. Zaghouani
,
Current Arabic natural language processing resources are mainly built to address Modern Standard Arabic (MSA), while we have witnessed some scattered efforts to build resources for various Arabic dialects such as the Levantine and Egyptian dialects. We observed a lack of resources for Gulf Arabic and especially the Qatari variety. In this paper, we present the first Qatari idioms and expressions corpus of 1000 entries. The corpus was created from online and printed sources in addition to transcribed recorded interviews. The corpus covers various Qatari traditional expressions and idioms. To this end, audio recordings were collected from interviews and an online survey questionnaire was conducted to validate our data. This corpus aims to help advance dialectal Arabic Speech and Natural Language Processing tools and applications for the Qatari dialect.
Zaid Alyafeai
KSU Rich Arabic Speech Database
[]
nan
https://catalog.ldc.upenn.edu/LDC2014S02
custom
2013
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
crawling and annotation(other)
It has 752 speakers; the speakers are from different ethnic groups: Saudis, Arabs, and non-Arabs.
590
hours
Low
King Saud University
nan
KSU Rich Arabic Speech Database
https://catalog.ldc.upenn.edu/docs/LDC2014S02/KSU-Rich-Arabic-Speech-Database.pdf
Arab
No
LDC
With-Fee
2000 $
No
speech recognition
other
20.0
preprint
nan
M. Alsulaiman,G. Muhammad,M. Bencherif,A. Mahmood,Z. Ali
,,,,
Arabic is one of the major languages in the world. Unfortunately not so much research in Arabic speaker recognition has been done. One main reason for this lack of research is the unavailability of rich Arabic speech databases. In this paper, we present a rich and comprehensive Arabic speech database that we developed for the Arabic speaker / speech recognition research and/or applications. The database is rich in different aspects: (a) it has 752 speakers; (b) the speakers are from different ethnic groups: Saudis, Arabs, and non-Arabs; (c) utterances are both read text and spontaneous; (d) scripts are of different dimensions, such as, isolated words, digits, phonetically rich words, sentences, phonetically balanced sentences, paragraphs, etc.; (e) different sets of microphones with medium and high quality; (f) telephony and non-telephony speech; (g) three different recording environments: office, sound proof room, and cafeteria; (h) three different sessions, where the recording sessions are scheduled at least with 2 weeks interval. Because of the richness of this database, it can be used in many Arabic, and non-Arabic, speech processing researches, such as speaker / speech recognition, speech analysis, accent identification, ethnic groups / nationality recognition, etc. The richness of the database makes it a valuable resource for research in Arabic speech processing in particular and for research in speech processing in general. The database was carefully manually verified. The manual verification was complemented with automatic verification. Validation was performed on a subset of the database where the recognition rate reached 100% for Saudi speakers and 96% for non-Saudi speakers by using a system with 12 Mel frequency Cepstral coefficients, and 32 Gaussian mixtures.
Zaid Alyafeai
XTREME
[]
https://huggingface.co/datasets/xtreme
https://github.com/google-research/xtreme
Apache-2.0
2020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
contains many datasets from different benchmarks, such as XNLI and TyDiQA
nan
sentences
Low
Google
TyDiQA, XNLI, etc.
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
https://arxiv.org/pdf/2003.11080.pdf
Arab
No
GitHub
Free
nan
Yes
natural language inference, part of speech tagging, named entity recognition, question answering, machine translation
ICML
209.0
conference
International Conference on Machine Learning
Junjie Hu,Sebastian Ruder,Aditya Siddhant,Graham Neubig,Orhan Firat,M. Johnson
,DeepMind,,,,
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
Zaid Alyafeai
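For benchmark entries like XTREME, the individual Arabic slices are exposed as Hub configs. A minimal sketch, assuming the `PAN-X.ar` config name (an assumption about the Hub's naming, which the first line checks):

```python
# Minimal sketch for an Arabic slice of XTREME; "PAN-X.ar" is an
# assumed config name, so list the configs first if the call fails.
from datasets import get_dataset_config_names, load_dataset

print([c for c in get_dataset_config_names("xtreme") if c.endswith(".ar")])
panx_ar = load_dataset("xtreme", "PAN-X.ar")  # Arabic WikiANN-style NER
print(panx_ar["validation"][0])               # tokens with NER tags
```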
Arabic document classification dataset
[]
https://huggingface.co/datasets/arbml/Document_Classification
https://diab.edublogs.org/dataset-for-arabic-document-classification/
unknown
2014
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
The dataset contains nine major disciplines: Art, Literature, Religion, Politics, Law, Economy, Sport, Health, and Technology.
2,700
documents
Low
Multiple institutions
nan
On the Impact of Dataset Characteristics on Arabic Document Classification
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.800.5666&rep=rep1&type=pdf
Arab
Yes
other
Free
nan
No
topic classification
IJCA
11.0
journal
International Journal of Computer Applications
Diab Abuaiadah,Jihad El Sana,Walid Abusalah
,,
This paper describes the impact of dataset characteristics on the results of Arabic document classification algorithms using TF-IDF representations. The experiments compared different stemmers, different categories and different training set sizes, and found that different dataset characteristics produced widely differing results, in one case attaining a remarkable 99% recall (accuracy). The use of a standard dataset would eliminate this variability and enable researchers to gain comparable knowledge from the published results.
Mustafa Ghaleb
KSUCCA Corpus
[]
https://huggingface.co/datasets/arbml/KSUCCA
https://sourceforge.net/projects/ksucca-corpus/
unknown
2013
ar
ar-CLS: (Arabic (Classic))
books
text
crawling
KSUCCA is a pioneering 50+ million word corpus of pure Classical Arabic.
410
documents
Low
King Saud University
nan
The Design and Construction of the 50 Million Words KSUCCA
https://eprints.whiterose.ac.uk/81860/1/TheDesignConstruction50MillionWordKSUCCA.pdf
Arab
No
sourceforge
Free
nan
No
semantic similarity
other
30.0
preprint
nan
Maha AlRabiah,A. Al-Salman,E. Atwell
,,
In this paper, we report the design and construction of King Saud University Corpus of Classical Arabic (KSUCCA), which is part of ongoing research that attempts to study the meanings of words used in the holy Quran, through analysis of their distributional semantics in contemporaneous texts. The holy Quranic text was revealed in pure Classical Arabic, which forms the basis of Arabic linguistic theory and which is well understood by the educated Arabic reader. Therefore, it is necessary to investigate the distributional lexical semantics of the Quran's words in the light of similar texts (corpus) that are written in pure Classical Arabic. To the best of our knowledge, there exist only two corpora of Classical Arabic; one is part of the King Abdulaziz City for Science and Technology Arabic Corpus (KACST Arabic Corpus) and the other is the Classical Arabic Corpus (CAC) (Elewa, 2009). However, neither of the two corpora is adequate for our research; the former does not cover many genres such as: Linguistics, Literature, Science, Sociology and Biography; and it only contains 17+ million words, so it is not very large. While the latter is even smaller with only 5 million words. Therefore, we made an effort to carefully design and compose our own corpus bearing in mind that it should be large enough, balanced, and representative so that any result obtained from it can be generalized for Classical Arabic. In addition, we tried to make the design general enough in order to make the corpus also appropriate for other research.
Mustafa Ghaleb
XNLI
[]
https://huggingface.co/datasets/xnli
https://github.com/facebookresearch/XNLI
CC BY-NC 4.0
2018
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
human translation
An evaluation set for XLU, created by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages.
7,500
sentences
Low
Facebook, NYU
nan
XNLI: Evaluating Cross-lingual Sentence Representations
https://arxiv.org/pdf/1809.05053.pdf
Arab
No
GitHub
Free
nan
Yes
natural language inference
EMNLP
354.0
conference
Conference on Empirical Methods in Natural Language Processing
Alexis Conneau,Guillaume Lample,Ruty Rinott,Adina Williams,Samuel R. Bowman,Holger Schwenk,Veselin Stoyanov
,,,Facebook AI Research,New York University,,
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in cross-lingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
Zaid Alyafeai
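The XNLI entry above can be restricted to Arabic via a per-language Hub config. A minimal sketch, assuming the "ar" config name:

```python
# Minimal sketch, assuming xnli exposes a per-language "ar" config on
# the Hub; each example is a premise/hypothesis pair with an NLI label.
from datasets import load_dataset

xnli_ar = load_dataset("xnli", "ar")
ex = xnli_ar["test"][0]
print(ex["premise"], ex["hypothesis"], ex["label"])
```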
OSAC
[]
https://huggingface.co/datasets/arbml/OSAC_CNN
https://sourceforge.net/projects/ar-text-mining/files/Arabic-Corpora/
unknown
2010
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
OSAC is the largest freely accessible Arabic corpora collection, containing about 18M words and about 0.5M distinct keywords.
22,429
documents
Low
Islamic University of Gaza
nan
OSAC: Open Source Arabic Corpora
http://site.iugaza.edu.ps/wp-content/uploads/mksaad-OSAC-OpenSourceArabicCorpora-EECS10-rev9(1).pdf
Arab
No
sourceforge
Free
nan
No
topic classification
other
101.0
preprint
nan
Motaz Saad,W. Ashour
The Islamic University of Gaza,
Arabic linguistics is a promising research field. The acute lack of freely accessible public Arabic corpora is one of the major difficulties that Arabic linguistics research faces. This paper is a step towards supporting the Arabic linguistics research field: it presents the complex nature of the Arabic language and poses two problems, (1) the lack of free public Arabic corpora and (2) the lack of high-quality, well-structured Arabic digital content. The paper finally presents OSAC, the largest freely accessible Arabic corpora collection, which we collected.
Zaid Alyafeai
wikiann
[]
https://huggingface.co/datasets/wikiann
https://drive.google.com/drive/folders/1Q-xdT99SeaCghihGa7nRkcXGwRGUIsKN
unknown
2017
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.
185,000
tokens
Low
Multiple institutions
nan
Cross-lingual Name Tagging and Linking for 282 Languages
https://aclanthology.org/P17-1178.pdf
Arab
No
Gdrive
Free
nan
Yes
named entity recognition
ACL
168.0
conference
Association for Computational Linguistics
Xiaoman Pan,Boliang Zhang,Jonathan May,J. Nothman,Kevin Knight,Heng Ji
,,,,,
The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating “silver-standard” annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.
Zaid Alyafeai
Tanzil
[]
https://huggingface.co/datasets/tanzil
https://opus.nlpl.eu/Tanzil.php
custom
2012
multilingual
ar-CLS: (Arabic (Classic))
other
text
human translation
This is a collection of Quran translations compiled by the Tanzil project
12,472
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
1006.0
conference
International Conference on Language Resources and Evaluation
J. Tiedemann
nan
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
Tatoeba
[]
https://huggingface.co/datasets/tatoeba
https://tatoeba.org/en/sentences/show_all_in/ara/none/none/indifferent
CC BY 2.0
2012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
human translation
Tatoeba covers 338 languages and contains a total of 7.8 million sentences.
36,792
sentences
Low
OPUS
nan
Parallel Data, Tools and Interfaces in OPUS
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
Arab
No
other
Free
nan
No
machine translation
LREC
1006.0
conference
International Conference on Language Resources and Evaluation
J. Tiedemann
nan
This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.
Zaid Alyafeai
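The Tatoeba entry is a translation-pair resource; on the Hub its loader is parameterized by a language pair. A minimal sketch, assuming `lang1`/`lang2` keyword arguments taking ISO 639 codes:

```python
# Minimal sketch, assuming the Hub's tatoeba loader accepts a language
# pair via lang1/lang2 keyword arguments; each example holds a
# "translation" dict keyed by language code.
from datasets import load_dataset

tatoeba_ar_en = load_dataset("tatoeba", lang1="ar", lang2="en")
print(tatoeba_ar_en["train"][0]["translation"])  # {"ar": ..., "en": ...}
```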
TaPaCo
[]
https://huggingface.co/datasets/tapaco
https://zenodo.org/record/3707949#.YRKu0ogzaUk
CC BY 2.0
2020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
The corpus contains a total of 1.9 million sentences, with 200,000–250,000 sentences per language
6,446
sentences
Low
University of Helsinki
Tatoeba
TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages
https://aclanthology.org/2020.lrec-1.848.pdf
Arab
No
zenodo
Free
nan
No
paraphrasing
LREC
4.0
conference
International Conference on Language Resources and Evaluation
Yves Scherrer
nan
This paper presents TaPaCo, a freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words. The paraphrase corpus is created by populating a graph with Tatoeba sentences and equivalence links between sentences “meaning the same thing”. This graph is then traversed to extract sets of paraphrases. Several language-independent filters and pruning steps are applied to remove uninteresting sentences. A manual evaluation performed on three languages shows that between half and three quarters of inferred paraphrases are correct and that most remaining ones are either correct but trivial, or near-paraphrases that neutralize a morphological distinction. The corpus contains a total of 1.9 million sentences, with 200,000–250,000 sentences per language. It covers a range of languages for which, to our knowledge, no other paraphrase dataset exists. The dataset is available at https://doi.org/10.5281/zenodo.3707949.
Zaid Alyafeai
LASER
[]
nan
https://github.com/facebookresearch/LASER
BSD
2019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
human translation
aligned sentences in 112 languages
1,000
sentences
Low
Multiple institutions
Tatoeba
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
https://arxiv.org/pdf/1812.10464.pdf
Arab
No
GitHub
Free
nan
No
machine translation
TACL
374.0
journal
Transactions of the Association for Computational Linguistics
Mikel Artetxe,Holger Schwenk
,
We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.
Zaid Alyafeai
NArabizi treebank
[]
nan
https://parsiti.github.io/NArabizi/
unknown
2020
ar
ar-DZ: (Arabic (Algeria))
other
text
crawling and annotation(other)
fully annotated in morpho-syntax and Universal Dependency syntax, with full translation at both the word and the sentence levels
1,500
sentences
Low
Inria,Sorbonne Université
nan
Building a User-Generated Content North-African Arabizi Treebank: Tackling Hell
https://aclanthology.org/2020.acl-main.107.pdf
Arab-Latn
Yes
GitHub
Free
nan
Yes
part of speech tagging, dependency parsing, machine translation
ACL
14.0
conference
Association for Computational Linguistics
Djamé Seddah,Farah Essaidi,Amal Fethi,Matthieu Futeral,Benjamin Müller,Pedro Javier Ortiz Suárez,B. Sagot,Abhishek Srivastava
Inria;Sorbonne Université,,,,,Inria;Sorbonne Université,,
We introduce the first treebank for a romanized user-generated content variety of Algerian, a North-African Arabic dialect known for its frequent usage of code-switching. Made of 1500 sentences, fully annotated in morpho-syntax and Universal Dependency syntax, with full translation at both the word and the sentence levels, this treebank is made freely available. It is supplemented with 50k unlabeled sentences collected from Common Crawl and web-crawled data using intensive data-mining techniques. Preliminary experiments demonstrate its usefulness for POS tagging and dependency parsing. We believe that what we present in this paper is useful beyond the low-resource language community. This is the first time that enough unlabeled and annotated data is provided for an emerging user-generated content dialectal language with rich morphology and code switching, making it a challenging test-bed for most recent NLP approaches.
Zaid Alyafeai
NArabizi corpus
[]
https://huggingface.co/datasets/arbml/NArabizi
https://github.com/SamiaTouileb/Narabizi
unknown
2021
ar
ar-DZ: (Arabic (Algeria))
other
text
crawling and annotation(other)
An extension of the NArabizi treebank with additional annotation layers, including sentiment, topic, and transliteration.
1,500
sentences
Low
University of Oslo
NArabizi treebank
The interplay between language similarity and script on a novel multi-layer Algerian dialect corpus
https://arxiv.org/pdf/2105.07400.pdf
Arab-Latn
Yes
GitHub
Free
nan
Yes
part of speech tagging, dependency parsing, machine translation, sentiment analysis, transliteration, topic classification
FINDINGS
1.0
conference
Findings of the Association for Computational Linguistics
Samia Touileb,Jeremy Barnes
,University of the Basque Country UPV/EHU
Recent years have seen a rise in interest for cross-lingual transfer between languages with similar typology, and between languages of various scripts. However, the interplay between language similarity and difference in script on cross-lingual transfer is a less studied problem. We explore this interplay on cross-lingual transfer for two supervised tasks, namely part-of-speech tagging and sentiment analysis. We introduce a newly annotated corpus of Algerian user-generated comments comprising parallel annotations of Algerian written in Latin, Arabic, and code-switched scripts, as well as annotations for sentiment and topic categories. We perform baseline experiments by fine-tuning multi-lingual language models. We further explore the effect of script vs. language similarity in cross-lingual transfer by fine-tuning multi-lingual models on languages which are a) typologically distinct, but use the same script, b) typologically similar, but use a distinct script, or c) are typologically similar and use the same script. We find there is a delicate relationship between script and typology for part-of-speech, while sentiment analysis is less sensitive.
Zaid Alyafeai
DefArabicQA
[]
nan
https://sites.google.com/site/anlprg/outils-et-corpus-realises
unknown
2010
ar
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
2,000 snippets returned by the Google search engine and the Arabic version of Wikipedia, plus a set of 50 organization definition questions
2,000
sentences
Low
Multiple institutions
nan
DefArabicQA: Arabic Definition Question Answering System
http://personales.upv.es/prosso/resources/TriguiEtAl_LREC10.pdf
Arab
No
other
Free
nan
No
question answering
LREC
51.0
conference
International Conference on Language Resources and Evaluation
Omar Trigui,L. Belguith,P. Rosso
,,
Today the Web is the largest resource of knowledge and, therefore, this sometimes makes it difficult to find precise information. Current search engines can only return ranked snippets containing the effective answers to a user query, but they cannot return the exact answers. Question Answering systems present the solution to obtain effective and exact answers to a user question asked in natural language instead of a keyword query. Unfortunately, the Question Answering task for the Arabic language has not been investigated enough in the last decade, compared to other languages. In this paper, we tackle the definitional Question Answering task for the Arabic language. We propose an Arabic definitional Question Answering system based on a pattern approach to identify exact and accurate definitions about organizations using Web resources. We evaluated this system using 2000 snippets returned by the Google search engine and the Arabic version of Wikipedia, and a set of 50 organization definition questions. The obtained results are very encouraging: 90% of the questions used have complete (vital) definitions in the top-five answers and 64% of them have complete definitions in the top-one answer. MRR was 0.81.
Zaid Alyafeai
TuDiCoI
[]
https://huggingface.co/datasets/arbml/TuDiCoI
https://sites.google.com/site/anlprg/outils-et-corpus-realisess
unknown
2010
ar
ar-TN: (Arabic (Tunisia))
transcribed audio
spoken
manual curation
The corpus consists of 434 dialogues, comprising 1,465 staff utterances and 1,615 client utterances
127
sentences
Low
University of Sfax
nan
Lexical Study of a Spoken Dialogue Corpus in Tunisian Dialect
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.452.7847&rep=rep1&type=pdf
Arab
No
other
Free
nan
No
speech recognition
other
15.0
preprint
nan
M. Graja,M. Jaoua,L. Belguith
,,
The aim of this paper is to present a lexical study of a spoken dialogue corpus in Tunisian dialect, since such resources do not currently exist. The lexical analysis makes it possible to take into account the specificity of Tunisian dialect by identifying lexical varieties and significant elements used in spoken dialogue. This can lead us to provide a useful characterization for dialogue systems and help us to develop models and methods specifically designed for Tunisian dialect.
Zaid Alyafeai
AFEWC
[]
https://huggingface.co/datasets/arbml/AFEWC
https://sourceforge.net/projects/crlcl/
CC BY-NC 2.0
2013
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
a corpus of comparable multilingual text articles in Arabic, French, and English
40,290
documents
Low
INRIA
nan
Extracting Comparable Articles from Wikipedia and Measuring their Comparabilities
https://www.sciencedirect.com/science/article/pii/S1877042813041402
Arab
No
sourceforge
Free
nan
No
machine translation
PSBS
21.0
journal
Procedia Social and Behavioral Sciences
Motaz K. Saad,David Langlois,Kamel Smaïli
,,
Parallel corpora are not available for all domains and languages, but statistical methods in multilingual research domains require huge parallel/comparable corpora. Comparable corpora can be used when parallel ones are not sufficient or not available for specific domains and languages. In this paper, we propose a method to extract all comparable articles from Wikipedia for multiple languages based on interlanguage links. We also extract comparable articles from the Euro News website, and we present two comparability measures (CM) to compute the degree of comparability of multilingual articles. We extracted about 40K and 34K comparable articles from Wikipedia and Euro News respectively in three languages: Arabic, French, and English. Experimental results show that our comparability measure can capture the comparability of multilingual corpora and allows retrieving articles in different languages concerning the same topic.
Zaid Alyafeai
Arabic Named Entities
[]
https://huggingface.co/datasets/arbml/Arabic_Named_Entities
https://sourceforge.net/projects/arabicnes/
unknown
2010
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling and annotation(other)
approximately 45,000 automatically extracted Arabic named entities
45,000
tokens
Low
Multiple institutions
nan
An automatically built Named Entity lexicon for Arabic
http://doras.dcu.ie/15979/1/An_automatically_built_Named_Entity_lexicon_for_Arabic.pdf
Arab
No
sourceforge
Free
nan
No
named entity recognition
LREC
41.0
conference
International Conference on Language Resources and Evaluation
Mohammed Attia,Antonio Toral,L. Tounsi,M. Monachini,Josef van Genabith
,University of Groningen,,,
We have successfully adapted and extended the automatic Multilingual, Interoperable Named Entity Lexicon approach to Arabic, using Arabic WordNet (AWN) and Arabic Wikipedia (AWK). First, we extract AWN’s instantiable nouns and identify the corresponding categories and hyponym subcategories in AWK. Then, we exploit Wikipedia inter-lingual links to locate correspondences between articles in ten different languages in order to identify Named Entities (NEs). We apply keyword search on AWK abstracts to provide for Arabic articles that do not have a correspondence in any of the other languages. In addition, we perform a post-processing step to fetch further NEs from AWK not reachable through AWN. Finally, we investigate diacritization using matching with geonames databases, MADA-TOKAN tools and different heuristics for restoring vowel marks of Arabic NEs. Using this methodology, we have extracted approximately 45,000 Arabic NEs and built, to the best of our knowledge, the largest, most mature and well-structured Arabic NE lexical resource to date. We have stored and organised this lexicon following the Lexical Markup Framework (LMF) ISO standard. We conduct a quantitative and qualitative evaluation of the lexicon against a manually annotated gold standard and achieve precision scores from 95.83% (with 66.13% recall) to 99.31% (with 61.45% recall) according to different values of a threshold.
Zaid Alyafeai
AraFacts
[]
https://huggingface.co/datasets/arbml/AraFacts
https://gitlab.com/bigirqu/AraFacts/
CC BY-NC 4.0
2021
ar
mixed
other
text
crawling and annotation(other)
consists of 6,222 claims along with their factual labels and additional metadata, such as fact-checking article content, topical category, and links to posts or Web pages spreading the claim
6,222
sentences
Low
Qatar University
nan
AraFacts: The First Large Arabic Dataset of Naturally Occurring Claims
https://aclanthology.org/2021.wanlp-1.26.pdf
Arab
No
GitLab
Free
nan
No
topic classification, claim verification
WANLP
0.0
workshop
Arabic Natural Language Processing Workshop
Zien Sheikh Ali,Watheq Mansour,Tamer Elsayed,A. Al-Ali
,,,
We introduce AraFacts, the first large Arabic dataset of naturally occurring claims collected from 5 Arabic fact-checking websites, e.g., Fatabyyano and Misbar, and covering claims since 2016. Our dataset consists of 6,121 claims along with their factual labels and additional metadata, such as fact-checking article content, topical category, and links to posts or Web pages spreading the claim. Since the data is obtained from various fact-checking websites, we standardize the original claim labels to provide a unified label rating for all claims. Moreover, we provide revealing dataset statistics and motivate its use by suggesting possible research applications. The dataset is made publicly available for the research community.
Zaid Alyafeai
ASAD
[]
nan
https://wti.kaust.edu.sa/solve/Arabic-Sentiment-Analysis-Challenge
unknown
2020
ar
mixed
social media
text
crawling and annotation(other)
ASAD is a large, high-quality annotated dataset (including 95K tweets), with three-class sentiment labels (positive, negative and neutral)
100,000
sentences
Medium
Multiple institutions
nan
ASAD: A Twitter-based Benchmark Arabic Sentiment Analysis Dataset
https://arxiv.org/pdf/2011.00578.pdf
Arab
No
other
Upon-Request
nan
Yes
sentiment analysis
ArXiv
2.0
preprint
ArXiv
Basma Alharbi,Hind Alamro,Manal Alshehri,Zuhair Khayyat,Manal Kalkatawi,I. Jaber,X. Zhang
,,,,,,
This paper provides a detailed description of a new Twitter-based benchmark dataset for Arabic Sentiment Analysis (ASAD), which is launched in a competition sponsored by KAUST, awarding 10000 USD, 5000 USD and 2000 USD to the first, second and third place winners, respectively. Compared to other publicly released Arabic datasets, ASAD is a large, high-quality annotated dataset (including 95K tweets), with three-class sentiment labels (positive, negative and neutral). We present the details of the data collection and annotation processes. In addition, we implement several baseline models for the competition task and report the results as a reference for the participants in the competition.
Zaid Alyafeai
ASTAD
[]
https://huggingface.co/datasets/arbml/Sentiment_Analysis_Tweets
https://github.com/motazsaad/arabic-sentiment-analysis
Apache-2.0
2020
ar
mixed
social media
text
crawling and annotation(other)
It contains 58K Arabic tweets (47K training, 11K test) annotated with positive and negative labels
58,000
sentences
Medium
Multiple institutions
nan
An Arabic Tweets Sentiment Analysis Dataset (ATSAD) using Distant Supervision and Self Training
https://aclanthology.org/2020.osact-1.1.pdf
Arab
No
GitHub
Free
nan
Yes
sentiment analysis
OSACT
4.0
workshop
Workshop on Open-Source Arabic Corpora and Processing Tools
Kathrein Abu Kwaik,S. Chatzikyriakidis,Simon Dobnik,Motaz Saad,Richard Johansson
,,,The Islamic University of Gaza,
As the number of social media users increases, they express their thoughts and needs, socialise, and publish their opinions and reviews. Good social media sentiment analysis requires good quality resources, and the lack of these resources is particularly evident for languages other than English, in particular Arabic. The available Arabic resources suffer from either the size of the corpus or the quality of the annotation. In this paper, we present an Arabic Sentiment Analysis Corpus collected from Twitter, which contains 36K tweets labelled as positive or negative. We employed distant supervision and self-training approaches to annotate the corpus. In addition, we release 8K manually annotated tweets as a gold standard. We evaluated the corpus intrinsically by comparing it to human classification and pre-trained sentiment analysis models. Moreover, we apply extrinsic evaluation methods on the sentiment analysis task and achieve an accuracy of 86%.
Zaid Alyafeai
PAAD
[]
https://huggingface.co/datasets/arbml/PAAD
https://data.mendeley.com/datasets/spvbf5bgjs/2
CC BY 4.0
2020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
The dataset consists of 206 articles distributed across three categories (Reform, Conservative, and Revolutionary), offered to the research community working on Arabic computational linguistics.
206
documents
Low
Multiple institutions
nan
PAAD: Political Arabic Articles Dataset for Automatic Text Categorization
https://ijci.uoitc.edu.iq/index.php/ijci/article/view/246/174
Arab
No
Mendeley Data
Free
nan
No
topic classification
IJCI
3.0
journal
Iraqi Journal of Computer and Informatics
Dhafar Hamed Abd,Ahmed T. Sadiq,Ayad R. Abbas
,,
Nowadays, text classification and sentiment analysis are considered among the most popular Natural Language Processing (NLP) tasks. These techniques play a significant role in human activities and have an impact on daily behaviour. Articles in different fields such as politics and business represent different opinions according to the writer's tendency, and a huge amount of data can be acquired through that differentiation. The goal is the capability to determine the political orientation of an online article automatically. However, no corpus for political categorization has been directed towards this task in Arabic, due to the lack of rich representative resources for training an Arabic text classifier. We therefore introduce the Political Arabic Articles Dataset (PAAD), textual data collected from newspapers, social networks, general forums and ideology websites. The dataset consists of 206 articles distributed across three categories (Reform, Conservative and Revolutionary) that we offer to the research community on Arabic computational linguistics. We anticipate that this dataset will be a great aid for a variety of NLP tasks on Modern Standard Arabic, particularly political text classification. We present the data in raw form and as an Excel file. The Excel file comes in four versions: V1 raw data, V2 preprocessing, V3 root stemming and V4 light stemming.
Zaid Alyafeai
NLP dataset for Arabic dialects
[ { "Name": "Algeria", "Dialect": "ar-DZ: (Arabic (Algeria))", "Volume": "13,393", "Unit": "sentences" }, { "Name": "Lebanon", "Dialect": "ar-LB: (Arabic (Lebanon))", "Volume": "14,482", "Unit": "sentences" }, { "Name": "Morocco", "Dialect": "ar-MA: (Arabic (Morocco))", "Volume": "9,965", "Unit": "sentences" }, { "Name": "Tunisia", "Dialect": "ar-TN: (Arabic (Tunisia))", "Volume": "8,044", "Unit": "sentences" }, { "Name": "Egypt", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "7,519", "Unit": "sentences" } ]
nan
https://msda.um6p.ma/msda_datasets
unknown
2,021
ar
mixed
social media
text
crawling and annotation(other)
This data was collected from the Twitter social network and consists of more than 50K tweets in five (5) national dialects
52,210
sentences
Medium
Multiple institutions
nan
An open access NLP dataset for Arabic dialects: data collection, labeling, and model construction
https://arxiv.org/pdf/2102.11000.pdf
Arab
No
other
Free
nan
No
dialect identification, topic classification, sentiment analysis
ArXiv
0.0
preprint
ArXiv
ElMehdi Boujou,Hamza Chataoui,Abdellah El Mekki,Saad Benjelloun,I. Chairi,Ismail Berrada
,,Mohammed VI Polytechnic University,,,
Natural Language Processing (NLP) is today a very active field of research and innovation. Many applications, however, need big sets of data for supervised learning, suitably labelled for the training purpose. This includes applications for the Arabic language and its national dialects. However, such open access labeled data sets in Arabic and its dialects are lacking in the Data Science ecosystem, and this lack can be a burden to innovation and research in this field. In this work, we present an open data set of social data content in several Arabic dialects. This data was collected from the Twitter social network and consists of more than 50K tweets in five (5) national dialects. Furthermore, this data was labeled for several applications, namely dialect detection, topic detection and sentiment analysis. We publish this data as open access data to encourage innovation and other works in the field of NLP for Arabic dialects and social media. A selection of models built using this data set are presented in this paper along with their performances.
nan
LK-Hadith-Corpus
[]
https://huggingface.co/datasets/arbml/LK_Hadith
https://github.com/ShathaTm/LK-Hadith-Corpus
unknown
2,019
multilingual
ar-CLS: (Arabic (Classic))
books
text
other
a bilingual parallel corpus of Islamic Hadith
39,038
sentences
Low
King Saud University
nan
The Arabic–English Parallel Corpus of Authentic Hadith
https://www.researchgate.net/publication/341359917_The_Arabic-English_Parallel_Corpus_of_Authentic_Hadith
Arab-Latn
Yes
GitHub
Free
nan
No
machine translation
IJASAT
nan
journal
International Journal on Islamic Applications in Computer Science And Technologies
nan
nan
We present a bilingual parallel corpus of Islamic Hadith, which is the set of narratives reporting different aspects of the prophet Muhammad's life. The Hadith collection is extracted from the six canonical Hadith books which possess unique linguistic features and patterns that are automatically extracted and annotated using a domain-specific tool for Hadith segmentation. In this article, we present the methodology of creating the corpus of 39,038 annotated Hadiths which will be freely available for the research community.
Mustafa Ghaleb
BBN Blog Posts Sentiment Corpus
[]
https://huggingface.co/datasets/arbml/BBN_Blog_Posts
https://saifmohammad.com/WebPages/ArabicSA.html
unknown
2,015
ar
ar-LEV: (Arabic(Levant))
social media
text
crawling and annotation(other)
A random subset of 1,200 Levantine dialectal sentences chosen from the BBN Arabic-Dialect/English Parallel Text
1,200
sentences
Medium
Multiple institutions
nan
Sentiment after Translation: A Case-Study on Arabic Social Media Posts
https://aclanthology.org/N15-1078.pdf
Arab
No
other
Free
nan
No
sentiment analysis,machine translation
NAACL
125.0
conference
North American Chapter of the Association for Computational Linguistics
Mohammad Salameh,Saif M. Mohammad,Svetlana Kiritchenko
,National Research Council Canada,
When text is translated from one language into another, sentiment is preserved to varying degrees. In this paper, we use Arabic social media posts as stand-in for source language text, and determine loss in sentiment predictability when they are translated into English, manually and automatically. As benchmarks, we use manually and automatically determined sentiment labels of the Arabic texts. We show that sentiment analysis of English translations of Arabic texts produces competitive results, w.r.t. Arabic sentiment analysis. We discover that even though translation significantly reduces the human ability to recover sentiment, automatic sentiment systems are still able to capture sentiment information from the translations.
Zaid Alyafeai
Syria Tweets Sentiment Corpus
[]
https://huggingface.co/datasets/arbml/Syria_Tweet_Sentiment
https://saifmohammad.com/WebPages/ArabicSA.html
unknown
2,015
ar
ar-SY: (Arabic (Syria))
social media
text
crawling and annotation(other)
A dataset of 2000 tweets originating from Syria
2,000
sentences
Medium
Multiple institutions
nan
Sentiment after Translation: A Case-Study on Arabic Social Media Posts
https://aclanthology.org/N15-1078.pdf
Arab
No
other
Free
nan
No
sentiment analysis,machine translation
NAACL
125.0
conference
North American Chapter of the Association for Computational Linguistics
Mohammad Salameh,Saif M. Mohammad,Svetlana Kiritchenko
,National Research Council Canada,
When text is translated from one language into another, sentiment is preserved to varying degrees. In this paper, we use Arabic social media posts as stand-in for source language text, and determine loss in sentiment predictability when they are translated into English, manually and automatically. As benchmarks, we use manually and automatically determined sentiment labels of the Arabic texts. We show that sentiment analysis of English translations of Arabic texts produces competitive results, w.r.t. Arabic sentiment analysis. We discover that even though translation significantly reduces the human ability to recover sentiment, automatic sentiment systems are still able to capture sentiment information from the translations.
Zaid Alyafeai
Coronavirus
[]
nan
https://github.com/aseelad/Coronavirus-Public-Arabic-Twitter-Data-Set/
CC BY-NC-SA 4.0
2,020
ar
mixed
social media
text
other
Contains data collected from December 1st, 2019 until April 11th, 2020
707,829
sentences
Low
Imam Mohammad Bin Saud University
nan
Coronavirus: Public Arabic Twitter Dataset
https://www.preprints.org/manuscript/202004.0263/v1
Arab
No
GitHub
Free
nan
No
text generation, language modeling
other
3.0
preprint
nan
Aseel Addawood
nan
The spread of the coronavirus (COVID-19) pandemic across the globe has affected our lives on many different levels. The world we knew before the spread of the virus has become another one. Every country has taken preventive measures, including social distancing, travel restrictions, and curfews, to control the spread of the disease. With these measures implemented, people have shifted to social media platforms in the online sphere, such as Twitter, to maintain connections. In this paper, we describe a coronavirus data set of Arabic tweets collected from January 1, 2020, primarily from hashtags populated from Saudi Arabia. This data set is available to the research community to glean a better understanding of the societal, economical, and political effects of the outbreak and to help policy makers make better decisions for fighting this epidemic.
Zaid Alyafeai
Botta
[]
nan
https://camel.abudhabi.nyu.edu/botta/
custom
2,016
ar
ar-EG: (Arabic (Egypt))
other
text
other
Database files for Botta, an Egyptian Arabic dialect chatbot, available for researchers working on Arabic chatbot technologies
nan
documents
Low
NYU Abu Dhabi
nan
Botta: An Arabic Dialect Chatbot
https://aclanthology.org/C16-2044.pdf
Arab
No
CAMeL Resources
Upon-Request
nan
No
dialogue generation
COLING
14.0
conference
International Conference on Computational Linguistics
Dana Abu Ali,Nizar Habash
,
This paper presents BOTTA, the first Arabic dialect chatbot. We explore the challenges of creating a conversational agent that aims to simulate friendly conversations using the Egyptian Arabic dialect. We present a number of solutions and describe the different components of the BOTTA chatbot. The BOTTA database files are publicly available for researchers working on Arabic chatbot technologies. The BOTTA chatbot is also publicly available for any users who want to chat with it online.
Zaid Alyafeai
AR-ASAG
[]
nan
https://data.mendeley.com/datasets/dj95jh332j/1
CC BY 4.0
2,020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
Reported evaluations relate to answers submitted for three different exams administered to three classes of students.
2,133
sentences
Low
Bouira University
nan
AR-ASAG An ARabic Dataset for Automatic Short Answer Grading Evaluation
https://aclanthology.org/2020.lrec-1.321.pdf
Arab
No
Mendeley Data
Free
nan
No
answer grading evaluation
LREC
0.0
conference
International Conference on Language Resources and Evaluation
Leila Ouahrani,Djamel Bennouar
,
Automatic short answer grading is a significant problem in E-assessment. Several models have been proposed to deal with it. Evaluation and comparison of such solutions require the availability of datasets with manual examples. In this paper, we introduce AR-ASAG, an Arabic dataset for automatic short answer grading. The dataset contains 2133 pairs of (Model Answer, Student Answer) in several versions (txt, xml, Moodle xml and .db). We then explore an unsupervised corpus-based approach for automatic grading adapted to the Arabic language. We use the COALS (Correlated Occurrence Analogue to Lexical Semantic) algorithm to create a semantic space for word distribution. The summation vector model is combined with term weighting and common words to achieve similarity between a teacher model answer and a student answer. The approach is particularly suitable for languages with scarce resources such as Arabic, where robust specific resources are not yet available. A set of experiments were conducted to analyze the effect of domain specificity, semantic space dimension and stemming techniques on the effectiveness of the grading model. The proposed approach gives promising results for the Arabic language. The reported results may serve as a baseline for the evaluation of future research work.
Zaid Alyafeai
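The AR-ASAG abstract outlines a grading scheme built on the summation vector model: each answer becomes the sum of its word vectors, and the student answer is scored by its similarity to the teacher's model answer. A toy sketch of that idea follows; the embedding table and tokenization are hypothetical stand-ins for the COALS-built semantic space the paper actually uses:

```python
# Illustrative sketch of summation-vector answer grading: embed each answer
# as the sum of its word vectors, then score by cosine similarity.
# The tiny EMB table below is a hypothetical stand-in, not the paper's data.
import numpy as np

EMB = {
    "photosynthesis": np.array([0.9, 0.1, 0.0]),
    "converts":       np.array([0.2, 0.8, 0.1]),
    "light":          np.array([0.7, 0.3, 0.2]),
    "energy":         np.array([0.4, 0.6, 0.3]),
}

def answer_vector(text: str) -> np.ndarray:
    """Sum the vectors of known words (the summation vector model)."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.sum(vecs, axis=0) if vecs else np.zeros(3)

def similarity(model_answer: str, student_answer: str) -> float:
    a, b = answer_vector(model_answer), answer_vector(student_answer)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(similarity("photosynthesis converts light energy",
                 "light energy converts"))
```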
DAICT
[]
nan
https://www.hbku.edu.qa/en/DAICT
custom
2,020
ar
mixed
social media
text
crawling and annotation(other)
The dataset includes 5,588 tweets -- written in both MSA and dialectal Arabic -- manually annotated by two professional linguists from HBKU
5,588
sentences
Medium
Hamad Bin Khalifa University
nan
DAICT: A Dialectal Arabic Irony Corpus Extracted from Twitter
http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.768.pdf
Arab
No
other
Free
nan
No
irony detection
LREC
15.0
conference
International Conference on Language Resources and Evaluation
Ines Abbes,W. Zaghouani,Omaima El-Hardlo,Faten Ashour
,,,
Identifying irony in user-generated social media content has a wide range of applications; however to date Arabic content has received limited attention. To bridge this gap, this study builds a new open domain Arabic corpus annotated for irony detection. We query Twitter using irony-related hashtags to collect ironic messages, which are then manually annotated by two linguists according to our working definition of irony. Challenges which we have encountered during the annotation process reflect the inherent limitations of Twitter messages interpretation, as well as the complexity of Arabic and its dialects. Once published, our corpus will be a valuable free resource for developing open domain systems for automatic irony recognition in Arabic language and its dialects in social media text.
Zaid Alyafeai
WAW
[]
nan
https://alt.qcri.org/resources/wawcorpus/
custom
2,018
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
human translation
an interpreting corpus for English/Arabic
31
hours
Low
QCRI,Hamad Bin Khalifa University
nan
The WAW Corpus: The First Corpus of Interpreted Speeches and their Translations for English and Arabic
https://alt.qcri.org/resources/wawcorpus/
Arab-Latn
No
QCRI Resources
Free
nan
No
speech recognition, machine translation
LREC
1.0
conference
International Conference on Language Resources and Evaluation
Ahmed Abdelali,Irina Temnikova,S. Hedaya,S. Vogel
,University of Wolverhampton, Bulgarian Academy of Sciences, Qatar Computing Research Institute, Mitra Translations,,
This article presents the WAW Corpus, an interpreting corpus for English/Arabic, which can be used for teaching interpreters, studying the characteristics of interpreters’ work, as well as to train machine translation systems. The corpus contains recordings of lectures and speeches from international conferences, their interpretations, the transcripts of the original speeches and of their interpretations, as well as human translations of both kinds of transcripts into the opposite language of the language pair. The article presents the corpus curation, statistics, assessment, as well as a case study of the corpus use.
Zaid Alyafeai
SenZi
[]
https://huggingface.co/datasets/arbml/SenZi
https://tahatobaili.github.io/project-rbz/
custom
2,019
ar
ar-LB: (Arabic (Lebanon))
social media
text
crawling and annotation(other)
Built by translating, annotating, and transliterating other resources to obtain an initial set of 2K sentiment words, then expanded to 24.6K sentiment words by importing inflectional and orthographic forms using word embeddings
24,600
tokens
Medium
Multiple institutions
nan
SenZi: A Sentiment Analysis Lexicon for the Latinised Arabic (Arabizi)
https://aclanthology.org/R19-1138.pdf
Arab-Latn
No
other
Free
nan
No
sentiment analysis, transliteration
RANLP
5.0
conference
Recent Advances in Natural Language Processing
Taha Tobaili,Miriam Fernández,Harith Alani,S. Sharafeddine,Hazem M. Hajj,Goran Glavas
,,,,,
Arabizi is an informal written form of dialectal Arabic transcribed in Latin alphanumeric characters. It has a proven popularity on chat platforms and social media, yet it suffers from a severe lack of natural language processing (NLP) resources. As such, texts written in Arabizi are often disregarded in sentiment analysis tasks for Arabic. In this paper we describe the creation of a sentiment lexicon for Arabizi that was enriched with word embeddings. The result is a new Arabizi lexicon consisting of 11.3K positive and 13.3K negative words. We evaluated this lexicon by classifying the sentiment of Arabizi tweets achieving an F1-score of 0.72. We provide a detailed error analysis to present the challenges that impact the sentiment analysis of Arabizi.
Zaid Alyafeai
ArSAS
[]
https://huggingface.co/datasets/arbml/ArSAS
https://homepages.inf.ed.ac.uk/wmagdy/resources.htm
unknown
2,018
ar
mixed
social media
text
crawling and annotation(other)
A set of 21K Arabic tweets labeled for 4 classes of sentiment and 6 classes of speech-act
21,000
sentences
Medium
Multiple institutions
nan
ArSAS: An Arabic Speech-Act and Sentiment Corpus of Tweets
http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf
Arab
No
other
Free
nan
No
sentiment analysis, speech act classification
LREC
25.0
conference
International Conference on Language Resources and Evaluation
AbdelRahim Elmadany,Hamdy Mubarak,Walid Magdy
University of British Columbia,,The University of Edinburgh
Speech acts are the type of communicative acts within a conversation. Speech act recognition (aka classification) has been an active research area in recent years. However, much less attention was directed towards this task in Arabic due to the lack of resources for training an Arabic speech-act classifier. In this paper we present ArSAS, an Arabic corpus of tweets annotated for the tasks of speech-act recognition and sentiment analysis. A large set of 21k Arabic tweets covering multiple topics were collected, prepared and annotated for six different classes of speech-act labels, such as expression, assertion, and question. In addition, the same set of tweets were also annotated with four classes of sentiment. We aim to have this corpus promote research in both speech-act recognition and sentiment analysis tasks for the Arabic language.
Zaid Alyafeai
ANS CORPUS: claim verification
[ { "Name": "nan", "Dialect": "ar-MSA: (Arabic (Modern Standard Arabic))", "Volume": "3,786", "Unit": "sentences" } ]
https://huggingface.co/datasets/arbml/ANS_stance
https://github.com/latynt/ans
Apache-2.0
2,020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
The corpus comes in two perspectives: a version consisting of 4,547 true and false claims and a version consisting of 3,786 (claim, evidence) pairs.
4,547
sentences
Low
Latynt
nan
Stance Prediction and Claim Verification: An Arabic Perspective
https://arxiv.org/pdf/2005.10410.pdf
Arab
No
GitHub
Free
nan
Yes
stance detection, claim verification
FEVER
8.0
workshop
Workshop on Fact Extraction and Verification
Jude Khouja
nan
This work explores the application of textual entailment in news claim verification and stance prediction using a new corpus in Arabic. The publicly available corpus comes in two perspectives: a version consisting of 4,547 true and false claims and a version consisting of 3,786 pairs (claim, evidence). We describe the methodology for creating the corpus and the annotation process. Using the introduced corpus, we also develop two machine learning baselines for two proposed tasks: claim verification and stance prediction. Our best model utilizes pretraining (BERT) and achieves 76.7 F1 on the stance prediction task and 64.3 F1 on the claim verification task. Our preliminary experiments shed some light on the limits of automatic claim verification that relies on claims text only. Results hint that while the linguistic features and world knowledge learned during pretraining are useful for stance prediction, such learned representations from pretraining are insufficient for verifying claims without access to context or evidence.
Mustafa Ghaleb
OntoNotes 5.0
[]
https://huggingface.co/datasets/conll2012_ontonotesv5
https://catalog.ldc.upenn.edu/LDC2013T19
LDC User Agreement
2,012
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
The Arabic portion of OntoNotes 5.0 includes 300K words of newswire data.
300,000
tokens
Low
Multiple institutions
nan
OntoNotes Release 5.0 with OntoNotes DB Tool v0.999 beta
https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf
Arab
Yes
LDC
Upon-Request
nan
No
machine translation
other
nan
preprint
nan
nan
nan
nan
Mustafa Ghaleb
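If the community conll2012_ontonotesv5 loader linked above follows its usual conventions, the Arabic newswire portion can be selected by config name. A sketch, where the arabic_v4 config and the per-sentence field names are assumptions to verify against the dataset card:

```python
# Sketch: load the Arabic portion of OntoNotes 5.0 via the community loader.
# The "arabic_v4" config name and per-sentence feature layout are assumptions.
from datasets import load_dataset

onto = load_dataset("conll2012_ontonotesv5", "arabic_v4", split="train")
doc = onto[0]
for sent in doc["sentences"][:1]:
    print(sent["words"])           # tokens of the first sentence
    print(sent["named_entities"])  # integer-encoded NER tags
```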
Arabic-Dialect/English Parallel Text
[ { "Name": "nan", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "380,000", "Unit": "tokens" } ]
nan
https://catalog.ldc.upenn.edu/LDC2012T09
LDC User Agreement
2,012
multilingual
ar-LEV: (Arabic(Levant))
other
text
crawling and annotation(translation)
It uses crowdsourcing to cheaply and quickly build Levantine-English and Egyptian-English parallel corpora, consisting of 1.1M words and 380k words, respectively.
1,500,000
tokens
Low
Multiple institutions
nan
Machine Translation of Arabic Dialects
https://aclanthology.org/N12-1006.pdf
Arab-Latn
Yes
LDC
With-Fee
2250 $
No
machine translation
NAACL
158.0
conference
North American Chapter of the Association for Computational Linguistics
Rabih Zbib,Erika Malchiodi,J. Devlin,D. Stallard,S. Matsoukas,R. Schwartz,J. Makhoul,Omar Zaidan,Chris Callison-Burch
,,,,,,,,
Arabic Dialects present many challenges for machine translation, not least of which is the lack of data resources. We use crowdsourcing to cheaply and quickly build Levantine-English and Egyptian-English parallel corpora, consisting of 1.1M words and 380k words, respectively. The dialectal sentences are selected from a large corpus of Arabic web text, and translated using Amazon's Mechanical Turk. We use this data to build Dialectal Arabic MT systems, and find that small amounts of dialectal data have a dramatic impact on translation quality. When translating Egyptian and Levantine test sets, our Dialectal Arabic MT system performs 6.3 and 7.0 BLEU points higher than a Modern Standard Arabic MT system trained on a 150M-word Arabic-English parallel corpus.
Mustafa Ghaleb
FLORES-101
[]
https://huggingface.co/datasets/gsarti/flores_101
https://github.com/facebookresearch/flores/tree/master/floresv1/data
CC BY-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling and annotation(other)
Low Resource MT Benchmark
3,100,000
tokens
Low
Facebook
OPTUS
The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
https://arxiv.org/pdf/2106.03193.pdf
Arab
Yes
GitHub
Free
nan
Yes
machine translation
ArXiv
1.0
preprint
ArXiv
Naman Goyal,Cynthia Gao,Vishrav Chaudhary,Guillaume Wenzek,Da Ju,Sanjan Krishnan,Marc'Aurelio Ranzato,Francisco Guzmán,Angela Fan
,,,,Facebook AI Research,,,,
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES-101 evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
Maraim Masoud
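FLORES-101 ships dev and devtest splits per language; assuming the gsarti/flores_101 mirror keys Arabic by its ISO 639-3 code, loading looks roughly like this (the "ara" config and "sentence" field are unverified assumptions):

```python
# Sketch: pull the Arabic dev/devtest sentences of FLORES-101 from the Hub.
# Config and field names are assumptions; check the repo's dataset card.
from datasets import load_dataset

flores_ar = load_dataset("gsarti/flores_101", "ara")
print(flores_ar)                        # expected splits: dev / devtest
print(flores_ar["dev"][0]["sentence"])  # one Arabic reference sentence
```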
GEM
[]
https://huggingface.co/datasets/gem/xlsumm
https://gem-benchmark.com/
Apache-2.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
A benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics
29,229
documents
Low
Multiple institutions
MLSUM, XSUM, WikiLingua, WebNLG, CommonGen, E2E, DART, Czech Restaurant, ToTTo, Wiki-Auto, TurkCorpus, ASSET, Schema-Guided Dialog
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
https://aclanthology.org/2021.gem-1.10.pdf
Arab
No
other
Free
nan
Yes
summarization
GEM
27.0
workshop
Generation Evaluation and Metrics
Sebastian Gehrmann,Tosin P. Adewumi,Karmanya Aggarwal,Pawan Sasanka Ammanamanchi,Aremu Anuoluwapo,Antoine Bosselut,Khyathi Raghavi Chandu,Miruna Adriana Clinciu,Dipanjan Das,Kaustubh D. Dhole,Wanyu Du,Esin Durmus,Ondrej Dusek,Chris C. Emezue,Varun Gangal,Cristina Garbacea,Tatsunori B. Hashimoto,Yufang Hou,Yacine Jernite,Harsh Jhamtani,Yangfeng Ji,Shailza Jolly,Mihir Kale,Dhruv Kumar,Faisal Ladhak,Aman Madaan,Mounica Maddela,Khyati Mahajan,Saad Mahamood,Bodhisattwa Prasad Majumder,Pedro Henrique Martins,Angelina McMillan-Major,Simon Mille,Emiel van Miltenburg,Moin Nadeem,Shashi Narayan,Vitaly Nikolaev,Rubungo Andre Niyongabo,Salomey Osei,Ankur P. Parikh,Laura Perez-Beltrachini,Niranjan Rao,Vikas Raunak,Juan Diego Rodríguez,Sashank Santhanam,João Sedoc,Thibault Sellam,Samira Shaikh,Anastasia Shimorina,Marco Antonio Sobrevilla Cabezudo,Hendrik Strobelt,Nishant Subramani,W. Xu,Diyi Yang,Akhila Yerukola,Jiawei Zhou
,,,,,EPFL,,Edinburgh Centre for Robotics,,,,Stanford University,Charles University,,Carnegie Mellon University,,,,FAIR,Carnegie Mellon University,University of Virginia,,,,,,,,,University of California San Diego,,,,Tilburg University,Massachusetts Institute of Technology;MIT,,,,,,,,,The University of Texas at Austin,,,,,,Institute of Mathematics and Computer Sciences;University of São Paulo;Pontifical Catholic University of Peru,,Allen Institute for AI;Masakhane,,,,
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for the 2021 shared task at the associated GEM Workshop.
Maraim Masoud
GEM - XLSum
[]
https://huggingface.co/datasets/GEM/xlsum
https://github.com/csebuetnlp/xl-sum
CC BY-NC-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
Large-Scale Multilingual Abstractive Summarization for 44 Languages
46,897
sentences
Low
Multiple institutions
nan
XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages
https://aclanthology.org/2021.findings-acl.413.pdf
Arab
No
other
Free
nan
Yes
summarization
FINDINGS
0.0
conference
Findings of the Association for Computational Linguistics
Tahmid Hasan,Abhik Bhattacharjee,Md. Saiful Islam,Kazi Samin,Yuan-Fang Li,Yong-Bin Kang,M. Rahman,Rifat Shahriyar
,,,,,,,
Contemporary works on abstractive text summarization have focused primarily on high-resource languages like English, mostly due to the limited availability of datasets for low/mid-resource ones. In this work, we present XL-Sum, a comprehensive and diverse dataset comprising 1 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 44 languages ranging from low to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation. We fine-tune mT5, a state-of-the-art pretrained multilingual model, with XL-Sum and experiment on multilingual and low-resource summarization tasks. XL-Sum induces competitive results compared to the ones obtained using similar monolingual datasets: we show higher than 11 ROUGE-2 scores on 10 languages we benchmark on, with some of them exceeding 15, as obtained by multilingual training. Additionally, training on low-resource languages individually also provides competitive performance. To the best of our knowledge, XL-Sum is the largest abstractive summarization dataset in terms of the number of samples collected from a single source and the number of languages covered. We are releasing our dataset and models to encourage future research on multilingual abstractive summarization. The resources can be found at https://github.com/csebuetnlp/xl-sum.
Maraim Masoud
GEM - WikiLingua
[]
https://huggingface.co/datasets/wiki_lingua
https://github.com/esdurmus/Wikilingua
CC0
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
New Benchmark Dataset for Multilingual Abstractive Summarization
29,229
sentences
Low
Multiple institutions
nan
WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization
WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization
Arab
No
GitHub
Free
nan
Yes
summarization
EMNLP
9.0
conference
Conference on Empirical Methods in Natural Language Processing
Faisal Ladhak,Esin Durmus,Claire Cardie,K. McKeown
,Stanford University,,
We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of crosslingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct crosslingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.
Maraim Masoud
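WikiLingua pairs each article section with its summary. A sketch of reading the Arabic portion through the canonical wiki_lingua loader, where the "arabic" config and the nested article fields are assumptions based on that loader's usual layout:

```python
# Sketch: load the Arabic split of WikiLingua and read one article/summary pair.
# Config name and nested field names are assumptions to verify on the Hub.
from datasets import load_dataset

wl = load_dataset("wiki_lingua", "arabic", split="train")
ex = wl[0]["article"]
print(ex["document"][0][:200])  # beginning of one source section
print(ex["summary"][0])         # its reference summary
```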
UDP (UDP-NYUAD)
[]
nan
https://github.com/UniversalDependencies/UD_Arabic-NYUAD
CC BY-SA 4.0
2,017
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
The treebank consists of 19,738 sentences (738,889 tokens), and its domain is mainly newswire.
738,889
tokens
Low
Multiple institutions, NYUAD
The NYUAD Arabic UD treebank is based on the Penn Arabic Treebank (PATB), parts 1, 2, and 3, through conversion to CATiB dependency trees.
Universal Dependencies for Arabic
https://aclanthology.org/W17-1320.pdf
Arab
Yes
GitHub
Free
nan
Yes
morphological attribute tagging, dependency parsing, part of speech tagging
WANLP
12.0
workshop
Arabic Natural Language Processing Workshop
Dima Taji,Nizar Habash,Daniel Zeman
,,
We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.
Maraim Masoud
WikiANN NER(MMNER)
[]
nan
https://github.com/afshinrahimi/mmner
unknown
2,019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling and annotation(other)
Cross-lingual name tagging and linking for 282 languages
30,000
tokens
Low
The university of Melbourne
nan
Massively Multilingual Transfer for NER
https://aclanthology.org/P19-1015.pdf
Arab
Yes
GitHub
Free
nan
Yes
named entity recognition
ACL
54.0
conference
Association for Computational Linguistics
Afshin Rahimi,Yuan Li,Trevor Cohn
University of Melbourne,,University of Melbourne
In cross-lingual transfer, NLP models over one or more source languages are applied to a low-resource target language. While most prior work has used a single source model or a few carefully selected models, here we consider a “massive” setting with many such models. This setting raises the problem of poor transfer, particularly from distant languages. We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively. Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of the single best individual model.
Maraim Masoud
WinoMT (MT_Gender)
[]
https://huggingface.co/datasets/arbml/mt_gender_ar
https://github.com/gabrielStanovsky/mt_gender
MIT License
2,019
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
machine translation
Evaluating Gender Bias in Machine Translation
3,888
sentences
Low
Multiple Institutions
nan
Evaluating Gender Bias in Machine Translation
https://arxiv.org/pdf/1906.00591.pdf
Arab
No
GitHub
Free
nan
Yes
machine translation
ACL
99.0
conference
Association for Computational Linguistics
Gabriel Stanovsky,Noah A. Smith,Luke Zettlemoyer
Allen Institute for Artificial Intelligence;University of Washington,,University of Washington;Facebook
We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., “The doctor asked the nurse to help her in the operation”). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.
Maraim Masoud
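WinoMT scores a translation by whether the gendered entity surfaces with the gold grammatical gender, determined via morphological analysis. A toy illustration of that scoring idea follows; the lookup table is a hypothetical stand-in for a real Arabic morphological analyzer:

```python
# Toy sketch of the WinoMT evaluation idea: each source sentence carries a
# gold gender for the target entity; a translation scores as correct when the
# translated entity word carries the same grammatical gender.
GOLD = [("doctor", "female"), ("nurse", "male")]          # entity, gold gender
PREDICTED_GENDER = {"طبيبة": "female", "طبيب": "male",     # hypothetical analyzer
                    "ممرضة": "female", "ممرض": "male"}

def accuracy(translated_entities):
    correct = sum(PREDICTED_GENDER.get(tok) == gold
                  for (_, gold), tok in zip(GOLD, translated_entities))
    return correct / len(GOLD)

# 0.5: the second entity was rendered feminine although the gold gender is male.
print(accuracy(["طبيبة", "ممرضة"]))
```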
LinCE - MSA-EGY (NER - Code Switching)
[]
nan
https://ritual.uh.edu/lince/datasets
unknown
2,018
ar
mixed
social media
text
crawling and annotation(other)
Code-switched Twitter data for the Modern Standard Arabic-Egyptian Arabic (MSA-EGY) language pair, annotated with 9 entity types for NER
11,224
sentences
Medium
Multiple institutions
nan
Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task
https://aclanthology.org/W18-3219.pdf
Arab-Latn
No
other
Upon-Request
nan
Yes
named entity recognition
CALCS
39.0
workshop
Workshop on Computational Approaches to Code Switching
Gustavo Aguilar,F. Alghamdi,Víctor Soto,Mona T. Diab,Julia Hirschberg,T. Solorio
University of Houston,,,,,
In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.
Maraim Masoud
LinCE - MSA-DA (LID - Code Switching )
[]
nan
https://ritual.uh.edu/lince/datasets
unknown
2,016
ar
mixed
social media
text
crawling and annotation(other)
Code-switched Twitter data for token-level language identification in the Modern Standard Arabic-Dialectal Arabic (MSA-DA) language pair.
11,241
sentences
Medium
Multiple Institutions
nan
Overview for the Second Shared Task on Language Identification in Code-Switched Data
https://aclanthology.org/W16-5805.pdf
Arab-Latn
No
other
Upon-Request
nan
Yes
code switch detection
CALCS
98.0
workshop
Workshop on Computational Approaches to Code Switching
Giovanni Molina,F. Alghamdi,Mahmoud A. Ghoneim,A. Hawwari,Nicolas Rey-Villamizar,Mona T. Diab,T. Solorio
,,,,,,
We present an overview of the second shared task on language identification on code-switched data. The shared task included code-switched data from four language pairs: Modern Standard Arabic-Dialectal Arabic (MSA-DA), Mandarin-English (MAN-EN), Nepali-English (NEP-EN), and Spanish-English (SPA-EN). A total of seven teams participated in the task and submitted 42 system runs. The evaluation showed that language identification at the token level is more difficult when the languages present are closely related, as in the case of MSA-DA, where the prediction performance was the lowest among all language pairs. In contrast, the language pairs with the highest F-measure were SPA-EN and NEP-EN. The task made evident that language identification in code-switched data is still far from solved and warrants further research.
Maraim Masoud
AutoTweet
[]
nan
https://sites.google.com/view/bigir/resources/datasets#h.p_sSqR81gM9CWFF
unknown
2,015
ar
mixed
social media
text
crawling and annotation(other)
Classification of Arabic tweets as automated or manual.
3,503
sentences
Medium
Qatar University
nan
Detecting Automatically-Generated Arabic Tweets
https://link.springer.com/chapter/10.1007/978-3-319-28940-3_10
Arab
No
Dropbox
Free
nan
No
generated text detection
AIRS
10.0
conference
Conference on Alliance of Information and Referral Systems
H. Almerekhi,T. Elsayed
,
Recently, Twitter, one of the most widely-known social media platforms, got infiltrated by several automation programs, commonly known as “bots”. Bots can be easily abused to spread spam and hinder information extraction applications by posting lots of automatically-generated tweets that occupy a good portion of the continuous stream of tweets. This problem heavily affects users in the Arab region due to the recent developing political events as automated tweets can disturb communication and waste time needed in filtering such tweets.
Zaid Alyafeai
Arabic SMS/Chat
[]
nan
https://catalog.ldc.upenn.edu/LDC2017T07
LDC User Agreement
2,014
ar
mixed
other
text
other
5,691 conversations totaling 1,029,248 words across 262,026 messages. Messages were natively written in either Arabic orthography or romanized Arabizi.
262,026
sentences
Low
LDC
nan
Transliteration of Arabizi into Arabic Orthography: Developing a Parallel Annotated Arabizi-Arabic Script SMS/Chat Corpus
https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/emnlp2014-transliteration-aribizi-into-arabic-orthography.pdf
Arab-Latn
No
LDC
With-Fee
1750 $
No
transliteration, machine translation
WANLP
39.0
workshop
Arabic Natural Language Processing workshop
Ann Bies,Zhiyi Song,M. Maamouri,Stephen Grimes,Haejoong Lee,Jonathan Wright,S. Strassel,Nizar Habash,R. Eskander,Owen Rambow
,,,,,,,,,
This paper describes the process of creating a novel resource, a parallel Arabizi-Arabic script corpus of SMS/Chat data. The language used in social media expresses many differences from other written genres: its vocabulary is informal with intentional deviations from standard orthography such as repeated letters for emphasis; typos and nonstandard abbreviations are common; and nonlinguistic content is written out, such as laughter, sound representations, and emoticons. This situation is exacerbated in the case of Arabic social media for two reasons. First, Arabic dialects, commonly used in social media, are quite different from Modern Standard Arabic phonologically, morphologically and lexically, and most importantly, they lack standard orthographies. Second, Arabic speakers in social media as well as discussion forums, SMS messaging and online chat often use a non-standard romanization called Arabizi. In the context of natural language processing of social media Arabic, transliterating from Arabizi of various dialects to Arabic script is a necessary step, since many of the existing state-of-the-art resources for Arabic dialect processing expect Arabic script input. The corpus described in this paper is expected to support Arabic NLP by providing this resource.
Zaid Alyafeai
AraNews
[]
nan
https://github.com/UBC-NLP/wanlp2020_arabic_fake_news_detection
custom
2,020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling and annotation(other)
a large-scale, multi-topic, and multi-country Arabic news dataset
1,000,000
documents
Low
UBC
nan
Machine Generation and Detection of Arabic Manipulated and Fake News
https://aclanthology.org/2020.wanlp-1.7.pdf
Arab
No
GitHub
Upon-Request
nan
Yes
fake news detection
WANLP
6.0
workshop
Arabic Natural Language Processing Workshop
E. Nagoudi,AbdelRahim Elmadany,Muhammad Abdul-Mageed,Tariq Alhindi,H. Cavusoglu
,University of British Columbia,,Columbia University;King Abdulaziz City for Science and Technology,
Fake news and deceptive machine-generated text are serious problems threatening modern societies, including in the Arab world. This motivates work on detecting false and manipulated stories online. However, a bottleneck for this research is lack of sufficient data to train detection models. We present a novel method for automatically generating Arabic manipulated (and potentially fake) news stories. Our method is simple and only depends on availability of true stories, which are abundant online, and a part of speech tagger (POS). To facilitate future work, we dispense with both of these requirements altogether by providing AraNews, a novel and large POS-tagged news dataset that can be used off-the-shelf. Using stories generated based on AraNews, we carry out a human annotation study that casts light on the effects of machine manipulation on text veracity. The study also measures human ability to detect Arabic machine manipulated text generated by our method. Finally, we develop the first models for detecting manipulated Arabic news and achieve state-of-the-art results on Arabic fake news detection (macro F1=70.06). Our models and data are publicly available.
Zaid Alyafeai
Khawas
[]
nan
https://sourceforge.net/projects/kacst-acptool/
unknown
2,013
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
A corpus containing more than two million words and a corpus processing tool specifically designed for Arabic
2,910
documents
Low
Multiple Institutions
nan
New Language Resources for Arabic: Corpus Containing More Than Two Million Words and a Corpus Processing Tool
https://ieeexplore.ieee.org/abstract/document/6646005
Arab
No
sourceforge
Free
nan
No
topic classification
IALP
19.0
conference
International Conference on Asian Language Processing
A. Al-Thubaity,Marwa Khan,Manal Al-Mazrua,Maram Al-Mousa
,,,
Arabic is a resource-poor language relative to other languages with a similar number of speakers. This situation negatively affects corpus-based linguistic studies in Arabic and, to a lesser extent, Arabic language processing. This paper presents a brief overview of recent freely available Arabic corpora and corpora processing tools, and it examines some of the issues that may be preventing Arabic linguists from using the same. These issues reveal the need for new language resources to enrich and foster Arabic corpus-based studies. Accordingly, this paper introduces the design of a new Arabic corpus that includes modern standard Arabic varieties based on newspapers from all Arab countries and that comprises more than two million words; it also describes the main features of a corpus processing tool specifically designed for Arabic, called "Khawas" ("diver" in English). Khawas provides more features than any other freely available corpus processing tool for Arabic, including n-gram frequency and concordance, collocations, and statistical comparison of two corpora. Finally, we outline modifications and improvements that could be made in future works.
Zaid Alyafeai
MultiUN v2
[]
https://huggingface.co/datasets/un_multi
http://www.euromatrixplus.net/multi-un/
unknown
2,010
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
human translation
6 official languages of the UN, consisting of around 300 million words per language
65,156
documents
Low
DFKI
nan
MultiUN: A Multilingual Corpus from United Nation Documents
https://www.dfki.de/fileadmin/user_upload/import/4790_686_Paper.pdf
Arab
No
other
Free
nan
Yes
machine translation
LREC
223.0
conference
International Conference on Language Resources and Evaluation
A. Eisele
nan
This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.
Zaid Alyafeai
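MultiUN is distributed through the canonical un_multi loader as language-pair configs. A sketch for the Arabic-English pair, assuming the usual "ar-en" config and "translation" field (conventions of that loader, not confirmed by this catalog):

```python
# Sketch: load the Arabic-English pairs of MultiUN via the un_multi loader.
# The "ar-en" config and "translation" field layout are assumptions.
from datasets import load_dataset

un = load_dataset("un_multi", "ar-en", split="train")
pair = un[0]["translation"]
print(pair["ar"])  # Arabic side of the first aligned sentence
print(pair["en"])  # English side
```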
WDC
[]
https://huggingface.co/datasets/arbml/WDC
https://github.com/Maha-J-Althobaiti/Arabic_NER_Wiki-Corpus
CC BY 3.0
2,014
ar
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
contains around 6 million tokens representing different genres, as Wikipedia is considered an open domain
6,000,000
tokens
Low
University of Essex
nan
Automatic Creation of Arabic Named Entity Annotated Corpus Using Wikipedia
https://aclanthology.org/E14-3012.pdf
Arab
No
GitHub
Free
nan
No
named entity recognition
EACL
14.0
conference
European Chapter of the Association for Computational Linguistics
M. Althobaiti,Udo Kruschwitz,Massimo Poesio
,University of Regensburg,
In this paper we propose a new methodology to exploit Wikipedia features and structure to automatically develop an Arabic NE annotated corpus. Each Wikipedia link is transformed into an NE type of the target article in order to produce the NE annotation. Other Wikipedia features - namely redirects, anchor texts, and inter-language links - are used to tag additional NEs, which appear without links in Wikipedia texts. Furthermore, we have developed a filtering algorithm to eliminate ambiguity when tagging candidate NEs. Herein we also introduce a mechanism based on the high coverage of Wikipedia in order to address two challenges particular to tagging NEs in Arabic text: rich morphology and the absence of capitalisation. The corpus created with our new method (WDC) has been used to train an NE tagger which has been tested on different domains. Judging by the results, an NE tagger trained on WDC can compete with those trained on manually annotated corpora.
Zaid Alyafeai
CALYOU
[]
https://huggingface.co/datasets/arbml/CAYLOU
https://github.com/abidikarima/CALYOU
unknown
2,017
ar
ar-DZ: (Arabic (Algeria))
social media
text
crawling
A Comparable Spoken Algerian Corpus Harvested from YouTube
5,190
sentences
Medium
Multiple Institutions
nan
CALYOU: A Comparable Spoken Algerian Corpus Harvested from YouTube
https://hal.archives-ouvertes.fr/hal-01531591/document
Arab-Latn
No
GitHub
Free
nan
No
machine translation
INTERSPEECH
16.0
conference
Conference of the International Speech Communication Association
K. Abidi,M. Menacer,Kamel Smaïli
,,
This paper addresses the issue of the comparability of comments extracted from Youtube. The comments concern spoken Algerian, which could be local Arabic, Modern Standard Arabic or French. This diversity of expression raises a huge number of problems concerning the data processing. In this article, several methods of alignment are proposed and tested. The method which permits the best alignment is a Word2Vec-based approach that is used iteratively. This recurrent use of Word2Vec significantly improves the comparability results. In fact, a dictionary-based approach leads to a Recall of 4, while our approach achieves a Recall of 33 at rank 1. Thanks to this approach, we built CALYOU, a comparable corpus of spoken Algerian, from Youtube.
Zaid Alyafeai
ADPBC
[]
nan
https://github.com/salsama/Arabic-Information-Extraction-Corpus
unknown
2,021
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling and annotation(other)
This corpus contains the words and their dependency relations, produced through a multi-step parsing and annotation pipeline
16
documents
Low
Multiple Institutions
nan
ADPBC: Arabic Dependency Parsing Based Corpora for Information Extraction
http://www.mecs-press.org/ijitcs/ijitcs-v13-n1/IJITCS-V13-N1-4.pdf
Arab
No
GitHub
Free
nan
No
part of speech tagging, dependency parsing, topic classification
IJITCS
0.0
journal
International Journal of Information Technology and Computer Science
Sally Mohamed,M. Hussien.,Hamdy M. Mousa
,,
There is a massive amount of different information and data on the World Wide Web, and the number of Arabic users and contents is widely increasing. Information extraction is an essential issue for accessing and sorting the data on the web. In this regard, information extraction becomes a challenge, especially for languages with a complex morphology like Arabic. Consequently, the trend today is to build new corpora that make information extraction easier and more precise. This paper presents a linguistically analyzed Arabic corpus, including dependency relations. The collected data covers five fields: sport, religion, weather, news and biomedicine. The output is in the CoNLL universal lattice file format (CoNLL-UL). The corpus contains an index for the sentences and their linguistic meta-data to enable quick mining and search across the corpus. It has seventeen morphological annotations and eight features based on the identification of textual structures, which help to recognize and understand the grammatical characteristics of the text and perform the dependency relation. The parsing and dependency process was conducted with the universal dependency model and corrected manually. The results illustrated the average enhancement in the dependency relation corpus. The designed Arabic corpus helps to quickly obtain linguistic annotations for a text and makes information extraction techniques easy and clear to learn.
Zaid Alyafeai
AraSenCorpus
[]
https://huggingface.co/datasets/arbml/AraSenCorpus
https://github.com/yemen2016/AraSenCorpus
MIT License
2,021
ar
mixed
social media
text
crawling and annotation(other)
contains 4.5 million tweets and covers both modern standard Arabic and some of the Arabic dialects
4,500,000
sentences
Medium
Multiple Institutions
nan
AraSenCorpus: A Semi-Supervised Approach for Sentiment Annotation of a Large Arabic Text Corpus
https://github.com/yemen2016/AraSenCorpus
Arab
No
GitHub
Free
nan
No
sentiment analysis
applsci
2.0
journal
Applied Sciences Journal
Ali Al-Laith,Muhammad Shahbaz,Hind Alaskar,Asim Rehmat
,,,
At a time when research in the field of sentiment analysis tends to study advanced topics in languages such as English, other languages such as Arabic still suffer from basic problems and challenges, most notably the availability of large corpora. Furthermore, manual annotation is time-consuming and difficult when the corpus is too large. This paper presents a semi-supervised self-learning technique to extend an Arabic sentiment annotated corpus with unlabeled data, named AraSenCorpus. We use a neural network to train a set of models on a manually labeled dataset containing 15,000 tweets. We used these models to extend the corpus to a large Arabic sentiment corpus called "AraSenCorpus". AraSenCorpus contains 4.5 million tweets and covers both modern standard Arabic and some of the Arabic dialects. The long-short term memory (LSTM) deep learning classifier is used to train and test the final corpus. We evaluate our proposed framework on two external benchmark datasets to ensure the improvement of the Arabic sentiment classification. The experimental results show that our corpus outperforms the existing state-of-the-art systems.
Zaid Alyafeai
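The AraSenCorpus abstract describes a semi-supervised self-learning loop: train on a manually labeled seed, then repeatedly absorb high-confidence predictions on unlabeled tweets. A minimal sketch of that loop with illustrative toy data and a stand-in classifier (the paper uses neural models and a 15K-tweet seed, not shown here):

```python
# Minimal self-training sketch: fit on a labeled seed, then grow the pool
# with predictions whose confidence clears a threshold. All data, the 0.8
# threshold, and the logistic-regression model are illustrative stand-ins.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["great service", "awful experience", "love it", "very bad"]
labels = np.array([1, 0, 1, 0])
unlabeled = ["really love this place", "bad awful service", "great day"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
X_lab, X_unlab = vec.transform(labeled), vec.transform(unlabeled)

for _ in range(3):                               # a few self-training rounds
    clf = LogisticRegression().fit(X_lab, labels)
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1) >= 0.8              # keep confident predictions only
    if not conf.any():
        break
    X_lab = vstack([X_lab, X_unlab[conf]])       # grow the labeled pool
    labels = np.concatenate([labels, proba[conf].argmax(axis=1)])
    X_unlab = X_unlab[~conf]
    if X_unlab.shape[0] == 0:
        break

print(len(labels), "examples in the expanded pool")
```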
EveTAR
[]
nan
https://sites.google.com/view/bigir/resources/datasets#h.p_dB9cxP-26Xnc
unknown
2,017
ar
mixed
social media
text
crawling and annotation(other)
A crawl of 355M Arabic tweets covering 50 significant events
3,550,000
sentences
Medium
Qatar University
nan
EveTAR: building a large-scale multi-task test collection over Arabic tweets
https://link.springer.com/article/10.1007/s10791-017-9325-7
Arab
No
Dropbox
Free
nan
No
event detection, event detection, ad-hoc search, timeline generation, real-time summarization
IRJ
10.0
journal
Information Retrieval Journal
Maram Hasanain,Reem Suwaileh,T. Elsayed,Mucahid Kutlu,H. Almerekhi
,,,TOBB University of Economics and Technology,
This article introduces a new language-independent approach for creating a large-scale high-quality test collection of tweets that supports multiple information retrieval (IR) tasks without running a shared-task campaign. The adopted approach (demonstrated over Arabic tweets) designs the collection around significant (i.e., popular) events, which enables the development of topics that represent frequent information needs of Twitter users for which rich content exists. That inherently facilitates the support of multiple tasks that generally revolve around events, namely event detection, ad-hoc search, timeline generation, and real-time summarization. The key highlights of the approach include diversifying the judgment pool via interactive search and multiple manually-crafted queries per topic, collecting high-quality annotations via crowd-workers for relevancy and in-house annotators for novelty, filtering out low-agreement topics and inaccessible tweets, and providing multiple subsets of the collection for better availability. Applying our methodology on Arabic tweets resulted in EveTAR, the first freely-available tweet test collection for multiple IR tasks. EveTAR includes a crawl of 355M Arabic tweets and covers 50 significant events for which about 62K tweets were judged with substantial average inter-annotator agreement (Kappa value of 0.71). We demonstrate the usability of EveTAR by evaluating existing algorithms in the respective tasks. Results indicate that the new collection can support reliable ranking of IR systems that is comparable to similar TREC collections, while providing strong baseline results for future studies over Arabic tweets.
Zaid Alyafeai
A7'ta
[]
https://huggingface.co/datasets/arbml/A7ta
https://github.com/iwan-rg/A-Monolingual-Arabic-Parallel-Corpus-
unknown
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
books
text
crawling
The data contains 300 documents, 445 erroneous sentences and their error-free counterparts, and a total of 3,532 words.
300
documents
Low
King Saud University
nan
A7׳ta: Data on a monolingual Arabic parallel corpus for grammar checking
https://www.sciencedirect.com/science/article/pii/S2352340918315397
Arab
No
GitHub
Free
nan
No
grammar checking
Data in brief
2.0
journal
Data in brief
Nora Madi,Hend Suliman Al-Khalifa
,
Grammar error correction can be considered as a "translation" problem, such that an erroneous sentence is "translated" into a correct version of the sentence in the same language. This can be accomplished by employing techniques like Statistical Machine Translation (SMT) or Neural Machine Translation (NMT). Producing models for SMT or NMT for the goal of grammar correction requires monolingual parallel corpora of a certain language. This data article presents a monolingual parallel corpus of Arabic text called A7׳ta. It contains 470 erroneous sentences and their 470 error-free counterparts. This is an Arabic parallel corpus that can be used as a linguistic resource for Arabic natural language processing (NLP), mainly to train sequence-to-sequence models for grammar checking. Sentences were manually collected from a book that has been prepared as a guide for correctly writing and using Arabic grammar and other linguistic features. Although there are a number of available Arabic corpora of errors and corrections [2], such as QALB [10] and Arabic Learner Corpus [11], the data we present in this article is an effort to increase the number of freely available Arabic corpora of errors and corrections by providing a detailed error specification and leveraging the work of language experts.
Zaid Alyafeai
BAEC
[ { "Name": "SDC", "Dialect": "ar-SA: (Arabic (Saudi Arabia))", "Volume": "210,396", "Unit": "tokens" }, { "Name": "EDC", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "218,149", "Unit": "tokens" }, { "Name": "MSA", "Dialect": "ar-MSA: (Arabic (Modern Standard Arabic))", "Volume": "45,251", "Unit": "tokens" } ]
https://huggingface.co/datasets/arbml/BAEC
https://github.com/TaghreedT/BAEC
unknown
2,020
multilingual
mixed
social media
text
crawling and annotation(other)
Consists of 45,251 words and is 436 KB in size. It was collected from different Facebook pages
473,796
tokens
Medium
nan
nan
Compression versus traditional machine learning classifiers to detect code-switching in varieties and dialects: Arabic as a case study
https://eprints.whiterose.ac.uk/155881/1/tarmom18nlejV8tex.pdf
Arab-Latn
No
GitHub
Free
nan
No
code switch detection
NLE
1.0
journal
Natural Language Engineering
Taghreed Tarmom,W. Teahan,E. Atwell,Mohammad Ammar Alsalka
,,,
The occurrence of code-switching in online communication, when a writer switches among multiple languages, presents a challenge for natural language processing tools, since they are designed for texts written in a single language. To answer the challenge, this paper presents detailed research on ways to detect code-switching in Arabic text automatically. We compare the prediction by partial matching (PPM) compression-based classifier, implemented in Tawa, and a traditional machine learning classifier sequential minimal optimization (SMO), implemented in Waikato Environment for Knowledge Analysis, working specifically on Arabic text taken from Facebook. Three experiments were conducted in order to: (1) detect code-switching among the Egyptian dialect and English; (2) detect code-switching among the Egyptian dialect, the Saudi dialect, and English; and (3) detect code-switching among the Egyptian dialect, the Saudi dialect, Modern Standard Arabic (MSA), and English. Our experiments showed that PPM achieved a higher accuracy rate than SMO with 99.8% versus 97.5% in the first experiment and 97.8% versus 80.7% in the second. In the third experiment, PPM achieved a lower accuracy rate than SMO with 53.2% versus 60.2%. Code-switching between Egyptian Arabic and English text is easiest to detect because Arabic and English are generally written in different character sets. It is more difficult to distinguish between Arabic dialects and MSA as these use the same character set, and most users of Arabic, especially Saudis and Egyptians, frequently mix MSA with their dialects. We also note that the MSA corpus used for training the MSA model may not represent MSA Facebook text well, being built from news websites. This paper also describes in detail the new Arabic corpora created for this research and our experiments.
Zaid Alyafeai
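For the code-switching detection task targeted by the BAEC entry above, a character n-gram classifier is a common baseline. The sketch below is a simple stand-in for illustration only; the paper itself uses a PPM compression-based classifier (Tawa) and SMO (Weka), neither of which is reproduced here, and the two training examples are toy placeholders.

```python
# A character n-gram baseline for code-switch/dialect tagging of text
# segments. This is NOT the paper's PPM compression classifier; it is a
# simple illustrative stand-in with toy training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["مثال مكتوب بالعربية", "an example written in English"]
train_labels = ["Arabic", "English"]  # real data: EGY / SAU / MSA / EN tags

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 4)),  # char 1-4 grams
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["نص جديد للتصنيف", "new text to classify"]))
```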
Shamela
[]
nan
https://github.com/OpenArabic/
unknown
2,016
ar
ar-CLS: (Arabic (Classic))
books
text
crawling
a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time
6,100
documents
Low
Multiple Institutions
OpenITI
Shamela: A Large-Scale Historical Arabic Corpus
https://arxiv.org/pdf/1612.08989.pdf
Arab
No
GitHub
Free
nan
No
text generation, language modeling
LT4DH
9.0
workshop
Workshop on Language Technology Resources and Tools for Digital Humanities
Yonatan Belinkov,Alexander Magidow,Maxim Romanov,Avi Shmidman,Moshe Koppel
,,,,
Arabic is a widely-spoken language with a rich and long history spanning more than fourteen centuries. Yet existing Arabic corpora largely focus on the modern period or lack sufficient diachronic information. We develop a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time. We clean this corpus, process it with a morphological analyzer, and enhance it by detecting parallel passages and automatically dating undated texts. We demonstrate its utility with selected case-studies in which we show its application to the digital humanities.
Zaid Alyafeai
ArSenL
[]
https://huggingface.co/datasets/arbml/ArSenL
http://oma-project.com/ArSenL/download_intro
custom
2,014
ar
mixed
other
text
crawling
large scale Standard Arabic sentiment lexicon (ArSenL) using a combination of existing resources: ESWN, Arabic WordNet, and the Standard Arabic Morphological Analyzer (SAMA)
28,760
tokens
Low
AUB, NYUAD
nan
A Large Scale Arabic Sentiment Lexicon for Arabic Opinion Mining
https://aclanthology.org/W14-3623.pdf
Latn
No
other
Free
nan
No
part of speech tagging, sentiment analysis
WANLP
130.0
workshop
Arabic Natural Language Processing Workshop
Gilbert Badaro,R. Baly,Hazem M. Hajj,Nizar Habash,W. El-Hajj
,,,,
Most opinion mining methods in English rely successfully on sentiment lexicons, such as English SentiWordnet (ESWN). While there have been efforts towards building Arabic sentiment lexicons, they suffer from many deficiencies: limited size, unclear usability plan given Arabic’s rich morphology, or nonavailability publicly. In this paper, we address all of these issues and produce the first publicly available large scale Standard Arabic sentiment lexicon (ArSenL) using a combination of existing resources: ESWN, Arabic WordNet, and the Standard Arabic Morphological Analyzer (SAMA). We compare and combine two methods of constructing this lexicon with an eye on insights for Arabic dialects and other low resource languages. We also present an extrinsic evaluation in terms of subjectivity and sentiment analysis.
Zaid Alyafeai
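Lexicon resources like ArSenL are typically applied by aggregating per-lemma polarity scores over a sentence. A minimal sketch follows; the two entries and their values are illustrative and not taken from ArSenL itself.

```python
# Lexicon-based sentence scoring with an ArSenL-style resource: each
# lemma carries positive/negative scores; the sentence score aggregates
# them over (already lemmatized) tokens. Entries below are made up.
LEXICON = {
    "جميل": {"pos": 0.875, "neg": 0.0},  # "beautiful" (made-up scores)
    "سيء": {"pos": 0.0, "neg": 0.75},    # "bad" (made-up scores)
}

def sentence_polarity(lemmas):
    """Sum of (pos - neg) over in-lexicon lemmas; > 0 means positive."""
    return sum(LEXICON[l]["pos"] - LEXICON[l]["neg"]
               for l in lemmas if l in LEXICON)

print(sentence_polarity(["جميل"]))         # 0.875 -> positive
print(sentence_polarity(["جميل", "سيء"]))  # 0.125 -> mildly positive
```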
Osman
[]
https://huggingface.co/datasets/arbml/Osman_Un_Corpus
https://github.com/drelhaj/OsmanReadability
unknown
2,016
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling
Text readability for 73,000 parallel sentences from English and Arabic UN documents
73,000
sentences
Low
Lancaster University
nan
OSMAN – A Novel Arabic Readability Metric
https://aclanthology.org/L16-1038.pdf
Arab
No
GitHub
Free
nan
No
machine translation
LREC
14.0
conference
International Conference on Language Resources and Evaluation
Mahmoud El-Haj,Paul Rayson
Lancaster University,Lancaster University
We present OSMAN (Open Source Metric for Measuring Arabic Narratives) - a novel open source Arabic readability metric and tool. It allows researchers to calculate readability for Arabic text with and without diacritics. OSMAN is a modified version of the conventional readability formulas such as Flesch and Fog. In our work we introduce a novel approach towards counting short, long and stress syllables in Arabic which is essential for judging readability of Arabic narratives. We also introduce an additional factor called “Faseeh” which considers aspects of script usually dropped in informal Arabic writing. To evaluate our methods we used Spearman’s correlation metric to compare text readability for 73,000 parallel sentences from English and Arabic UN documents. The Arabic sentences were written with the absence of diacritics and in order to count the number of syllables we added the diacritics in using an open source tool called Mishkal. The results show that OSMAN readability formula correlates well with the English ones making it a useful tool for researchers and educators working with Arabic text.
Zaid Alyafeai
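Since OSMAN is described above as a modified Flesch/Fog formula, its general shape can be illustrated with the classic Flesch coefficients. The sketch below uses the English Flesch coefficients purely for illustration; the actual OSMAN metric adds Arabic-specific terms (long and stress syllables, the “Faseeh” factor) with its own coefficients, which are not reproduced here.

```python
# Shape of a Flesch-style readability score: readability drops with
# longer sentences and more syllables per word. Coefficients are the
# classic English Flesch ones, shown only to illustrate the form.
def flesch_style_score(n_words: int, n_sentences: int, n_syllables: int) -> float:
    words_per_sentence = n_words / n_sentences
    syllables_per_word = n_syllables / n_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: 120 words in 8 sentences with 210 syllables.
print(round(flesch_style_score(120, 8, 210), 1))
```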
KalamDZ
[]
https://huggingface.co/datasets/arbml/KalamDZ
https://github.com/LIM-MoDos/KalamDZ
unknown
2,017
ar
ar-DZ: (Arabic (Algeria))
transcribed audio
spoken
crawling and annotation(other)
8 major Algerian Arabic sub-dialects with 4881 speakers and more than 104.4 hours segmented in utterances of at least 6 s
104
hours
Low
Laboratoire d’informatique et Mathématiques Université
nan
Toward a Web-based Speech Corpus for Algerian Arabic Dialectal Varieties
https://aclanthology.org/W17-1317.pdf
Arab
No
GitHub
Free
nan
No
speech classification, dialect identification
WANLP
10.0
workshop
Arabic Natural Language Processing Workshop
Soumia Bougrine,Aicha Chorana,Abdallah Lakhdari,H. Cherroun
,,,
The success of machine learning for automatic speech processing has raised the need for large scale datasets. However, collecting such data is often a challenging task as it implies significant investment involving time and money cost. In this paper, we devise a recipe for building largescale Speech Corpora by harnessing Web resources namely YouTube, other Social Media, Online Radio and TV. We illustrate our methodology by building KALAM’DZ, An Arabic Spoken corpus dedicated to Algerian dialectal varieties. The preliminary version of our dataset covers all major Algerian dialects. In addition, we make sure that this material takes into account numerous aspects that foster its richness. In fact, we have targeted various speech topics. Some automatic and manual annotations are provided. They gather useful information related to the speakers and sub-dialect information at the utterance level. Our corpus encompasses the 8 major Algerian Arabic sub-dialects with 4881 speakers and more than 104.4 hours segmented in utterances of at least 6 s.
Zaid Alyafeai
GeoWAC
[ { "Name": "United_Arab_Emirates ", "Dialect": "ar-AE: (Arabic (United Arab Emirates))", "Volume": "102,370,313", "Unit": "tokens" }, { "Name": "Palestine ", "Dialect": "ar-PS: (Arabic (Palestine))", "Volume": "97,161,325", "Unit": "tokens" }, { "Name": "Morocco ", "Dialect": "ar-MA: (Arabic (Morocco))", "Volume": "78,749,159", "Unit": "tokens" }, { "Name": "Syria", "Dialect": "ar-SY: (Arabic (Syria))", "Volume": "24,308,507", "Unit": "tokens" }, { "Name": "Saudi_Arabia ", "Dialect": "ar-SA: (Arabic (Saudi Arabia))", "Volume": "14,651,603", "Unit": "tokens" }, { "Name": "Algeria", "Dialect": "ar-DZ: (Arabic (Algeria))", "Volume": "11,533,390", "Unit": "tokens" }, { "Name": "Qatar ", "Dialect": "ar-QA: (Arabic (Qatar))", "Volume": "10,783,360", "Unit": "tokens" }, { "Name": "Jordan", "Dialect": "ar-JO: (Arabic (Jordan))", "Volume": "8,638,264", "Unit": "tokens" }, { "Name": "Oman", "Dialect": "ar-OM: (Arabic (Oman))", "Volume": "5,508,084", "Unit": "tokens" }, { "Name": "Bahrain", "Dialect": "ar-BH: (Arabic (Bahrain))", "Volume": "4,524,743", "Unit": "tokens" }, { "Name": "Iraq ", "Dialect": "ar-IQ: (Arabic (Iraq))", "Volume": "4,497,374", "Unit": "tokens" }, { "Name": "Tunisia", "Dialect": "ar-TN: (Arabic (Tunisia))", "Volume": "3,581,885", "Unit": "tokens" }, { "Name": "Sudan", "Dialect": "ar-SD: (Arabic (Sudan))", "Volume": "2,522,356", "Unit": "tokens" }, { "Name": "Egypt", "Dialect": "ar-EG: (Arabic (Egypt))", "Volume": "2,465,722", "Unit": "tokens" }, { "Name": "Djibouti ", "Dialect": "ar-DJ: (Arabic (Djibouti))", "Volume": "1,468,411", "Unit": "tokens" }, { "Name": "Somalia", "Dialect": "ar-SO: (Arabic (Somalia))", "Volume": "255,511", "Unit": "tokens" } ]
nan
https://github.com/jonathandunn/earthLings
GPL-2.0
2,020
multilingual
mixed
other
text
crawling
Gigaword web corpora that explicitly match the ground-truth geographic distribution of each language
373,020,007
tokens
Low
University of Canterbury
nan
Geographically-Balanced Gigaword Corpora for 50 Language Varieties
http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.308.pdf
Arab
No
GitHub
Free
nan
No
dialect identification
LREC
5.0
conference
International Conference on Language Resources and Evaluation
Jonathan Dunn,B. Adams
University of Canterbury,
While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK. To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.
Zaid Alyafeai
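The balancing idea behind GeoWAC, sampling each country in proportion to its ground-truth share of a language's speakers, reduces to computing per-country token quotas. A sketch with made-up shares (not the paper's figures):

```python
# Population-guided corpus balancing: allocate token quotas per country
# in proportion to its share of the language's speakers. Shares below
# are made-up placeholders, not figures from the paper.
POPULATION_SHARE = {"EG": 0.30, "DZ": 0.12, "MA": 0.11, "SA": 0.10}

def token_quotas(target_tokens: int) -> dict:
    total = sum(POPULATION_SHARE.values())
    return {country: round(target_tokens * share / total)
            for country, share in POPULATION_SHARE.items()}

print(token_quotas(1_000_000))  # tokens to sample per country
```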
Named Entities Lexicon
[]
https://huggingface.co/datasets/arbml/Named_Entities_Lexicon
https://github.com/Hkiri-Emna/Named_Entities_Lexicon_Project
unknown
2,017
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
crawling and annotation(other)
Named entity (person, organisation and location) Arabic-English pairs: Person 27,480; Organization 17,237; Location 4,036; 48,753 pairs overall
48,753
tokens
Low
Latice Laboratory
nan
Arabic-English Text Translation Leveraging Hybrid NER
https://aclanthology.org/Y17-1019.pdf
Arab
No
GitHub
Free
nan
No
named entity recognition,machine translation
PACLIC
6.0
conference
Pacific Asia Conference on Language, Information and Computation
Emna Hkiri,S. Mallat,M. Zrigui
,,
Named Entities (NEs) are a very important part of a sentence and their treatment is a potentially useful preprocessing step for Statistical Machine Translation (SMT). Improper translation of NE lapse the quality of the SMT output and it can hurt sentence’s human readability considerably. Dropping NE often causes translation failures beyond the context, affecting both the morphosyntactic formedness of sentences and the word sense disambiguation in the source text. Due to peculiarities of the written Arabic language, the translation task is however rather challenging. In this work, we address the challenging issues of NEs treatment in the context of SMT of Arabic into English texts. We have experimented on three types of named entities which are: Proper names, Organization names and Location names. In this paper, we present integration between machine learning and rule based techniques to tackle Arabic NER problem in attempt to improve the final quality of the SMT system output. We show empirically that each aspect of our approach is important, and that their combination leads to the best results already after integration of NER into SMT. We show improvements in terms of BLEU scores (+4 points) and reduction of out of vocabulary words over a baseline for the News Commentary corpus.
Zaid Alyafeai
Kawarith
[]
https://huggingface.co/datasets/arbml/kawarith_Kuwait_floods_18
https://github.com/alaa-a-a/kawarith
CC BY-NC 4.0
2,021
ar
mixed
social media
text
crawling and annotation(other)
a multi-dialect Arabic Twitter corpus for crisis events, comprising more than a million Arabic tweets collected during 22 crises that occurred between 2018 and 2020 and involved several types of hazard
12,446
sentences
Medium
University of Birmingham
nan
Kawarith: an Arabic Twitter Corpus for Crisis Events
https://aclanthology.org/2021.wanlp-1.5.pdf
Arab
No
GitHub
Free
nan
Yes
crisis detection
WANLP
2.0
workshop
Arabic Natural Language Processing Workshop
Alaa Alharbi,Mark G. Lee
Taibah University;University of Birmingham,
Social media (SM) platforms such as Twitter provide large quantities of real-time data that can be leveraged during mass emergencies. Developing tools to support crisis-affected communities requires available datasets, which often do not exist for low resource languages. This paper introduces Kawarith, a multi-dialect Arabic Twitter corpus for crisis events, comprising more than a million Arabic tweets collected during 22 crises that occurred between 2018 and 2020 and involved several types of hazard. Exploration of this content revealed the most discussed topics and information types, and the paper presents a labelled dataset from seven emergency events that serves as a gold standard for several tasks in crisis informatics research. Using annotated data from the same event, a BERT model is fine-tuned to classify tweets into different categories in the multi-label setting. Results show that BERT-based models yield good performance on this task even with small amounts of task-specific training data.
Zaid Alyafeai
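The Kawarith abstract above fine-tunes BERT for multi-label tweet classification. Below is a minimal sketch of that setup with the transformers library; the label set is hypothetical and the AraBERT checkpoint is just one plausible Arabic encoder, not necessarily the one used in the paper.

```python
# Multi-label crisis-tweet classification with a BERT-style encoder.
# LABELS is a hypothetical category set; CHECKPOINT is one plausible
# Arabic encoder, not necessarily what the paper used.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["caution", "damage", "donations"]    # hypothetical labels
CHECKPOINT = "aubmindlab/bert-base-arabertv2"  # one Arabic encoder

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCE-with-logits loss
)

with torch.no_grad():
    enc = tokenizer("تحذير من أمطار غزيرة", return_tensors="pt")
    probs = torch.sigmoid(model(**enc).logits)[0]  # sigmoid per label
print({label: float(p) for label, p in zip(LABELS, probs)})
```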
ARC-WMI
[]
https://huggingface.co/datasets/arbml/ARC_WMI
https://github.com/iwan-rg/ARC-WMI
CC BY-NC-SA 4.0
2,018
ar
mixed
other
text
crawling and annotation(other)
4476 sentences with over 61k words, extracted from 94 sources of Arabic written medicine information
4,476
sentences
Low
King Saud University Riyadh
nan
ARC-WMI: Towards Building Arabic Readability Corpus for Written Medicine Information
http://lrec-conf.org/workshops/lrec2018/W30/pdf/9_W30.pdf
Arab
No
GitHub
Free
nan
No
readability assessment
LREC
0.0
conference
International Conference on Language Resources and Evaluation
Abeer Aldayel,H. Al-Khalifa,S. Alaqeel,N. Abanmy,Maha Al-Yahya,Mona T. Diab
,,,,,
Developing easy-to-read written medicine information continues to be a challenge in health communication. Readability aims to gauge the difficulty level of a text. Various formulas and machine learning algorithms have been proposed to judge the readability of health materials and assist writers in identifying possible problems related to text difficulty. For this reason, having a corpus annotated with readability levels is fundamental to evaluating the readability formulas and training machine learning algorithms. Arabic suffers from a lack of annotated corpora to evaluate text readability, especially for health materials. To address this shortage, we describe baseline results towards constructing a readability corpus, ARC-WMI, a new Arabic collection of written medicine information annotated with readability levels. We compiled a corpus of 4476 sentences with over 61k words, extracted from 94 sources of Arabic written medicine information. These sentences were manually annotated and assigned a readability level (“Easy,” “Intermediate,” or “Difficult”) by a panel of five health-care professionals.
Zaid Alyafeai
WikiMatrix
[]
nan
https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix
CC BY-SA
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
wikipedia
text
crawling
135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English
4,435
sentences
Low
JHU, Facebook
nan
WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia
https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix
Arab
No
GitHub
Free
nan
No
machine translation
EACL
116.0
conference
European Chapter of the Association for Computational Linguistics
Holger Schwenk,Vishrav Chaudhary,Shuo Sun,Hongyu Gong,Francisco Guzmán
,,,University of Illinois at Urbana-Champaign,
We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 96 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but we systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus is freely available. To get an indication on the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly interesting to train MT systems between distant languages without the need to pivot through English.
Zaid Alyafeai
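WikiMatrix mines bitext by embedding sentences from different Wikipedias into a shared multilingual space (LASER) and pairing nearest neighbours. The sketch below illustrates the idea with plain cosine similarity over toy vectors; the real pipeline uses a margin criterion over k-nearest neighbours rather than a raw threshold.

```python
# Toy bitext mining: embed sentences (LASER in the paper; any
# multilingual encoder works for illustration), then pair each source
# sentence with its most similar target sentence when the similarity
# clears a threshold.
import numpy as np

def cosine_matrix(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return src @ tgt.T

def mine_pairs(src_emb, tgt_emb, threshold=0.8):
    sims = cosine_matrix(src_emb, tgt_emb)
    best = sims.argmax(axis=1)  # most similar target per source sentence
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= threshold]

# Toy 4-dimensional "embeddings": 3 source and 2 target sentences.
rng = np.random.default_rng(0)
print(mine_pairs(rng.random((3, 4)), rng.random((2, 4))))
```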
Medical Corpus
[]
https://huggingface.co/datasets/arbml/MedicalCorpus
https://github.com/licvol/Arabic-Spoken-Language-Understanding/tree/master/MedicalCorpus
unknown
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
corpus from a medical care forum known as Doctissimo
152
sentences
Low
Multiple institutions
nan
An Arabic Multi-Domain Spoken Language Understanding System
https://aclanthology.org/W19-7407.pdf
Arab
No
GitHub
Free
nan
No
disease identification
ICNLSP
0.0
conference
International Conference on Natural Language and Speech Processing
Mohamed Lichouri,Mourad Abbas,R. Djeradi,A. Djeradi
CRSTDLA,,,
In this paper, we suggest the generalization of an Arabic Spoken Language Understanding (SLU) system in a multi-domain human-machine dialog. We are interested particularly in domain portability of SLU system related to both structured (DBMS) and unstructured data (Information Extraction), related to four domains. In this work, we used the thematic approach for four domains which are School Management, Medical Diagnostics, Consultation domain and Question-Answering domain (DAWQAS). We should note that two kinds of classifiers are used in our experiments: statistical and neural, namely: Gaussian Naive Bayes, Bernoulli Naive Bayes, Logistic Regression, SGD, Passive Aggressive Classifier, Perceptron, Linear Support Vector and Convolutional Neural Network.
Zaid Alyafeai
Student University Corpus
[]
https://huggingface.co/datasets/arbml/School_Corpus
https://github.com/licvol/Arabic-Spoken-Language-Understanding/tree/master/UniversityStudentCorpus/MonoLingual
unknown
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
Requests formulated by 300 students to access their information from the education office
126
sentences
Low
Multiple institutions
nan
An Arabic Multi-Domain Spoken Language Understanding System
https://aclanthology.org/W19-7407.pdf
Arab
No
GitHub
Free
nan
No
student management
ICNLSP
0.0
conference
International Conference on Natural Language and Speech Processing
Mohamed Lichouri,Mourad Abbas,R. Djeradi,A. Djeradi
CRSTDLA,,,
In this paper, we suggest the generalization of an Arabic Spoken Language Understanding (SLU) system in a multi-domain human-machine dialog. We are interested particularly in domain portability of SLU system related to both structured (DBMS) and unstructured data (Information Extraction), related to four domains. In this work, we used the thematic approach for four domains which are School Management, Medical Diagnostics, Consultation domain and Question-Answering domain (DAWQAS). We should note that two kinds of classifiers are used in our experiments: statistical and neural, namely: Gaussian Naive Bayes, Bernoulli Naive Bayes, Logistic Regression, SGD, Passive Aggressive Classifier, Perceptron, Linear Support Vector and Convolutional Neural Network.
Zaid Alyafeai
Multilingual Hate Speech Detection Dataset
[]
nan
https://github.com/hate-alert/DE-LIMIT/tree/master/Dataset
unknown
2,020
multilingual
mixed
social media
text
crawling and annotation(other)
Combined MLMA and L-HSAB datasets
5,790
sentences
High
Indian Institute of Technology Kharagpur
L-HSAB,MLMA
Deep Learning Models for Multilingual Hate Speech Detection
https://arxiv.org/pdf/2004.06465.pdf
Arab
No
GitHub
Free
nan
Yes
hate speech detection
ArXiv
27.0
preprint
ArXiv
Sai Saket Aluru,Binny Mathew,Punyajoy Saha,Animesh Mukherjee
,Indian Institute of Technology Kharagpur,,
Hate speech detection is a challenging problem with most of the datasets available in only one language: English. In this paper, we conduct a large scale analysis of multilingual hate speech in 9 languages from 16 different sources. We observe that in low resource setting, simple models such as LASER embedding with logistic regression performs the best, while in high resource setting BERT based models perform better. In case of zero-shot classification, languages such as Italian and Portuguese achieve good results. Our proposed framework could be used as an efficient solution for low-resource languages. These models could also act as good baselines for future multilingual hate speech detection tasks. We have made our code and experimental settings public for other researchers at this https URL.
Zaid Alyafeai
EDGAD
[]
nan
https://github.com/shery91/Egyptian-Dialect-Gender-Annotated-Dataset
unknown
2,019
ar
ar-EG: (Arabic (Egypt))
social media
text
crawling and annotation(other)
Egyptian Dialect Gender Annotated Dataset (EDGAD) obtained from Twitter as well as a proposed text classification solution for the Gender Identification (GI) problem. The dataset consists of 70,000 tweets per gender
140,000
sentences
Medium
Cairo University
nan
Gender identification of egyptian dialect in twitter
https://www.sciencedirect.com/science/article/pii/S1110866518302044
Arab
No
GitHub
Free
nan
No
gender identification
EIJ
6.0
journal
Egyptian Informatics Journal
Shereen Hussein,Mona Farouk,E. Hemayed
,,
Abstract Despite the widespread of social media among all age groups in Arabic countries, the research directed towards Author Profiling (AP) is still in its early stages. This paper provides an Egyptian Dialect Gender Annotated Dataset (EDGAD) obtained from Twitter as well as a proposed text classification solution for the Gender Identification (GI) problem. The dataset consists of 70,000 tweets per gender. In text classification, a Mixed Feature Vector (MFV) with different stylometric and Egyptian Arabic Dialect (EAD) language-specific features is proposed, in addition to N-Gram Feature Vector (NFV). Ensemble weighted average is applied to the Random Forest (RF) with MFV and Logistic Regression (LR) with NFV. The achieved gender identification accuracy is 87.6%.
Zaid Alyafeai
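The EDGAD paper combines Random Forest over a mixed feature vector (MFV) with Logistic Regression over n-grams (NFV) via a weighted ensemble. Below is a rough scikit-learn analogue, with char n-grams standing in for the paper's stylometric MFV features; the weights and two-tweet training set are toy placeholders.

```python
# Weighted soft-voting ensemble in the spirit of EDGAD: LR over word
# n-grams (the NFV side) plus RF over char n-grams, a simple stand-in
# for the paper's stylometric mixed features (MFV).
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

lr_nfv = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
rf_mfv = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                       RandomForestClassifier(n_estimators=100))

ensemble = VotingClassifier([("lr", lr_nfv), ("rf", rf_mfv)],
                            voting="soft", weights=[0.6, 0.4])  # toy weights

tweets, genders = ["تغريدة أولى", "تغريدة ثانية"], ["F", "M"]  # toy data
ensemble.fit(tweets, genders)
print(ensemble.predict(["تغريدة جديدة"]))
```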
MediaSpeech
[]
https://huggingface.co/datasets/arbml/MediaSpeech_ar
https://github.com/NTRLab/MediaSpeech
CC BY 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
crawling and annotation(other)
an open-source 10-hour ASR system evaluation dataset NTR MediaSpeech for 4 languages: Spanish, French, Turkish and Arabic
10
hours
Low
Multiple institutions
nan
MediaSpeech: Multilanguage ASR Benchmark and Dataset
https://arxiv.org/ftp/arxiv/papers/2103/2103.16193.pdf
Arab
No
GitHub
Free
nan
No
speech recognition
ArXiv
0.0
preprint
ArXiv
Rostislav Kolobov,Olga Okhapkina,Olga Omelchishina,A. Platunov,Roman Bedyakin,Vyacheslav Moshkin,D. Men'shikov,N. Mikhaylovskiy
,,,,,,,
The performance of automated speech recognition (ASR) systems is well known to differ for varied application domains. At the same time, vendors and research groups typically report ASR quality results either for limited use simplistic domains (audiobooks, TED talks), or proprietary datasets. To fill this gap, we provide an open-source 10-hour ASR system evaluation dataset NTR MediaSpeech for 4 languages: Spanish, French, Turkish and Arabic. The dataset was collected from the official youtube channels of media in the respective languages, and manually transcribed. We estimate that the WER of the dataset is under 5%. We have benchmarked many ASR systems available both commercially and freely, and provide the benchmark results. We also open-source baseline QuartzNet models for each language.
Zaid Alyafeai
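MediaSpeech is an ASR evaluation set, so its typical use is computing word error rate (WER) between a system hypothesis and the reference transcript. A minimal sketch using the jiwer package (one common choice among WER implementations), on a toy sentence pair:

```python
# Scoring an ASR hypothesis against a reference transcript with word
# error rate. The sentence pair below is a toy example.
from jiwer import wer

reference = "صباح الخير يا أصدقاء"
hypothesis = "صباح الخير اصدقاء"

print(f"WER = {wer(reference, hypothesis):.2%}")
```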
IDAT
[]
https://huggingface.co/datasets/arbml/multilingual_irony
https://github.com/bilalghanem/multilingual_irony
GPL-3.0
2,019
ar
mixed
social media
text
crawling and annotation(other)
Tweets written in Modern Standard Arabic but also in different Arabic language varieties, including Egyptian, Gulf, Levantine and Maghrebi dialects
5,030
sentences
Medium
Multiple institutions
nan
IDAT@FIRE2019: Overview of the Track on Irony Detection in Arabic Tweets
http://ceur-ws.org/Vol-2517/T4-1.pdf
Arab
No
GitHub
Free
nan
Yes
irony detection
FIRE
32.0
conference
Forum for Information Retrieval Evaluation
Bilal Ghanem,Jihen Karoui,F. Benamara,Véronique Moriceau,P. Rosso
,,,,
This overview paper describes the first shared task on irony detection for the Arabic language. The task consists of a binary classification of tweets as ironic or not using a dataset composed of 5,030 Arabic tweets about different political issues and events related to the Middle East and the Maghreb. Tweets in our dataset are written in Modern Standard Arabic but also in different Arabic language varieties including Egypt, Gulf, Levantine and Maghrebi dialects. Eighteen teams registered to the task among which ten submitted their runs. The methods of participants ranged from feature-based to neural networks using either classical machine learning techniques or ensemble methods. The best performing system achieved F-score value of 0.844, showing that classical feature-based models outperform the neural ones.
Zaid Alyafeai
Rewayatech
[]
https://huggingface.co/datasets/arbml/Rewayatech
https://github.com/aseelad/Rewayatech-Saudi-Stories/
CC BY-NC-SA 4.0
2,020
ar
ar-SA: (Arabic (Saudi Arabia))
other
text
crawling
a collection of Arabic stories written in online forums between 2003 and 2015 by online users with anonymized usernames
1,267
documents
Low
Multiple institutions
nan
Rewayatech: Saudi Web Novels Dataset
https://www.preprints.org/manuscript/202008.0628/v1
Arab
No
GitHub
Free
nan
No
text generation, language modeling
other
0.0
preprint
nan
Aseel Addawood,D. Alzeer
,
The internet has changed the way people perceived fiction to a new level. For instance, online forums have given people the opportunity to write without revealing their real identities. Especially in the Saudi context, online users were using these forums to write web novels that reflect their culture, lives, concerns, hopes and dreams. In this paper, we describe a dataset that was collected from one of the online forums that was used for sharing web novels among its readers. The collected dataset contains 1,267 novels between 2003-2015. This data set is available to the research community to analyze to gain a better understanding of the social, economical, and behavioral mindset that was manifested in the community in that decade.
Zaid Alyafeai
Cross-lingual NER
[]
https://huggingface.co/datasets/arbml/Zero_Shot_Cross_Lingual_NER_ar
https://github.com/ntunlp/Zero-Shot-Cross-Lingual-NER
unknown
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
NER data for five different target languages: Spanish, Dutch, German, Arabic and Finnish
2,687
tokens
Low
Multiple institutions
AQMAR
Zero-Resource Cross-Lingual Named Entity Recognition
https://arxiv.org/pdf/1911.09812.pdf
Arab
No
GitHub
Free
nan
Yes
named entity recognition
AAAI
19.0
conference
Association for the Advancement of Artificial Intelligence
M SAIFUL BARI,Shafiq R. Joty,Prathyusha Jwalapuram
Nanyang Technological University,,
Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features. However, these models still require manually annotated training data, which is not available for many languages. In this paper, we propose an unsupervised cross-lingual NER model that can transfer NER knowledge from one language to another in a completely unsupervised way without relying on any bilingual dictionary or parallel data. Our model achieves this through word-level adversarial learning and augmented fine-tuning with parameter sharing and feature augmentation. Experiments on five different languages demonstrate the effectiveness of our approach, outperforming existing models by a good margin and setting a new SOTA for each language pair.
Zaid Alyafeai
EXAMS
[]
https://huggingface.co/datasets/exams
https://github.com/mhardalov/exams-qa
CC BY-SA 4.0
2,020
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others
562
sentences
Low
Multiple institutions
nan
EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering
https://arxiv.org/pdf/2011.03080.pdf
Arab
No
GitHub
Free
nan
Yes
question answering
EMNLP
3.0
conference
Conference on Empirical Methods in Natural Language Processing
Momchil Hardalov,Todor Mihaylov,Dimitrina Zlatkova,Yoan Dinkov,Ivan Koychev,Preslav Nakov
,,,Sofia University,,
We propose EXAMS -- a new benchmark dataset for cross-lingual and multilingual question answering for high school examinations. We collected more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. EXAMS offers a fine-grained evaluation framework across multiple languages and subjects, which allows precise analysis and comparison of various models. We perform various experiments with existing top-performing multilingual pre-trained models and we show that EXAMS offers multiple challenges that require multilingual knowledge and reasoning in multiple domains. We hope that EXAMS will enable researchers to explore challenging reasoning and knowledge transfer methods and pre-trained models for school question answering in various languages which was not possible before. The data, code, pre-trained models, and evaluation are available at this https URL.
Zaid Alyafeai
Mega-COV
[]
nan
https://github.com/echen102/COVID-19-TweetIDs
CC BY-NC-SA 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
social media
text
crawling and annotation(other)
A Billion-Scale Dataset of 100+ Languages for COVID-19
45,000,000
sentences
Medium
University of British Columbia
nan
Mega-COV: A Billion-Scale Dataset of 100+ Languages for COVID-19
https://arxiv.org/pdf/2005.06012.pdf
Arab-Latn
Yes
GitHub
Free
nan
No
COVID relevance, COVID misinformation detection
EACL
10.0
conference
European Chapter of the Association for Computational Linguistics
Muhammad Abdul-Mageed,AbdelRahim Elmadany,Dinesh Pabbi,Kunal Verma,Rannie Lin
,University of British Columbia,,The University of British Columbia,
We describe Mega-COV, a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 268 countries), longitudinal (goes as back as 2007), multilingual (comes in 100+ languages), and has a significant number of location-tagged tweets (~169M tweets). We release tweet IDs from the dataset. We also develop two powerful models, one for identifying whether or not a tweet is related to the pandemic (best F1=97%) and another for detecting misinformation about COVID-19 (best F1=92%). A human annotation study reveals the utility of our models on a subset of Mega-COV. Our data and models can be useful for studying a wide host of phenomena related to the pandemic. Mega-COV and our models are publicly available.
Mustafa Ghaleb
DODA: Darija Open Dataset
[]
https://huggingface.co/datasets/arbml/darija
https://github.com/darija-open-dataset/dataset
MIT License
2,021
ar
ar-MA: (Arabic (Morocco))
other
text
manual curation
more than 10,000 words
10,000
tokens
Low
Unknown
nan
Moroccan Dialect -Darija- Open Dataset
https://arxiv.org/ftp/arxiv/papers/2103/2103.09687.pdf
Arab-Latn
Yes
GitHub
Free
nan
No
transliteration, machine translation
ArXiv
0.0
preprint
ArXiv
Aissam Outchakoucht,Hamza Es-Samaali
,
Nowadays, we are witnessing an unprecedented growth of IT products and services. Yet, in order for many of these solutions to flourish and be viable in a given society, they need to « understand » and be able to communicate to some extent using native languages. However, it turns out that step 0 in any serious engagement with Natural Language Processing (NLP) consists of translating the vocabulary to the widely used and most documented language in this field, namely English.
Mustafa Ghaleb
MSAC
[]
https://huggingface.co/datasets/arbml/MSAC
https://github.com/ososs/Arabic-Sentiment-Analysis-corpus
unknown
2,020
ar
ar-MA: (Arabic (Morocco))
social media
text
crawling and annotation(other)
rich and publicly available Arabic corpus called Moroccan Sentiment Analysis Corpus (MSAC)
2,000
sentences
Medium
Ibn Tofail University
nan
ASA: A framework for Arabic sentiment analysis
https://dl.acm.org/doi/abs/10.1177/0165551519849516
Arab
No
GitHub
Free
nan
No
sentiment analysis
JIS
20.0
journal
Journal of Information Science
Ahmed Oussous,Fatima-Zahra Benjelloun,A. A. Lahcen,Samir Belfkih
,,,
Sentiment analysis (SA), also known as opinion mining, is a growing important research area. Generally, it helps to automatically determine if a text expresses a positive, negative or neutral sentiment. It enables to mine the huge increasing resources of shared opinions such as social networks, review sites and blogs. In fact, SA is used by many fields and for various languages such as English and Arabic. However, since Arabic is a highly inflectional and derivational language, it raises many challenges. In fact, SA of Arabic text should handle such complex morphology. To better handle these challenges, we decided to provide the research community and Arabic users with a new efficient framework for Arabic Sentiment Analysis (ASA). Our primary goal is to improve the performance of ASA by exploiting deep learning while varying the preprocessing techniques. For that, we implement and evaluate two deep learning models namely convolutional neural network (CNN) and long short-term memory (LSTM) models. The framework offers various preprocessing techniques for ASA (including stemming, normalisation, tokenization and stop words). As a result of this work, we first provide a new rich and publicly available Arabic corpus called Moroccan Sentiment Analysis Corpus (MSAC). Second, the proposed framework demonstrates improvement in ASA. In fact, the experimental results prove that deep learning models have a better performance for ASA than classical approaches (support vector machines, naive Bayes classifiers and maximum entropy). They also show the key role of morphological features in Arabic Natural Language Processing (NLP).
Zaid Alyafeai
ARA-TimeBank
[]
nan
https://github.com/nafaa5/Arabic-event-timex-gazetteers-
unknown
2,020
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
enriched Arabic corpus, called “ARA-TimeBank”, for events, temporal expressions and temporal relations based on the new Arabic TimeML.
1,000
sentences
Low
Multiple institutions
nan
Enrichment of Arabic TimeML Corpus
https://link.springer.com/chapter/10.1007/978-3-030-63007-2_51
Arab
No
GitHub
Free
nan
No
event detection
ICCCI
0.0
conference
international scientific conference for research in the field of Computational Collective Intelligence
Nafaa Haffar,Emna Hkiri,M. Zrigui
,,
Automatic temporal information extraction is an important task for many natural language processing systems. This task requires thorough knowledge of the ontological and grammatical characteristics of temporal information in the text as well as annotated linguistic resources of the temporal entities. Before creating the resources or developing the system, it is first necessary to define a structured schema which describes how to annotate temporal entities. In this paper, we present a revised version of Arabic TimeML, and we propose an enriched Arabic corpus, called “ARA-TimeBank”, for events, temporal expressions and temporal relations based on the new Arabic TimeML. We describe our methodology which combines a pre-annotation phase with manual validation and verification. ARA-TimeBank is the first corpus constructed for Arabic, which meets the needs of TimeML and addresses the limitations of existing Arabic TimeBank.
Zaid Alyafeai
ALC: Arabic Learner Corpus
[]
nan
https://catalog.ldc.upenn.edu/LDC2015S10
custom
2,013
ar
ar-MSA: (Arabic (Modern Standard Arabic))
transcribed audio
spoken
manual curation
comprises a collection of texts written by learners of Arabic in Saudi Arabia
1
hours
Low
Leeds University
nan
Arabic Learner Corpus v1: A New Resource for Arabic Language Research
https://eprints.whiterose.ac.uk/75470/22/AtwellVer2.13.pdf
Arab
No
LDC
With-Fee
25 $
No
speech recognition
other
12.0
preprint
nan
Ayg Alfaifi,E. Atwell
,
nan
Zaid Alyafeai
AyaTEC
[]
nan
http://qufaculty.qu.edu.qa/telsayed/datasets/
unknown
2,020
ar
ar-CLS: (Arabic (Classic))
other
text
manual curation
A question answering dataset on the Holy Qur’an
207
sentences
Low
Qatar University
nan
AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an
https://dl.acm.org/doi/pdf/10.1145/3400396
Arab
No
other
Free
nan
No
question answering
TALLIP
0.0
journal
ACM Transactions on Asian and Low-Resource Language Information Processing
R. Malhas,Tamer Elsayed
,
The absence of publicly available reusable test collections for Arabic question answering on the Holy Qur’an has impeded the possibility of fairly comparing the performance of systems in that domain. In this article, we introduce AyaTEC, a reusable test collection for verse-based question answering on the Holy Qur’an, which serves as a common experimental testbed for this task. AyaTEC includes 207 questions (with their corresponding 1,762 answers) covering 11 topic categories of the Holy Qur’an that target the information needs of both curious and skeptical users. To the best of our effort, the answers to the questions (each represented as a sequence of verses) in AyaTEC were exhaustive—that is, all qur’anic verses that directly answered the questions were exhaustively extracted and annotated. To facilitate the use of AyaTEC in evaluating the systems designed for that task, we propose several evaluation measures to support the different types of questions and the nature of verse-based answers while integrating the concept of partial matching of answers in the evaluation.
Zaid Alyafeai
EVOCA
[]
nan
https://sinai.ujaen.es/en/research/resources/evoca-corpus
unknown
2,011
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
machine translation
English corpus generated from the translation of the Arabic corpus OCA
500
sentences
Low
University of Jaén
nan
Bilingual Experiments with an Arabic-English Corpus for Opinion Mining
https://aclanthology.org/R11-1108.pdf
Arab
No
other
Free
nan
No
machine translation, sentiment analysis
RANLP
87.0
conference
Recent Advances in Natural Language Processing
Mohammed Rushdi-Saleh,M. T. Martín-Valdivia,L. A. U. López,José Manuel Perea Ortega
,,,
Recently, Opinion Mining (OM) is receiving more attention due to the abundance of forums, blogs, ecommerce web sites, news reports and additional web sources where people tend to express their opinions. There are a number of works about Sentiment Analysis (SA) studying the task of identifying the polarity, whether the opinion expressed in a text is positive or negative about a given topic. However, most of research is focused on English texts and there are very few resources for other languages. In this work we present an Opinion Corpus for Arabic (OCA) composed of Arabic reviews extracted from specialized web pages related to movies and films using this language. Moreover, we have translated the OCA corpus into English, generating the EVOCA corpus (English Version of OCA). In the experiments carried out in this work we have used different machine learning algorithms to classify the polarity in these corpora showing that, although the experiments with EVOCA are worse than OCA, the results are comparable with other English experiments, since the loss of precision due to the translation is very slight.
Zaid Alyafeai
COVID-FAKES
[]
https://huggingface.co/datasets/arbml/COVID_FAES_ar
https://github.com/mohaddad/COVID-FAKES
unknown
2,020
multilingual
mixed
social media
text
crawling and annotation(other)
Bilingual (Arabic/English) COVID-19 Twitter dataset for misleading information detection
3,263,000
sentences
Medium
University of Victoria
nan
COVID-19-FAKES: A Twitter (Arabic/English) Dataset for Detecting Misleading Information on COVID-19
https://link.springer.com/chapter/10.1007/978-3-030-57796-4_25
Arab
No
GitHub
Free
nan
No
fake news detection
INCoS
17.0
conference
International Conference on Intelligent Networking and Collaborative Systems
Mohamed K. Elhadad,K. F. Li,F. Gebali
,,
This paper aims to aid the ongoing research efforts for combating the Infodemic related to COVID-19. We provide an automatically annotated, bilingual (Arabic/English) COVID-19 Twitter dataset (COVID-19-FAKES). This dataset has been continuously collected from February 04, 2020, to March 10, 2020. For annotating the collected dataset, we utilized the shared information on the official websites and the official Twitter accounts of the WHO, UNICEF, and UN as a source of reliable information, and the collected COVID-19 pre-checked facts from different fact-checking websites to build a ground-truth database. Then, the Tweets in the COVID-19-FAKES dataset are annotated using 13 different machine learning algorithms and employing 7 different feature extraction techniques. We are making our dataset publicly available to the research community (https://github.com/mohaddad/COVID-FAKES). This work will help researchers in understanding the dynamics behind the COVID-19 outbreak on Twitter. Furthermore, it could help in studies related to sentiment analysis, the analysis of the propagation of misleading information related to this outbreak, the analysis of users’ behavior during the crisis, the detection of botnets, the analysis of the performance of different classification algorithms with various feature extraction techniques that are used in text mining. It is worth noting that, in this paper, we use the terms of misleading information, misinformation, and fake news interchangeably.
Zaid Alyafeai
NSURL-2019 Shared Task 8
[]
nan
https://ai.mawdoo3.com/nsurl-2019-task8
CC BY-NC-SA 4.0
2,019
ar
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
This dataset is composed of 12,000 question pairs labelled with 1 for semantically similar questions and 0 for semantically different ones
15,712
sentences
Low
Mawdoo3
nan
NSURL-2019 Shared Task 8: Semantic Question Similarity in Arabic
https://aclanthology.org/2019.nsurl-1.1.pdf
Arab
No
other
Free
nan
Yes
semantic similarity
ArXiv
2.0
preprint
ArXiv
Haitham Seelawi,Ahmad Mustafa,H. Al-Bataineh,Wael Farhan,Hussein T. Al-Natsheh
,,,,
Question semantic similarity (Q2Q) is a challenging task that is very useful in many NLP applications, such as detecting duplicate questions and question answering systems. In this paper, we present the results and findings of the shared task (Semantic Question Similarity in Arabic). The task was organized as part of the first workshop on NLP Solutions for Under Resourced Languages (NSURL 2019). The goal of the task is to predict whether two questions are semantically similar or not, even if they are phrased differently. A total of 9 teams participated in the task. The datasets created for this task are made publicly available to support further research on Arabic Q2Q.
Zaid Alyafeai
SemEval-2021 Task 2
[]
https://huggingface.co/datasets/arbml/MCL_WiC_ar
https://github.com/SapienzaNLP/mcl-wic
CC BY-NC 4.0
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
manual curation
the first entirely manually-curated WiC-style dataset in five European and non-European languages, namely Arabic, Chinese, English, French and Russian.
2,000
tokens
Low
Sapienza University of Rome, Italy
nan
SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC)
https://aclanthology.org/2021.semeval-1.3.pdf
Arab-Latn
No
GitHub
Free
nan
Yes
word sense disambiguation
SEMEVAL
22.0
workshop
International Workshop on Semantic Evaluation
Federico Martelli,N. Kalach,Gabriele Tola,Roberto Navigli
,,,
In this paper, we introduce the first SemEval task on Multilingual and Cross-Lingual Word-in-Context disambiguation (MCL-WiC). This task allows the largely under-investigated inherent ability of systems to discriminate between word senses within and across languages to be evaluated, dropping the requirement of a fixed sense inventory. Framed as a binary classification, our task is divided into two parts. In the multilingual sub-task, participating systems are required to determine whether two target words, each occurring in a different context within the same language, express the same meaning or not. Instead, in the cross-lingual part, systems are asked to perform the task in a cross-lingual scenario, in which the two target words and their corresponding contexts are provided in two different languages. We illustrate our task, as well as the construction of our manually-created dataset including five languages, namely Arabic, Chinese, English, French and Russian, and the results of the participating systems. Datasets and results are available at: https://github.com/SapienzaNLP/mcl-wic.
Mustafa Ghaleb
SemEval-2018 Task 1
[]
https://huggingface.co/datasets/sem_eval_2018_task_1
https://competitions.codalab.org/competitions/17751#learn_the_details-datasets
unknown
2,018
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
social media
text
crawling and annotation(other)
A new Affect in Tweets dataset of more than 22,000 tweets, with subsets annotated for a number of emotion dimensions
22,000
sentences
Medium
Multiple institutions
nan
SemEval-2018 Task 1: Affect in Tweets
https://aclanthology.org/S18-1001.pdf
Arab-Latn
No
CodaLab
Free
nan
Yes
emotion intensity regression, emotion intensity classification, valence regression, valence classification, emotion classification
SEMEVAL
322.0
workshop
International Workshop on Semantic Evaluation
Saif M. Mohammad,Felipe Bravo-Marquez,Mohammad Salameh,Svetlana Kiritchenko
National Research Council Canada,University of Chile,,
We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.
Mustafa Ghaleb
SemEval-2017 Task 4
[]
https://huggingface.co/datasets/arbml/SemEval_2017_ar_subtask_A
https://alt.qcri.org/semeval2017/task4/index.php?id=data-and-tools
unknown
2,017
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
social media
text
crawling and annotation(other)
a large accessible benchmark dataset containing over 70,000 tweets across two languages
70,000
sentences
Medium
Multiple institutions
nan
SemEval-2017 Task 4: Sentiment Analysis in Twitter
https://aclanthology.org/S17-2088.pdf
Arab-Latn
No
other
Free
nan
Yes
sentiment analysis
SEMEVAL
14.0
workshop
International Workshop on Semantic Evaluation
Sara Rosenthal,N. Farra,Preslav Nakov
,,
This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a two-point and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we introduced a new language, Arabic, for all subtasks, and (ii) we made available information from the profiles of the Twitter users who posted the target tweets. The task continues to be very popular, with a total of 48 teams participating this year.
Mustafa Ghaleb
BUG
[]
nan
https://github.com/SLAB-NLP/BUG
MIT License
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
machine translation
A large-scale gender bias dataset of 108K diverse real-world sentences for evaluating gender bias in coreference resolution and machine translation
108,000
sentences
Low
The Hebrew University of Jerusalem
nan
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
https://arxiv.org/pdf/2109.03858.pdf
Arab
No
GitHub
Free
nan
nan
machine translation, gender bias evaluation
ArXiv
nan
preprint
nan
Koren Lazar,Gabriel Stanovsky
,
Recent works have found evidence of gender bias in models of machine translation and coreference resolution using mostly synthetic diagnostic datasets. While these quantify bias in a controlled experiment, they often do so on a small scale and consist mostly of artificial, out-of-distribution sentences. In this work, we find grammatical patterns indicating stereotypical and non-stereotypical gender-role assignments (e.g., female nurses versus male dancers) in corpora from three domains, resulting in a first large-scale gender bias dataset of 108K diverse real-world English sentences. We manually verify the quality of our corpus and use it to evaluate gender bias in various coreference resolution and machine translation models. We find that all tested models tend to over-rely on gender stereotypes when presented with natural inputs, which may be especially harmful when deployed in commercial systems. Finally, we show that our dataset lends itself to finetuning a coreference resolution model, finding it mitigates bias on a held out set. Our dataset and models are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will spur future research into gender bias evaluation mitigation techniques in realistic settings.
Zaid Alyafeai
NewsTent
[]
nan
https://drive.google.com/drive/folders/1lVzaSmJWu63YFrJTszQxMqfVoAYuz28F
unknown
2,021
ar
ar-MSA: (Arabic (Modern Standard Arabic))
news articles
text
crawling
NewsTent extracts 8,443,484 articles and their summaries from 22 newspapers of 19 Arab countries dated from 1999 to 2019
8,443,484
documents
Low
Multiple institutions
nan
NewsTent: Arabic Text Summarization Dataset of 8 Million Arabic News Articles with Summaries
https://openreview.net/pdf?id=Sbf9j9WcAkk
Arab
No
Gdrive
Free
nan
No
summarization
other
0.0
preprint
nan
nan
nan
There has been intensive research about text summarization in many languages. Some languages have reached advanced stages, but Arabic Text Summarization (ATS) is still in its developing stages due to several limitations starting from datasets and ending at the evaluation process. Existing ATS datasets usually contain a range of 100 to 30,000 training samples which make them too small to train a summarization model. Further, existing datasets lack diversity; they rely on one source (e.g. newspaper) or a few to collect the summaries, also some rely on a single Arab country and ignore the rest. We present, NewsTent, the largest and most diverse dataset for Arabic Text Summarization tasks. NewsTent overcomes the limitations of existing datasets. First, eight million articles and their summaries are collected by the "inverted pyramid" writing style. The summaries are verified by human experts and achieved 87.4% accuracy from 1000 samples. Second, NewsTent collected news articles from popular newspapers sources of 19 Arab countries with an eclectic mix of topics varying from finance, politics, sports, etc to establish comprehensive diversity. Then, we corroborate the superiority of the extracted summaries by comparing them to MT5-generated summaries. NewsTent’s large and diverse corpus overcomes the limitations of ATS datasets to enhance the ATS field.
Zaid Alyafeai
ALUE
[]
nan
https://www.alue.org/tasks
unknown
2,021
ar
mixed
other
text
other
8 carefully selected and previously published tasks
nan
nan
Medium
Mawdoo3
OSACT4, SemEval-2018, IDAT, XNLI, MADAR, NSURL-2019
ALUE: Arabic Language Understanding Evaluation
https://aclanthology.org/2021.wanlp-1.18.pdf
Arab
No
other
Free
nan
Yes
irony detection, dialect identification, semantic question similarity, offensive language detection, emotion classification, sentiment intensity regression, entailment
WANLP
nan
workshop
Arabic Natural Language Processing Workshop
Haitham Seelawi,Ibraheem Tuffaha,Mahmoud Gzawi,Wael Farhan,Bashar Talafha,Riham Badawi,Zyad Sober,Oday Al-Dweik,Abed Alhakim Freihat,Hussein T. Al-Natsheh
,,,,,,,,,
The emergence of Multi-task learning (MTL) models in recent years has helped push the state of the art in Natural Language Understanding (NLU). We strongly believe that many NLU problems in Arabic are especially poised to reap the benefits of such models. To this end we propose the Arabic Language Understanding Evaluation Benchmark (ALUE), based on 8 carefully selected and previously published tasks. For five of these, we provide new privately held evaluation datasets to ensure the fairness and validity of our benchmark. We also provide a diagnostic dataset to help researchers probe the inner workings of their models. Our initial experiments show that MTL models outperform their singly trained counterparts on most tasks. But in order to entice participation from the wider community, we stick to publishing singly trained baselines only. Nonetheless, our analysis reveals that there is plenty of room for improvement in Arabic NLU. We hope that ALUE will play a part in helping our community realize some of these improvements. Interested researchers are invited to submit their results to our online, and publicly accessible leaderboard.
Zaid Alyafeai
Mr. TyDi
[]
https://huggingface.co/datasets/castorini/mr-tydi
https://github.com/castorini/mr.tydi
unknown
2,021
multilingual
ar-MSA: (Arabic (Modern Standard Arabic))
other
text
other
Mr. TYDI is constructed from TYDI, a question answering dataset covering eleven typologically diverse languages
16,573
sentences
Low
University of Waterloo
TYDI
Mr. TYDI: A Multi-lingual Benchmark for Dense Retrieval
https://arxiv.org/pdf/2108.08787.pdf
Arab
No
GitHub
Free
nan
Yes
question answering
ArXiv
nan
preprint
nan
Xinyu Zhang,Jimmy J. Lin
,
We present Mr. TYDI, a multi-lingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations. The goal of this resource is to spur research in dense retrieval techniques in non-English languages, motivated by recent observations that existing techniques for representation learning perform poorly when applied to out-of-distribution data. As a starting point, we provide zero-shot baselines for this new dataset based on a multi-lingual adaptation of DPR that we call “mDPR”. Experiments show that although the effectiveness of mDPR is much lower than BM25, dense representations nevertheless appear to provide valuable relevance signals, improving BM25 results in sparse–dense hybrids. In addition to analyses of our results, we also discuss future challenges and present a research agenda in multi-lingual dense retrieval. Mr. TYDI can be downloaded at https://github.com/castorini/mr.tydi.
Zaid Alyafeai
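The Mr. TyDi abstract reports that dense (mDPR) scores improve BM25 in sparse–dense hybrids. The usual hybrid is a per-document linear interpolation of the two scores; below is a sketch with toy numbers (in practice each score list comes from its own retriever and is normalised first).

```python
# Per-document sparse-dense interpolation: combine a BM25 score with a
# dense (mDPR-style) score. Numbers below are toy values; real systems
# normalise each retriever's scores before mixing them.
def hybrid_scores(bm25: dict, dense: dict, alpha: float = 0.5) -> dict:
    """score(d) = alpha * dense(d) + (1 - alpha) * bm25(d)."""
    return {doc: alpha * dense.get(doc, 0.0) + (1 - alpha) * score
            for doc, score in bm25.items()}

bm25_scores = {"doc1": 0.82, "doc2": 0.64}   # already normalised
dense_scores = {"doc1": 0.62, "doc2": 0.71}

ranked = sorted(hybrid_scores(bm25_scores, dense_scores).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```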