---
license: cc-by-nc-4.0
task_categories:
- fill-mask
- text-generation
language:
- am
- ar
- ay
- bm
- bbj
- bn
- bs
- bg
- ca
- cs
- ku
- da
- el
- en
- et
- ee
- fil
- fi
- fr
- fon
- gu
- guw
- ha
- he
- hi
- hu
- ig
- id
- it
- ja
- kk
- km
- ko
- lv
- ln
- lt
- lg
- luo
- mk
- mos
- my
- nl
- 'no'
- ne
- om
- or
- pa
- pcm
- fa
- pl
- pt
- mg
- ro
- rn
- ru
- sn
- so
- es
- sr
- sq
- sw
- sv
- ta
- tet
- ti
- th
- tn
- tr
- tw
- uk
- ur
- wo
- xh
- yo
- zh
- zu
- de
multilinguality:
- multilingual
pretty_name: PolyNews
size_categories:
- 1K
---

# Dataset Card for PolyNews

## Dataset Structure

### Data Instances

```python
>>> from datasets import load_dataset
>>> data = load_dataset('aiana94/polynews', 'ron_Latn')

# Please specify the language code.

# A data point example is below:
{
  "text": "Un public numeros. Este uimitor succesul după doar trei ediții . ",
  "provenance": "globalvoices"
}
```

### Data Fields

- text (string): news text
- provenance (string): source dataset for the news example

### Data Splits

For all languages, there is only the `train` split.

## Dataset Creation

### Curation Rationale

Multiple multilingual, human-translated datasets containing news texts have been released in recent years. However, these datasets are stored in different formats on various websites, and many contain numerous near-duplicates. With PolyNews, we aim to provide an easily accessible, unified, and deduplicated dataset that combines these disparate data sources. It can be used for domain adaptation of language models, language modeling, or text generation in both high-resource and low-resource languages.

### Source Data

The source data consists of five multilingual news datasets:

- [Wikinews](https://www.wikinews.org/) (latest dump available in May 2024)
- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MasakhaNews](https://huggingface.co/datasets/masakhane/masakhanews) (`train` split)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)

#### Data Collection and Processing

We processed the data using a **working script** which covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/blob/main/scripts/construct_polynews.sh).

The data processing pipeline consists of:

1. Downloading the WMT-News and GlobalVoices news from OPUS.
2. Downloading the latest dump from WikiNews.
3. Loading the MasakhaNews and MAFAND datasets from the Hugging Face Hub (only the `train` splits).
4. Concatenating, per language, all news texts from the source datasets.
5. Data cleaning (e.g., removal of exact duplicates, short texts, and texts in other scripts).
6. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language.

### Annotations

We augment the original samples with the `provenance` annotation, which specifies the original data source from which a particular example stems.

#### Personal and Sensitive Information

The data is sourced from news outlets and contains mentions of public figures and individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Users should keep in mind that the dataset contains short news texts (mostly titles), which might limit the applicability of systems developed on it to other domains.
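
Before building on the data, it can be useful to inspect how short the texts of a given language actually are. The snippet below is a minimal sketch, not part of the official tooling; it reuses the `ron_Latn` configuration from the usage example above, and the 15-word threshold is an arbitrary choice for illustration.

```python
from datasets import load_dataset

# Minimal sketch: estimate how short the texts are for one language configuration.
# The 'ron_Latn' configuration is taken from the usage example above; the 15-word
# threshold is an arbitrary cut-off used only for illustration.
data = load_dataset("aiana94/polynews", "ron_Latn", split="train")

word_counts = [len(example["text"].split()) for example in data]
print(f"examples: {len(word_counts)}")
print(f"mean length (words): {sum(word_counts) / len(word_counts):.1f}")
print(f"share under 15 words: {sum(c < 15 for c in word_counts) / len(word_counts):.1%}")
```

Running a similar check per configuration can help decide whether title-length texts are adequate for a given downstream task.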
## Additional Information

### Licensing Information

The dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

**BibTeX:**

```bibtex
@misc{iana2024news,
  title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation},
  author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
  year={2024},
  eprint={2406.12634},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2406.12634}
}
```