---
license: cc-by-4.0
language:
- is
---
Introduction
This dataset, derived from the Icelandic Gigaword Corpus (IGC), is designed as a more comprehensive alternative to the existing dataset at https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/is, which consists primarily of processed text from the Icelandic Wikipedia and is only 52 MB in size. That dataset's normalization and phonemization rely on the espeak-ng backend. Notably, the Icelandic component of espeak-ng has not been updated in over a decade, and its phonemization uses an outdated IPA variant.
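For reference, that earlier pipeline's phonemization step can be reproduced by driving espeak-ng through the phonemizer package; a minimal sketch (the original build script is not published here, so the options shown are illustrative):

```python
from phonemizer import phonemize

text = "Þetta er íslenskur texti."
ipa = phonemize(
    text,
    language="is",              # espeak-ng's Icelandic voice
    backend="espeak",
    strip=True,                 # drop trailing separators
    preserve_punctuation=True,
    with_stress=True,
)
print(ipa)
```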
Significant advancements in the normalization and grapheme-to-phoneme (G2P) conversion of Icelandic have been made through the Icelandic Language Technology program. More information about this program can be found here. The tools developed in this program have been used extensively to enhance the quality of this dataset.
Dataset
This dataset surpasses its predecessor in size, incorporating not only text from the relatively small Icelandic Wikipedia but also from the extensive Icelandic Gigaword Corpus. Specifically, we enriched the Wikipedia text with material from the News1 corpus. To adhere to the maximum size limit of 512 MB for the raw text, we combined the complete Wikipedia text with randomly shuffled paragraphs from the News1 corpus until the size cap was reached.
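A minimal sketch of such a size-capped merge (the function and variable names are illustrative, not the actual build script):

```python
import random

MAX_BYTES = 512 * 1024 * 1024  # 512 MB raw-text budget

def build_corpus(wiki_paragraphs, news1_paragraphs, seed=42):
    """Keep all of Wikipedia, then top up with shuffled News1 paragraphs."""
    selected = list(wiki_paragraphs)
    size = sum(len(p.encode("utf-8")) for p in selected)

    news = list(news1_paragraphs)
    random.Random(seed).shuffle(news)
    for para in news:
        n = len(para.encode("utf-8"))
        if size + n > MAX_BYTES:
            break
        selected.append(para)
        size += n
    return selected
```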
In total, the dataset contains 2,212,618 rows, each corresponding to a paragraph in the IGC's XML format. This structure differs from the original dataset, where each row represented an entire Wikipedia article, which accounts for the significantly higher row count. Paragraphs belonging to the same original document can be merged back together, as the URL and title columns identify their source and order.
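For example, documents can be reassembled from their paragraphs as sketched below, assuming the columns are named `url`, `title`, and `text` (column names are an assumption) and that rows are stored in document order:

```python
from collections import OrderedDict
from datasets import load_dataset

# Placeholder dataset path; substitute the actual repository id.
ds = load_dataset("username/igc-pl-bert-is", split="train")

docs = OrderedDict()
for row in ds:
    key = (row["url"], row["title"])      # identifies the source document
    docs.setdefault(key, []).append(row["text"])

# Rejoin each document's paragraphs in their stored order.
full_texts = {key: "\n".join(parts) for key, parts in docs.items()}
```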
Cleaning
Prior to processing with the BERT tokenizer, the dataset underwent cleaning, deduplication, and language detection to filter out most non-Icelandic text. Paragraphs containing fewer than five words were also removed. These steps eliminated approximately 15% of the original dataset.
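The sketch below illustrates this kind of filter, using the langdetect package for language identification and a hash set for exact deduplication; the detector actually used in the pipeline is not specified, so treat these choices as assumptions:

```python
import hashlib
from langdetect import detect, LangDetectException

seen_hashes = set()

def keep_paragraph(text: str) -> bool:
    """Return True if the paragraph survives cleaning."""
    # Drop paragraphs with fewer than five words.
    if len(text.split()) < 5:
        return False
    # Exact deduplication via a hash of the stripped text.
    digest = hashlib.sha1(text.strip().encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    # Keep only text detected as Icelandic.
    try:
        return detect(text) == "is"
    except LangDetectException:
        return False
```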
Normalization
For normalization, we adapted the Regina Normalizer, which employs a BiLSTM part-of-speech (PoS) tagger. Although the tagger makes the process somewhat time-consuming, the adaptations were necessary to handle a variety of edge cases in the diverse and sometimes unclean text of the IGC. Processing approximately 2.5 GB of raw text took about one day on 50 CPU cores.
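The run can be parallelized along these lines; `normalize_paragraph` is a stand-in for the adapted Regina Normalizer entry point, whose real API is not shown here:

```python
from multiprocessing import Pool

def normalize_paragraph(text: str) -> str:
    # Replace this body with the adapted Regina Normalizer call
    # (BiLSTM PoS tagging followed by rule-based expansion of
    # numbers, abbreviations, etc.).
    return text

def normalize_corpus(paragraphs, workers=50):
    # A generous chunksize keeps per-task overhead low when mapping
    # millions of short paragraphs across the worker pool.
    with Pool(processes=workers) as pool:
        return pool.map(normalize_paragraph, paragraphs, chunksize=256)
```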
Phonemization
Phonemization was conducted with IceG2P, which is likewise based on a BiLSTM model. We adapted it so that its IPA phoneme-set output aligns with the phoneme set used across the other PL-BERT datasets. We first created and refined a new vocabulary from the Wikipedia and News1 corpora, then used the BiLSTM model to generate phonetic transcriptions for the dictionary. We also improved stress labeling and added secondary stresses after performing compound analysis. A significant byproduct of this effort is a considerably improved G2P dictionary, which we plan to integrate into the G2P module and various other open-source projects involving Icelandic G2P.
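The lookup-then-model pattern described above can be sketched as follows; `iceg2p_transcribe` is a placeholder for the adapted IceG2P BiLSTM model, and the tab-separated dictionary format is an assumption:

```python
def load_lexicon(path: str) -> dict:
    """Load a tab-separated word -> IPA pronunciation dictionary."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, ipa = line.rstrip("\n").split("\t", 1)
            lexicon[word] = ipa
    return lexicon

def iceg2p_transcribe(word: str) -> str:
    # Placeholder for the adapted IceG2P BiLSTM model; the real
    # interface (and its stress/compound handling) is not shown here.
    return word

def phonemize_word(word: str, lexicon: dict) -> str:
    # Dictionary lookup first; fall back to the neural model for
    # out-of-vocabulary words.
    return lexicon.get(word.lower(), iceg2p_transcribe(word))
```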