---
annotations_creators:
- no-annotation
language:
- de
- fr
- el
- et
- fi
- hr
- ji
- pl
- ru
- sr
- sv
- uk
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: 'Europeana Newspapers'
size_categories:
- 1M
---

# Dataset Card for Europeana Newspapers

The dataset is distributed as parquet files organized by language and decade. The helper function below lists the parquet files in the dataset repository and returns Hub URLs for the subset you want, filtering by language code and, optionally, by the year encoded in each file name (this sketch assumes file names of the form `{lang}-{year}.parquet`):

```python
from huggingface_hub import hf_hub_url, list_repo_files


def get_files_for_lang_and_years(languages=None, min_year=None, max_year=None):
    # List all parquet files in the dataset repository
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]

    # Keep only files for the requested language codes
    # (assumes file names such as "fr-1900.parquet")
    if languages:
        parquet_files = [
            f for f in parquet_files if f.split("/")[-1].split("-")[0] in languages
        ]

    # Optionally filter by the year encoded in the file name
    if min_year is not None or max_year is not None:
        filtered_files = []
        for f in parquet_files:
            parts = f.split("/")[-1].split("-")
            if len(parts) > 1:
                year_part = parts[1].split(".")[0]
                if year_part.isdigit():
                    year = int(year_part)
                    if (min_year is None or min_year <= year) and (
                        max_year is None or year <= max_year
                    ):
                        filtered_files.append(f)
        parquet_files = filtered_files

    # Convert repository paths to full URLs
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in parquet_files
    ]
```

You can use this function to get the URLs for files you want to download from the Hub:

```python
from datasets import load_dataset

# Example 1: Load French newspaper data
french_files = get_files_for_lang_and_years(['fr'])
ds_french = load_dataset("parquet", data_files=french_files, num_proc=4)

# Example 2: Load Ukrainian and French newspapers between 1900 and 1950
historical_files = get_files_for_lang_and_years(
    languages=['uk', 'fr'],
    min_year=1900,
    max_year=1950
)
ds_historical = load_dataset("parquet", data_files=historical_files, num_proc=4)

# Example 3: Load all German newspapers from the 19th century
german_19th_century = get_files_for_lang_and_years(
    languages=['de'],
    min_year=1800,
    max_year=1899
)
ds_german_historical = load_dataset("parquet", data_files=german_19th_century, num_proc=4)
```

### Use Cases

This dataset is particularly valuable for:

#### Machine Learning Applications

- Training large language models on historical texts
- Fine-tuning models for historical language understanding
- Developing OCR post-correction models using the confidence scores
- Training layout analysis models using the bounding box information

#### Digital Humanities Research

- Cross-lingual analysis of historical newspapers
- Studying information spread across European regions
- Tracking cultural and political developments over time
- Analyzing language evolution and shifts in terminology
- Topic modeling of historical discourse
- Named entity recognition in historical contexts

#### Historical Research

- Comparative analysis of news reporting across different countries
- Studying historical events from multiple contemporary perspectives
- Tracking the evolution of public discourse on specific topics
- Analyzing changes in journalistic style and content over centuries

#### OCR Development

- Using the `mean_ocr` and `std_ocr` fields to assess OCR quality
- Filtering content based on quality thresholds for specific applications (see the sketch below)
- Benchmarking OCR improvement techniques against historical materials
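As a concrete example of quality filtering, the sketch below keeps only pages whose `mean_ocr` value clears a threshold. It builds on the `get_files_for_lang_and_years` helper above, and the `0.9` cut-off is purely illustrative, not a recommended value:

```python
from datasets import load_dataset

# Load the French subset selected with the helper defined above
ds = load_dataset(
    "parquet",
    data_files=get_files_for_lang_and_years(['fr']),
    split="train",
)

# Keep only pages with reasonably confident OCR.
# 0.9 is an illustrative threshold; tune it for your application.
high_quality = ds.filter(
    lambda page: page["mean_ocr"] is not None and page["mean_ocr"] >= 0.9
)

print(f"Kept {high_quality.num_rows:,} of {ds.num_rows:,} pages")
```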
#### Institutional Uses

- Enabling libraries and archives to provide computational access to their collections
- Supporting searchable interfaces for digital historical collections
- Creating teaching resources for historical linguistics and discourse analysis

## Dataset Creation

### Source Data

The dataset is derived from the Europeana Newspapers collection, which contains digitized historical newspapers from various European countries. The original data is in ALTO XML format, which includes OCR text along with layout and metadata information.

#### Data Collection and Processing

The BigLAM initiative developed a comprehensive processing pipeline to convert the Europeana newspaper collections from their original ALTO XML format into a structured dataset format suitable for machine learning and digital humanities research:

1. **ALTO XML Parsing**: Custom parsers handle various ALTO schema versions (1-5 and the BnF dialect) to ensure compatibility across the entire collection.
2. **Text Extraction**: The pipeline extracts full-text content while preserving reading order and handling special cases like hyphenated words.
3. **OCR Quality Assessment**: For each page, the system calculates:
   - `mean_ocr`: Average confidence score of the OCR engine
   - `std_ocr`: Standard deviation of confidence scores to indicate consistency
4. **Visual Element Extraction**: The pipeline captures bounding box coordinates for illustrations and visual elements, stored in the `bounding_boxes` field.
5. **Metadata Integration**: Each page is enriched with corresponding metadata from separate XML files:
   - Publication title and date
   - Language identification (including multi-language detection)
   - IIIF URLs for accessing the original digitized images
   - Persistent identifiers linking back to the source material
6. **Parallel Processing**: The system uses multiprocessing to efficiently handle the massive collection (approximately 32 billion tokens).
7. **Dataset Creation**: The processed data is converted to Hugging Face's `Dataset` format and saved as parquet files, organized by language and decade for easier access.

This processing approach preserves the valuable structure and metadata of the original collection while making it significantly more accessible for computational analysis and machine learning applications.

## Bias, Risks, and Limitations

- **OCR Quality**: The dataset is based on OCR'd historical documents, which may contain errors, especially in older newspapers or those printed in non-standard fonts.
- **Historical Bias**: Historical newspapers reflect the biases, prejudices, and perspectives of their time periods, which may include content that would be considered offensive by modern standards.
- **Temporal and Geographic Coverage**: The coverage across languages, time periods, and geographic regions may be uneven.
- **Data Completeness**: Some newspaper issues or pages may be missing or incomplete in the original Europeana collection.

### Recommendations

- Users should consider the OCR confidence scores (`mean_ocr` and `std_ocr`) when working with this data, possibly filtering out low-quality content depending on their use case.
- Researchers studying historical social trends should be aware of the potential biases in the source material and interpret findings accordingly.
- For applications requiring high text accuracy, additional validation or correction may be necessary.

## More Information

For more information about the original data source, visit [Europeana Newspapers](https://pro.europeana.eu/page/iiif#download).

## Dataset Card Contact

Daniel van Strien (daniel [at] hf [dot] co)

For questions about this processed version of the Europeana Newspapers dataset, please contact the BigLAM initiative representative above.