---
dataset_info:
  features:
  - name: date
    dtype: string
  - name: headline
    dtype: string
  - name: short_headline
    dtype: string
  - name: short_text
    dtype: string
  - name: article
    dtype: string
  - name: link
    dtype: string
  splits:
  - name: train
    num_bytes: 107545823
    num_examples: 21847
  download_size: 63956047
  dataset_size: 107545823
language:
- de
size_categories:
- 10K<n<100K
---
# Tagesschau Archive Article Dataset
A scrape of Tagesschau.de articles from 01.01.2018 to 26.04.2023. All source code is available at [github.com/bjoernpl/tagesschau](https://github.com/bjoernpl/tagesschau).
## Dataset Information
CSV structure:
| Field | Description |
|---|---|
| `date` | Date of the article |
| `headline` | Title of the article |
| `short_headline` | A short headline / context |
| `short_text` | A brief summary of the article |
| `article` | The full text of the article |
| `link` | The link to the article on tagesschau.de |
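The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repository ID used here is an assumption, so adjust it to the actual Hub name of this dataset.

```python
# Minimal loading sketch using the `datasets` library.
# NOTE: the repository ID below is an assumption; replace it with the actual Hub ID.
from datasets import load_dataset

dataset = load_dataset("bjoernpl/tagesschau_2018_2023", split="train")

# Each example exposes the fields described in the table above.
example = dataset[0]
print(example["date"], "-", example["headline"])
print(example["short_text"])
print(example["link"])
```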
### Size
The full scrape (01.01.2018 to 26.04.2023) contains 225202 articles from 1942 days. Of these, only 21848 are unique (Tagesschau often keeps articles in circulation for roughly a month). The total download size is ~65 MB.
### Cleaning
- Duplicate articles are removed
- Articles with empty text are removed
- Articles with empty short_texts are removed
- Articles, headlines and short_headlines are stripped of leading and trailing whitespace
More details in `clean.py` in the [source repository](https://github.com/bjoernpl/tagesschau).
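Below is a minimal sketch of the cleaning steps listed above, assuming the raw scrape is available as a CSV with the columns from the table; file paths and the deduplication key are assumptions, and the authoritative implementation is `clean.py`.

```python
# Sketch of the cleaning steps described above (assumed raw CSV with the
# columns date, headline, short_headline, short_text, article, link).
# The authoritative logic lives in clean.py in the source repository.
import pandas as pd

df = pd.read_csv("tagesschau_raw.csv")  # hypothetical path to the raw scrape

# Strip leading and trailing whitespace from articles, headlines and short_headlines
for col in ["article", "headline", "short_headline"]:
    df[col] = df[col].str.strip()

# Remove articles with empty text or empty short_text
df = df[df["article"].str.len() > 0]
df = df[df["short_text"].str.len() > 0]

# Remove duplicate articles (identical full text is used as the key here;
# the real script may deduplicate on a different field)
df = df.drop_duplicates(subset=["article"])

df.to_csv("tagesschau_clean.csv", index=False)
```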