---
language:
- de
license: cc-by-4.0
pretty_name: Perlentaucher
dataset_info:
features:
- name: date
dtype: date32
- name: author
dtype: string
- name: title
dtype: string
- name: ISBN
dtype: string
- name: price
dtype: decimal128(6, 2)
- name: n_pages
dtype: uint16
- name: n_reviews
dtype: uint8
- name: content
dtype: string
- name: publisher
dtype: string
- name: pub_place
dtype: string
- name: pub_year
dtype: uint16
- name: media_type
dtype: string
- name: media_spec
dtype: string
- name: is_relevant
dtype: bool
- name: is_novel
dtype: bool
splits:
- name: train
num_bytes: 33239971
num_examples: 89790
download_size: 15694656
dataset_size: 33239971
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- Buchkritiken
- Bücherschau des Tages
- Books
- Media
- Book reviews
---
The data set contains information on books that have been reviewed at least once.
The reviews are drawn from reputable German print media such as the *Frankfurter Allgemeine Zeitung* (FAZ), the *Süddeutsche Zeitung* (SZ), and *Die Zeit*, as well as from serious broadcasters such as *Deutschlandfunk Kultur*.
These reviews are collected by the culture magazine [*Perlentaucher*](https://www.perlentaucher.de), and a book is labeled *read a lot* when it has received at least three reviews.
*Perlentaucher* has published daily since 2000, except on Sundays and German public holidays. On average, there are about *M* = 12 (*SD* = 5) book entries per day.
Data was harvested for all entries from March 15, 2000 through May 16, 2024. In total, the data set consists of 89,766 rows for 7,349 days and 14 columns.
**Variables overview:**
- `date`: the publication day of the book review (dtype: `date32`, `YYYY-MM-DD`)
- `is_relevant`: whether the book is relevant, i.e. marked as *read a lot* (dtype: `bool`)
- `author`: the author of the book (dtype: `string`)
- `title`: the title of the book (dtype: `string`)
- `ISBN`: the International Standard Book Number, ISBN-13 (dtype: `string`)
- `media_type`: the type of the book, e.g. soft- or hardcover (German labelling!), or, if it is not a book, the type of medium (dtype: `string`)
- `n_pages`: the number of book pages; missing for other media types such as audio files (dtype: `uint16`)
- `price`: the price of the book in Euros (dtype: `decimal128(6, 2)`)
- `content`: the dust-cover blurb (dtype: `string`)
- `n_reviews`: the number of review notes, an indicator of whether the book is relevant; lumped to zero for *n* < 3 notes (dtype: `uint8`)
- `publisher`: the publisher of the book (dtype: `string`)
- `pub_place`: the place (city) where the book was published (dtype: `string`)
- `pub_year`: the year the book was published (dtype: `uint16`)
- `is_novel`: whether the book is a novel (dtype: `bool`)
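
The *read a lot* labeling and the lumping of `n_reviews` described above can be sketched as follows. This is a minimal illustration, not part of the dataset tooling: `label_relevance` is a hypothetical helper, and the example records are invented.

```python
# Hypothetical sketch of the labeling rule described in this card:
# a book counts as relevant ("read a lot") when it has at least
# three review notes; below that threshold the note count is
# lumped to zero.

def label_relevance(n_reviews: int, threshold: int = 3) -> tuple[int, bool]:
    """Return the (lumped) note count and the is_relevant flag."""
    if n_reviews >= threshold:
        return n_reviews, True
    return 0, False

# Invented example rows (titles and counts are illustrative only).
books = [
    {"title": "Book A", "n_reviews": 5},
    {"title": "Book B", "n_reviews": 2},
]
for book in books:
    book["n_reviews"], book["is_relevant"] = label_relevance(book["n_reviews"])

# Book A keeps its 5 notes and is flagged relevant;
# Book B falls below the threshold, so its count is lumped to 0.
```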