---
paperswithcode_id: null
pretty_name: StyleChangeDetection
dataset_info:
- config_name: narrow
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: authors
    dtype: int32
  - name: structure
    sequence: string
  - name: site
    dtype: string
  - name: multi-author
    dtype: bool
  - name: changes
    sequence: bool
  splits:
  - name: train
    num_bytes: 40499150
    num_examples: 3418
  - name: validation
    num_bytes: 20447137
    num_examples: 1713
  download_size: 0
  dataset_size: 60946287
- config_name: wide
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: authors
    dtype: int32
  - name: structure
    sequence: string
  - name: site
    dtype: string
  - name: multi-author
    dtype: bool
  - name: changes
    sequence: bool
  splits:
  - name: train
    num_bytes: 97403392
    num_examples: 8030
  - name: validation
    num_bytes: 48850089
    num_examples: 4019
  download_size: 0
  dataset_size: 146253481
---
Dataset Card for "style_change_detection"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://pan.webis.de/clef20/pan20-web/style-change-detection.html
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 207.20 MB
- Total amount of disk used: 207.20 MB
Dataset Summary
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process, and for multi-author document analysis in general.
Access to the dataset needs to be requested from Zenodo.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
narrow
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 60.94 MB
- Total amount of disk used: 60.94 MB
An example of 'validation' looks as follows.
```json
{
  "authors": 2,
  "changes": [false, false, true, false],
  "id": "2",
  "multi-author": true,
  "site": "exampleSite",
  "structure": ["A1", "A2"],
  "text": "This is text from example problem 2.\n"
}
```
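To illustrate how these fields relate, the instance above can be inspected directly. Reading each entry of `changes` as a flag for one candidate switch position follows the task description (detecting positions at which the author switches); that reading is an interpretation of the schema, not something the card states explicitly:

```python
# The 'narrow' validation instance shown above, as a Python dict.
example = {
    "authors": 2,
    "changes": [False, False, True, False],
    "id": "2",
    "multi-author": True,
    "site": "exampleSite",
    "structure": ["A1", "A2"],
    "text": "This is text from example problem 2.\n",
}

# Count positions flagged as a style change.
num_switches = sum(example["changes"])
print(num_switches)  # 1

# In this instance, "multi-author" agrees with "at least one change flagged".
print(any(example["changes"]) == example["multi-author"])  # True
```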
wide
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 146.26 MB
- Total amount of disk used: 146.26 MB
An example of 'train' looks as follows.
```json
{
  "authors": 2,
  "changes": [false, false, true, false],
  "id": "2",
  "multi-author": true,
  "site": "exampleSite",
  "structure": ["A1", "A2"],
  "text": "This is text from example problem 2.\n"
}
```
Data Fields
The data fields are the same among all splits.
narrow
- `id`: a `string` feature.
- `text`: a `string` feature.
- `authors`: an `int32` feature.
- `structure`: a `list` of `string` features.
- `site`: a `string` feature.
- `multi-author`: a `bool` feature.
- `changes`: a `list` of `bool` features.
wide
- `id`: a `string` feature.
- `text`: a `string` feature.
- `authors`: an `int32` feature.
- `structure`: a `list` of `string` features.
- `site`: a `string` feature.
- `multi-author`: a `bool` feature.
- `changes`: a `list` of `bool` features.
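Since both configurations share the same fields, the types can be collected into a small mapping and checked against an instance. This is a minimal sketch transcribed from the lists above (the `SCHEMA` mapping and `check_instance` helper are illustrative, not part of the loader):

```python
# Python-side types for each field, transcribed from the field lists above.
SCHEMA = {
    "id": str,
    "text": str,
    "authors": int,
    "structure": list,   # list of str
    "site": str,
    "multi-author": bool,
    "changes": list,     # list of bool
}

def check_instance(instance):
    """Return True if every expected field is present with the right type."""
    return all(
        name in instance and isinstance(instance[name], expected)
        for name, expected in SCHEMA.items()
    )

sample = {
    "authors": 2,
    "changes": [False, False, True, False],
    "id": "2",
    "multi-author": True,
    "site": "exampleSite",
    "structure": ["A1", "A2"],
    "text": "This is text from example problem 2.\n",
}
print(check_instance(sample))  # True
```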
Data Splits
| name   | train | validation |
|--------|------:|-----------:|
| narrow |  3418 |       1713 |
| wide   |  8030 |       4019 |
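The per-split counts above can be combined to get the total number of examples per configuration; a small arithmetic sketch using the numbers from the table:

```python
# Example counts per split, taken from the table above.
splits = {
    "narrow": {"train": 3418, "validation": 1713},
    "wide": {"train": 8030, "validation": 4019},
}

# Total examples per configuration.
totals = {config: sum(counts.values()) for config, counts in splits.items()}
print(totals)  # {'narrow': 5131, 'wide': 12049}
```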
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
```bibtex
@inproceedings{bevendorff2020shared,
  title={Shared Tasks on Authorship Analysis at PAN 2020},
  author={Bevendorff, Janek and Ghanem, Bilal and Giachanou, Anastasia and Kestemont, Mike and Manjavacas, Enrique and Potthast, Martin and Rangel, Francisco and Rosso, Paolo and Specht, G{\"u}nther and Stamatatos, Efstathios and others},
  booktitle={European Conference on Information Retrieval},
  pages={508--516},
  year={2020},
  organization={Springer}
}
```
Contributions
Thanks to @lewtun, @ghomasHudson, @thomwolf, @lhoestq for adding this dataset.