The dataset style_change_detection requires manual data; access must be requested from Zenodo. Download the files from https://zenodo.org/record/3660984, extract them, and place them in a directory of your choice, which will be used as the manual directory, e.g. `~/.manual_dirs/style_change_detection`.

The dataset can then be loaded with:

`datasets.load_dataset("style_change_detection", data_dir="~/.manual_dirs/style_change_detection")`


Dataset Card for "style_change_detection"

Dataset Summary

The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process and of multi-author document analysis in general.

Access to the dataset must be requested from Zenodo (https://zenodo.org/record/3660984).

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

narrow

  • Size of downloaded dataset files: 0.00 MB
  • Size of the generated dataset: 58.12 MB
  • Total amount of disk used: 58.12 MB

An example of 'validation' looks as follows.

{
    "authors": 2,
    "changes": [false, false, true, false],
    "id": "2",
    "multi-author": true,
    "site": "exampleSite",
    "structure": ["A1", "A2"],
    "text": "This is text from example problem 2.\n"
}

wide

  • Size of downloaded dataset files: 0.00 MB
  • Size of the generated dataset: 139.48 MB
  • Total amount of disk used: 139.48 MB

An example of 'train' looks as follows.

{
    "authors": 2,
    "changes": [false, false, true, false],
    "id": "2",
    "multi-author": true,
    "site": "exampleSite",
    "structure": ["A1", "A2"],
    "text": "This is text from example problem 2.\n"
}
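
To load one of these configurations and look at a record, a minimal sketch is shown below. It assumes the manually downloaded files have been extracted to `~/.manual_dirs/style_change_detection`, as in the instructions at the top of this card; adjust the path to wherever you placed the data.

from datasets import load_dataset

# Load the "narrow" configuration; pass "wide" for the other setting.
# data_dir must point at the manually downloaded and extracted files.
dataset = load_dataset(
    "style_change_detection",
    "narrow",
    data_dir="~/.manual_dirs/style_change_detection",
)

example = dataset["validation"][0]
print(example["id"], example["authors"], example["multi-author"])
print(example["changes"])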

Data Fields

The data fields are the same among all splits.

narrow

  • id: a string feature.
  • text: a string feature.
  • authors: a int32 feature.
  • structure: a list of string features.
  • site: a string feature.
  • multi-author: a bool feature.
  • changes: a list of bool features.

wide

  • id: a string feature.
  • text: a string feature.
  • authors: a int32 feature.
  • structure: a list of string features.
  • site: a string feature.
  • multi-author: a bool feature.
  • changes: a list of bool features.
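
As a rough illustration of how the `multi-author` and `changes` fields fit together, the sketch below collects the positions that `changes` flags as author switches. The exact granularity of each boolean (for example, whether it refers to a paragraph boundary) is an assumption here and should be checked against the PAN 2020 task description; the helper function is hypothetical and not part of the dataset.

def change_positions(example):
    # Indices at which `changes` reports an author switch
    # (assumed to correspond to segment boundaries in `text`).
    return [i for i, changed in enumerate(example["changes"]) if changed]

# Using the record shown in the Data Instances section above:
example = {"multi-author": True, "changes": [False, False, True, False]}
print(change_positions(example))  # -> [2]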

Data Splits

name     train   validation
narrow    3418         1713
wide      8030         4019
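
These counts can be reproduced locally with a short check, again assuming the manual data directory used in the loading example above:

from datasets import load_dataset

for config in ("narrow", "wide"):
    ds = load_dataset(
        "style_change_detection",
        config,
        data_dir="~/.manual_dirs/style_change_detection",
    )
    # num_rows gives the number of examples in each split.
    print(config, {split: ds[split].num_rows for split in ds})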

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@inproceedings{bevendorff2020shared,
  title={Shared Tasks on Authorship Analysis at PAN 2020},
  author={Bevendorff, Janek and Ghanem, Bilal and Giachanou, Anastasia and Kestemont, Mike and Manjavacas, Enrique and Potthast, Martin and Rangel, Francisco and Rosso, Paolo and Specht, G{\"u}nther and Stamatatos, Efstathios and others},
  booktitle={European Conference on Information Retrieval},
  pages={508--516},
  year={2020},
  organization={Springer}
}

Contributions

Thanks to @lewtun, @ghomasHudson, @thomwolf, @lhoestq for adding this dataset.
