Dataset: guardian_authorship

How to load this dataset directly with the 🤗 datasets library:

from datasets import load_dataset

dataset = load_dataset("guardian_authorship")
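Several configurations are available (described below); a minimal sketch of loading one of them by name, using cross_topic_1 purely as an example configuration:

from datasets import load_dataset

# Load one specific configuration; cross_topic_1 corresponds to row 1 of
# Table-4 in Stamatatos 2017 (see the description below).
dataset = load_dataset("guardian_authorship", name="cross_topic_1")

print(dataset)              # shows the train/validation/test splits
print(dataset["train"][0])  # inspect a single example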

Description

A dataset for cross-topic authorship attribution, provided by Stamatatos 2013.

1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (e.g., cross_topic_1 => row 1: P S U&W).
2- The cross-genre scenarios are based on Table-5 in the same paper (e.g., cross_genre_1 => row 1: B P S&U&W).
3- The same-topic/genre scenario is created by grouping all the datasets. For example, to use same_topic and split the data 60-40:

train_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>",
                        split='train[:60%]+validation[:60%]+test[:60%]')
test_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>",
                       split='train[-40%:]+validation[-40%:]+test[-40%:]')

IMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced.

* See https://huggingface.co/docs/datasets/splits.html for more detailed examples.
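As a concrete sketch of the recipe in point 3, assuming each configuration exposes the train/validation/test splits referenced above (cross_topic_1 is used here only as a stand-in for cross_topic_<<#>>):

from datasets import load_dataset

# Slice each split separately, then concatenate the slices, so every split
# contributes 60% of its examples to training and 40% to testing.
train_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[:60%]+validation[:60%]+test[:60%]",
)
test_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[-40%:]+validation[-40%:]+test[-40%:]",
)

print(train_ds.features)            # column names and types
print(len(train_ds), len(test_ds))  # sanity-check the 60/40 proportions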

Citation

@article{stamatatos2013robustness,
    author = {Stamatatos, Efstathios},
    year = {2013},
    month = {01},
    pages = {421-439},
    title = {On the robustness of authorship attribution based on character n-gram features},
    volume = {21},
    journal = {Journal of Law and Policy}
}

@inproceedings{stamatatos2017authorship,
    title={Authorship attribution using text distortion},
    author={Stamatatos, Efstathios},
    booktitle={Proc. of the 15th Conf. of the European Chapter of the Association for Computational Linguistics},
    volume={1},
    pages={1138--1149},
    year={2017}
}

Models trained or fine-tuned on guardian_authorship

None yet. Start fine-tuning now =)