# Welcome to RoBERTArg!

## Model description
This model was trained on ~40k heterogeneous, manually annotated sentences (Stab et al. 2018) from controversial topics (e.g., abortion) to classify text into one of two labels: NON-ARGUMENT (0) and ARGUMENT (1).
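For quick experimentation, here is a minimal inference sketch using the Hugging Face `pipeline` API; the repository ID `chkla/roberta-argument` below is an assumption and may need to be replaced with the actual checkpoint name:

```python
from transformers import pipeline

# NOTE: the model ID below is an assumption; substitute the actual Hub checkpoint.
classifier = pipeline("text-classification", model="chkla/roberta-argument")

sentences = [
    "Abortion should be legal because a woman has the right to decide over her own body.",
    "The debate about abortion has been going on for decades.",
]

# Each prediction holds a label (ARGUMENT / NON-ARGUMENT, depending on the
# checkpoint's label mapping) and a confidence score.
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(f"{prediction['label']} ({prediction['score']:.3f}) -> {sentence}")
```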
## Dataset
Please note that the label distribution in the dataset is imbalanced:
- NON-ARGUMENTS:
- ARGUMENTS:
## Model training
RoBERTArg was fine-tuned from the pre-trained RoBERTa (base) model with the Hugging Face Trainer using the following hyperparameters. The hyperparameters were determined through a hyperparameter search on a 20% validation set.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-arg",         # assumed output path; not reported in the original card
    num_train_epochs=2,                 # values below were selected via hyperparameter search
    learning_rate=2.3102e-06,
    seed=8,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
)
```
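A rough sketch of how these arguments can be wired into the Hugging Face `Trainer`; the CSV corpus, tokenization settings, and column names below are illustrative assumptions, not the original training script:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

# Illustrative placeholder: a corpus with "text" and "label" (0/1) columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,              # the TrainingArguments defined above
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```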
## Evaluation
The model was evaluated on a held-out 20% of the sentences (80-20 train-test split).
| Model | Accuracy | F1 | Recall (ARGUMENT) | Recall (NON-ARGUMENT) | Precision (ARGUMENT) | Precision (NON-ARGUMENT) |
|---|---|---|---|---|---|---|
| RoBERTArg | 0.8193 | 0.8021 | 0.8463 | 0.7986 | 0.7623 | 0.8719 |
Confusion matrix on the same 20% evaluation set:

| | ARGUMENT | NON-ARGUMENT |
|---|---|---|
| ARGUMENT | 2213 | 558 |
| NON-ARGUMENT | 325 | 1790 |
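For reference, a short sketch of how per-class metrics of this kind can be computed with scikit-learn; the label vectors below are illustrative placeholders (0 = NON-ARGUMENT, 1 = ARGUMENT), not the actual evaluation data:

```python
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
)

# Illustrative placeholders for gold and predicted labels on the test split.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Accuracy         :", accuracy_score(y_true, y_pred))
print("F1 (ARGUMENT)    :", f1_score(y_true, y_pred, pos_label=1))
print("Recall ARG       :", recall_score(y_true, y_pred, pos_label=1))
print("Recall NON-ARG   :", recall_score(y_true, y_pred, pos_label=0))
print("Precision ARG    :", precision_score(y_true, y_pred, pos_label=1))
print("Precision NON-ARG:", precision_score(y_true, y_pred, pos_label=0))
print(confusion_matrix(y_true, y_pred))  # rows = gold labels, columns = predictions
```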
## Intended Uses & Potential Limitations
The model can serve as a practical starting point for the complex topic of argument mining, which remains a challenging task in part because of the different conceptions of what constitutes an argument.
This model is part of an open-source project providing several models to detect arguments in text. Check out chkla/argument-analyzer/ for more details.
Enjoy and stay tuned!
Stab et al. (2018): Cross-topic Argument Mining from Heterogeneous Sources. EMNLP 2018. https://public.ukp.informatik.tu-darmstadt.de/UKP_Webpage/publications/2018/2018_EMNLP_CS_Cross-topicArgumentMining.pdf