ALBERT-based model for Argument Relation Identification (ARI)

Argument Mining model trained on English (EN) data for the Argument Relation Identification (ARI) task using the US2016 and QT30 corpora. This is a fine-tuned albert/albert-base-v2 model, inspired by the paper "Transformer-Based Models for Automatic Identification of Argument Relations: A Cross-Domain Evaluation".


This model was trained on the full dataset (train and test splits merged).

Usage


from transformers import AutoTokenizer, AutoModelForSequenceClassification

# label ids predicted by the model
classes_decoder = {
        0: "Inference",
        1: "Conflict",
        2: "Rephrase",
        3: "No-Relation"
    }


# the checkpoint is ALBERT-based, so use the Auto classes
# (BertTokenizer cannot load the ALBERT SentencePiece vocabulary)
model = AutoModelForSequenceClassification.from_pretrained("yevhenkost/ArgumentMining-EN-ARI-AIF-ALBERT")
tokenizer = AutoTokenizer.from_pretrained("yevhenkost/ArgumentMining-EN-ARI-AIF-ALBERT")

text_one, text_two = "The water is wet", "The sun is really hot"

# encode both texts as a single sentence pair
model_inputs = tokenizer(text_one, text_two, return_tensors="pt")

# regular SequenceClassifierOutput
model_output = model(**model_inputs)
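To turn the raw logits in model_output.logits into one of the four relation labels, apply a softmax and take the argmax. A minimal sketch with dummy logits of the same shape (batch of 1, 4 classes), using the classes_decoder mapping above:

```python
import torch

# same label mapping as in the usage snippet above
classes_decoder = {
    0: "Inference",
    1: "Conflict",
    2: "Rephrase",
    3: "No-Relation",
}

def decode_prediction(logits: torch.Tensor) -> str:
    """Return the relation label for a single text pair from model logits."""
    probs = torch.softmax(logits, dim=-1)
    return classes_decoder[int(probs.argmax(dim=-1).item())]

# dummy logits standing in for model_output.logits
dummy_logits = torch.tensor([[0.1, 0.2, 0.05, 2.3]])
print(decode_prediction(dummy_logits))  # -> No-Relation
```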

Metrics

              precision    recall  f1-score   support

   Inference       0.51      0.59      0.55       833
    Conflict       0.46      0.28      0.35       200
    Rephrase       0.51      0.30      0.38       156
 No-Relation       0.82      0.82      0.82      2209

    accuracy                           0.71      3398
   macro avg       0.58      0.50      0.53      3398
weighted avg       0.71      0.71      0.71      3398
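A report in this format can be reproduced with scikit-learn's classification_report. A sketch with hypothetical gold and predicted label ids (the real evaluation uses the test split, not these toy lists):

```python
from sklearn.metrics import classification_report

# hypothetical gold and predicted label ids for illustration only
y_true = [0, 3, 3, 1, 2, 3]
y_pred = [0, 3, 1, 1, 2, 3]

print(classification_report(
    y_true, y_pred,
    labels=[0, 1, 2, 3],
    target_names=["Inference", "Conflict", "Rephrase", "No-Relation"],
    zero_division=0,
))
```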

These results were obtained by a model trained only on the train split of the data and evaluated on the test split.

Cite:

@article{ruiz2021transformer,
author = {R. Ruiz-Dolz and J. Alemany and S. Barbera and A. Garcia-Fornes},
journal = {IEEE Intelligent Systems},
title = {Transformer-Based Models for Automatic Identification of Argument Relations: A Cross-Domain Evaluation},
year = {2021},
volume = {36},
number = {06},
issn = {1941-1294},
pages = {62-70},
doi = {10.1109/MIS.2021.3073993},
publisher = {IEEE Computer Society}
}