
English Vossian Antonomasia Sequence Tagger

This page presents a fine-tuned BERT-base-cased language model for tagging Vossian Antonomasia expressions in text at the word level. The tags {B,I}-SRC refer to the source chunks, {B,I}-MOD to the modifier chunks, and {B,I}-TRG to the target chunks, if present. We use the IOB tagging format.
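
The model can be used with the standard token-classification pipeline from the transformers library. The sketch below is a minimal, hedged example: the model identifier "MODEL_NAME" is a placeholder for this repository's ID, and the example sentence is purely illustrative.

```python
# Minimal sketch: tagging Vossian Antonomasia chunks with the transformers
# token-classification pipeline. "MODEL_NAME" is a placeholder -- replace it
# with the repository ID of this model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="MODEL_NAME",             # placeholder for this repository's ID
    aggregation_strategy="simple",  # merge B-/I- word pieces into chunks
)

sentence = "He is the Michael Jordan of machine learning."  # illustrative example
for entity in tagger(sentence):
    # entity_group is SRC, MOD, or TRG; word is the matched chunk
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

With aggregation enabled, consecutive B-/I- word pieces are merged, so the output lists whole source, modifier, and target chunks rather than individual tokens.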

Dataset

The dataset is an annotated Vossian Antonomasia dataset that evolved from Schwab et al. 2019 and was updated in Schwab et al. 2022.

Results

F1 score: 0.926

For more results, please have a look at our paper.

Please note that this model was trained on the annotated dataset only and did not use any additional unlabeled training data. Thus, it may not be as robust to new data as the best model in our paper.


Cite

Please cite the following paper when using this model.

@article{schwab2022rodney,
  title={“The Rodney Dangerfield of Stylistic Devices”: End-to-End Detection and Extraction of Vossian Antonomasia Using Neural Networks},
  author={Schwab, Michel and J{\"a}schke, Robert and Fischer, Frank},
  journal={Frontiers in Artificial Intelligence},
  volume={5},
  year={2022},
  publisher={Frontiers Media SA}
}

Interested in more?

Visit our website for more research on Vossian Antonomasia, including interactive visualizations for exploration.
