---
license: cc-by-nc-sa-3.0
metrics:
  - f1
  - accuracy
widget:
  - text: We are at a relationship crossroad
    example_title: Metaphoric1
  - text: The car waits at a crossroad
    example_title: Literal1
  - text: I win the argument
    example_title: Metaphoric2
  - text: I win the game
    example_title: Literal2
---

# Multilingual-Metaphor-Detection

This page provides a multilingual language model (XLM-RoBERTa) fine-tuned for token-level metaphor detection using the Hugging Face token-classification approach. Label 1 corresponds to metaphoric usage.
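A minimal usage sketch with the `transformers` token-classification pipeline is shown below. It assumes the repository ID matches this model card's name; the label string (`LABEL_1`) is the default naming and may differ in practice.

```python
from transformers import pipeline

# Assumption: the model is hosted under this repository ID.
classifier = pipeline(
    "token-classification",
    model="lwachowiak/Multilingual-Metaphor-Detection",
)

# Label 1 marks metaphoric token usage.
for token in classifier("We are at a relationship crossroad"):
    print(token["word"], token["entity"], round(token["score"], 3))
```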

## Dataset

The model is trained on the VU Amsterdam Metaphor Corpus, which was annotated at the word level following the metaphor identification protocol. The training corpus is restricted to English; however, XLM-R shows decent zero-shot performance when tested on other languages.
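Because XLM-R is multilingual, the same pipeline can be applied zero-shot to non-English input. A short illustration, reusing the classifier from the sketch above (the German sentence and its output are illustrative, not reported results):

```python
# Zero-shot application to German: "Scheideweg" ("crossroads")
# is used metaphorically in this sentence.
tokens = classifier("Wir stehen an einem Scheideweg in unserer Beziehung")
print([(t["word"], t["entity"]) for t in tokens])
```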

## Results

Following the evaluation criteria from the 2020 Second Shared Task on Metaphor Detection, our model achieves an F1-score of 0.76 for the metaphor class when training XLM-R<sub>Base</sub> and 0.77 when training XLM-R<sub>Large</sub>.

We train for 8 epochs, loading the model with the best evaluation performance at the end and using a learning rate of 2e-5. Of the allocated training data, 10% is used for validation, while the final test set is kept separate and used only for the final evaluation.
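A sketch of this setup with the Hugging Face `Trainer` API follows. Only the hyperparameters stated above come from this card; the output directory is hypothetical, the choice of F1 as the model-selection metric is an assumption, and dataset loading, tokenization, and the `Trainer` call itself are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlmr-metaphor-detection",  # hypothetical output path
    num_train_epochs=8,                    # 8 epochs, as described above
    learning_rate=2e-5,                    # learning rate from this card
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,           # keep the best checkpoint by eval performance
    metric_for_best_model="f1",            # assumption: metaphor-class F1
)

# 10% of the training data held out for validation (datasets library):
# train_val = dataset["train"].train_test_split(test_size=0.1)
```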

## Code for Training and Reference

The training and evaluation code is available on GitHub. Our paper describing training and model application is available online:

```
@inproceedings{wachowiak2022drum,
  title={Drum Up SUPPORT: Systematic Analysis of Image-Schematic Conceptual Metaphors},
  author={Wachowiak, Lennart and Gromann, Dagmar and Xu, Chao},
  booktitle={Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)},
  pages={44--53},
  year={2022}
}
```