
AMRBART-large-finetuned-AMR3.0-AMR2Text

This model is a fine-tuned version of AMRBART-large on the AMR3.0 dataset. It achieves a Sacre-BLEU score of 45.0 on the evaluation set. More details are given in the paper Graph Pre-training for AMR Parsing and Generation by Bai et al., ACL 2022.

Model description

Same as AMRBART.

Training data

The model is fine-tuned on AMR3.0, a dataset consisting of 55,635 training instances, 1,722 validation instances, and 1,898 test instances.

Intended uses & limitations

You can use the model for AMR-to-text generation; it is mostly intended for the news domain.

How to use

Here is how to initialize this model in PyTorch:

from transformers import BartForConditionalGeneration

# Load the fine-tuned AMR-to-text generation model
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text")

Please refer to this repository for tokenizer initialization and data preprocessing.
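
For a rough end-to-end illustration, here is a minimal generation sketch. It assumes a plain BART tokenizer (loaded from facebook/bart-large) as a stand-in and a hand-written linearized AMR string; for correct AMR-specific tokenization and graph linearization, use the tokenizer and preprocessing from the repository above.

from transformers import BartForConditionalGeneration, BartTokenizer

# Stand-in tokenizer for illustration only; use the repository's AMR-aware tokenizer in practice.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text")

# A hand-linearized AMR graph for "The boy wants to go."
linearized_amr = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )"

inputs = tokenizer(linearized_amr, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))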

BibTeX entry and citation info

Please cite this paper if you find this model helpful:

@inproceedings{bai-etal-2022-graph,
    title = "Graph Pre-training for {AMR} Parsing and Generation",
    author = "Bai, Xuefeng  and
      Chen, Yulong and
      Zhang, Yue",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "todo",
    doi = "todo",
    pages = "todo"
}