---
language: en
tags:
- AMRBART
license: mit
---

## AMRBART (large-sized model)

AMRBART is continually pre-trained on English text and AMR graphs, starting from the BART model. It was introduced in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al. at ACL 2022 and first released in [this repository](https://github.com/muyeby/AMRBART).

## Model description

AMRBART follows BART, which uses a transformer encoder-decoder architecture. AMRBART is pre-trained with six tasks:

+ learning to reconstruct the text from the corrupted text;
+ learning to reconstruct the AMR graph from the corrupted AMR graph;
+ learning to reconstruct the text from the corrupted text and its corresponding AMR graph;
+ learning to reconstruct the AMR graph from the corrupted AMR graph and its corresponding text;
+ learning to reconstruct the text from the corrupted text and its corresponding corrupted AMR graph;
+ learning to reconstruct the AMR graph from the corrupted AMR graph and its corresponding corrupted text.

AMRBART is particularly effective when fine-tuned for AMR parsing and AMR-to-text generation.

## Training data

AMRBART is pre-trained on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset of 55,635 training instances, and on 200,000 sentences randomly sampled from [English Gigaword](https://catalog.ldc.upenn.edu/LDC2003T05).

## Intended uses & limitations

You can use the raw model for AMR encoding or AMR parsing, but it is mostly intended to be fine-tuned on a downstream task.

## How to use

Here is how to initialize this model in PyTorch:

```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large")
```

Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing; a hedged end-to-end sketch is also given at the end of this card.

## BibTeX entry and citation info

Please cite the following paper if you find this model helpful:

```bibtex
@inproceedings{bai-etal-2022-graph,
    title = "Graph Pre-training for {AMR} Parsing and Generation",
    author = "Bai, Xuefeng and Chen, Yulong and Zhang, Yue",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "todo",
    doi = "todo",
    pages = "todo"
}
```
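As a quick end-to-end check of the checkpoint loaded in the "How to use" section above, here is a minimal AMR-to-text sketch. It assumes the checkpoint ships standard tokenizer files loadable with `AutoTokenizer`, and the linearized-graph string is illustrative only; the exact AMR linearization, special tokens, and preprocessing are defined in [this repository](https://github.com/muyeby/AMRBART).

```python
# A minimal AMR-to-text sketch. Assumptions: the checkpoint exposes standard
# tokenizer files, and the linearized graph below is purely illustrative; the
# official linearization and preprocessing live in the AMRBART repository.
from transformers import AutoTokenizer, BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large")
tokenizer = AutoTokenizer.from_pretrained("xfbai/AMRBART-large")

# Illustrative linearization of the AMR for "The boy wants to go."
graph = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 boy ) )"

inputs = tokenizer(graph, return_tensors="pt")
generated = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

For AMR parsing (text to graph), the same `generate` call applies with text as input and a linearized graph as output; fine-tuning on a downstream dataset, as described in the repository, is recommended over relying on the raw pre-trained checkpoint.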