---
language: en
tags:
  - AMRBART
license: mit
---

AMRBART is a pretrained semantic parser that converts a sentence into an Abstract Meaning Representation (AMR) graph. You can find our paper here (arXiv); the original implementation is available here.
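For illustration, the sentence "The boy wants to go." corresponds to the following AMR graph in PENMAN notation (a standard example from the AMR literature, shown here only to clarify what the parser produces):

(w / want-01
      :ARG0 (b / boy)
      :ARG1 (g / go-01
            :ARG0 b))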

News🎈

  • (2022/12/10) Fixed max_length bugs in AMR parsing and updated results.
  • (2022/10/16) Released the AMRBART-v2 model, which is simpler, faster, and stronger.

Requirements

  • python 3.8
  • pytorch 1.8
  • transformers 4.21.3
  • datasets 2.4.0
  • Tesla V100 or A100

We recommend using conda to manage virtual environments:

conda env update --name <env> --file requirements.yml
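If you prefer pip, the pinned versions listed above translate roughly to the line below (a sketch only; the exact torch build to install depends on your CUDA version):

pip install torch==1.8.0 transformers==4.21.3 datasets==2.4.0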

Data Processing

You may download the AMR corpora at LDC.

Please follow this repository to preprocess AMR graphs:

bash run-process-acl2022.sh

Usage

Our model is available on Hugging Face. Here is how to initialize an AMR parsing model in PyTorch:

from transformers import BartForConditionalGeneration
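# model_interface ships with the AMRBART repository (it is not a pip package),
# so run this from the repository root for the import below to resolve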
from model_interface.tokenization_bart import AMRBartTokenizer      # We use our own tokenizer to process AMRs

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing-v2")
tokenizer = AMRBartTokenizer.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing-v2")
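With the model and tokenizer loaded, parsing is a standard sequence-to-sequence generation step. The snippet below is a minimal sketch that assumes AMRBartTokenizer follows the usual transformers tokenizer API; the repository's inference scripts handle the full pre- and post-processing, and max_length/num_beams here are illustrative values, not the paper's settings:

sentence = "The boy wants to go."
inputs = tokenizer(sentence, return_tensors="pt")   # encode the input sentence
outputs = model.generate(**inputs, max_length=512, num_beams=5)
# the decoded string is a linearized AMR; the repository's post-processing
# scripts convert it back to a PENMAN graph
print(tokenizer.decode(outputs[0], skip_special_tokens=True))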