---
language: en
datasets:
- squad_v2
license: cc-by-4.0
model-index:
- name: plm_qa
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 0
      verified: false
    - name: F1
      type: f1
      value: 0
      verified: false
    - name: total
      type: total
      value: 11869
      verified: false
---

# roberta-base for QA fine-tuned on community safety domain data

We fine-tuned the RoBERTa-based model [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on LiveSafe community safety dialogue data for event argument extraction, framed as question answering.

### Using the model in Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "yirenl2/plm_qa"

# a) Get predictions with the question-answering pipeline
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
    "question": "What is the location of the incident?",
    "context": "I was attacked by someone in front of the bus station.",
}
res = nlp(QA_input)

# b) Load the model and tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
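
The pipeline returns a dictionary describing the extracted answer span. Below is a minimal, self-contained sketch of inspecting that output; the keyword-argument call style is an alternative to passing a dict, and the printed values shown in comments are illustrative, not guaranteed model outputs.

```python
from transformers import pipeline

model_name = "yirenl2/plm_qa"

# Standard question-answering pipeline output fields: "answer", "score", "start", "end".
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
res = nlp(
    question="What is the location of the incident?",
    context="I was attacked by someone in front of the bus station.",
)

print(res["answer"])             # extracted answer span from the context (e.g. a location phrase)
print(res["score"])              # model confidence for the extracted span
print(res["start"], res["end"])  # character offsets of the answer within the context
```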