---
license: mit
datasets:
- squad_v2
- squad
- mrqa
- mbartolo/synQA
- adversarial_qa
- newsqa
- trivia_qa
- search_qa
- hotpot_qa
- natural_questions
language:
- en
library_name: transformers
pipeline_tag: question-answering
tags:
- deberta
- deberta-v3
- question-answering
- squad
- squad_v2
- mrqa
- synQA
- adversarial_qa
model-index:
- name: sjrhuschlee/deberta-v3-base-squad2-ext-v1
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 79.483
name: Exact Match
- type: f1
value: 82.343
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 85.894
name: Exact Match
- type: f1
value: 91.298
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 44.867
name: Exact Match
- type: f1
value: 55.996
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 80.19
name: Exact Match
- type: f1
value: 85.028
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 69.712
name: Exact Match
- type: f1
value: 81.171
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 81.544
name: Exact Match
- type: f1
value: 89.782
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 80.05
name: Exact Match
- type: f1
value: 87.756
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 60.481
name: Exact Match
- type: f1
value: 68.686
name: F1
---
# deberta-v3-base for Extractive QA
This is the [deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) model, fine-tuned using the SQuAD 2.0, MRQA, AdversarialQA, and SynQA datasets. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
## Overview
**Language model:** deberta-v3-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0, MRQA, AdversarialQA, SynQA
**Eval data:** SQuAD 2.0
**Infrastructure:** 1x NVIDIA 3070
## Model Usage
```python
import torch
from transformers import (
AutoModelForQuestionAnswering,
AutoTokenizer,
pipeline
)
model_name = "sjrhuschlee/deberta-v3-base-squad2-ext-v1"
# a) Using pipelines
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
qa_input = {
'question': 'Where do I live?',
'context': 'My name is Sarah and I live in London'
}
res = nlp(qa_input)
# {'score': 0.984, 'start': 30, 'end': 37, 'answer': ' London'}
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = 'Where do I live?'
context = 'My name is Sarah and I live in London'
encoding = tokenizer(question, context, return_tensors="pt")
start_scores, end_scores = model(
encoding["input_ids"],
attention_mask=encoding["attention_mask"],
return_dict=False
)
# Recover the answer span from the argmax of the start/end logits
all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores):torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# 'London'
```
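Because the model was trained on SQuAD 2.0-style unanswerable questions, the pipeline can also be asked to return an empty prediction when the context contains no answer. A minimal sketch reusing the `nlp` pipeline from above (the question here is illustrative; `handle_impossible_answer` is a standard option of the `question-answering` pipeline):

```python
# c) Handling unanswerable questions
# With handle_impossible_answer=True the pipeline may return an empty string
# when it judges the question unanswerable from the given context.
res = nlp(
    {
        'question': 'What is my favorite color?',  # illustrative; not answerable from this context
        'context': 'My name is Sarah and I live in London',
    },
    handle_impossible_answer=True,
)
# An empty 'answer' indicates the model predicts "no answer".
```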
## Dataset Preparation
The MRQA dataset was updated to fix some errors and reformatted to work with the `run_qa.py` example script provided in the Hugging Face Transformers library.
The changes included:
- Updating incorrect answer start locations (usually off by a few characters), as sketched below
- Updating the answer text to exactly match the text found in the context
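The cleanup scripts themselves are not reproduced here, but the gist of the realignment is sketched below (`realign_answer` and its `window` parameter are hypothetical names, not part of the original preparation code):

```python
def realign_answer(context: str, answer_text: str, answer_start: int, window: int = 10) -> int:
    """Search near the stored offset for an exact match of the answer text.

    Hypothetical helper: illustrates fixing answer starts that are off by a
    few characters, so that context[idx:idx + len(answer_text)] matches
    answer_text exactly.
    """
    idx = context.find(answer_text, max(0, answer_start - window))
    return idx if idx != -1 else answer_start  # fall back to the stored offset
```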
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
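For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch assuming the standard `Trainer`/`run_qa.py` setup; the `output_dir` value is hypothetical):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="deberta-v3-base-squad2-ext-v1",  # hypothetical output path
    learning_rate=1e-6,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 12 * 8 = total train batch size of 96
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```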
### Framework versions
- Transformers 4.31.0.dev0