roberta-base-squad2 for QA on COVID-19

Overview

Language model: deepset/roberta-base-squad2
Language: English
Downstream-task: Extractive QA
Training data: SQuAD-style CORD-19 annotations from 23rd April
Code: See example in FARM
Infrastructure: Tesla V100

Hyperparameters

batch_size = 24
n_epochs = 3
base_LM_model = "deepset/roberta-base-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
xval_folds = 5
dev_split = 0
no_ans_boost = -100
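
For orientation, here is a condensed sketch of how these hyperparameters would plug into a FARM fine-tuning run. It loosely follows the question_answering example in the FARM repo; exact class signatures differ between FARM releases, the data paths are placeholders, and the 5-fold cross-validation wrapper is omitted:

import torch
from farm.data_handler.data_silo import DataSilo
from farm.data_handler.processor import SquadProcessor
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.optimization import initialize_optimizer
from farm.modeling.prediction_head import QuestionAnsweringHead
from farm.modeling.tokenization import Tokenizer
from farm.train import Trainer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
base_lm = "deepset/roberta-base-squad2"
tokenizer = Tokenizer.load(base_lm)

# SQuAD-style CORD-19 annotations; data_dir and filename are placeholders
processor = SquadProcessor(tokenizer=tokenizer, max_seq_len=384, doc_stride=128,
                           data_dir="data/covid", train_filename="train.json")
data_silo = DataSilo(processor=processor, batch_size=24)

model = AdaptiveModel(language_model=LanguageModel.load(base_lm),
                      prediction_heads=[QuestionAnsweringHead()],
                      embeds_dropout_prob=0.1,
                      lm_output_types=["per_token"],
                      device=device)

# FARM's default schedule is LinearWarmup with warmup_proportion=0.1,
# matching lr_schedule and warmup_proportion above
model, optimizer, lr_schedule = initialize_optimizer(
    model=model, learning_rate=3e-5, device=device,
    n_batches=len(data_silo.loaders["train"]), n_epochs=3)

trainer = Trainer(model=model, optimizer=optimizer, data_silo=data_silo,
                  epochs=3, n_gpu=1, lr_schedule=lr_schedule, device=device)
trainer.train()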

Performance

5-fold cross-validation on the data set led to the following results:

Single EM-Scores: [0.222, 0.123, 0.234, 0.159, 0.158]
Single F1-Scores: [0.476, 0.493, 0.599, 0.461, 0.465]
Single top_3_recall Scores: [0.827, 0.776, 0.860, 0.771, 0.777]
XVAL EM: 0.17890995260663506
XVAL f1: 0.49925444207319924
XVAL top_3_recall: 0.8021327014218009
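
The XVAL values are consistent with a plain average over the five folds; since the per-fold scores above are rounded to three decimals, recomputing them only matches to roughly that precision:

em = [0.222, 0.123, 0.234, 0.159, 0.158]
f1 = [0.476, 0.493, 0.599, 0.461, 0.465]
top3 = [0.827, 0.776, 0.860, 0.771, 0.777]
for name, scores in [("EM", em), ("F1", f1), ("top_3_recall", top3)]:
    print(name, sum(scores) / len(scores))
# EM ~0.179, F1 ~0.499, top_3_recall ~0.802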

The model published here is the one obtained from the third fold of the cross-validation.

Usage

In Transformers

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline


model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
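
The pipeline returns a dict with the extracted answer span, its character offsets in the context, and a confidence score; the values below are illustrative:

print(res)
# e.g. {'score': 0.61, 'start': 58, 'end': 77, 'answer': 'freedom to the user'}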

In FARM

from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
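
To reuse the converted model later without converting again, it can be saved to disk and then picked up via Inferencer.load(). The directory is a placeholder; Tokenizer.load() hands back a regular transformers tokenizer, so save_pretrained() applies:

save_dir = "saved_models/roberta-base-squad2-covid"  # placeholder path
model.save(save_dir)
tokenizer.save_pretrained(save_dir)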

In haystack

For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in haystack:

from haystack.reader.farm import FARMReader                   # haystack 0.x paths;
from haystack.reader.transformers import TransformersReader   # newer releases moved these

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
# or
reader = TransformersReader(model="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
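
The reader can also be queried directly, without a retriever. A minimal sketch, assuming an older haystack release (the Document import path and the question=/query= keyword have moved between versions):

from haystack import Document  # haystack.schema in 1.x releases

doc = Document(text="The option to convert models between FARM and transformers "
                    "gives freedom to the user and lets people easily switch between frameworks.")
# older releases use question=, newer ones use query=
prediction = reader.predict(question="Why is model conversion important?",
                            documents=[doc], top_k=3)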

Authors

Branden Chan: branden.chan [at] deepset.ai
Timo Möller: timo.moeller [at] deepset.ai
Malte Pietsch: malte.pietsch [at] deepset.ai
Tanay Soni: tanay.soni [at] deepset.ai
Bogdan Kostić: bogdan.kostic [at] deepset.ai

About us


We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work: German BERT (bert-base-german-cased), FARM, haystack

Get in touch: Twitter | LinkedIn | Website