
# minilm-uncased-squad2 for QA on COVID-19

## Overview

**Language model:** deepset/minilm-uncased-squad2
**Language:** English
**Downstream task:** Extractive QA
**Training data:** SQuAD-style COVID-19 QA
**Infrastructure:** A4000

Initially fine-tuned for https://github.com/CDCapobianco/COVID-Question-Answering-REST-API

## Hyperparameters

```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/minilm-uncased-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
dev_split = 0
x_val_splits = 5
no_ans_boost = -100
```
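These hyperparameter names follow deepset's FARM conventions. Purely as an illustration, they would map onto a Hugging Face `TrainingArguments` setup roughly as sketched below; this is an assumption for readability, not the script actually used to train this model.

```python
# Hypothetical sketch: mapping the hyperparameters above onto Hugging Face
# Trainer settings. This is NOT the exact training script used for this model.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    TrainingArguments,
)

base_model = "deepset/minilm-uncased-squad2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

args = TrainingArguments(
    output_dir="minilm-uncased-squad2-covidqa",  # assumed output path
    per_device_train_batch_size=24,  # batch_size = 24
    num_train_epochs=3,              # n_epochs = 3
    learning_rate=3e-5,              # learning_rate = 3e-5
    lr_scheduler_type="linear",      # lr_schedule = LinearWarmup
    warmup_ratio=0.1,                # warmup_proportion = 0.1
)

# max_seq_len and doc_stride apply at tokenization time: long contexts are
# split into overlapping windows of 384 tokens with a stride of 128.
encoded = tokenizer(
    "What is the incubation period?",       # illustrative question
    "Some long COVID-19 context ... " * 50, # illustrative context
    max_length=384,
    stride=128,
    truncation="only_second",
    return_overflowing_tokens=True,
)
```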

**License:** cc-by-4.0

## Performance

```
Single EM-Scores: [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
Single F1-Scores: [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]
XVAL EM: 0.7013
XVAL F1: 0.8153
```
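The XVAL numbers match the plain average of the five per-fold scores, which is easy to verify:

```python
from statistics import mean

em = [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
f1 = [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]

print(round(mean(em), 4))  # 0.7013
print(round(mean(f1), 4))  # 0.8153
```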

## Usage

### In Haystack

For QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in Haystack:

```python
from haystack.nodes import FARMReader  # Haystack 1.x

reader = FARMReader(model_name_or_path="Frizio/minilm-uncased-squad2-covidqa")
```
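Continuing from the reader above, a minimal usage sketch assuming Haystack 1.x (the document and question here are illustrative, not from the training data):

```python
from haystack import Document

# Illustrative document; in practice these would come from a document store.
docs = [Document(content="COVID-19 is caused by the SARS-CoV-2 coronavirus.")]

prediction = reader.predict(
    query="Which virus causes COVID-19?",
    documents=docs,
    top_k=1,
)
print(prediction["answers"][0].answer)
```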

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "Frizio/minilm-uncased-squad2-covidqa"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
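With the model and tokenizer loaded directly (option b above), a simplified sketch of what the pipeline does under the hood is to pick the highest-scoring start and end logits and decode that span:

```python
import torch

# Encode the same question/context pair as above.
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span selection: argmax over start and end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```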
