
Multilingual XLM-RoBERTa base for QA on various languages

Overview

Language model: xlm-roberta-base
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0 dev set, German MLQA, German XQuAD
Code: See example in FARM
Infrastructure: 4x Tesla V100

Hyperparameters

batch_size = 22*4
n_epochs = 2
max_seq_len = 256
doc_stride = 128
learning_rate = 2e-5

Corresponding experiment logs in MLflow: link
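
The training itself was run with FARM, but for orientation, here is a hedged sketch of how these hyperparameters would map onto Hugging Face's TrainingArguments. The output_dir name and the per-GPU split of the batch size are assumptions, not taken from the original run:

from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto the transformers
# Trainer API; the original run used FARM, so this is only illustrative.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-squad2",  # assumed name
    per_device_train_batch_size=22,        # 22 per GPU x 4 V100s = 88 total
    num_train_epochs=2,
    learning_rate=2e-5,
)

# max_seq_len=256 and doc_stride=128 are applied at tokenization time, e.g.:
# tokenizer(questions, contexts, max_length=256, stride=128,
#           truncation="only_second", return_overflowing_tokens=True)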

Performance

Evaluated on the SQuAD 2.0 dev set with the official eval script:

"exact": 73.91560683904657
"f1": 77.14103746689592

Evaluated on German MLQA (test-context-de-question-de.json):

"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517

Evaluated on German XQuAD (xquad.de.json):

"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
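
The numbers above come from the official SQuAD 2.0 evaluation script. As a hedged alternative, the same exact-match/F1 metric is also available via Hugging Face's evaluate package; the IDs and answers below are made up purely for illustration:

import evaluate

# squad_v2 expects a no_answer_probability with each prediction so that
# unanswerable questions are scored as well.
squad_v2_metric = evaluate.load("squad_v2")
predictions = [{"id": "q1", "prediction_text": "Denmark",
                "no_answer_probability": 0.0}]
references = [{"id": "q1",
               "answers": {"text": ["Denmark"], "answer_start": [0]}}]
print(squad_v2_metric.compute(predictions=predictions, references=references))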

Usage

In Transformers

from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/xlm-roberta-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
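
For a look under the hood, here is a minimal sketch of what the pipeline does with the model and tokenizer loaded above, assuming a recent transformers version where the model returns an output object with start_logits and end_logits:

import torch

# Encode question and context together, take the most likely start/end
# positions, and decode the token span between them.
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))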

In FARM

from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/xlm-roberta-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
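
Since the example context is about model conversion: FARM also supports the reverse direction. A hedged sketch, assuming FARM's AdaptiveModel.convert_to_transformers() helper; its return type (a single model vs. a list with one model per prediction head) depends on the FARM version:

# Convert the FARM model loaded above back into the transformers format,
# e.g. to save and share it. Return type varies across FARM versions.
transformers_model = model.convert_to_transformers()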

In haystack

For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in haystack:

from haystack.reader.farm import FARMReader            # import paths for haystack 0.x
from haystack.reader.transformers import TransformersReader

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2")
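
In newer haystack releases (1.x) the import paths and signatures changed; a hedged sketch of querying the reader directly against an in-memory document under the 1.x API:

from haystack.nodes import FARMReader
from haystack.schema import Document

# Ask the reader a question against a single in-memory document.
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
docs = [Document(content="The option to convert models between FARM and "
                         "transformers gives freedom to the user and lets "
                         "people easily switch between frameworks.")]
result = reader.predict(query="Why is model conversion important?",
                        documents=docs, top_k=1)
print(result["answers"][0].answer)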

Authors

Branden Chan: branden.chan [at] deepset.ai
Timo Möller: timo.moeller [at] deepset.ai
Malte Pietsch: malte.pietsch [at] deepset.ai
Tanay Soni: tanay.soni [at] deepset.ai

About us


We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work: FARM, haystack

Get in touch: Twitter | LinkedIn | Discord | GitHub Discussions | Website

By the way: we're hiring!
