This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:

```
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
    tokenizer="ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
    device=0,                       # GPU device index; use -1 to run on CPU
    handle_impossible_answer=True,
    max_answer_len=50               # this can be modified
)

# `context` and `question` are your input strings
predictions = qa_pipeline({
    'context': context,
    'question': question
})

print(predictions)
```
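The pipeline returns a dict with `score`, `start`, `end`, and `answer` keys; with `handle_impossible_answer=True`, an unanswerable question comes back as an empty `answer` string. A minimal sketch of how one might post-process that output (the dict values and the `is_answerable` helper below are illustrative assumptions, not real model output):

```python
# Illustrative only: the shape of a typical question-answering
# pipeline output (hypothetical values, not real model output).
prediction = {"score": 0.91, "start": 12, "end": 27, "answer": "Budapest"}

# With handle_impossible_answer=True, an unanswerable question
# typically yields an empty answer string.
impossible = {"score": 0.83, "start": 0, "end": 0, "answer": ""}

def is_answerable(pred, threshold=0.5):
    """Treat an empty answer (or a low-confidence one) as 'no answer found'."""
    return bool(pred["answer"]) and pred["score"] >= threshold

print(is_answerable(prediction))  # True: non-empty answer, high score
print(is_answerable(impossible))  # False: empty answer means unanswerable
```

The `threshold` value is an assumption; tune it on your own data before relying on it.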