Query this model via the Inference API:

curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"question": "Where does she live?", "context": "She lives in Berlin."}' \
    https://api-inference.huggingface.co/models/a-ware/xlmroberta-squadv2
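
The same request can be sent from Python; a minimal sketch using the requests library, mirroring the curl call above (fill in your own API token):

import requests

API_URL = "https://api-inference.huggingface.co/models/a-ware/xlmroberta-squadv2"
headers = {"Authorization": "Bearer YOUR_ORG_OR_USER_API_TOKEN"}

# POST the question/context pair and read back the JSON answer
payload = {"question": "Where does she live?", "context": "She lives in Berlin."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())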


How to use this model directly from the 🤗/transformers library:

			
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("a-ware/xlmroberta-squadv2")
model = AutoModelForQuestionAnswering.from_pretrained("a-ware/xlmroberta-squadv2")
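
For end-to-end inference, the checkpoint can also be wrapped in the transformers question-answering pipeline; a minimal sketch, reusing the example question and context from the API call above:

from transformers import pipeline

# Build a QA pipeline around the fine-tuned checkpoint
qa = pipeline("question-answering", model="a-ware/xlmroberta-squadv2", tokenizer="a-ware/xlmroberta-squadv2")

result = qa(question="Where does she live?", context="She lives in Berlin.")
print(result["answer"])  # expected to be 'Berlin'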

XLM-ROBERTA-LARGE finetuned on SQuADv2

This is the xlm-roberta-large model fine-tuned on the SQuAD v2 dataset for the question-answering task.

Model details

XLM-RoBERTa was proposed in the paper **XLM-R: State-of-the-art cross-lingual understanding through self-supervision**.

Model training

This model was trained with the following parameters using the simpletransformers wrapper:

train_args = {
    'learning_rate': 1e-5,
    'max_seq_length': 512,
    'doc_stride': 512,
    'overwrite_output_dir': True,
    'reprocess_input_data': False,
    'train_batch_size': 8,
    'num_train_epochs': 2,
    'gradient_accumulation_steps': 2,
    'no_cache': True,
    'use_cached_eval_features': False,
    'save_model_every_epoch': False,
    'output_dir': "bart-squadv2",
    'eval_batch_size': 32,
    'fp16_opt_level': 'O2',
    }
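
For context, a minimal sketch of how such an argument dictionary is typically passed to simpletransformers' question-answering wrapper; the toy training example below is illustrative and not part of the original training script:

from simpletransformers.question_answering import QuestionAnsweringModel

# A toy example in the context/qas format expected by simpletransformers
train_data = [
    {
        "context": "She lives in Berlin.",
        "qas": [
            {
                "id": "0",
                "question": "Where does she live?",
                "answers": [{"text": "Berlin", "answer_start": 13}],
                "is_impossible": False,
            }
        ],
    }
]

# Wrap the base checkpoint with the training arguments shown above
model = QuestionAnsweringModel("xlmroberta", "xlm-roberta-large", args=train_args, use_cuda=True)
model.train_model(train_data)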

Results

{"correct": 6961, "similar": 4359, "incorrect": 553, "eval_loss": -12.177856394381962}
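
The three counts sum to 11,873 evaluation examples, which matches the size of the SQuAD v2 dev set; a quick sketch of the derived rates, treating "similar" as a near-match of the reference answer:

correct, similar, incorrect = 6961, 4359, 553
total = correct + similar + incorrect        # 11873

exact_rate = correct / total                 # ~0.586 -> about 58.6% exact matches
loose_rate = (correct + similar) / total     # ~0.953 -> about 95.3% exact or near-match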

Model in Action 🚀

from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering
import torch

tokenizer = XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2')
model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

# Encode the question and context as a single sequence pair
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']

# The model returns start and end logits over all token positions
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]

# Take the span between the highest-scoring start and end positions
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1])

# Map the SentencePiece tokens back to ids and decode them into plain text
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
# answer => 'a nice puppet'

Created with ❤️ by A-ware UG