How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanbert-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("mrm8488/spanbert-finetuned-squadv1")
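For finer control than the pipeline shown further down, you can run the question-answering forward pass yourself. A minimal sketch, assuming a recent transformers release with PyTorch available (the example question and context are illustrative):

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanbert-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("mrm8488/spanbert-finetuned-squadv1")

question = "Who created SpanBERT?"
context = "SpanBERT is a pre-training method created by Facebook Research."

# Encode the (question, context) pair; the model scores every token as a
# potential start and end of the answer span.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the highest-scoring start/end positions and decode the span between them.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
# likely answer: "Facebook Research"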

SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v1.1

SpanBERT was created by Facebook Research and fine-tuned on SQuAD 1.1 for the Q&A downstream task.

Details of SpanBERT

SpanBERT is a pre-training method designed to better represent and predict spans of text.

SpanBERT: Improving Pre-training by Representing and Predicting Spans
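
Instead of masking individual tokens as BERT does, SpanBERT masks contiguous spans whose lengths are drawn from a geometric distribution (the paper uses p = 0.2, clipped at 10 tokens, masking about 15% of the tokens overall), and adds a span-boundary objective that predicts each masked token from the span's boundary representations. A toy sketch of the masking step only, not the authors' implementation:

import random
import numpy as np

def mask_spans(tokens, mask_ratio=0.15, p=0.2, max_span_len=10, mask_token="[MASK]"):
    """Mask contiguous spans until roughly mask_ratio of the tokens are masked."""
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < budget:
        # Span length ~ Geometric(p), clipped at max_span_len (as in the paper).
        length = min(np.random.geometric(p), max_span_len, budget - masked)
        start = random.randrange(0, len(tokens) - length + 1)
        tokens[start : start + length] = [mask_token] * length
        masked += length  # overlapping spans may re-mask tokens; fine for a sketch
    return tokens

text = "SpanBERT masks contiguous spans of tokens instead of individual tokens"
print(mask_spans(text.split()))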

Details of the downstream task (Q&A) - Dataset

SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles.

Dataset     Split   # samples
SQuAD 1.1   train   87.7k
SQuAD 1.1   eval    10.6k
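
A quick way to fetch and inspect these splits is the 🤗 datasets library (an assumption here; it is not mentioned in the original card):

from datasets import load_dataset

squad = load_dataset("squad")  # SQuAD v1.1
print(squad)                   # shows the train and validation splits with their sizes
print(squad["train"][0])       # fields: id, title, context, question, answers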

Model training

The model was trained on a Tesla P100 GPU with 25 GB of RAM. The script for fine-tuning can be found here

Results:

Metric   Value
EM       85.49
F1       91.98

Raw metrics:

{
  "exact": 85.49668874172185,
  "f1": 91.9845699540379,
  "total": 10570,
  "HasAns_exact": 85.49668874172185,
  "HasAns_f1": 91.9845699540379,
  "HasAns_total": 10570,
  "best_exact": 85.49668874172185,
  "best_exact_thresh": 0.0,
  "best_f1": 91.9845699540379,
  "best_f1_thresh": 0.0
}
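
EM ("exact") is exact string match after answer normalization, and F1 is token-level overlap between the predicted and gold answers, as computed by the official SQuAD v1.1 evaluation script. A minimal sketch of computing the same pair of metrics with the evaluate library (assumed here; the id and answers below are made up):

import evaluate

squad_metric = evaluate.load("squad")

# Predictions pair an example id with the predicted answer text;
# references carry the gold answers for the same id.
predictions = [{"id": "ex0", "prediction_text": "Facebook Research"}]
references = [{"id": "ex0", "answers": {"text": ["Facebook Research"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}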

Comparison:

Model                        EM      F1 score
SpanBERT (official repo)     -       92.4*
spanbert-finetuned-squadv1   85.49   91.98

Model in action

Fast usage with pipelines:

from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/spanbert-finetuned-squadv1",
    tokenizer="mrm8488/spanbert-finetuned-squadv1"
)

qa_pipeline({
    'context': "Manuel Romero has been working hard in the repository huggingface/transformers lately",
    'question': "Who has been working hard for huggingface/transformers lately?"
})
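
The pipeline returns the predicted span together with a confidence score and character offsets into the context; for the example above the expected result looks like:

# {'score': ..., 'start': 0, 'end': 13, 'answer': 'Manuel Romero'}
# ('score' is the model's confidence; its value is omitted here)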

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain