Query this model via the Inference API endpoint:


curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"question": "Where does she live?", "context": "She lives in Berlin."}' \
    https://api-inference.huggingface.co/models/valhalla/longformer-base-4096-finetuned-squadv1
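
The same call from Python, as a minimal sketch using the requests library (substitute a valid API token):

import requests

API_URL = "https://api-inference.huggingface.co/models/valhalla/longformer-base-4096-finetuned-squadv1"
headers = {"Authorization": "Bearer YOUR_ORG_OR_USER_API_TOKEN"}

# Same payload as the curl example above
payload = {"question": "Where does she live?", "context": "She lives in Berlin."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())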

How to use this model directly from the 🤗/transformers library:

			
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

LONGFORMER-BASE-4096 fine-tuned on SQuAD v1

This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question-answering task.

The Longformer model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan at AllenAI. As the paper explains:

Longformer is a BERT-like model for long documents.

The pre-trained model can handle sequences with up to 4096 tokens.
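
As a quick illustration (a minimal sketch; the 4096-token cap is taken from the model name, and truncation is set explicitly for safety), a long question/context pair can be encoded up to that limit:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

question = "What is the capital of Germany?"
long_context = "Berlin is the capital of Germany. " * 500  # roughly 4000+ tokens

# Encode the pair, capping at the 4096-token limit of the pre-trained model
encoding = tokenizer(question, long_context, truncation=True, max_length=4096, return_tensors="pt")
print(encoding["input_ids"].shape)  # at most (1, 4096)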

Model Training

This model was trained on a Google Colab V100 GPU. You can find the fine-tuning notebook here: Open In Colab.

A few things to keep in mind while training Longformer for the QA task: by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this, please refer to the paper. The LongformerForQuestionAnswering model automatically does that for you. To allow it to do that:

  1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this: <s> question</s></s> context</s>. If you encode the question and context as an input pair, the tokenizer already takes care of that, so you shouldn't worry about it.
  2. input_ids should always be a batch of examples (see the sketch below).
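
A minimal sketch of both points: encoding the pair produces the three sep tokens and a batch dimension, and a global attention mask over the question tokens can be built by hand (for illustration only; LongformerForQuestionAnswering sets this up automatically):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

question = "Where does she live?"
context = "She lives in Berlin."

# Encoding question and context as a pair inserts the three sep tokens:
# <s> question</s></s> context</s>
encoding = tokenizer(question, context, return_tensors="pt")
print(tokenizer.decode(encoding["input_ids"][0]))
# => roughly: <s>Where does she live?</s></s>She lives in Berlin.</s>

# input_ids already comes as a batch (of size 1 here)
input_ids = encoding["input_ids"]
print(input_ids.shape)  # (1, sequence_length)

# For illustration only: global attention on <s> and the question tokens,
# i.e. everything before the first </s>
sep_positions = (input_ids[0] == tokenizer.sep_token_id).nonzero().squeeze(-1)
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[0, : sep_positions[0]] = 1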

Results

Metric       Value
Exact Match  85.1466
F1           91.5415

Model in Action 🚀

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

# the model returns start and end logits for the answer span
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())

answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP

LongformerForQuestionAnswering isn't yet supported by the pipeline API. I'll update this card once the support has been added. In the meantime, the snippet above can be wrapped into a small pipeline-like helper, as sketched below.
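
This is only a sketch; the helper name and return format are illustrative, not part of transformers:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

def answer_question(question, context):
    # Encode the question/context pair; the tokenizer inserts the required sep tokens
    encoding = tokenizer(question, context, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**encoding)
    # Most likely start and end positions of the answer span
    start = torch.argmax(outputs.start_logits)
    end = torch.argmax(outputs.end_logits) + 1
    return tokenizer.decode(encoding["input_ids"][0][start:end], skip_special_tokens=True)

print(answer_question("What has Huggingface done ?",
                      "Huggingface has democratized NLP. Huge thanks to Huggingface for this."))
# => democratized NLP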

Created with ❤️ by Suraj Patil