twmkn9/bert-base-uncased-squad2
Task: question-answering
Frameworks: pytorch, tf
Contributed by twmkn9 (Travis McGuire)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("twmkn9/bert-base-uncased-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("twmkn9/bert-base-uncased-squad2")
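
A quick usage sketch via the transformers pipeline API; the question and context below are only illustrative:

from transformers import pipeline

# Build a question-answering pipeline from the same checkpoint
# (weights are downloaded on first use).
qa = pipeline(
    "question-answering",
    model="twmkn9/bert-base-uncased-squad2",
    tokenizer="twmkn9/bert-base-uncased-squad2",
)

# Any SQuAD-style (question, context) pair works here.
result = qa(
    question="What does BERT stand for?",
    context="BERT stands for Bidirectional Encoder Representations from Transformers.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}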

This model is BERT base uncased, fine-tuned on SQuAD 2.0 with the following command:

export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/bert_fine_tuned/
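
The command above expects the official SQuAD 2.0 JSON files in $SQUAD_DIR. A minimal sketch to fetch them; the download URLs are an assumption based on the SQuAD explorer site and worth verifying:

import os
import urllib.request

# Fetch the SQuAD 2.0 train/dev files into the directory the training
# command reads as $SQUAD_DIR. These URLs are assumed, not taken from
# this card, and may change.
SQUAD_DIR = "../../squad2"
BASE_URL = "https://rajpurkar.github.io/SQuAD-explorer/dataset"

os.makedirs(SQUAD_DIR, exist_ok=True)
for name in ("train-v2.0.json", "dev-v2.0.json"):
    urllib.request.urlretrieve(f"{BASE_URL}/{name}", os.path.join(SQUAD_DIR, name))
    print("downloaded", name)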

Performance on a subset of the dev set is close to that reported in the original paper:

Results: 
{
    'exact': 72.35932872655479, 
    'f1': 75.75355132564763, 
    'total': 6078, 
    'HasAns_exact': 74.29553264604812, 
    'HasAns_f1': 81.38490892002987, 
    'HasAns_total': 2910, 
    'NoAns_exact': 70.58080808080808, 
    'NoAns_f1': 70.58080808080808, 
    'NoAns_total': 3168, 
    'best_exact': 72.35932872655479, 
    'best_exact_thresh': 0.0, 
    'best_f1': 75.75355132564766, 
    'best_f1_thresh': 0.0
}
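
Because the model was trained with --version_2_with_negative, it can also predict that a question has no answer: by convention the span at the [CLS] token (position 0) represents the null answer, and the best_*_thresh values above come from sweeping a threshold on the margin between the null score and the best span score. A minimal inference sketch under those assumptions, with illustrative inputs and a greedy span search rather than the reference top-k search:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("twmkn9/bert-base-uncased-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("twmkn9/bert-base-uncased-squad2")
model.eval()

question = "What does SQuAD 2.0 add?"  # illustrative inputs
context = "SQuAD 2.0 adds unanswerable questions to the original SQuAD dataset."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

start_logits = out.start_logits[0]
end_logits = out.end_logits[0]

# Null ("no answer") score: the span consisting of just the [CLS] token.
null_score = start_logits[0] + end_logits[0]

# Greedy non-null span: best start after [CLS], then best end at or after it.
start = int(torch.argmax(start_logits[1:])) + 1
end = int(torch.argmax(end_logits[start:])) + start
span_score = start_logits[start] + end_logits[end]

if null_score > span_score:  # crude stand-in for the tuned threshold (0.0 above)
    print("no answer")
else:
    print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))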

We are hopeful this might save you time, energy, and compute. Cheers!