NeuML/bert-small-cord19-squad2
Contributed by NeuML

How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("NeuML/bert-small-cord19-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("NeuML/bert-small-cord19-squad2")
```

BERT-Small CORD-19 fine-tuned on SQuAD 2.0

This is the bert-small-cord19 model fine-tuned on the SQuAD 2.0 question-answering dataset.
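Unlike SQuAD 1.1, SQuAD 2.0 includes unanswerable questions, so a fine-tuned model must be able to predict "no answer" as well as an answer span. The sketch below illustrates the usual extraction logic for BERT-style extractive QA (pick the best start/end span, compare it against the null score at the [CLS] position); the tokens and logits are made-up illustrative values, not output from this model.

```python
# Minimal sketch of SQuAD 2.0-style answer extraction from start/end logits.
# All tokens and logit values below are invented for illustration; a real
# model produces one start logit and one end logit per input token.

def extract_answer(tokens, start_logits, end_logits,
                   null_threshold=0.0, max_answer_len=10):
    """Pick the highest-scoring (start, end) span, or return None when the
    'no answer' score (position 0, the [CLS] token in BERT QA heads) wins."""
    best_score, best_span = float("-inf"), None
    for s in range(1, len(tokens)):                      # skip [CLS] at index 0
        for e in range(s, min(s + max_answer_len, len(tokens))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    # SQuAD 2.0 twist: compare the best span against the null (no-answer) score.
    null_score = start_logits[0] + end_logits[0]
    if null_score - best_score > null_threshold:
        return None                                       # predict "no answer"
    s, e = best_span
    return " ".join(tokens[s:e + 1])

tokens = ["[CLS]", "corona", "##virus", "spreads", "via", "droplets"]
start_logits = [0.2, 0.1, 0.0, 0.1, 0.1, 4.0]
end_logits   = [0.3, 0.0, 0.1, 0.2, 0.1, 4.2]
print(extract_answer(tokens, start_logits, end_logits))   # → droplets
```

Production code (e.g. the transformers question-answering pipeline) adds refinements such as n-best lists, wordpiece-to-text detokenization, and a tuned null threshold, but the core span-vs-null comparison is the same.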

Building the model

The model was fine-tuned with the 🤗/transformers question-answering example script (`run_squad.py` — inferred from the arguments below) using:

```bash
    --model_type bert
    --model_name_or_path bert-small-cord19
    --train_file train-v2.0.json
    --predict_file dev-v2.0.json
    --per_gpu_train_batch_size 8
    --learning_rate 3e-5
    --num_train_epochs 3.0
    --max_seq_length 384
    --doc_stride 128
    --output_dir bert-small-cord19-squad2
    --save_steps 0
    --threads 8
```
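The `--max_seq_length 384` and `--doc_stride 128` arguments above control how contexts longer than the model's input limit are split into overlapping windows during preprocessing. A minimal sketch of that windowing (the function here is illustrative, not the actual transformers preprocessing, which also reserves room for the question and special tokens):

```python
def split_into_windows(tokens, max_len=384, stride=128):
    """Split a long token sequence into overlapping windows, advancing the
    window start by `stride` tokens each time (what --doc_stride controls)."""
    if len(tokens) <= max_len:
        return [tokens]
    windows, start = [], 0
    while True:
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break                      # this window already reached the end
        start += stride
    return windows

doc = list(range(600))                 # stand-in for a 600-token document
windows = split_into_windows(doc, max_len=384, stride=128)
print([w[0] for w in windows])         # → [0, 128, 256]
print([len(w) for w in windows])       # → [384, 384, 344]
```

The overlap (here 384 − 128 = 256 tokens) ensures an answer span near a window boundary appears whole in at least one window; at prediction time the best-scoring span across all windows is kept.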