
This model is a quantized version of vuiseng9/bert-l-squadv1.1-sl384, produced with OpenVINO NNCF via quantization-aware training (QAT) with knowledge distillation (KD).

Training

# Trained on 4x V100 GPUs
# --fp16 shortens turnaround and lowers resource requirements
# nncf_bert_config_squad_kd.json is the stock NNCF config, also enclosed in this repo
python run_qa.py \
  --model_name_or_path bert-large-uncased-whole-word-masking-finetuned-squad \
  --dataset_name squad \
  --do_eval \
  --do_train \
  --evaluation_strategy steps \
  --eval_steps 250 \
  --learning_rate 3e-5 \
  --fp16 \
  --num_train_epochs 2 \
  --per_device_eval_batch_size 64 \
  --per_device_train_batch_size 8 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --save_steps 500 \
  --logging_steps 1 \
  --overwrite_output_dir \
  --nncf_config nncf_bert_config_squad_kd.json \
  --run_name $RUNID \
  --output_dir $OUTDIR
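
The NNCF config referenced above wires up 8-bit quantization together with a knowledge-distillation loss against the original FP32 model. The exact nncf_bert_config_squad_kd.json is enclosed in this repo; as a rough orientation only, the stock BERT/SQuAD configs in the NNCF examples follow this shape (a sketch, not the enclosed file; values and comments are illustrative):

{
    // three "long" inputs: input_ids, attention_mask, token_type_ids at sequence length 384
    "input_info": [
        {"sample_size": [1, 384], "type": "long"},
        {"sample_size": [1, 384], "type": "long"},
        {"sample_size": [1, 384], "type": "long"}
    ],
    "compression": [
        {
            // 8-bit symmetric fake-quantization of weights and activations (QAT)
            "algorithm": "quantization",
            "initializer": {
                "range": {"num_init_samples": 32} // illustrative value
            },
            "activations": {"mode": "symmetric"},
            "weights": {"mode": "symmetric"}
        },
        {
            // distill the quantized student against the uncompressed model's outputs
            "algorithm": "knowledge_distillation",
            "type": "softmax"
        }
    ]
}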

Evaluation

Requires the vuiseng9/transformers fork at commit ff24569b, and NNCF v2.1+ at commit 8e26365.
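
A minimal environment setup might look like the following, assuming the fork is hosted on GitHub under vuiseng9/transformers and NNCF under openvinotoolkit/nncf (repo locations and editable installs are assumptions, not confirmed by this card):

# clone the transformers fork and pin the commit stated above (assumed GitHub location)
git clone https://github.com/vuiseng9/transformers
cd transformers && git checkout ff24569b && pip install -e . && cd ..

# install NNCF at the stated commit (assumed GitHub location)
git clone https://github.com/openvinotoolkit/nncf
cd nncf && git checkout 8e26365 && pip install -e . && cd ..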

git clone https://huggingface.co/vuiseng9/nncf-qat-kd-bert-l-squadv1.1-sl384
python run_qa.py \
  --model_name_or_path ./nncf-qat-kd-bert-l-squadv1.1-sl384 \
  --dataset_name squad \
  --nncf_config ./nncf-qat-kd-bert-l-squadv1.1-sl384/nncf_bert_config_squad_kd.json \
  --nncf_ckpt ./nncf-qat-kd-bert-l-squadv1.1-sl384 \
  --do_eval \
  --per_device_eval_batch_size 128 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/eval-nncf-qat-kd-bert-l-squadv1.1-sl384 \
  --overwrite_output_dir
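
Once the run finishes, the metrics below are written under --output_dir (with recent run_qa.py versions, typically as eval_results.json / all_results.json, though exact filenames depend on the script version):

cat /tmp/eval-nncf-qat-kd-bert-l-squadv1.1-sl384/eval_results.json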

Results

  eval_exact_match = 87.1523
  eval_f1          = 93.2668
  eval_samples     =   10784