
This model is a quantized version of vuiseng9/bert-l-squadv1.1-sl256, produced with OpenVINO NNCF (quantization-aware training with knowledge distillation).

## Training

```bash
# Trained on 4x V100 GPUs
# --fp16 for faster turnaround and lower resource requirements
# nncf_bert_config_squad_kd.json is the stock NNCF config with seq. length modified to 256
python run_qa.py \
  --model_name_or_path vuiseng9/bert-l-squadv1.1-sl256 \
  --dataset_name squad \
  --do_eval \
  --do_train \
  --evaluation_strategy steps \
  --eval_steps 250 \
  --learning_rate 3e-5 \
  --fp16 \
  --num_train_epochs 2 \
  --per_device_eval_batch_size 64 \
  --per_device_train_batch_size 8 \
  --max_seq_length 256 \
  --doc_stride 128 \
  --save_steps 500 \
  --logging_steps 1 \
  --overwrite_output_dir \
  --nncf_config nncf_bert_config_squad_kd.json \
  --run_name $RUNID \
  --output_dir $OUTDIR
```
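The `--nncf_config` passed above is NNCF's stock BERT SQuAD KD config with the input sequence length changed to 256. A rough sketch of the relevant structure follows; the field values here are illustrative assumptions, not the contents of the actual file, so refer to `nncf_bert_config_squad_kd.json` in the NNCF repository for the real settings:

```json
{
  "input_info": [
    { "sample_size": [1, 256], "type": "long" },
    { "sample_size": [1, 256], "type": "long" },
    { "sample_size": [1, 256], "type": "long" }
  ],
  "compression": [
    { "algorithm": "quantization" },
    { "algorithm": "knowledge_distillation", "type": "softmax" }
  ]
}
```

The `input_info.sample_size` entries are where the sequence length of 256 enters; NNCF uses them to trace the model with correctly shaped dummy inputs before inserting quantizers.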

## Evaluation

Requires the vuiseng9/transformers fork (commit ff24569b) and NNCF v2.1+ (commit 8e26365).

```bash
git clone https://huggingface.co/vuiseng9/nncf-qat-kd-bert-l-squadv1.1-sl256
python run_qa.py \
  --model_name_or_path ./nncf-qat-kd-bert-l-squadv1.1-sl256 \
  --dataset_name squad \
  --nncf_config ./nncf-qat-kd-bert-l-squadv1.1-sl256/nncf_bert_config_squad_kd.json \
  --nncf_ckpt ./nncf-qat-kd-bert-l-squadv1.1-sl256 \
  --do_eval \
  --per_device_eval_batch_size 128 \
  --max_seq_length 256 \
  --doc_stride 128 \
  --output_dir /tmp/eval-nncf-qat-kd-bert-l-squadv1.1-sl256 \
  --overwrite_output_dir
```

## Results

```
eval_exact_match = 87.1902
eval_f1          = 93.0286
eval_samples     =   12097
```
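The exact-match and F1 numbers above are the standard SQuAD v1.1 metrics, computed by `run_qa.py` over the validation set. As a reference, a minimal self-contained sketch of how those two scores are defined (the official metric additionally takes the best score across multiple reference answers per question, then averages and reports a percentage):

```python
import re
import string
from collections import Counter

def normalize(text):
    # SQuAD v1.1 answer normalization: lower-case, strip punctuation,
    # drop the articles a/an/the, and collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(ground_truth))

def f1(prediction, ground_truth):
    # Token-level F1 between normalized prediction and reference.
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))          # 1.0
print(f1("the tall Eiffel Tower", "Eiffel Tower"))              # 0.8
```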