albertZero is a model with a prediction head pre-trained for SQuAD 2.0. Based on albert-base-v2, albertZero employs a novel method to speed up fine-tuning: it re-initializes the weights of the final linear layer in the shared ALBERT transformer block, yielding roughly a 2 percentage point improvement during the early epochs of fine-tuning.
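The re-initialization idea can be sketched as follows, assuming the public albert-base-v2 checkpoint and the Hugging Face transformers module layout; the exact layer selection and initializer are assumptions, not the confirmed albertZero implementation.

```python
import torch
from transformers import AlbertForQuestionAnswering

# Load the albert-base-v2 backbone with a SQuAD-style QA head.
model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2")

# ALBERT shares one transformer block across all layers, so there is a
# single AlbertLayer instance; its ffn_output is the final linear layer
# of the shared block.
shared_layer = model.albert.encoder.albert_layer_groups[0].albert_layers[0]

# Re-initialize that layer's weights before fine-tuning on SQuAD 2.0,
# using the model's configured initializer range (assumption: a normal
# init with zeroed bias, matching ALBERT's default initialization scheme).
torch.nn.init.normal_(
    shared_layer.ffn_output.weight,
    mean=0.0,
    std=model.config.initializer_range,
)
torch.nn.init.zeros_(shared_layer.ffn_output.bias)
```

Because the block is shared, re-initializing this one layer affects every repeated application of the transformer block during the forward pass.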