Sparse BERT base model (uncased)

A pretrained BERT base (uncased) model pruned to 70% weight sparsity.

Intended Use

The model can be fine-tuned on downstream tasks with the sparsity pattern already embedded in the weights. To preserve that sparsity during fine-tuning, a mask should be applied to each sparse weight tensor so the optimizer cannot update the zeroed entries.
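A minimal sketch of this masking idea, using a plain PyTorch linear layer as a stand-in for one of the model's sparse weight matrices (the layer, sparsity level, and hook-based approach here are illustrative assumptions, not the card's official fine-tuning recipe):

```python
import torch
import torch.nn as nn

# Stand-in for a sparse weight matrix in the pruned model.
layer = nn.Linear(4, 4)

# Prune: zero out ~70% of the weights and record the sparsity mask
# (1 where a weight is kept, 0 where it was pruned).
with torch.no_grad():
    mask = (torch.rand_like(layer.weight) > 0.7).float()
    layer.weight.mul_(mask)

# Register a gradient hook that zeroes gradients at pruned positions,
# blocking the optimizer from updating the zeroed weights.
layer.weight.register_hook(lambda grad: grad * mask)

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
loss = layer(torch.randn(2, 4)).sum()
loss.backward()
opt.step()

# Pruned entries remain exactly zero after the optimizer step.
assert bool(torch.all(layer.weight[mask == 0] == 0))
```

In a real fine-tuning run, the same hook would be registered on every sparse weight tensor of the loaded model; optimizers with weight decay need the decay masked as well, since decay updates weights independently of the gradient.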

Mask token: [MASK]