
90% Sparse BERT-Large (uncased) Prune OFA

This model is a result of our paper Prune Once for All: Sparse Pre-Trained Language Models, presented at the ENLSP NeurIPS Workshop 2021.

For further details on the model and its results, see our paper and our implementation, available here.
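As a minimal sketch, the model can be loaded for masked-token prediction with the Hugging Face transformers library (the model identifier below is taken from this card; the helper function name is illustrative):

```python
# Sketch: query the 90% sparse BERT-Large model on a fill-mask task.
# The pruned weights are stored as zeros, so loading works exactly as
# for a dense BERT checkpoint.
model_id = "Intel/bert-large-uncased-sparse-90-unstructured-pruneofa"

def predict_masked(text: str):
    """Return fill-mask candidates for a sentence containing [MASK].

    Imported lazily so the module loads without transformers installed;
    the first call downloads the model weights.
    """
    from transformers import pipeline  # requires the transformers package
    unmasker = pipeline("fill-mask", model=model_id)
    return unmasker(text)
```

Calling `predict_masked("Paris is the [MASK] of France.")` returns a list of candidate tokens with scores, as with any BERT fill-mask model.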


Datasets used to train Intel/bert-large-uncased-sparse-90-unstructured-pruneofa