This is a model checkpoint for "Should You Mask 15% in Masked Language Modeling" (code).

The original checkpoint is available at princeton-nlp/efficient_mlm_m0.40-801010. Unfortunately, that checkpoint depends on code that is not part of the official transformers library, and it also contains unused weights due to a bug.

This checkpoint fixes the unused-weights issue and uses the RobertaPreLayerNorm model from the transformers library.
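Because the checkpoint uses the stock RobertaPreLayerNorm architecture, it should work with the standard transformers classes. A minimal sketch of instantiating that architecture (the config sizes below are made up for illustration and are not this checkpoint's; the real weights would be loaded with `RobertaPreLayerNormForMaskedLM.from_pretrained(<repo id>)` instead):

```python
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormForMaskedLM

# Tiny, hypothetical config purely to show the classes involved;
# it does not match this checkpoint's actual dimensions.
config = RobertaPreLayerNormConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)

# Randomly initialized model with the pre-layer-norm RoBERTa architecture.
model = RobertaPreLayerNormForMaskedLM(config)
print(model.config.model_type)
```

For the actual checkpoint, replace the random initialization with `from_pretrained` pointing at this repository.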