Tags: Feature Extraction · Transformers · PyTorch · English · albert · Inference Endpoints

ALBERT Large (dropout)

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in Measuring and Reducing Gendered Correlations in Pre-trained Models (Webster et al., 2020; see the BibTeX entry below) and first released in the paper's accompanying repository. The model is initialized from the relevant publicly available ALBERT Large checkpoint, and pre-training was then continued over Wikipedia with an increased dropout rate.
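The checkpoint should load like any other ALBERT model in the Transformers library. The snippet below is a minimal sketch, assuming the hub ID fairnlp/albert-dropout shown on this page; the example sentences and variable names are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForMaskedLM

model_id = "fairnlp/albert-dropout"  # hub ID taken from this page

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Feature extraction: one hidden-state vector per input token.
encoder = AutoModel.from_pretrained(model_id)
inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
with torch.no_grad():
    features = encoder(**inputs).last_hidden_state
print(features.shape)  # (1, sequence_length, hidden_size)

# MLM objective: predict the token behind [MASK].
mlm = AutoModelForMaskedLM.from_pretrained(model_id)
masked = tokenizer("Paris is the capital of [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = mlm(**masked).logits
mask_pos = (masked.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
pred_id = logits[0, mask_pos].argmax(-1).item()
print(tokenizer.decode([pred_id]))
```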

Disclaimer: the team releasing ALBERT did not write a model card for this model, so this model card has been written by the FairNLP team.

BibTeX entry and citation info

@misc{zari,
      title={Measuring and Reducing Gendered Correlations in Pre-trained Models},
      author={Kellie Webster and Xuezhi Wang and Ian Tenney and Alex Beutel and Emily Pitler and Ellie Pavlick and Jilin Chen and Slav Petrov},
      year={2020},
      eprint={2010.06032},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Dataset used to train fairnlp/albert-dropout: Wikipedia (used for the continued pre-training described above).