
BERT Large Uncased (dropout)

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. The model is initialized from the relevant publicly available checkpoint, and pre-training was continued over Wikipedia with an increased dropout rate.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by the FairNLP team.
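Since the checkpoint is tagged for feature extraction with the Transformers library, a minimal usage sketch might look like the following. This assumes the model is hosted under the `fairnlp/bert-dropout` repository id shown on this page and loads like any standard BERT Large checkpoint; it is an illustration, not an officially documented snippet.

```python
# Hedged example: extracting sentence features with Hugging Face Transformers.
# Assumes the repository id "fairnlp/bert-dropout" and a standard BERT Large
# architecture (hidden size 1024); both are inferred from this page, not
# confirmed by the model authors.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fairnlp/bert-dropout")
model = AutoModel.from_pretrained("fairnlp/bert-dropout")
model.eval()

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level features: shape (batch, sequence_length, hidden_size).
embeddings = outputs.last_hidden_state
print(embeddings.shape)
```

A common follow-up for sentence-level features is to mean-pool `last_hidden_state` over the token dimension, masking out padding tokens with `inputs["attention_mask"]`.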

BibTeX entry and citation info

@misc{zari,
      title={Measuring and Reducing Gendered Correlations in Pre-trained Models},
      author={Kellie Webster and Xuezhi Wang and Ian Tenney and Alex Beutel and Emily Pitler and Ellie Pavlick and Jilin Chen and Slav Petrov},
      year={2020},
      eprint={2010.06032},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
