This model was converted from one of the 24 smaller BERT models (English only, uncased, trained with WordPiece masking) released alongside *Well-Read Students Learn Better: On the Importance of Pre-training Compact Models*.
Conversion was performed automatically using `transformers-cli convert`, as explained here.
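A conversion of this kind would typically look like the following sketch; the checkpoint directory and file names are placeholders, not the actual paths used for this model:

```shell
# Hypothetical paths: point these at a downloaded TF 1.x BERT checkpoint
export BERT_DIR=./uncased_L-4_H-512_A-8

transformers-cli convert --model_type bert \
  --tf_checkpoint "$BERT_DIR/bert_model.ckpt" \
  --config "$BERT_DIR/bert_config.json" \
  --pytorch_dump_output "$BERT_DIR/pytorch_model.bin"
```

The command reads the TensorFlow checkpoint and its config, maps the variables to the corresponding PyTorch parameters, and writes a `pytorch_model.bin` usable with the Transformers library.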
This model is also available on TF Hub at
A small test was performed to check whether the converted model and the TF Hub version generate similar embeddings.
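The source does not specify how the embeddings were compared; a minimal sketch of such a check, using cosine similarity over two hypothetical embedding vectors (the values below are made up for illustration), could look like this:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for the same input sentence, one from the
# converted PyTorch model and one from the TF Hub version
emb_converted = [0.12, -0.45, 0.33, 0.08]
emb_tfhub = [0.11, -0.44, 0.35, 0.07]

sim = cosine_similarity(emb_converted, emb_tfhub)
print(f"cosine similarity: {sim:.4f}")
# A faithful conversion should yield near-identical embeddings
assert sim > 0.99
```

In practice one would run the same tokenized input through both models and compare the pooled or per-token outputs; element-wise maximum absolute difference is another common check.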