Model: julien-c/bert-xsmall-dummy


Frameworks: pytorch, tf

Contributed by julien-c (Julien Chaumond)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("julien-c/bert-xsmall-dummy")
model = AutoModel.from_pretrained("julien-c/bert-xsmall-dummy")
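With the model and tokenizer loaded, a forward pass works as usual. A minimal sketch, assuming a recent transformers release with PyTorch installed (the input sentence is illustrative only; this dummy model is untrained, so the outputs are only useful for checking shapes):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("julien-c/bert-xsmall-dummy")
model = AutoModel.from_pretrained("julien-c/bert-xsmall-dummy")

# Encode a sentence and run it through the model. The weights are
# randomly initialized, so only the output shapes are meaningful.
inputs = tokenizer("hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size=20)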

How to build a dummy model

from transformers import BertConfig, BertForMaskedLM, BertTokenizer, TFBertForMaskedLM

SMALL_MODEL_IDENTIFIER = "julien-c/bert-xsmall-dummy"
DIRNAME = "./bert-xsmall-dummy"

# Tiny configuration: 10-token vocab, hidden size 20, a single layer
# with a single attention head, and intermediate size 40.
config = BertConfig(
    vocab_size=10,
    hidden_size=20,
    num_hidden_layers=1,
    num_attention_heads=1,
    intermediate_size=40,
)

# Build and save the randomly initialized PyTorch model.
model = BertForMaskedLM(config)
model.save_pretrained(DIRNAME)

# Convert the PyTorch weights to TensorFlow and save them alongside.
tf_model = TFBertForMaskedLM.from_pretrained(DIRNAME, from_pt=True)
tf_model.save_pretrained(DIRNAME)

# Slightly different for the tokenizer: it needs a vocabulary file
# on disk first (see the sketch below).
# tokenizer = BertTokenizer.from_pretrained(DIRNAME)
# tokenizer.save_pretrained(DIRNAME)
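To fill in the tokenizer step, one option is to write a minimal vocabulary file by hand and build a BertTokenizer from it. A sketch under that assumption (the 10-token vocab below is invented to match vocab_size=10; it is not the vocab actually shipped with this repo):

import os

from transformers import BertTokenizer

DIRNAME = "./bert-xsmall-dummy"
os.makedirs(DIRNAME, exist_ok=True)

# BertTokenizer reads its vocabulary from a plain-text file,
# one token per line. Ten entries to match the config's vocab_size=10.
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]",
         "hello", "world", "!", "?", "."]
with open(os.path.join(DIRNAME, "vocab.txt"), "w") as f:
    f.write("\n".join(vocab) + "\n")

tokenizer = BertTokenizer.from_pretrained(DIRNAME)
tokenizer.save_pretrained(DIRNAME)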

Config

attention_probs_dropout_prob: 0.1
finetuning_task: null
hidden_act: "gelu"
hidden_dropout_prob: 0.1
hidden_size: 20
initializer_range: 0.02
intermediate_size: 40
is_decoder: false
layer_norm_eps: 1e-12
max_position_embeddings: 512
num_attention_heads: 1
num_hidden_layers: 1
num_labels: 2
output_attentions: false
output_hidden_states: false
output_past: true
pruned_heads: {}
torchscript: false
type_vocab_size: 2
use_bfloat16: false
vocab_size: 10
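These values can also be checked programmatically by loading just the configuration; a quick sketch using the standard from_pretrained API:

from transformers import BertConfig

config = BertConfig.from_pretrained("julien-c/bert-xsmall-dummy")
print(config.hidden_size)          # 20
print(config.num_attention_heads)  # 1
print(config.vocab_size)           # 10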