Transformers
Model: bert-base-japanese-char-whole-word-masking

Frameworks: pytorch, tf

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-japanese-char-whole-word-masking")
model = AutoModel.from_pretrained("bert-base-japanese-char-whole-word-masking")
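
Once loaded, the tokenizer splits Japanese text into characters and the model returns contextual embeddings. A minimal sketch of running the loaded model (the example sentence, the PyTorch tensors, and the no-gradient forward pass are illustrative assumptions, not part of the original card):

import torch

# Character-level tokenization of a short Japanese sentence (example input)
inputs = tokenizer("吾輩は猫である。", return_tensors="pt")

# Forward pass without gradients; the first output is the sequence of
# hidden states with shape (batch_size, sequence_length, hidden_size=768)
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs[0]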

Config

attention_probs_dropout_prob: 0.1
hidden_act: "gelu"
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
intermediate_size: 3072
max_position_embeddings: 512
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 4000
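
These values can also be read programmatically rather than from the raw config file. A minimal sketch, assuming the same model identifier resolves through AutoConfig:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-japanese-char-whole-word-masking")

# A few of the fields listed above
print(config.hidden_size)              # 768
print(config.num_hidden_layers)        # 12
print(config.num_attention_heads)      # 12
print(config.max_position_embeddings)  # 512
print(config.vocab_size)               # 4000 (character-level vocabulary)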