Model: adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers


Available frameworks: pytorch, tf

Contributed by adamlin (Adam Lin)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers")
model = AutoModel.from_pretrained("adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers")
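
As a minimal usage sketch (the example sentence is illustrative, and PyTorch weights are assumed), the loaded model can encode text into contextual embeddings; since the checkpoint name indicates a BERT-large model pretrained on PubMed abstracts and MIMIC clinical notes, a clinical sentence is a natural input:

import torch

# Tokenize an illustrative clinical sentence (uncased model; the tokenizer lowercases input).
inputs = tokenizer("The patient was admitted with acute renal failure.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The first output is the sequence of last-layer hidden states: (batch, seq_len, hidden_size).
last_hidden_state = outputs[0]
print(last_hidden_state.shape)  # torch.Size([1, seq_len, 1024]) for this BERT-large config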

Config

attention_probs_dropout_prob: 0.1
finetuning_task: null
hidden_act: "gelu"
hidden_dropout_prob: 0.1
hidden_size: 1024
id2label: { "0": "LABEL_0", "1": "LABEL_1" }
initializer_range: 0.02
intermediate_size: 4096
is_decoder: false
label2id: { "LABEL_0": 0, "LABEL_1": 1 }
layer_norm_eps: 1e-12
max_position_embeddings: 512
num_attention_heads: 16
num_hidden_layers: 24
num_labels: 2
output_attentions: false
output_hidden_states: false
output_past: true
pruned_heads: {}
torchscript: false
type_vocab_size: 2
use_bfloat16: false
vocab_size: 30522
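
These values describe a standard BERT-large encoder (24 layers, 16 attention heads, hidden size 1024, 30522-token uncased vocabulary). As a hedged sketch using the standard transformers AutoConfig API, the same fields can be inspected programmatically, and individual values can be overridden before loading the weights (the dropout change below is purely illustrative):

from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers")
print(config.num_hidden_layers, config.num_attention_heads, config.hidden_size)  # 24 16 1024

# Illustrative override: raise hidden-layer dropout before instantiating the model.
config.hidden_dropout_prob = 0.2
model = AutoModel.from_pretrained(
    "adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers", config=config
)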