Model: ktrapeznikov/albert-xlarge-v2-squad-v2


Frameworks: PyTorch, TensorFlow

Contributed by ktrapeznikov (Kirill Trapeznikov)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
model = AutoModel.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
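This checkpoint is fine-tuned on SQuAD v2, so extractive question answering is the intended task. Below is a minimal sketch using the transformers pipeline API; the question and context strings are illustrative only, not from the model card:

from transformers import pipeline

# Load the checkpoint into a question-answering pipeline;
# the pipeline selects the task-specific QA head from the config.
qa = pipeline(
    "question-answering",
    model="ktrapeznikov/albert-xlarge-v2-squad-v2",
    tokenizer="ktrapeznikov/albert-xlarge-v2-squad-v2",
)

# Illustrative inputs; any question/context pair works.
result = qa(
    question="What does ALBERT share across layers?",
    context="ALBERT is a parameter-reduced variant of BERT that "
            "shares weights across its transformer layers.",
)
print(result["answer"], result["score"])

Note that AutoModel in the snippet above returns the bare encoder without the question-answering head; the pipeline (or AutoModelForQuestionAnswering) is needed to get span predictions.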

Config

From the raw config file:

attention_probs_dropout_prob: 0.1
bos_token_id: 0
do_sample: false
down_scale_factor: 1
embedding_size: 128
eos_token_ids: 0
finetuning_task: null
gap_size: 0
hidden_act: "gelu"
hidden_dropout_prob: 0.1
hidden_size: 2048
id2label: { "0": "LABEL_0", "1": "LABEL_1" }
initializer_range: 0.02
inner_group_num: 1
intermediate_size: 8192
is_decoder: false
label2id: { "LABEL_0": 0, "LABEL_1": 1 }
layer_norm_eps: 1e-12
length_penalty: 1
max_length: 20
max_position_embeddings: 512
net_structure_type: 0
num_attention_heads: 16
num_beams: 1
num_hidden_groups: 1
num_hidden_layers: 24
num_labels: 2
num_memory_blocks: 0
num_return_sequences: 1
output_attentions: false
output_hidden_states: false
output_past: true
pad_token_id: 0
pruned_heads: {}
repetition_penalty: 1
temperature: 1
top_k: 50
top_p: 1
torchscript: false
type_vocab_size: 2
use_bfloat16: false
vocab_size: 30000
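To read these values programmatically rather than from the raw file, a minimal sketch using AutoConfig (the fields printed are a few of those listed above):

from transformers import AutoConfig

# Fetch and parse the model's config.json from the hub.
config = AutoConfig.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")

# A few of the architecture values from the listing above.
print(config.hidden_size)          # 2048
print(config.num_hidden_layers)    # 24
print(config.num_attention_heads)  # 16
print(config.vocab_size)           # 30000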