DistilBERT

DistilBertConfig

class transformers.DistilBertConfig(vocab_size_or_config_json_file=30522, max_position_embeddings=512, sinusoidal_pos_embds=False, n_layers=6, n_heads=12, dim=768, hidden_dim=3072, dropout=0.1, attention_dropout=0.1, activation='gelu', initializer_range=0.02, tie_weights_=True, qa_dropout=0.1, seq_classif_dropout=0.2, **kwargs)[source]
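
A minimal usage sketch (hyper-parameter names follow the signature above; the values in the second call are illustrative, not a recommended setting):

from transformers import DistilBertConfig

# Default configuration (matches the distilbert-base-uncased hyper-parameters)
config = DistilBertConfig()

# Override selected hyper-parameters, e.g. for a smaller model
small_config = DistilBertConfig(n_layers=4, n_heads=8, dim=512, hidden_dim=2048)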

DistilBertTokenizer

class transformers.DistilBertTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, **kwargs)[source]

Constructs a DistilBertTokenizer. DistilBertTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting + wordpiece.

Parameters
  • vocab_file – Path to a one-wordpiece-per-line vocabulary file

  • do_lower_case – Whether to lower case the input. Only has an effect when do_basic_tokenize=True

  • do_basic_tokenize – Whether to do basic tokenization before wordpiece.

  • max_len – An artificial maximum length to truncate tokenized sequences to; the effective maximum length is always the minimum of this value (if specified) and the underlying BERT model’s sequence length.

  • never_split – List of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True
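
A short usage sketch (the token strings in the comments are indicative of the distilbert-base-uncased vocabulary):

from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
tokens = tokenizer.tokenize("Hello, my dog is cute")  # e.g. ['hello', ',', 'my', 'dog', 'is', 'cute']
input_ids = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)  # adds the [CLS] and [SEP] ids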

DistilBertModel

class transformers.DistilBertModel(config)[source]

The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]), as shown in the sketch below.

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post
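
As a sketch of the first point above, a sentence pair is encoded by joining the segments with tokenizer.sep_token, with no token_type_ids involved (the example texts are illustrative):

from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
text_a, text_b = "Who was Jim Henson?", "Jim Henson was a puppeteer"
# Segments are separated with the separation token; no segment ids are passed to the model
text = text_a + " " + tokenizer.sep_token + " " + text_b
input_ids = tokenizer.encode(text, add_special_tokens=True)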

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the output of the last layer of the model.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
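
To batch sequences of unequal length, pad them and mask the padding with attention_mask; a hedged sketch (the padding id is taken from tokenizer.pad_token_id):

import torch
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')

ids_a = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
ids_b = tokenizer.encode("Hi there", add_special_tokens=True)
max_len = max(len(ids_a), len(ids_b))
pad_id = tokenizer.pad_token_id
input_ids = torch.tensor([ids + [pad_id] * (max_len - len(ids)) for ids in (ids_a, ids_b)])
attention_mask = (input_ids != pad_id).long()  # 1 for real tokens, 0 for padding

last_hidden_state = model(input_ids, attention_mask=attention_mask)[0]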
forward(input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_input_embeddings()[source]

Get model’s input embeddings

set_input_embeddings(new_embeddings)[source]

Set model’s input embeddings

DistilBertForMaskedLM

class transformers.DistilBertForMaskedLM(config)[source]

DistilBert Model with a masked language modeling head on top. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

masked_lm_labels: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Labels for computing the masked language modeling loss. Indices should be in [-1, 0, ..., config.vocab_size - 1] (see input_ids docstring). Tokens with indices set to -1 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size - 1]

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when masked_lm_labels is provided) torch.FloatTensor of shape (1,):

Masked language modeling loss.

prediction_scores: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import DistilBertTokenizer, DistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
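
To actually recover a masked token, place tokenizer.mask_token in the input and take the argmax of the scores at that position; a minimal sketch (the pretrained checkpoint includes the MLM head, so this works without fine-tuning):

import torch
from transformers import DistilBertTokenizer, DistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')

text = "Hello, my dog is " + tokenizer.mask_token
input_ids = torch.tensor(tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0)
prediction_scores = model(input_ids)[0]  # (1, sequence_length, vocab_size)

mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
predicted_id = prediction_scores[0, mask_index].argmax(-1).item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))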
forward(input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, masked_lm_labels=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_output_embeddings()[source]

Get model’s output embeddings. Returns None if the model doesn’t have output embeddings.

DistilBertForSequenceClassification

class transformers.DistilBertForSequenceClassification(config)[source]

DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

labels: (optional) torch.LongTensor of shape (batch_size,):

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification (or regression if config.num_labels==1) loss.

logits: torch.FloatTensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
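
At inference time (no labels passed), the first output is the logits; the predicted label is their argmax, and a softmax yields probabilities. A small sketch (note the classification head of the bare distilbert-base-uncased checkpoint is randomly initialized, so predictions are meaningless until the model is fine-tuned):

import torch
import torch.nn.functional as F
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
logits = model(input_ids)[0]             # (1, config.num_labels)
probs = F.softmax(logits, dim=-1)        # class probabilities
predicted_class = logits.argmax(dim=-1)  # predicted label index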
forward(input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

DistilBertForQuestionAnswering

class transformers.DistilBertForQuestionAnswering(config)[source]

DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

start_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

end_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when start_positions and end_positions are provided) torch.FloatTensor of shape (1,):

Total span-extraction loss: the sum of the Cross-Entropy losses for the start and end positions.

start_scores: torch.FloatTensor of shape (batch_size, sequence_length,)

Span-start scores (before SoftMax).

end_scores: torch.FloatTensor of shape (batch_size, sequence_length,)

Span-end scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss, start_scores, end_scores = outputs[:3]
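
To decode a predicted answer span, take the argmax of the start and end scores; a greedy sketch that ignores the start <= end constraint (as with classification, the QA head of the bare checkpoint is untrained, so meaningful answers require a fine-tuned model):

import torch
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')

question, context = "Who was Jim Henson?", "Jim Henson was a puppeteer"
text = question + " " + tokenizer.sep_token + " " + context
input_ids = torch.tensor(tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0)

start_scores, end_scores = model(input_ids)[:2]
start = start_scores.argmax(dim=-1).item()
end = end_scores.argmax(dim=-1).item()
answer_tokens = tokenizer.convert_ids_to_tokens(input_ids[0, start:end + 1].tolist())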
forward(input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

TFDistilBertModel

class transformers.TFDistilBertModel(config, *inputs, **kwargs)[source]

The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument, as shown in the sketch below:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'attention_mask': attention_mask})
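
A short sketch exercising the three formats (DistilBERT takes no token_type_ids, so only input_ids and attention_mask are shown):

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]
attention_mask = tf.ones_like(input_ids)

outputs = model(input_ids)                           # single Tensor
outputs = model([input_ids, attention_mask])         # list, in docstring order
outputs = model({'input_ids': input_ids,
                 'attention_mask': attention_mask})  # dict keyed by input names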

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: tf.Tensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the output of the last layer of the model.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there is more than one output.

TFDistilBertForMaskedLM

class transformers.TFDistilBertForMaskedLM(config, *inputs, **kwargs)[source]

DistilBert Model with a masked language modeling head on top. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'attention_mask': attention_mask})

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
prediction_scores: tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
prediction_scores = outputs[0]
call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there is more than one output.

get_output_embeddings()[source]

Get model’s output embeddings. Returns None if the model doesn’t have output embeddings.

TFDistilBertForSequenceClassification

class transformers.TFDistilBertForSequenceClassification(config, *inputs, **kwargs)[source]

DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'attention_mask': attention_mask})

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
logits: tf.Tensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
logits = outputs[0]
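
Because this is a standard Keras model, it can be compiled and fine-tuned with tf.keras.Model.fit(); a hedged sketch on a toy two-example batch (a real setup would use a proper dataset, padding masks, and a validation split):

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')

texts = ["Hello, my dog is cute", "I am not a fan"]
encoded = [tokenizer.encode(t) for t in texts]
max_len = max(len(e) for e in encoded)
input_ids = tf.constant([e + [tokenizer.pad_token_id] * (max_len - len(e)) for e in encoded])
labels = tf.constant([1, 0])

# The model outputs logits, hence from_logits=True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(input_ids, labels, epochs=1, batch_size=2)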
call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there is more than one output.

TFDistilBertForQuestionAnswering

class transformers.TFDistilBertForQuestionAnswering(config, *inputs, **kwargs)[source]

DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.

Here are the differences between the interface of Bert and DistilBert:

  • DistilBert doesn’t have token_type_ids: you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).

  • DistilBert doesn’t have options to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option.

For more information on DistilBERT, please refer to our detailed blog post

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'attention_mask': attention_mask})

Parameters

config (DistilBertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. The input sequences should start with [CLS] and end with [SEP] tokens.

For now, ONLY BertTokenizer(bert-base-uncased) is supported and you should use this tokenizer when using DistilBERT.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
start_scores: tf.Tensor of shape (batch_size, sequence_length,)

Span-start scores (before SoftMax).

end_scores: tf.Tensor of shape (batch_size, sequence_length,)

Span-end scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
start_scores, end_scores = outputs[:2]
call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there is more than one output.