CamemBERT

CamembertConfig

class transformers.CamembertConfig(vocab_size_or_config_json_file=30522, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, **kwargs)[source]
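A minimal sketch of how the configuration is typically used: any of the hyper-parameters above can be overridden by keyword argument, and initializing a model from a config builds the architecture without loading pretrained weights (use from_pretrained() for that).

from transformers import CamembertConfig, CamembertModel

# Override any default hyper-parameter by keyword; the rest keep the values above.
config = CamembertConfig(hidden_dropout_prob=0.2)

# Builds the architecture only -- no pretrained weights are loaded here.
model = CamembertModel(config)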

CamembertTokenizer

class transformers.CamembertTokenizer(vocab_file, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', additional_special_tokens=['<s>NOTUSED', '</s>NOTUSED'], **kwargs)[source]

SentencePiece-based tokenizer, adapted from RobertaTokenizer and XLNetTokenizer.

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
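A minimal sketch of the method, assuming the 'camembert-base' checkpoint; the example sentences are illustrative only.

from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

# Token ids without any special tokens.
ids_a = tokenizer.encode("J'aime le camembert !", add_special_tokens=False)
ids_b = tokenizer.encode("Le camembert est un fromage.", add_special_tokens=False)

# Single sequence: <s> A </s>
single = tokenizer.build_inputs_with_special_tokens(ids_a)

# Pair of sequences: <s> A </s></s> B </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)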

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Creates a mask from the two sequences passed to be used in a sequence-pair classification task. A RoBERTa sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0  1 1 1 1 1 1 1 1 1 1 1
| first sequence   | second sequence

If token_ids_1 is None, only returns the first portion of the mask (0's).
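A short sketch of a typical call, under the same assumptions as above ('camembert-base' checkpoint, illustrative sentences); it shows that the mask has one entry per token of the final, special-token-augmented sequence.

from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

ids_a = tokenizer.encode("J'aime le camembert !", add_special_tokens=False)
ids_b = tokenizer.encode("Le camembert est un fromage.", add_special_tokens=False)

# One segment id per token of the encoded pair, following the format above.
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
assert len(token_type_ids) == len(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))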

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 – list of ids (must not contain special tokens)

  • token_ids_1 – Optional list of ids (must not contain special tokens), necessary when fetching sequence ids for sequence pairs

  • already_has_special_tokens – (default False) Set to True if the token list is already formatted with special tokens for the model

Returns

1 for a special token, 0 for a sequence token.

Return type

A list of integers in the range [0, 1]
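A minimal usage sketch, under the same assumptions as the tokenizer examples above.

from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

ids_a = tokenizer.encode("J'aime le camembert !", add_special_tokens=False)
ids_b = tokenizer.encode("Le camembert est un fromage.", add_special_tokens=False)

# 1 marks the positions where special tokens would be inserted, 0 the sequence tokens.
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)

# For ids that already contain special tokens, say so explicitly.
encoded = tokenizer.encode("J'aime le camembert !", add_special_tokens=True)
mask_existing = tokenizer.get_special_tokens_mask(encoded, already_has_special_tokens=True)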

save_vocabulary(save_directory)[source]

Save the SentencePiece vocabulary (copies the original file) and special tokens file to a directory.

property vocab_size

Size of the base vocabulary (without the added tokens)
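For example (a sketch assuming the 'camembert-base' checkpoint), vocab_size does not change when tokens are added after loading, whereas len(tokenizer) does; the added token below is purely hypothetical.

from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

print(tokenizer.vocab_size)            # size of the base SentencePiece vocabulary
tokenizer.add_tokens(['<new_token>'])  # hypothetical added token
print(tokenizer.vocab_size)            # unchanged
print(len(tokenizer))                  # base vocabulary + added tokens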

CamembertModel

class transformers.CamembertModel(config)[source]

The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top. The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019.

It is a model trained on 138GB of French text.

This implementation is the same as RoBERTa.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (CamembertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, CamemBERT input sequence should be formatted with <s> and </s> tokens as follows:

  1. For sequence pairs:

    tokens:         <s> Is this Jacksonville ? </s> </s> No it is not . </s>

  2. For single sequences:

    tokens:         <s> the dog is hairy . </s>

Fully encoded sequences or sequence pairs can be obtained using the CamembertTokenizer.encode function with the add_special_tokens parameter set to True.

CamemBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left (a short sketch follows this list of inputs).

See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional, needs to be trained) torch.LongTensor of shape (batch_size, sequence_length):

Optional segment token indices to indicate first and second portions of the inputs. This embedding matrix is not pretrained (it was not trained during CamemBERT pretraining), so you will have to train it during fine-tuning. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
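A minimal sketch combining the points above (right padding and the corresponding attention_mask), assuming the 'camembert-base' checkpoint and two illustrative sentences.

import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base')

sentences = ["J'aime le camembert !", "Le camembert est un fromage."]
encoded = [tokenizer.encode(s, add_special_tokens=True) for s in sentences]

# Pad on the right (absolute position embeddings) and mark the padding with 0.
max_len = max(len(ids) for ids in encoded)
pad_id = tokenizer.pad_token_id
input_ids = torch.tensor([ids + [pad_id] * (max_len - len(ids)) for ids in encoded])
attention_mask = (input_ids != pad_id).float()

outputs = model(input_ids, attention_mask=attention_mask)
last_hidden_state = outputs[0]          # (2, max_len, hidden_size)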

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the output of the last layer of the model.

pooler_output: torch.FloatTensor of shape (batch_size, hidden_size)

Last layer hidden-state of the first token of the sequence (classification token), further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during BERT pretraining. This output is usually not a good summary of the semantic content of the input; you're often better off averaging or pooling the sequence of hidden-states for the whole input sequence.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
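A sketch of how these optional outputs are usually enabled, assuming (as in recent versions of the library) that extra keyword arguments to from_pretrained() are forwarded to the configuration.

import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base',
                                       output_hidden_states=True,
                                       output_attentions=True)

input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)
last_hidden_state, pooler_output, hidden_states, attentions = model(input_ids)

print(len(hidden_states))   # one tensor for the embeddings output + one per layer
print(len(attentions))      # one tensor per layer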

Examples:

import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
config_class

alias of transformers.configuration_camembert.CamembertConfig

CamembertForMaskedLM

class transformers.CamembertForMaskedLM(config)[source]

CamemBERT Model with a language modeling head on top. The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019.

It is a model trained on 138GB of French text.

This implementation is the same as RoBERTa.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (CamembertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, CamemBERT input sequence should be formatted with <s> and </s> tokens as follows:

  1. For sequence pairs:

    tokens:         <s> Is this Jacksonville ? </s> </s> No it is not . </s>

  2. For single sequences:

    tokens:         <s> the dog is hairy . </s>

Fully encoded sequences or sequence pairs can be obtained using the CamembertTokenizer.encode function with the add_special_tokens parameter set to True.

CamemBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional, needs to be trained) torch.LongTensor of shape (batch_size, sequence_length):

Optional segment token indices to indicate first and second portions of the inputs. This embedding matrix is not pretrained (it was not trained during CamemBERT pretraining), so you will have to train it during fine-tuning. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

masked_lm_labels: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Labels for computing the masked language modeling loss. Indices should be in [-1, 0, ..., config.vocab_size - 1] (see input_ids docstring). Tokens with indices set to -1 are ignored (masked); the loss is only computed for tokens with labels in [0, ..., config.vocab_size - 1].
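A minimal sketch of how masked_lm_labels can be built for a single masked position, following the -1 ignore-index convention documented above (newer library versions use -100 instead); the masked position is chosen arbitrarily for illustration.

import torch
from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertForMaskedLM.from_pretrained('camembert-base')

input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)

# Keep the original ids as labels, then replace one position with <mask>.
masked_lm_labels = input_ids.clone()
masked_index = 4                                   # illustrative position
input_ids[0, masked_index] = tokenizer.mask_token_id

# Compute the loss only at the masked position; -1 is the ignore index above.
masked_lm_labels[input_ids != tokenizer.mask_token_id] = -1

loss, prediction_scores = model(input_ids, masked_lm_labels=masked_lm_labels)[:2]
predicted_id = prediction_scores[0, masked_index].argmax(-1).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_id])[0]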

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when masked_lm_labels is provided) torch.FloatTensor of shape (1,):

Masked language modeling loss.

prediction_scores: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertForMaskedLM.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
config_class

alias of transformers.configuration_camembert.CamembertConfig

CamembertForSequenceClassification

class transformers.CamembertForSequenceClassification(config)[source]

CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019.

It is a model trained on 138GB of French text.

This implementation is the same as RoBERTa.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (CamembertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, CamemBERT input sequence should be formatted with <s> and </s> tokens as follows:

  1. For sequence pairs:

    tokens:         <s> Is this Jacksonville ? </s> </s> No it is not . </s>

  2. For single sequences:

    tokens:         <s> the dog is hairy . </s>

Fully encoded sequences or sequence pairs can be obtained using the CamembertTokenizer.encode function with the add_special_tokens parameter set to True.

CamemBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional, needs to be trained) torch.LongTensor of shape (batch_size, sequence_length):

Optional segment token indices to indicate first and second portions of the inputs. This embedding matrix is not pretrained (it was not trained during CamemBERT pretraining), so you will have to train it during fine-tuning. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

labels: (optional) torch.LongTensor of shape (batch_size,):

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
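A minimal sketch, assuming the 'camembert-base' checkpoint and a hypothetical 3-class task (the classification head is newly initialized, so it needs fine-tuning before its predictions are meaningful).

import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

# num_labels sets the size of the classification head; num_labels == 1 switches
# the loss to regression (MSE), as described above.
model = CamembertForSequenceClassification.from_pretrained('camembert-base', num_labels=3)

input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)
labels = torch.tensor([2])                      # class index in [0, num_labels - 1]

loss, logits = model(input_ids, labels=labels)[:2]
predicted_class = logits.argmax(dim=-1)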

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification (or regression if config.num_labels==1) loss.

logits: torch.FloatTensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertForSequenceClassification.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
config_class

alias of transformers.configuration_camembert.CamembertConfig

CamembertForMultipleChoice

class transformers.CamembertForMultipleChoice(config)[source]

CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019.

It is a model trained on 138GB of French text.

This implementation is the same as RoBERTa.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (CamembertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, CamemBERT input sequence should be formatted with <s> and </s> tokens as follows:

  1. For sequence pairs:

    tokens:         <s> Is this Jacksonville ? </s> </s> No it is not . </s>

  2. For single sequences:

    tokens:         <s> the dog is hairy . </s>

Fully encoded sequences or sequence pairs can be obtained using the CamembertTokenizer.encode function with the add_special_tokens parameter set to True.

CamemBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional, needs to be trained) torch.LongTensor of shape (batch_size, sequence_length):

Optional segment token indices to indicate first and second portions of the inputs. This embedding matrix is not pretrained (it was not trained during CamemBERT pretraining), so you will have to train it during fine-tuning. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification loss.

classification_scores: torch.FloatTensor of shape (batch_size, num_choices), where num_choices is the size of the second dimension of the input tensors (see input_ids above)

Classification scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import CamembertTokenizer, CamembertForMultipleChoice

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertForMultipleChoice.from_pretrained('camembert-base')
choices = ["J'aime le camembert !", "Je deteste le camembert !"]
encoded_choices = [tokenizer.encode(s, add_special_tokens=True) for s in choices]
# Pad the choices to a common length so they can be stacked into one tensor.
max_len = max(len(ids) for ids in encoded_choices)
encoded_choices = [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in encoded_choices]
input_ids = torch.tensor(encoded_choices).unsqueeze(0)  # Batch size 1, 2 choices
labels = torch.tensor(1).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, classification_scores = outputs[:2]
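As a follow-up to the example above, the predicted answer is simply the argmax over the per-choice scores; this snippet continues from the classification_scores tensor computed there.

# classification_scores has shape (batch_size, num_choices);
# the chosen answer is the index with the highest score.
predicted_choice = classification_scores.argmax(dim=-1)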
config_class

alias of transformers.configuration_camembert.CamembertConfig

CamembertForTokenClassification

class transformers.CamembertForTokenClassification(config)[source]

CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019.

It is a model trained on 138GB of French text.

This implementation is the same as RoBERTa.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (CamembertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, CamemBERT input sequence should be formatted with <s> and </s> tokens as follows:

  1. For sequence pairs:

    tokens:         <s> Is this Jacksonville ? </s> </s> No it is not . </s>

  2. For single sequences:

    tokens:         <s> the dog is hairy . </s>

Fully encoded sequences or sequence pairs can be obtained using the CamembertTokenizer.encode function with the add_special_tokens parameter set to True.

CamemBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional, needs to be trained) torch.LongTensor of shape (batch_size, sequence_length):

Optional segment token indices to indicate first and second portions of the inputs. This embedding matrix is not pretrained (it was not trained during CamemBERT pretraining), so you will have to train it during fine-tuning. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

inputs_embeds: (optional) torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

labels: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
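A minimal sketch, assuming the 'camembert-base' checkpoint and a hypothetical 5-tag scheme; the all-zero labels are placeholders, one per token (special tokens included).

import torch
from transformers import CamembertTokenizer, CamembertForTokenClassification

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')

# num_labels sets the size of the per-token classification head (e.g. NER tags).
model = CamembertForTokenClassification.from_pretrained('camembert-base', num_labels=5)

input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !",
                                          add_special_tokens=True)).unsqueeze(0)
labels = torch.zeros_like(input_ids)            # placeholder labels in [0, num_labels - 1]

loss, scores = model(input_ids, labels=labels)[:2]
predicted_tags = scores.argmax(dim=-1)          # (batch_size, sequence_length)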

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification loss.

scores: torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)

Classification scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import CamembertTokenizer, CamembertForTokenClassification

tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertForTokenClassification.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("J'aime le camembert !", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, scores = outputs[:2]
config_class

alias of transformers.configuration_camembert.CamembertConfig