The pipeline groups together three steps: preprocessing, passing the inputs through the model, and postprocessing.
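Before reproducing each step by hand, here is a minimal sketch of calling the high-level pipeline directly (the example sentences are placeholders chosen for illustration):

```python
from transformers import pipeline

# The "sentiment-analysis" task downloads a default checkpoint behind the scenes.
classifier = pipeline("sentiment-analysis")
results = classifier(
    ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"]
)
print(results)  # e.g. [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]
```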

Preprocessing with a tokenizer

Like other neural networks, Transformer models can’t process raw text directly, so the first step of our pipeline is to convert the text inputs into numbers that the model can make sense of. To do this we use a tokenizer (see the sketch after this list), which will be responsible for:

Splitting the input into words, subwords, or symbols (like punctuation) that are called tokens
Mapping each token to an integer
Adding additional inputs that may be useful to the model
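As a minimal sketch of these steps (the checkpoint name is an assumption chosen for illustration, and the sentences are placeholders), loading a tokenizer and converting raw text into TensorFlow tensors looks like this:

```python
from transformers import AutoTokenizer

# Assumed checkpoint for illustration; any sequence classification checkpoint works the same way.
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_inputs = [
    "I've been waiting for a HuggingFace course my whole life.",
    "I hate this so much!",
]
# padding/truncation make the batch rectangular; return_tensors="tf" yields TensorFlow tensors.
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="tf")
print(inputs)  # a dict-like object containing input_ids and attention_mask
```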

Going through the model

We can download our pretrained model the same way we did with our tokenizer. 🤗 Transformers provides a TFAutoModel class, which also has a from_pretrained method. This architecture contains only the base Transformer module: given some inputs, it outputs what we’ll call hidden states, also known as features. For each model input, we’ll retrieve a high-dimensional vector representing the contextual understanding of that input by the Transformer model.
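Continuing the sketch above (reusing the checkpoint and inputs variables), loading the base model and inspecting its hidden states might look like this:

```python
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained(checkpoint)
outputs = model(inputs)
# last_hidden_state has shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```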

Model heads: Making sense out of numbers

The model heads take the high-dimensional vector of hidden states as input and project it onto a different dimension. They are usually composed of one or a few linear layers. The output of the Transformer model is sent directly to the model head to be processed.

For our example, we will need a model with a sequence classification head (to be able to classify the sentences as positive or negative) which is TFAutoModelForSequenceClassification.
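Continuing the same sketch, swapping in the sequence classification head gives one raw score per label for each sentence:

```python
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(inputs)
# logits has shape (batch_size, num_labels): one unnormalized score per class per sentence.
print(outputs.logits)
```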

Postprocessing the output

The outputs are not probabilities but logits, the raw, unnormalized scores output by the last layer of the model. To be converted to probabilities, they need to go through a SoftMax layer.
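A minimal sketch of that conversion, assuming the logits and model from the previous step:

```python
import tensorflow as tf

# Softmax over the label dimension turns logits into probabilities that sum to 1.
predictions = tf.math.softmax(outputs.logits, axis=-1)
print(predictions)

# The model config maps class indices back to human-readable labels.
print(model.config.id2label)
```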

We have successfully reproduced the three steps of the pipeline: preprocessing with tokenizers, passing the inputs through the model, and postprocessing!
