---
license: afl-3.0
---

The pipeline groups together three steps: preprocessing, passing the inputs through the model, and postprocessing.
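At the highest level, all three steps are wrapped by the `pipeline` function. Here is a minimal sketch; the example sentences are just placeholders:

```python
from transformers import pipeline

# The pipeline bundles preprocessing, the model forward pass, and postprocessing.
classifier = pipeline("sentiment-analysis")

results = classifier(
    [
        "I've been waiting for a HuggingFace course my whole life.",
        "I hate this so much!",
    ]
)
print(results)  # a list of {'label': ..., 'score': ...} dicts, one per sentence
```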

# Preprocessing with tokenizer

Like other neural networks, Transformer models can’t process raw text directly, so the first step of our pipeline is to convert the text inputs into numbers that the model can make sense of. To do this we use a tokenizer, which will be responsible for:

- Splitting the input into words, subwords, or symbols (like punctuation) that are called tokens
- Mapping each token to an integer
- Adding additional inputs that may be useful to the model
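A sketch of this step with `AutoTokenizer`; the checkpoint name is assumed here for illustration (any sequence classification checkpoint would do), and the sentences are placeholders:

```python
from transformers import AutoTokenizer

# Assumed checkpoint for illustration: a sentiment-analysis model.
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_inputs = [
    "I've been waiting for a HuggingFace course my whole life.",
    "I hate this so much!",
]

# Split into tokens, map tokens to ids, and add extra inputs (attention mask, padding)
# in one call, returning TensorFlow tensors.
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="tf")
print(inputs)  # dict with 'input_ids' and 'attention_mask'
```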

# Going through the model

We can download our pretrained model the same way we did with our tokenizer. 🤗 Transformers provides a TFAutoModel class which also has a from_pretrained method.
This architecture contains only the base Transformer module: given some inputs, it outputs what we’ll call hidden states, also known as features. For each model input, we’ll retrieve a high-dimensional vector representing the contextual understanding of that input by the Transformer model.
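Continuing the sketch with the same assumed checkpoint and the `inputs` built above:

```python
from transformers import TFAutoModel

# Base model without any task-specific head, loaded from the assumed checkpoint.
model = TFAutoModel.from_pretrained(checkpoint)

# Feed the tokenized inputs through the model to get the hidden states.
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```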

# Model heads: Making sense out of numbers

The model heads take the high-dimensional vector of hidden states as input and project it onto a different dimension. They are usually composed of one or a few linear layers.
The output of the Transformer model is sent directly to the model head to be processed.
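As an illustration only (this is not the library's actual implementation), such a head boils down to a small linear projection from the hidden size to the number of labels:

```python
import tensorflow as tf

# Minimal, illustrative sketch of a classification head: a single linear layer.
num_labels = 2
head = tf.keras.layers.Dense(num_labels)

# Apply the head to the hidden state of one position (e.g. the first token),
# producing one logit per label for each sentence.
pooled = outputs.last_hidden_state[:, 0]   # shape: (batch_size, hidden_size)
logits = head(pooled)                      # shape: (batch_size, num_labels)
```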
For our example, we will need a model with a sequence classification head (to be able to classify the sentences as positive or negative), which is TFAutoModelForSequenceClassification.
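A sketch of this step, again reusing the assumed checkpoint and the tokenized `inputs`:

```python
from transformers import TFAutoModelForSequenceClassification

# Same assumed checkpoint, now loaded with a sequence classification head on top.
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(inputs)

# The head projects the hidden states down to one score (logit) per label.
print(outputs.logits.shape)  # (batch_size, num_labels), e.g. (2, 2)
print(outputs.logits)        # raw, unnormalized scores
```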

# Postprocessing the output

The outputs are not probabilities but logits, the raw, unnormalized scores output by the last layer of the model. To be converted to probabilities, they need to go through a SoftMax layer.
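In TensorFlow, that postprocessing step could look like this, continuing from the `outputs.logits` above:

```python
import tensorflow as tf

# Convert the logits into probabilities that sum to 1 across the labels.
predictions = tf.math.softmax(outputs.logits, axis=-1)
print(predictions)

# Map each label index back to a human-readable label name.
print(model.config.id2label)  # e.g. {0: 'NEGATIVE', 1: 'POSITIVE'} for the assumed checkpoint
```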
We have successfully reproduced the three steps of the pipeline: preprocessing with tokenizers, passing the inputs through the model, and postprocessing!