In this chapter, you saw how to approach different NLP tasks using the high-level
pipeline() function from 🤗 Transformers. You also saw how to search for and use models on the Hub, and how to test them directly in your browser with the Inference API.
We discussed how Transformer models work at a high level, and talked about the importance of transfer learning and fine-tuning. A key aspect is that you can use the full architecture or only the encoder or decoder, depending on what kind of task you aim to solve. The following table summarizes this:
| Model | Examples | Tasks |
| --- | --- | --- |
| Encoder | ALBERT, BERT, DistilBERT, ELECTRA, RoBERTa | Sentence classification, named entity recognition, extractive question answering |
| Decoder | CTRL, GPT, GPT-2, Transformer XL | Text generation |
| Encoder-decoder | BART, T5, Marian, mBART | Summarization, translation, generative question answering |
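The table above can also be expressed as a small lookup, which is handy when you want to pick an architecture family programmatically. This is only an illustrative sketch: the dictionary, the `suggest_family` helper, and the task labels are ours, not part of the 🤗 Transformers API.

```python
# Illustrative mapping of architecture families to example models and
# typical tasks, mirroring the summary table above. The structure and
# helper below are a sketch, not part of the 🤗 Transformers library.
ARCHITECTURES = {
    "encoder": {
        "examples": ["ALBERT", "BERT", "DistilBERT", "ELECTRA", "RoBERTa"],
        "tasks": [
            "sentence classification",
            "named entity recognition",
            "extractive question answering",
        ],
    },
    "decoder": {
        "examples": ["CTRL", "GPT", "GPT-2", "Transformer XL"],
        "tasks": ["text generation"],
    },
    "encoder-decoder": {
        "examples": ["BART", "T5", "Marian", "mBART"],
        "tasks": [
            "summarization",
            "translation",
            "generative question answering",
        ],
    },
}

def suggest_family(task: str) -> str:
    """Return the architecture family typically used for a given task."""
    for family, info in ARCHITECTURES.items():
        if task in info["tasks"]:
            return family
    raise ValueError(f"No suggested architecture family for task: {task!r}")

print(suggest_family("text generation"))   # decoder
print(suggest_family("translation"))       # encoder-decoder
```

In practice you rarely need such a table in code: `pipeline()` already selects a sensible default checkpoint for each supported task, so this lookup is just a compact restatement of the rule of thumb above.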