Introduction

Welcome to the 🤗 Course!

This course will teach you about natural language processing (NLP) using libraries from the Hugging Face ecosystem (🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, and 🤗 Accelerate) as well as the Hugging Face Hub. It's completely free and without ads.

What to expect?

Here is a brief overview of the course:

  • Chapters 1 to 4 provide an introduction to the main concepts of the 🤗 Transformers library. By the end of this part of the course, you will be familiar with how Transformer models work and will know how to use a model from the Hugging Face Hub, fine-tune it on a dataset, and share your results on the Hub!
  • Chapters 5 to 8 teach the basics of 🤗 Datasets and 🤗 Tokenizers before diving into classic NLP tasks. By the end of this part, you will be able to tackle the most common NLP problems by yourself.
  • Chapters 9 to 12 dive even deeper, showcasing specialized architectures (memory efficiency, long sequences, etc.) and teaching you how to write custom objects for more exotic use cases. By the end of this part, you will be ready to solve complex NLP problems and make meaningful contributions to 🤗 Transformers.

This course:

  • Requires a good knowledge of Python.
  • Is better taken after an introductory deep learning course, such as fast.ai's Practical Deep Learning for Coders or one of the programs developed by DeepLearning.AI.
  • Does not expect prior PyTorch or TensorFlow knowledge, though some familiarity with either will help.

Who are we?

About the authors:

Matthew Carrigan is a Machine Learning Engineer at Hugging Face. He lives in Dublin, Ireland, and previously worked as an ML engineer at Parse.ly and, before that, as a postdoctoral researcher at Trinity College Dublin. He does not believe we're going to get to AGI by scaling existing architectures, but has high hopes for robot immortality regardless.

Lysandre Debut is a Machine Learning Engineer at Hugging Face and has been working on the 🤗 Transformers library since the very early development stages. His aim is to make NLP accessible to everyone by developing tools with a very simple API.

Sylvain Gugger is a Research Engineer at Hugging Face and one of the core maintainers of the 🤗 Transformers library. Previously he was a Research Scientist at fast.ai, and he co-wrote Deep Learning for Coders with fastai and PyTorch with Jeremy Howard. The main focus of his research is making deep learning more accessible by designing and improving techniques that allow models to train fast on limited resources.

Are you ready to roll? In this chapter, you will learn:

  • How to use the pipeline function to solve NLP tasks such as text generation and classification (a short usage sketch follows this list)
  • About the Transformer architecture
  • How to distinguish between encoder, decoder, and encoder-decoder architectures and use cases
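
To make the first point concrete, here is a minimal sketch of the pipeline function, assuming the transformers package is installed; the models used are whatever defaults 🤗 Transformers selects for each task, and the outputs shown in comments are only illustrative.

# A minimal sketch, assuming `pip install transformers` has been run.
from transformers import pipeline

# Text classification: pipeline() downloads a default model from the
# Hugging Face Hub the first time it is called for a task.
classifier = pipeline("sentiment-analysis")
print(classifier("I've been waiting for a Hugging Face course my whole life."))
# Illustrative output: [{'label': 'POSITIVE', 'score': 0.99}]

# Text generation with the default model for that task.
generator = pipeline("text-generation")
print(generator("In this course, we will teach you how to"))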