{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Transformers installation\n", "! pip install transformers datasets\n", "# To install from source instead of the last release, comment the command above and uncomment the following one.\n", "# ! pip install git+https://github.com/huggingface/transformers.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Quick tour" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for inference, load a pretrained model and preprocessor with an [AutoClass](https://huggingface.co/docs/transformers/main/en/./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.\n", "\n", "Before you begin, make sure you have all the necessary libraries installed:\n", "\n", "```bash\n", "!pip install transformers datasets\n", "```\n", "\n", "You'll also need to install your preferred machine learning framework:\n", "\n", "```bash\n", "pip install torch\n", "```\n", "```bash\n", "pip install tensorflow\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "hide_input": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#@title\n", "from IPython.display import HTML\n", "\n", "HTML('')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) is the easiest way to use a pretrained model for inference. You can use the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) out-of-the-box for many tasks across different modalities. Take a look at the table below for some supported tasks:\n", "\n", "| **Task** | **Description** | **Modality** | **Pipeline identifier** |\n", "|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|-----------------------------------------------|\n", "| Text classification | assign a label to a given sequence of text | NLP | pipeline(task=\"sentiment-analysis\") |\n", "| Text generation | generate text that follows a given prompt | NLP | pipeline(task=\"text-generation\") |\n", "| Name entity recognition | assign a label to each token in a sequence (people, organization, location, etc.) 
| NLP | pipeline(task=\"ner\") |\n", "| Question answering | extract an answer from the text given some context and a question | NLP | pipeline(task=\"question-answering\") |\n", "| Fill-mask | predict the correct masked token in a sequence | NLP | pipeline(task=\"fill-mask\") |\n", "| Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task=\"summarization\") |\n", "| Translation | translate text from one language into another | NLP | pipeline(task=\"translation\") |\n", "| Image classification | assign a label to an image | Computer vision | pipeline(task=\"image-classification\") |\n", "| Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task=\"image-segmentation\") |\n", "| Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task=\"object-detection\") |\n", "| Audio classification | assign a label to an audio file | Audio | pipeline(task=\"audio-classification\") |\n", "| Automatic speech recognition | extract speech from an audio file into text | Audio | pipeline(task=\"automatic-speech-recognition\") |\n", "| Visual question answering | given an image and a question, correctly answer a question about the image | Multimodal | pipeline(task=\"vqa\") |\n", "\n", "Start by creating an instance of [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) and specifying a task you want to use it for. You can use the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for any of the previously mentioned tasks, and for a complete list of supported tasks, take a look at the [pipeline API reference](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines). In this guide though, you'll use the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for sentiment analysis as an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import pipeline\n", "\n", "classifier = pipeline(\"sentiment-analysis\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now you can use the `classifier` on your target text:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'label': 'POSITIVE', 'score': 0.9998}]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "classifier(\"We are very happy to show you the 🤗 Transformers library.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you have more than one input, pass your inputs as a list to the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) to return a list of dictionaries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "label: POSITIVE, with score: 0.9998\n", "label: NEGATIVE, with score: 0.5309" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "results = classifier([\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"])\n", "for result in results:\n", "    print(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "from transformers import pipeline\n", "\n", "speech_recognizer = pipeline(\"automatic-speech-recognition\", model=\"facebook/wav2vec2-base-960h\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Load an audio dataset you'd like to iterate over (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details). For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datasets import load_dataset, Audio\n", "\n", "dataset = load_dataset(\"PolyAI/minds14\", name=\"en-US\", split=\"train\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You need to make sure the sampling rate of the dataset matches the sampling\n", "rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset = dataset.cast_column(\"audio\", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The audio files are automatically loaded and resampled when calling the `\"audio\"` column.\n", "Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', \"FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE\", \"I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS\", 'HOW DO I THURN A JOIN A COUNT']" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result = speech_recognizer(dataset[:4][\"audio\"])\n", "print([d[\"text\"] for d in result])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list so that you don't load all the inputs into memory at once (a short sketch appears at the end of this section). Take a look at the [pipeline API reference](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines) for more information." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Use another model and tokenizer in the pipeline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis that you can use for French text:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) to load the pretrained model and its associated tokenizer (more on an `AutoClass` in the next section):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer, AutoModelForSequenceClassification\n", "\n", "model = AutoModelForSequenceClassification.from_pretrained(model_name)\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` in the next section):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer, TFAutoModelForSequenceClassification\n", "\n", "model = TFAutoModelForSequenceClassification.from_pretrained(model_name)\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Specify the model and tokenizer in the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline), and now you can apply the `classifier` on French text:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'label': '5 stars', 'score': 0.7273}]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "classifier = pipeline(\"sentiment-analysis\", model=model, tokenizer=tokenizer)\n", "classifier(\"Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our [finetuning tutorial](https://huggingface.co/docs/transformers/main/en/./training) to learn how. Finally, after you've finetuned your pretrained model, please consider [sharing](https://huggingface.co/docs/transformers/main/en/./model_sharing) the model with the community on the Hub to democratize machine learning for everyone! 🤗" ] },
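{ "cell_type": "markdown", "metadata": {}, "source": [ "As mentioned earlier, if a dataset is too large to fit in memory, you can pass the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) a generator instead of a list so inputs are loaded lazily. A minimal sketch, reusing `speech_recognizer` and `dataset` from the speech example above (the helper name `data` is just illustrative):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def data():\n", "    # yield one input at a time instead of materializing a list\n", "    for i in range(len(dataset)):\n", "        yield dataset[i][\"audio\"]\n", "\n", "# the pipeline consumes the generator and yields one result at a time\n", "for out in speech_recognizer(data()):\n", "    print(out[\"text\"])" ] },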
🤗" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AutoClass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "hide_input": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#@title\n", "from IPython.display import HTML\n", "\n", "HTML('')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Under the hood, the [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) classes work together to power the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) you used above. An [AutoClass](https://huggingface.co/docs/transformers/main/en/./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from it's name or path. You only need to select the appropriate `AutoClass` for your task and it's associated preprocessing class. \n", "\n", "Let's return to the example from the previous section and see how you can use the `AutoClass` to replicate the results of the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AutoTokenizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the [tokenizer summary](https://huggingface.co/docs/transformers/main/en/./tokenizer_summary)). 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The most important thing to remember is that you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with.\n", "\n", "Load a tokenizer with [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pass your text to the tokenizer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],\n", " 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n", " 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "encoding = tokenizer(\"We are very happy to show you the 🤗 Transformers library.\")\n", "print(encoding)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The tokenizer returns a dictionary containing:\n", "\n", "* [input_ids](https://huggingface.co/docs/transformers/main/en/./glossary#input-ids): numerical representations of your tokens.\n", "* [attention_mask](https://huggingface.co/docs/transformers/main/en/./glossary#attention-mask): indicates which tokens should be attended to.\n", "\n", "A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pt_batch = tokenizer(\n", "    [\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"],\n", "    padding=True,\n", "    truncation=True,\n", "    max_length=512,\n", "    return_tensors=\"pt\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_batch = tokenizer(\n", "    [\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"],\n", "    padding=True,\n", "    truncation=True,\n", "    max_length=512,\n", "    return_tensors=\"tf\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Check out the [preprocess](https://huggingface.co/docs/transformers/main/en/./preprocessing) tutorial for more details about tokenization, and how to use an [AutoImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoImageProcessor), [AutoFeatureExtractor](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoFeatureExtractor) and [AutoProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoProcessor) to preprocess image, audio, and multimodal inputs.\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AutoModel" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) like you would load an [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer). The only difference is selecting the correct [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) for the task. For text (or sequence) classification, you should load [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoModelForSequenceClassification\n", "\n", "model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n", "pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "See the [task summary](https://huggingface.co/docs/transformers/main/en/./task_summary) for tasks supported by an [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) class.\n", "\n", "\n", "\n", "Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pt_outputs = pt_model(**pt_batch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],\n", "        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward>)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from torch import nn\n", "\n", "pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)\n", "print(pt_predictions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [TFAutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModel) like you would load an [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer). The only difference is selecting the correct [TFAutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModel) for the task." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "For text (or sequence) classification, you should load [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import TFAutoModelForSequenceClassification\n", "\n", "model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n", "tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "See the [task summary](https://huggingface.co/docs/transformers/main/en/./task_summary) for tasks supported by an [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) class.\n", "\n", "\n", "\n", "Now pass your preprocessed batch of inputs directly to the model. A Keras model accepts the tokenizer's output dictionary as-is, so there is no need to unpack it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_outputs = tf_model(tf_batch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)\n", "tf_predictions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation\n", "function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses so their attributes are autocompleted in an IDE. Model outputs also behave like a tuple or a dictionary (you can index with an integer, a slice, or a string), in which case attributes that are `None` are ignored.\n", "\n", "" ] },
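{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, each of the following lines retrieves the same logits tensor from the `pt_outputs` object computed above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logits = pt_outputs.logits       # attribute access\n", "logits = pt_outputs[\"logits\"]    # key access, like a dictionary\n", "logits = pt_outputs[0]           # index access, like a tuple" ] },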
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Save a model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once your model is fine-tuned, you can save it with its tokenizer using [PreTrainedModel.save_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.save_pretrained):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pt_save_directory = \"./pt_save_pretrained\"\n", "tokenizer.save_pretrained(pt_save_directory)\n", "pt_model.save_pretrained(pt_save_directory)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you are ready to use the model again, reload it with [PreTrainedModel.from_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pt_model = AutoModelForSequenceClassification.from_pretrained(\"./pt_save_pretrained\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once your model is fine-tuned, you can save it with its tokenizer using [TFPreTrainedModel.save_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.save_pretrained):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_save_directory = \"./tf_save_pretrained\"\n", "tokenizer.save_pretrained(tf_save_directory)\n", "tf_model.save_pretrained(tf_save_directory)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you are ready to use the model again, reload it with [TFPreTrainedModel.from_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_model = TFAutoModelForSequenceClassification.from_pretrained(\"./tf_save_pretrained\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoModelForSequenceClassification, AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)\n", "pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import TFAutoModelForSequenceClassification, AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)\n", "tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom model builds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results.\n", "\n", "Start by importing [AutoConfig](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoConfig), and then load the configuration of the pretrained model you want to modify. Within [AutoConfig.from_pretrained()](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoConfig.from_pretrained), you can specify the attribute you want to change, such as the number of attention heads:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoConfig\n", "\n", "my_config = AutoConfig.from_pretrained(\"distilbert-base-uncased\", n_heads=12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a model from your custom configuration with [AutoModel.from_config()](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_config):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoModel\n", "\n", "my_model = AutoModel.from_config(my_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a model from your custom configuration with [TFAutoModel.from_config()](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_config):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import TFAutoModel\n", "\n", "my_model = TFAutoModel.from_config(my_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a look at the [Create a custom architecture](https://huggingface.co/docs/transformers/main/en/./create_a_model) guide for more information about building custom configurations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Trainer - a PyTorch optimized training loop" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.\n", "\n", "Depending on your task, you'll typically pass the following parameters to [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer):\n", "\n", "1. A [PreTrainedModel](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel) or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):\n", "\n", " ```py\n", " >>> from transformers import AutoModelForSequenceClassification\n", "\n", " >>> model = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased\")\n", " ```\n", "\n", "2. [TrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments) contains the training hyperparameters you can change like learning rate, batch size, and the number of epochs to train for.
The default values are used if you don't specify any training arguments:\n", "\n", " ```py\n", " >>> from transformers import TrainingArguments\n", "\n", " >>> training_args = TrainingArguments(\n", " ... output_dir=\"path/to/save/folder/\",\n", " ... learning_rate=2e-5,\n", " ... per_device_train_batch_size=8,\n", " ... per_device_eval_batch_size=8,\n", " ... num_train_epochs=2,\n", " ... )\n", " ```\n", "\n", "3. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:\n", "\n", " ```py\n", " >>> from transformers import AutoTokenizer\n", "\n", " >>> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n", " ```\n", "\n", "4. Load a dataset:\n", "\n", " ```py\n", " >>> from datasets import load_dataset\n", "\n", " >>> dataset = load_dataset(\"rotten_tomatoes\") # doctest: +IGNORE_RESULT\n", " ```\n", "\n", "5. Create a function to tokenize the dataset:\n", "\n", " ```py\n", " >>> def tokenize_dataset(dataset):\n", " ... return tokenizer(dataset[\"text\"])\n", " ```\n", "\n", " Then apply it over the entire dataset with [map](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map):\n", "\n", " ```py\n", " >>> dataset = dataset.map(tokenize_dataset, batched=True)\n", " ```\n", "\n", "6. A [DataCollatorWithPadding](https://huggingface.co/docs/transformers/main/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples from your dataset:\n", "\n", " ```py\n", " >>> from transformers import DataCollatorWithPadding\n", "\n", " >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n", " ```\n", "\n", "Now gather all these classes in [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import Trainer\n", "\n", "trainer = Trainer(\n", " model=model,\n", " args=training_args,\n", " train_dataset=dataset[\"train\"],\n", " eval_dataset=dataset[\"test\"],\n", " tokenizer=tokenizer,\n", " data_collator=data_collator,\n", ") # doctest: +SKIP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you're ready, call [train()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.train) to start training:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trainer.train()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "For tasks - like translation or summarization - that use a sequence-to-sequence model, use the [Seq2SeqTrainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Seq2SeqTrainer) and [Seq2SeqTrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments) classes instead.\n", "\n", "\n", "\n", "You can customize the training loop behavior by subclassing the methods inside [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer). This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) reference for which methods can be subclassed. 
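For example, here is a minimal sketch that overrides `compute_loss` (the plain cross-entropy below is just a stand-in for whatever custom loss you need):\n", "\n", "```py\n", ">>> from torch import nn\n", ">>> from transformers import Trainer\n", "\n", ">>> class CustomTrainer(Trainer):\n", "...     def compute_loss(self, model, inputs, return_outputs=False):\n", "...         labels = inputs.pop(\"labels\")\n", "...         outputs = model(**inputs)\n", "...         loss_fct = nn.CrossEntropyLoss()  # swap in your own loss here\n", "...         loss = loss_fct(outputs.logits.view(-1, model.config.num_labels), labels.view(-1))\n", "...         return (loss, outputs) if return_outputs else loss\n", "```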
\n", "\n", "The other way to customize the training loop is by using [Callbacks](https://huggingface.co/docs/transformers/main/en/./main_classes/callbacks). You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) instead." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train with TensorFlow" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so they can be trained in TensorFlow with the [Keras](https://keras.io/) API. 🤗 Transformers provides the [prepare_tf_dataset()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset) method to easily load your dataset as a `tf.data.Dataset` so you can start training right away with Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) methods.\n", "\n", "1. You'll start with a [TFPreTrainedModel](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel) or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model):\n", "\n", " ```py\n", " >>> from transformers import TFAutoModelForSequenceClassification\n", "\n", " >>> model = TFAutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased\")\n", " ```\n", "\n", "2. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:\n", "\n", " ```py\n", " >>> from transformers import AutoTokenizer\n", "\n", " >>> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n", " ```\n", "\n", "3. Create a function to tokenize the dataset:\n", "\n", " ```py\n", " >>> def tokenize_dataset(dataset):\n", " ... return tokenizer(dataset[\"text\"]) # doctest: +SKIP\n", " ```\n", "\n", "4. Apply the tokenizer over the entire dataset with [map](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map) and then pass the dataset and tokenizer to [prepare_tf_dataset()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset). You can also change the batch size and shuffle the dataset here if you'd like:\n", "\n", " ```py\n", " >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP\n", " >>> tf_dataset = model.prepare_tf_dataset(\n", " ... dataset, batch_size=16, shuffle=True, tokenizer=tokenizer\n", " ... ) # doctest: +SKIP\n", " ```\n", "\n", "5. When you're ready, you can call `compile` and `fit` to start training:\n", "\n", " ```py\n", " >>> from tensorflow.keras.optimizers import Adam\n", "\n", " >>> model.compile(optimizer=Adam(3e-5))\n", " >>> model.fit(dataset) # doctest: +SKIP\n", " ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What's next?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you've completed the 🤗 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and how to train a model with a script. 
If you're interested in learning more about 🤗 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides!" ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 4 }