diff --git "a/10_nlp.ipynb" "b/10_nlp.ipynb" new file mode 100644--- /dev/null +++ "b/10_nlp.ipynb" @@ -0,0 +1,2961 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [], + "source": [ + "#hide\n", + "! [ -e /content ] && pip install -Uqq fastbook\n", + "import fastbook\n", + "fastbook.setup_book()" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": {}, + "outputs": [], + "source": [ + "#hide\n", + "from fastbook import *\n", + "from IPython.display import display,HTML" + ] + }, + { + "cell_type": "raw", + "metadata": {}, + "source": [ + "[[chapter_nlp]]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# NLP Deep Dive: RNNs" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In <> we saw that deep learning can be used to get great results with natural language datasets. Our example relied on using a pretrained language model and fine-tuning it to classify reviews. That example highlighted a difference between transfer learning in NLP and computer vision: in general in NLP the pretrained model is trained on a different task.\n", + "\n", + "What we call a language model is a model that has been trained to guess what the next word in a text is (having read the ones before). This kind of task is called *self-supervised learning*: we do not need to give labels to our model, just feed it lots and lots of texts. It has a process to automatically get labels from the data, and this task isn't trivial: to properly guess the next word in a sentence, the model will have to develop an understanding of the English (or other) language. Self-supervised learning can also be used in other domains; for instance, see [\"Self-Supervised Learning and Computer Vision\"](https://www.fast.ai/2020/01/13/self_supervised/) for an introduction to vision applications. Self-supervised learning is not usually used for the model that is trained directly, but instead is used for pretraining a model used for transfer learning." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> jargon: Self-supervised learning: Training a model using labels that are embedded in the independent variable, rather than requiring external labels. For instance, training a model to predict the next word in a text." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The language model we used in <> to classify IMDb reviews was pretrained on Wikipedia. We got great results by directly fine-tuning this language model to a movie review classifier, but with one extra step, we can do even better. The Wikipedia English is slightly different from the IMDb English, so instead of jumping directly to the classifier, we could fine-tune our pretrained language model to the IMDb corpus and then use *that* as the base for our classifier.\n", + "\n", + "Even if our language model knows the basics of the language we are using in the task (e.g., our pretrained model is in English), it helps to get used to the style of the corpus we are targeting. It may be more informal language, or more technical, with new words to learn or different ways of composing sentences. In the case of the IMDb dataset, there will be lots of names of movie directors and actors, and often a less formal style of language than that seen in Wikipedia.\n", + "\n", + "We already saw that with fastai, we can download a pretrained English language model and use it to get state-of-the-art results for NLP classification. 
(We expect pretrained models in many more languages to be available soon—they might well be available by the time you are reading this book, in fact.) So, why are we learning how to train a language model in detail?\n", + "\n", + "One reason, of course, is that it is helpful to understand the foundations of the models that you are using. But there is another very practical reason, which is that you get even better results if you fine-tune the (sequence-based) language model prior to fine-tuning the classification model. For instance, for the IMDb sentiment analysis task, the dataset includes 50,000 additional movie reviews that do not have any positive or negative labels attached. Since there are 25,000 labeled reviews in the training set and 25,000 in the validation set, that makes 100,000 movie reviews altogether. We can use all of these reviews to fine-tune the pretrained language model, which was trained only on Wikipedia articles; this will result in a language model that is particularly good at predicting the next word of a movie review.\n", + "\n", + "This is known as the Universal Language Model Fine-tuning (ULMFiT) approach. The [paper](https://arxiv.org/abs/1801.06146) showed that this extra stage of fine-tuning of the language model, prior to transfer learning to a classification task, resulted in significantly better predictions. Using this approach, we have three stages for transfer learning in NLP, as summarized in <>." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(Diagram: the three stages of the ULMFiT approach: language model pretraining on Wikipedia, language model fine-tuning on the IMDb corpus, and classifier fine-tuning)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll now explore how to apply a neural network to this language modeling problem, using the concepts introduced in the last two chapters. But before reading further, pause and think about how *you* would approach this." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Text Preprocessing" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It's not at all obvious how we're going to use what we've learned so far to build a language model. Sentences can be different lengths, and documents can be very long. So, how can we predict the next word of a sentence using a neural network? Let's find out!\n", + "\n", + "We've already seen how categorical variables can be used as independent variables for a neural network. The approach we took for a single categorical variable was to:\n", + "\n", + "1. Make a list of all possible levels of that categorical variable (we'll call this list the *vocab*).\n", + "1. Replace each level with its index in the vocab.\n", + "1. Create an embedding matrix containing a row for each level (i.e., for each item of the vocab).\n", + "1. Use this embedding matrix as the first layer of a neural network. (A dedicated embedding matrix can take as inputs the raw vocab indexes created in step 2; this is equivalent to but faster and more efficient than a matrix that takes as input one-hot-encoded vectors representing the indexes.)\n", + "\n", + "We can do nearly the same thing with text! What is new is the idea of a sequence. First we concatenate all of the documents in our dataset into one big long string and split it into words, giving us a very long list of words (or \"tokens\"). 
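\n", + "\n", + "In code, the setup we are about to describe looks roughly like this (a toy sketch, with a made-up three-review corpus):\n", + "\n", + "```python\n", + "docs = ['the movie was great', 'i loved it', 'what a film']\n", + "stream = ' '.join(docs).split()  # one long list of tokens\n", + "x = stream[:-1]  # independent variable: all tokens but the last\n", + "y = stream[1:]   # dependent variable: the same stream offset by one\n", + "```\n", + "\n", + "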
Our independent variable will be the sequence of words starting with the first word in our very long list and ending with the second to last, and our dependent variable will be the sequence of words starting with the second word and ending with the last word. \n", + "\n", + "Our vocab will consist of a mix of common words that are already in the vocabulary of our pretrained model and new words specific to our corpus (cinematographic terms or actors' names, for instance). Our embedding matrix will be built accordingly: for words that are in the vocabulary of our pretrained model, we will take the corresponding row in the embedding matrix of the pretrained model; but for new words we won't have anything, so we will just initialize the corresponding row with a random vector." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Each of the steps necessary to create a language model has jargon associated with it from the world of natural language processing, and fastai and PyTorch classes available to help. The steps are:\n", + "\n", + "- Tokenization:: Convert the text into a list of words (or characters, or substrings, depending on the granularity of your model)\n", + "- Numericalization:: Make a list of all of the unique words that appear (the vocab), and convert each word into a number, by looking up its index in the vocab\n", + "- Language model data loader creation:: fastai provides an `LMDataLoader` class which automatically handles creating a dependent variable that is offset from the independent variable by one token. It also handles some important details, such as how to shuffle the training data in such a way that the dependent and independent variables maintain their structure as required\n", + "- Language model creation:: We need a special kind of model that does something we haven't seen before: it handles input lists that can be arbitrarily big or small. There are a number of ways to do this; in this chapter we will be using a *recurrent neural network* (RNN). We will get to the details of these RNNs in <>, but for now, you can think of it as just another deep neural network.\n", + "\n", + "Let's take a look at how each step works in detail." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Tokenization" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When we said \"convert the text into a list of words,\" we left out a lot of details. For instance, what do we do with punctuation? How do we deal with a word like \"don't\"? Is it one word, or two? What about long medical or chemical words? Should they be split into their separate pieces of meaning? How about hyphenated words? What about languages like German and Polish where we can create really long words from many, many pieces? What about languages like Japanese and Chinese that don't use spaces at all, and don't really have a well-defined idea of *word*?\n", + "\n", + "Because there is no one correct answer to these questions, there is no one approach to tokenization. There are three main approaches:\n", + "\n", + "- Word-based:: Split a sentence on spaces, as well as applying language-specific rules to try to separate parts of meaning even when there are no spaces (such as turning \"don't\" into \"do n't\"). Generally, punctuation marks are also split into separate tokens.\n", + "- Subword-based:: Split words into smaller parts, based on the most commonly occurring substrings. 
For instance, \"occasion\" might be tokenized as \"o c ca sion.\"\n", + "- Character-based:: Split a sentence into its individual characters.\n", + "\n", + "We'll be looking at word and subword tokenization here, and we'll leave character-based tokenization for you to implement in the questionnaire at the end of this chapter." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> jargon: token: One element of a list created by the tokenization process. It could be a word, part of a word (a _subword_), or a single character." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Word Tokenization with fastai" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Rather than providing its own tokenizers, fastai instead provides a consistent interface to a range of tokenizers in external libraries. Tokenization is an active field of research, and new and improved tokenizers are coming out all the time, so the defaults that fastai uses change too. However, the API and options shouldn't change too much, since fastai tries to maintain a consistent API even as the underlying technology changes.\n", + "\n", + "Let's try it out with the IMDb dataset that we used in <>:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "\n", + "
\n", + " \n", + " 100.00% [144441344/144440600 00:13<00:00]\n", + "
\n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from fastai.text.all import *\n", + "path = untar_data(URLs.IMDB)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll need to grab the text files in order to try out a tokenizer. Just like `get_image_files`, which we've used many times already, gets all the image files in a path, `get_text_files` gets all the text files in a path. We can also optionally pass `folders` to restrict the search to a particular list of subfolders:" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "files = get_text_files(path, folders = ['train', 'test', 'unsup'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here's a review that we'll tokenize (we'll just print the start of it here to save space):" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'Alan Rickman & Emma Thompson give good performances with southern/New Orlea'" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "txt = files[0].open().read(); txt[:75]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As we write this book, the default English word tokenizer for fastai uses a library called *spaCy*. It has a sophisticated rules engine with special rules for URLs, individual special English words, and much more. Rather than directly using `SpacyTokenizer`, however, we'll use `WordTokenizer`, since that will always point to fastai's current default word tokenizer (which may not necessarily be spaCy, depending when you're reading this).\n", + "\n", + "Let's try it out. We'll use fastai's `coll_repr(collection, n)` function to display the results. This displays the first *`n`* items of *`collection`*, along with the full size—it's what `L` uses by default. Note that fastai's tokenizers take a collection of documents to tokenize, so we have to wrap `txt` in a list:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(#121) ['Alan','Rickman','&','Emma','Thompson','give','good','performances','with','southern','/','New','Orleans','accents','in','this','detective','flick','.','It',\"'s\",'worth','seeing','for','their','scenes-','and','Rickman',\"'s\",'scene'...]\n" + ] + } + ], + "source": [ + "spacy = WordTokenizer()\n", + "toks = first(spacy([txt]))\n", + "print(coll_repr(toks, 30))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As you see, spaCy has mainly just separated out the words and punctuation. But it does something else here too: it has split \"it's\" into \"it\" and \"'s\". That makes intuitive sense; these are separate words, really. Tokenization is a surprisingly subtle task, when you think about all the little details that have to be handled. 
Fortunately, spaCy handles these pretty well for us—for instance, here we see that \".\" is separated when it terminates a sentence, but not in an acronym or number:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(#9) ['The','U.S.','dollar','$','1','is','$','1.00','.']" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "first(spacy(['The U.S. dollar $1 is $1.00.']))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "fastai then adds some additional functionality to the tokenization process with the `Tokenizer` class:" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(#139) ['xxbos','xxmaj','alan','xxmaj','rickman','&','xxmaj','emma','xxmaj','thompson','give','good','performances','with','southern','/','xxmaj','new','xxmaj','orleans','accents','in','this','detective','flick','.','xxmaj','it',\"'s\",'worth','seeing'...]\n" + ] + } + ], + "source": [ + "tkn = Tokenizer(spacy)\n", + "print(coll_repr(tkn(txt), 31))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that there are now some tokens that start with the characters \"xx\", which is not a common word prefix in English. These are *special tokens*.\n", + "\n", + "For example, the first item in the list, `xxbos`, is a special token that indicates the start of a new text (\"BOS\" is a standard NLP acronym that means \"beginning of stream\"). By recognizing this start token, the model will be able to learn it needs to \"forget\" what was said previously and focus on upcoming words.\n", + "\n", + "These special tokens don't come from spaCy directly. They are there because fastai adds them by default, by applying a number of rules when processing text. These rules are designed to make it easier for a model to recognize the important parts of a sentence. In a sense, we are translating the original English language sequence into a simplified tokenized language—a language that is designed to be easy for a model to learn.\n", + "\n", + "For instance, the rules will replace a sequence of four exclamation points with a special *repeated character* token, followed by the number four, and then a single exclamation point. In this way, the model's embedding matrix can encode information about general concepts such as repeated punctuation rather than requiring a separate token for every number of repetitions of every punctuation mark. Similarly, a capitalized word will be replaced with a special capitalization token, followed by the lowercase version of the word. 
This way, the embedding matrix only needs the lowercase versions of the words, saving compute and memory resources, but can still learn the concept of capitalization.\n", + "\n", + "Here are some of the main special tokens you'll see:\n", + "\n", + "- `xxbos`:: Indicates the beginning of a text (here, a review)\n", + "- `xxmaj`:: Indicates the next word begins with a capital (since we lowercased everything)\n", + "- `xxunk`:: Indicates the word is unknown\n", + "\n", + "To see the rules that were used, you can check the default rules:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[<function fastai.text.core.fix_html(x)>,\n", + " <function fastai.text.core.replace_rep(t)>,\n", + " <function fastai.text.core.replace_wrep(t)>,\n", + " <function fastai.text.core.spec_add_spaces(t)>,\n", + " <function fastai.text.core.rm_useless_spaces(t)>,\n", + " <function fastai.text.core.replace_all_caps(t)>,\n", + " <function fastai.text.core.replace_maj(t)>,\n", + " <function fastai.text.core.lowercase(t, add_bos=True, add_eos=False)>]" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "defaults.text_proc_rules" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As always, you can look at the source code of each of them in a notebook by typing:\n", + "\n", + "```\n", + "??replace_rep\n", + "```\n", + "\n", + "Here is a brief summary of what each does:\n", + "\n", + "- `fix_html`:: Replaces special HTML characters with a readable version (IMDb reviews have quite a few of these)\n", + "- `replace_rep`:: Replaces any character repeated three times or more with a special token for repetition (`xxrep`), the number of times it's repeated, then the character\n", + "- `replace_wrep`:: Replaces any word repeated three times or more with a special token for word repetition (`xxwrep`), the number of times it's repeated, then the word\n", + "- `spec_add_spaces`:: Adds spaces around / and #\n", + "- `rm_useless_spaces`:: Removes all repetitions of the space character\n", + "- `replace_all_caps`:: Lowercases a word written in all caps and adds a special token for all caps (`xxup`) in front of it\n", + "- `replace_maj`:: Lowercases a capitalized word and adds a special token for capitalized (`xxmaj`) in front of it\n", + "- `lowercase`:: Lowercases all text and adds a special token at the beginning (`xxbos`) and/or the end (`xxeos`)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's take a look at a few of them in action:" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\"(#11) ['xxbos','©','xxmaj','fast.ai','xxrep','3','w','.fast.ai','/','xxup','index']\"" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "coll_repr(tkn('© Fast.ai www.fast.ai/INDEX'), 31)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's take a look at how subword tokenization would work." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Subword Tokenization" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In addition to the *word tokenization* approach seen in the last section, another popular tokenization method is *subword tokenization*. Word tokenization relies on an assumption that spaces provide a useful separation of components of meaning in a sentence. However, this assumption is not always appropriate. For instance, consider this sentence: 我的名字是郝杰瑞 (\"My name is Jeremy Howard\" in Chinese). That's not going to work very well with a word tokenizer, because there are no spaces in it! 
Languages like Chinese and Japanese don't use spaces, and in fact they don't even have a well-defined concept of a \"word.\" There are also languages, like Turkish and Hungarian, that can add many subwords together without spaces, creating very long words that include a lot of separate pieces of information.\n", + "\n", + "To handle these cases, it's generally best to use subword tokenization. This proceeds in two steps:\n", + "\n", + "1. Analyze a corpus of documents to find the most commonly occurring groups of letters. These become the vocab.\n", + "2. Tokenize the corpus using this vocab of *subword units*.\n", + "\n", + "Let's look at an example. For our corpus, we'll use the first 2,000 movie reviews:" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "txts = L(o.open().read() for o in files[:2000])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We instantiate our tokenizer, passing in the size of the vocab we want to create, and then we need to \"train\" it. That is, we need to have it read our documents and find the common sequences of characters to create the vocab. This is done with `setup`. As we'll see shortly, `setup` is a special fastai method that is called automatically in our usual data processing pipelines. Since we're doing everything manually at the moment, however, we have to call it ourselves. Here's a function that does these steps for a given vocab size, and shows an example output:" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "def subword(sz):\n", + " sp = SubwordTokenizer(vocab_sz=sz)\n", + " sp.setup(txts)\n", + " return ' '.join(first(sp([txt]))[:40])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's try it out:" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "'▁A l an ▁R ick man ▁ & ▁E mm a ▁Th om p son ▁give ▁good ▁performance s ▁with ▁so u ther n / N e w ▁O r le an s ▁a c cent s ▁in ▁this ▁de'" + ] + }, + "execution_count": 43, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "subword(1000)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When using fastai's subword tokenizer, the special character `▁` represents a space character in the original text.\n", + "\n", + "If we use a smaller vocab, then each token will represent fewer characters, and it will take more tokens to represent a sentence:" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "'▁A l an ▁ R i ck m an ▁ & ▁ E m m a ▁ T h o m p s on ▁g i ve ▁g o o d ▁p er f or m an ce s ▁with'" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "subword(200)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "On 
the other hand, if we use a larger vocab, then most common English words will end up in the vocab themselves, and we will not need as many to represent a sentence:" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [ + "\"▁Alan ▁Rick man ▁ & ▁Emma ▁Thompson ▁give ▁good ▁performances ▁with ▁southern / N e w ▁O rleans ▁accents ▁in ▁this ▁detective ▁flick . ▁It ' s ▁worth ▁seeing ▁for ▁their ▁scenes - ▁and ▁Rick man ' s ▁scene ▁with\"" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "subword(10000)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Picking a subword vocab size represents a compromise: a larger vocab means fewer tokens per sentence, which means faster training, less memory, and less state for the model to remember; but on the downside, it means larger embedding matrices, which require more data to learn.\n", + "\n", + "Overall, subword tokenization provides a way to easily scale between character tokenization (i.e., using a small subword vocab) and word tokenization (i.e., using a large subword vocab), and handles every human language without needing language-specific algorithms to be developed. It can even handle other \"languages\" such as genomic sequences or MIDI music notation! For this reason, in the last year its popularity has soared, and it seems likely to become the most common tokenization approach (it may well already be, by the time you read this!)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once our texts have been split into tokens, we need to convert them to numbers. We'll look at that next." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Numericalization with fastai" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "*Numericalization* is the process of mapping tokens to integers. The steps are basically identical to those necessary to create a `Category` variable, such as the dependent variable of digits in MNIST:\n", + "\n", + "1. Make a list of all possible levels of that categorical variable (the vocab).\n", + "1. Replace each level with its index in the vocab.\n", + "\n", + "Let's take a look at this in action on the word-tokenized text we saw earlier:" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(#139) ['xxbos','xxmaj','alan','xxmaj','rickman','&','xxmaj','emma','xxmaj','thompson','give','good','performances','with','southern','/','xxmaj','new','xxmaj','orleans','accents','in','this','detective','flick','.','xxmaj','it',\"'s\",'worth','seeing'...]\n" + ] + } + ], + "source": [ + "toks = tkn(txt)\n", + "print(coll_repr(tkn(txt), 31))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Just like with `SubwordTokenizer`, we need to call `setup` on `Numericalize`; this is how we create the vocab. That means we'll need our tokenized corpus first. 
Since tokenization takes a while, it's done in parallel by fastai; but for this manual walkthrough, we'll use a small subset:" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(#139) ['xxbos','xxmaj','alan','xxmaj','rickman','&','xxmaj','emma','xxmaj','thompson'...]" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "toks200 = txts[:200].map(tkn)\n", + "toks200[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can pass this to `setup` to create our vocab:" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\"(#1984) ['xxunk','xxpad','xxbos','xxeos','xxfld','xxrep','xxwrep','xxup','xxmaj','the','.',',','and','a','to','of','i','it','is','in'...]\"" + ] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "num = Numericalize()\n", + "num.setup(toks200)\n", + "coll_repr(num.vocab,20)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Our special rules tokens appear first, and then every word appears once, in frequency order. The defaults to `Numericalize` are `min_freq=3,max_vocab=60000`. `max_vocab=60000` results in fastai replacing all words other than the most common 60,000 with a special *unknown word* token, `xxunk`. This is useful to avoid having an overly large embedding matrix, since that can slow down training and use up too much memory, and can also mean that there isn't enough data to train useful representations for rare words. However, this last issue is better handled by setting `min_freq`; the default `min_freq=3` means that any word appearing less than three times is replaced with `xxunk`.\n", + "\n", + "fastai can also numericalize your dataset using a vocab that you provide, by passing a list of words as the `vocab` parameter.\n", + "\n", + "Once we've created our `Numericalize` object, we can use it as if it were a function:" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "TensorText([ 2, 8, 0, 8, 1442, 234, 8, 0, 8, 0, 199,\n", + " 64, 731, 29, 0, 122, 8, 253, 8, 0])" + ] + }, + "execution_count": 21, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nums = num(toks)[:20]; nums" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This time, our tokens have been converted to a tensor of integers that our model can receive. We can check that they map back to the original text:" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'xxbos xxmaj xxunk xxmaj rickman & xxmaj xxunk xxmaj xxunk give good performances with xxunk / xxmaj new xxmaj xxunk'" + ] + }, + "execution_count": 24, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "' '.join(num.vocab[o] for o in nums)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that we have numbers, we need to put them in batches for our model." 
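, + "\n", + "Before we do, one quick experiment: you can see the effect of `min_freq` and `max_vocab` by building a second vocab with stricter settings (a small sketch reusing the objects defined above; the exact sizes will depend on your files):\n", + "\n", + "```python\n", + "num_small = Numericalize(min_freq=10, max_vocab=1000)\n", + "num_small.setup(toks200)\n", + "len(num_small.vocab), len(num.vocab)  # fewer entries survive, so more tokens map to xxunk\n", + "```"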
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Putting Our Texts into Batches for a Language Model" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When dealing with images, we needed to resize them all to the same height and width before grouping them together in a mini-batch so they could stack together efficiently in a single tensor. Here it's going to be a little different, because one cannot simply resize text to a desired length. Also, we want our language model to read text in order, so that it can efficiently predict what the next word is. This means that each new batch should begin precisely where the previous one left off.\n", + "\n", + "Suppose we have the following text:\n", + "\n", + "> : In this chapter, we will go back over the example of classifying movie reviews we studied in chapter 1 and dig deeper under the surface. First we will look at the processing steps necessary to convert text into numbers and how to customize it. By doing this, we'll have another example of the PreProcessor used in the data block API.\\nThen we will study how we build a language model and train it for a while.\n", + "\n", + "The tokenization process will add special tokens and deal with punctuation to return this text:\n", + "\n", + "> : xxbos xxmaj in this chapter , we will go back over the example of classifying movie reviews we studied in chapter 1 and dig deeper under the surface . xxmaj first we will look at the processing steps necessary to convert text into numbers and how to customize it . xxmaj by doing this , we 'll have another example of the preprocessor used in the data block xxup api . \\n xxmaj then we will study how we build a language model and train it for a while .\n", + "\n", + "We now have 90 tokens, separated by spaces. Let's say we want a batch size of 6. We need to break this text into 6 contiguous parts of length 15:" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": { + "hide_input": false + }, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
01234567891011121314
0xxbosxxmajinthischapter,wewillgobackovertheexampleofclassifying
1moviereviewswestudiedinchapter1anddigdeeperunderthesurface.xxmaj
2firstwewilllookattheprocessingstepsnecessarytoconverttextintonumbersand
3howtocustomizeit.xxmajbydoingthis,we'llhaveanotherexample
4ofthepreprocessorusedinthedatablockxxupapi.\\nxxmajthenwe
5willstudyhowwebuildalanguagemodelandtrainitforawhile.
\n", + "
" + ], + "text/plain": [ + " 0 1 2 3 4 5 6 7 \\\n", + "0 xxbos xxmaj in this chapter , we will \n", + "1 movie reviews we studied in chapter 1 and \n", + "2 first we will look at the processing steps \n", + "3 how to customize it . xxmaj by doing \n", + "4 of the preprocessor used in the data block \n", + "5 will study how we build a language model \n", + "\n", + " 8 9 10 11 12 13 14 \n", + "0 go back over the example of classifying \n", + "1 dig deeper under the surface . xxmaj \n", + "2 necessary to convert text into numbers and \n", + "3 this , we 'll have another example \n", + "4 xxup api . \\n xxmaj then we \n", + "5 and train it for a while . " + ] + }, + "execution_count": 29, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "#hide_input\n", + "stream = \"In this chapter, we will go back over the example of classifying movie reviews we studied in chapter 1 and dig deeper under the surface. First we will look at the processing steps necessary to convert text into numbers and how to customize it. By doing this, we'll have another example of the PreProcessor used in the data block API.\\nThen we will study how we build a language model and train it for a while.\"\n", + "tokens = tkn(stream)\n", + "bs,seq_len = 6,15\n", + "d_tokens = np.array([tokens[i*seq_len:(i+1)*seq_len] for i in range(bs)])\n", + "df = pd.DataFrame(d_tokens)\n", + "df\n", + "#display(HTML(df.to_html(index=False,header=None)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a perfect world, we could then give this one batch to our model. But that approach doesn't scale, because outside of this toy example it's unlikely that a single batch containing all the texts would fit in our GPU memory (here we have 90 tokens, but all the IMDb reviews together give several million).\n", + "\n", + "So, we need to divide this array more finely into subarrays of a fixed sequence length. It is important to maintain order within and across these subarrays, because we will use a model that maintains a state so that it remembers what it read previously when predicting what comes next. \n", + "\n", + "Going back to our previous example with 6 batches of length 15, if we chose a sequence length of 5, that would mean we first feed the following array:" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": { + "hide_input": true + }, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
01234
0xxbosxxmajinthischapter
1moviereviewswestudiedin
2firstwewilllookat
3howtocustomizeit.
4ofthepreprocessorusedin
5willstudyhowwebuild
\n", + "
" + ], + "text/plain": [ + " 0 1 2 3 4\n", + "0 xxbos xxmaj in this chapter\n", + "1 movie reviews we studied in\n", + "2 first we will look at\n", + "3 how to customize it .\n", + "4 of the preprocessor used in\n", + "5 will study how we build" + ] + }, + "execution_count": 30, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "#hide_input\n", + "bs,seq_len = 6,5\n", + "d_tokens = np.array([tokens[i*15:i*15+seq_len] for i in range(bs)])\n", + "df = pd.DataFrame(d_tokens)\n", + "df\n", + "#display(HTML(df.to_html(index=False,header=None)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Then this one:" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": { + "hide_input": true + }, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
01234
0,wewillgoback
1chapter1anddigdeeper
2theprocessingstepsnecessaryto
3xxmajbydoingthis,
4thedatablockxxupapi
5alanguagemodelandtrain
\n", + "
" + ], + "text/plain": [ + " 0 1 2 3 4\n", + "0 , we will go back\n", + "1 chapter 1 and dig deeper\n", + "2 the processing steps necessary to\n", + "3 xxmaj by doing this ,\n", + "4 the data block xxup api\n", + "5 a language model and train" + ] + }, + "execution_count": 31, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "#hide_input\n", + "bs,seq_len = 6,5\n", + "d_tokens = np.array([tokens[i*15+seq_len:i*15+2*seq_len] for i in range(bs)])\n", + "df = pd.DataFrame(d_tokens)\n", + "#display(HTML(df.to_html(index=False,header=None)))\n", + "df" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And finally:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hide_input": true + }, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
overtheexampleofclassifying
underthesurface.xxmaj
converttextintonumbersand
we'llhaveanotherexample
.\\nxxmajthenwe
itforawhile.
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "#hide_input\n", + "bs,seq_len = 6,5\n", + "d_tokens = np.array([tokens[i*15+10:i*15+15] for i in range(bs)])\n", + "df = pd.DataFrame(d_tokens)\n", + "display(HTML(df.to_html(index=False,header=None)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Going back to our movie reviews dataset, the first step is to transform the individual texts into a stream by concatenating them together. As with images, it's best to randomize the order of the inputs, so at the beginning of each epoch we will shuffle the entries to make a new stream (we shuffle the order of the documents, not the order of the words inside them, or the texts would not make sense anymore!).\n", + "\n", + "We then cut this stream into a certain number of batches (which is our *batch size*). For instance, if the stream has 50,000 tokens and we set a batch size of 10, this will give us 10 mini-streams of 5,000 tokens. What is important is that we preserve the order of the tokens (so from 1 to 5,000 for the first mini-stream, then from 5,001 to 10,000...), because we want the model to read continuous rows of text (as in the preceding example). An `xxbos` token is added at the start of each during preprocessing, so that the model knows when it reads the stream when a new entry is beginning.\n", + "\n", + "So to recap, at every epoch we shuffle our collection of documents and concatenate them into a stream of tokens. We then cut that stream into a batch of fixed-size consecutive mini-streams. Our model will then read the mini-streams in order, and thanks to an inner state, it will produce the same activation whatever sequence length we picked.\n", + "\n", + "This is all done behind the scenes by the fastai library when we create an `LMDataLoader`. 
We do this by first applying our `Numericalize` object to the tokenized texts:" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(#200) [TensorText([ 2, 8, 0, 8, 1442, 234, 8, 0, 8, 0, 199,\n", + " 64, 731, 29, 0, 122, 8, 253, 8, 0, 943, 19,\n", + " 20, 944, 294, 10, 8, 17, 25, 338, 408, 28, 102,\n", + " 0, 12, 8, 1442, 25, 160, 29, 8, 0, 8, 0,\n", + " 10, 8, 163, 320, 164, 0, 14, 1443, 295, 77, 254,\n", + " 61, 9, 27, 11, 17, 296, 10, 8, 9, 110, 28,\n", + " 9, 27, 650, 1134, 11, 31, 42, 321, 9, 732, 19,\n", + " 152, 9, 32, 22, 17, 23, 36, 1135, 123, 33, 101,\n", + " 33, 17, 81, 38, 87, 10, 8, 9, 180, 22, 17,\n", + " 18, 0, 49, 13, 264, 1444, 1445, 12, 835, 1444, 0,\n", + " 76, 0, 188, 10, 8, 9, 27, 18, 338, 13, 0,\n", + " 54, 28, 132, 79, 90, 456, 731, 49, 8, 1442, 11,\n", + " 8, 0, 11, 12, 8, 0, 10]),TensorText([ 2, 16, 38, 140, 20, 27, 12, 16, 73, 36, 255,\n", + " 28, 20, 27, 1136, 10, 16, 68, 36, 142, 60, 156,\n", + " 14, 8, 1137, 93, 16, 48, 36, 46, 20, 651, 12,\n", + " 109, 1446, 0, 10, 16, 48, 36, 46, 14, 0, 652,\n", + " 1136, 93, 16, 48, 36, 457, 102, 733, 10, 8, 133,\n", + " 68, 16, 161, 14, 8, 0, 69, 16, 264, 161, 14,\n", + " 8, 0, 58, 9, 8, 583, 8, 1447, 55, 8, 0,\n", + " 12, 9, 8, 583, 8, 1447, 43, 9, 1448, 16, 0,\n", + " 10, 8, 409, 9, 8, 1449, 8, 1450, 48, 36, 457,\n", + " 9, 8, 652, 733, 56, 46, 88, 10, 8, 1451, 16,\n", + " 131, 36, 584, 9, 8, 0, 945, 77, 254, 61, 10,\n", + " 16, 205, 33, 101, 836, 14, 9, 8, 583, 8, 1447,\n", + " 12, 1138, 585, 533, 12, 339, 297, 1452, 54, 59, 18,\n", + " 13, 297, 0, 10, 8, 22, 18, 47, 10]),TensorText([ 2, 8, 19, 8, 0, 8, 0, 11, 9, 1453, 12,\n", + " 0, 8, 0, 8, 0, 41, 1139, 8, 0, 39, 0,\n", + " 13, 1140, 1454, 15, 340, 26, 0, 0, 56, 14, 94,\n", + " 244, 322, 14, 0, 12, 0, 19, 1455, 0, 10, 8,\n", + " 53, 0, 12, 82, 0, 18, 380, 12, 489, 1141, 29,\n", + " 0, 533, 10, 24, 21, 1456, 21, 18, 62, 1444, 11,\n", + " 946, 12, 534, 256, 27, 60, 62, 0, 0, 0, 22,\n", + " 586, 13, 734, 15, 653, 58, 1457, 56, 14, 94, 244,\n", + " 322, 14, 587, 0, 12, 0, 11, 117, 735, 0, 458,\n", + " 14, 53, 0, 1142, 10, 8, 19, 0, 29, 654, 11,\n", + " 20, 141, 68, 37, 9, 0, 0, 26, 0, 15, 9,\n", + " 0, 8, 947, 8, 0, 11, 12, 535, 34, 20, 103,\n", + " 11, 16, 131, 410, 120, 235, 115, 15, 53, 0, 10,\n", + " 8, 0, 11, 54, 9, 490, 1143, 20, 837, 15, 1453,\n", + " 0, 11, 121, 123, 152, 9, 0, 12, 0, 8, 0,\n", + " 8, 0, 25, 21, 0, 21, 58, 8, 0, 8, 0,\n", + " 25, 21, 0, 0, 8, 0, 8, 0, 21, 22, 18,\n", + " 535, 34, 9, 189, 15, 9, 190, 491, 10, 8, 71,\n", + " 736, 18, 838, 10, 24, 8, 459, 41, 0, 1458, 21,\n", + " 1456, 492, 8, 0, 8, 0, 21, 41, 21, 1456, 492,\n", + " 8, 257, 8, 0, 21, 39]),TensorText([ 2, 8, 20, 32, 18, 0, 411, 29, 21, 0, 0,\n", + " 0, 8, 0, 66, 0, 0, 8, 0, 21, 12, 276,\n", + " 154, 839, 13, 277, 14, 37, 0, 19, 9, 129, 15,\n", + " 102, 7, 493, 0, 10, 8, 125, 11, 276, 154, 43,\n", + " 78, 341, 323, 0, 324, 17, 236, 14, 91, 494, 61,\n", + " 25, 0, 10, 8, 412, 11, 1144, 32, 70, 1145, 12,\n", + " 35, 43, 948, 14, 104, 13, 1146, 32, 323, 212, 21,\n", + " 0, 8, 0, 0, 8, 0, 21, 92, 298, 14, 38,\n", + " 13, 121, 0, 10, 8, 840, 11, 16, 413, 0, 1145,\n", + " 31, 28, 9, 340, 26, 0, 63, 59, 20, 18, 30,\n", + " 13, 495, 10, 8, 163, 7, 493, 841, 11, 219, 11,\n", + " 43, 36, 9, 1459, 15, 9, 171, 32, 842, 323, 56,\n", + " 9, 0, 0, 117, 0, 838, 0, 342, 10, 24, 8,\n", + " 33, 28, 9, 32, 11, 17, 25, 60, 9, 0, 15,\n", + " 8, 496, 8, 588, 10, 8, 20, 18, 13, 7, 589,\n", + " 495, 11, 33, 8, 536, 8, 1460, 237, 60, 
33, 98,\n", + " 46, 8, 588, 33, 8, 1461, 8, 0, 10, 8, 19,\n", + " 77, 129, 1147, 92, 40, 146, 46, 8, 588, 10, 8,\n", + " 40, 25, 949, 9, 0, 11, 70, 9, 381, 950, 0,\n", + " 12, 655, 12, 18, 56, 36, 67, 460, 19, 115, 129,\n", + " 41, 0, 88, 34, 20, 11, 16, 200, 62, 8, 308,\n", + " 8, 353, 0, 12, 89, 43, 497, 14, 126, 163, 537,\n", + " 15, 188, 50, 39, 10, 8, 9, 153, 189, 8, 588,\n", + " 23, 13, 8, 0, 8, 590, 435, 12, 414, 46, 9,\n", + " 265, 34, 9, 8, 843, 8, 951, 0, 0, 0, 10,\n", + " 8, 844, 11, 845, 65, 105, 258, 14, 0, 9, 538,\n", + " 28, 8, 1460, 19, 9, 952, 18, 56, 1462, 10, 8,\n", + " 120, 178, 58, 278, 70, 8, 536, 8, 1460, 1463, 354,\n", + " 45, 1464, 50, 50, 8, 40, 23, 13, 461, 258, 66,\n", + " 31, 410, 36, 13, 953, 435, 58, 1464, 496, 10, 24,\n", + " 8, 19, 0, 14, 9, 213, 539, 11, 8, 496, 8,\n", + " 588, 25, 737, 23, 19, 77, 129, 46, 20, 32, 10,\n", + " 8, 17, 25, 355, 22, 9, 32, 842, 43, 214, 0,\n", + " 19, 34, 9, 846, 0, 60, 0, 0, 9, 737, 15,\n", + " 7, 0, 11, 36, 8, 588, 10, 8, 588, 23, 245,\n", + " 19, 8, 0, 11, 7, 1465, 41, 36, 8, 0, 39,\n", + " 49, 13, 0, 0, 29, 1466, 1467, 841, 323, 36, 13,\n", + " 656, 15, 382, 29, 0, 10, 8, 219, 11, 954, 14,\n", + " 108, 0, 11, 61, 214, 279, 8, 588, 41, 118, 127,\n", + " 847, 342, 39, 75, 0, 0, 323, 51, 1468, 12, 1468,\n", + " 12, 1468, 14, 0, 13, 955, 41, 14, 77, 0, 39,\n", + " 12, 120, 956, 0, 102, 1469, 58, 0, 19, 9, 957,\n", + " 10, 8, 19, 105, 657, 11, 46, 8, 540, 8, 0,\n", + " 41, 51, 23, 958, 279, 49, 0, 0, 69, 1470, 29,\n", + " 0, 39, 40, 1471, 458, 14, 0, 10, 8, 19, 9,\n", + " 27, 44, 246, 132, 220, 1147, 66, 105, 90, 1472, 8,\n", + " 496, 8, 588, 23, 245, 10, 24, 8, 93, 9, 32,\n", + " 0, 221, 77, 0, 14, 153, 353, 11, 17, 25, 46,\n", + " 13, 353, 1473, 33, 0, 57, 238, 57, 181, 738, 58,\n", + " 238, 29, 13, 1466, 658, 0, 10, 8, 133, 36, 124,\n", + " 0, 1474, 11, 659, 0, 12, 9, 8, 0, 0, 182,\n", + " 35, 175, 52, 17, 55, 50, 55, 50, 8, 959, 57,\n", + " 65, 259, 106, 12, 383, 1148, 11, 93, 9, 195, 18,\n", + " 848, 0, 356, 11, 16, 48, 30, 541, 266, 104, 17,\n", + " 10, 8, 17, 25, 56, 13, 462, 12, 848, 591, 10]),TensorText([ 2, 16, 82, 739, 34, 76, 78, 64, 154, 12, 34,\n", + " 848, 0, 10, 8, 71, 0, 18, 14, 309, 99, 51,\n", + " 176, 14, 91, 150, 154, 14, 592, 102, 80, 26, 12,\n", + " 322, 26, 0, 10, 24, 16, 124, 176, 14, 542, 99,\n", + " 1149, 102, 80, 34, 543, 11, 12, 176, 14, 0, 9,\n", + " 180, 22, 9, 157, 122, 1150, 15, 163, 543, 154, 215,\n", + " 30, 94, 357, 29, 17, 28, 78, 206, 10, 8, 89,\n", + " 131, 280, 63, 51, 35, 43, 12, 131, 736, 29, 63,\n", + " 1475, 26, 12, 0, 10, 24, 8, 20, 32, 660, 740,\n", + " 95, 9, 543, 0, 10, 24, 8, 9, 157, 12, 491,\n", + " 18, 8, 415, 8, 1476, 10, 8, 17, 25, 413, 13,\n", + " 74, 960, 69, 9, 491, 18, 124, 9, 157, 10, 8,\n", + " 281, 40, 498, 127, 741, 0, 10, 8, 40, 143, 30,\n", + " 94, 115, 10, 8, 45, 358, 9, 299, 26, 8, 415,\n", + " 7, 1476, 10, 8, 12, 54, 35, 91, 191, 310, 49,\n", + " 111, 11, 593, 17, 10, 24, 16, 594, 30, 147, 191,\n", + " 60, 9, 110, 26, 595, 38, 463, 10, 16, 200, 13,\n", + " 130, 1477, 49, 112, 98, 9, 157, 1143, 14, 1478, 19,\n", + " 14, 9, 267, 268, 25, 359, 69, 97, 18, 1151, 12,\n", + " 961, 10, 8, 163, 206, 0, 742, 43, 13, 130, 0,\n", + " 12, 282, 147, 138, 60, 9, 661, 15, 436, 15, 8,\n", + " 849, 10, 8, 1476, 10, 8, 281, 40, 143, 94, 0,\n", + " 309, 10, 24, 8, 244, 463, 10, 8, 17, 25, 356,\n", + " 26, 48, 30, 283, 155, 80, 34, 17, 10]),TensorText([ 2, 8, 69, 35, 146, 52, 9, 544, 12, 235, 437,\n", + " 60, 17, 62, 962, 343, 438, 15, 27, 499, 14, 436,\n", + " 90, 61, 35, 94, 196, 
10, 8, 128, 201, 281, 16,\n", + " 235, 9, 1479, 28, 9, 105, 27, 500, 21, 0, 21,\n", + " 239, 33, 59, 75, 127, 113, 15, 20, 459, 1480, 60,\n", + " 9, 190, 80, 29, 276, 0, 963, 22, 83, 1152, 964,\n", + " 19, 850, 10, 8, 219, 11, 743, 437, 60, 22, 27,\n", + " 196, 16, 126, 16, 192, 20, 42, 12, 36, 22, 42,\n", + " 12, 22, 27, 18, 67, 360, 61, 42, 68, 384, 13,\n", + " 27, 29, 22, 459, 68, 37, 60, 10, 16, 131, 37,\n", + " 545, 11, 16, 464, 79, 15, 13, 0, 438, 744, 12,\n", + " 35, 94, 22, 19, 20, 27, 14, 65, 0, 10, 8,\n", + " 219, 11, 59, 18, 79, 437, 965, 9, 0, 12, 745,\n", + " 1481, 33, 9, 966, 160, 15, 9, 99, 117, 501, 357,\n", + " 49, 9, 0, 52, 9, 416, 15, 9, 32, 131, 0,\n", + " 14, 10, 8, 9, 27, 124, 70, 9, 167, 837, 15,\n", + " 967, 156, 851, 44, 48, 36, 0, 14, 38, 65, 846,\n", + " 1482, 11, 19, 20, 385, 17, 18, 19, 180, 13, 0,\n", + " 10, 8, 9, 105, 27, 16, 48, 36, 142, 76, 70,\n", + " 22, 1152, 0, 417, 22, 0, 386, 19, 9, 27, 12,\n", + " 16, 91, 9, 746, 28, 20, 42, 18, 968, 100, 11,\n", + " 151, 17, 23, 56, 36, 9, 27, 16, 23, 747, 10]),TensorText([ 2, 8, 0, 969, 55, 50, 8, 16, 137, 546, 31,\n", + " 67, 28, 9, 1153, 25, 22, 25, 56, 129, 100, 547,\n", + " 14, 37, 0, 465, 66, 8, 35, 84, 748, 9, 171,\n", + " 9, 1483, 0, 749, 93, 17, 23, 0, 167, 113, 12,\n", + " 7, 247, 650, 11, 31, 22, 25, 87, 216, 311, 11,\n", + " 45, 9, 418, 502, 14, 37, 13, 130, 360, 548, 11,\n", + " 36, 67, 79, 0, 50, 8, 844, 11, 9, 125, 27,\n", + " 83, 9, 852, 15, 8, 419, 8, 0, 26, 13, 148,\n", + " 51, 81, 67, 114, 0, 0, 19, 0, 0, 298, 0,\n", + " 50, 26, 22, 20, 42, 0, 0, 11, 45, 59, 23,\n", + " 77, 72, 0, 72, 19, 191, 22, 596, 11, 17, 56,\n", + " 344, 662, 10, 24, 8, 0, 16, 82, 192, 20, 311,\n", + " 69, 16, 23, 853, 11, 31, 49, 128, 463, 117, 13,\n", + " 549, 284, 15, 9, 171, 16, 358, 117, 0, 10, 8,\n", + " 14, 88, 11, 59, 18, 77, 418, 14, 8, 0, 8,\n", + " 466, 11, 56, 13, 0, 0, 22, 92, 30, 0, 115,\n", + " 0, 1147, 10]),TensorText([ 2, 8, 1484, 0, 41, 29, 0, 21, 1485, 21, 300,\n", + " 0, 0, 0, 11, 21, 0, 21, 1486, 13, 277, 15,\n", + " 0, 14, 0, 172, 12, 131, 207, 387, 123, 108, 970,\n", + " 173, 31, 9, 439, 0, 0, 345, 12, 8, 1154, 8,\n", + " 0, 11, 51, 0, 53, 503, 19, 221, 202, 160, 11,\n", + " 124, 750, 62, 854, 953, 388, 10, 13, 0, 11, 14,\n", + " 37, 208, 11, 31, 9, 79, 0, 21, 663, 21, 11,\n", + " 116, 127, 240, 342, 11, 18, 13, 1487, 0, 664, 10,\n", + " 16, 0, 35, 104, 22, 239, 10, 41, 260, 222, 122,\n", + " 168, 39]),TensorText([ 2, 8, 69, 8, 325, 18, 301, 14, 0, 61, 62,\n", + " 21, 855, 467, 21, 18, 46, 11, 44, 1155, 45, 1488,\n", + " 11, 0, 17, 236, 971, 657, 19, 9, 1489, 15, 9,\n", + " 0, 21, 856, 21, 10, 24, 8, 220, 11, 115, 856,\n", + " 0, 209, 301, 14, 0, 19, 53, 0, 52, 202, 972,\n", + " 10, 8, 208, 11, 44, 1490, 63, 19, 0, 12, 1156,\n", + " 857, 492, 17, 25, 36, 46, 44, 43, 41, 0, 0,\n", + " 39, 0, 1491, 51, 120, 38, 13, 0, 10, 24, 8,\n", + " 844, 11, 54, 35, 43, 13, 856, 35, 126, 47, 60,\n", + " 8, 0, 12, 8, 353, 12, 8, 0, 12, 15, 302,\n", + " 35, 175, 973, 85, 14, 1492, 29, 1493, 974, 12, 13,\n", + " 0, 0, 15, 136, 10, 8, 0, 163, 188, 11, 46,\n", + " 11, 47, 161, 665, 1494, 437, 11, 0, 55, 24, 8,\n", + " 844, 11, 35, 0, 209, 29, 13, 0, 47, 9, 80,\n", + " 10, 8, 35, 43, 56, 13, 0, 468, 15, 13, 35,\n", + " 26, 126, 26, 61, 11, 22, 25, 112, 17, 18, 11,\n", + " 0, 47, 10, 24, 8, 12, 15, 302, 35, 1495, 11,\n", + " 46, 238, 51, 120, 0, 178, 11, 31, 35, 1495, 0,\n", + " 17, 25, 46, 469, 1494, 437, 11, 0, 10, 8, 12,\n", + " 35, 175, 343, 10, 8, 22, 18, 1496, 10, 24, 8,\n", + " 12, 15, 302, 35, 84, 751, 492, 35, 
175, 13, 0,\n", + " 10, 13, 0, 51, 858, 80, 14, 1497, 185, 10, 5,\n", + " 139, 597, 666, 1157, 40, 92, 30, 0, 0, 10, 8,\n", + " 12, 1157, 40, 92, 30, 1495, 58, 0, 0, 93, 40,\n", + " 0, 13, 0, 1498, 0, 10, 24, 8, 12, 35, 1492,\n", + " 13, 1158, 26, 138, 859, 975, 492, 8, 0, 8, 0,\n", + " 10, 8, 101, 11, 16, 594, 30, 67, 739, 8, 1139,\n", + " 8, 0, 10, 8, 550, 8, 976, 70, 752, 9, 1499,\n", + " 34, 9, 326, 463, 10, 24, 8, 20, 27, 18, 13,\n", + " 0, 15, 13, 8, 0, 234, 8, 0, 438, 975, 41,\n", + " 19, 105, 657, 0, 1159, 15, 136, 1458, 21, 667, 11,\n", + " 22, 25, 61, 16, 68, 37, 46, 54, 16, 23, 13,\n", + " 856, 10, 21, 8, 31, 213, 99, 12, 213, 0, 19,\n", + " 20, 385, 84, 36, 384, 9, 1160, 15, 0, 10]),TensorText([ 2, 8, 0, 504, 0, 8, 0, 8, 0, 18, 0,\n", + " 63, 15, 327, 29, 20, 0, 860, 285, 551, 12, 0,\n", + " 103, 22, 0, 57, 13, 1161, 505, 15, 1162, 0, 10,\n", + " 8, 0, 296, 14, 38, 0, 0, 80, 12, 1500, 28,\n", + " 9, 125, 203, 33, 1501, 8, 1154, 8, 0, 12, 8,\n", + " 0, 8, 0, 261, 130, 598, 95, 102, 753, 11, 0,\n", + " 303, 0, 12, 257, 0, 10, 24, 8, 0, 0, 0,\n", + " 8, 0, 21, 0, 21, 8, 0, 586, 33, 62, 861,\n", + " 148, 34, 13, 0, 361, 22, 328, 74, 12, 321, 53,\n", + " 420, 279, 19, 9, 957, 10, 8, 40, 0, 57, 421,\n", + " 12, 735, 599, 63, 14, 470, 9, 420, 15, 9, 944,\n", + " 51, 279, 53, 10, 8, 0, 15, 551, 43, 0, 14,\n", + " 506, 111, 57, 241, 14, 9, 600, 15, 9, 977, 51,\n", + " 70, 87, 1502, 14, 181, 754, 31, 68, 30, 35, 126,\n", + " 19, 9, 154, 422, 668, 89, 38, 8, 0, 0, 1475,\n", + " 346, 9, 1163, 41, 51, 248, 1164, 22, 755, 13, 0,\n", + " 183, 14, 9, 471, 23, 13, 304, 507, 39, 182, 13,\n", + " 756, 15, 551, 0, 12, 0, 118, 61, 227, 14, 203,\n", + " 10, 8, 304, 0, 55, 8, 35, 143, 91, 17, 10,\n", + " 8, 17, 25, 47, 15, 22, 12, 79, 10, 24, 8,\n", + " 0, 8, 0, 25, 440, 197, 92, 13, 259, 361, 15,\n", + " 0, 0, 14, 9, 0, 31, 9, 601, 18, 0, 12,\n", + " 0, 12, 17, 0, 9, 32, 15, 109, 1503, 12, 1165,\n", + " 10, 8, 33, 8, 0, 11, 8, 0, 8, 0, 18,\n", + " 9, 169, 158, 19, 9, 32, 0, 14, 0, 150, 1504,\n", + " 33, 40, 0, 57, 0, 1505, 14, 0, 10, 8, 163,\n", + " 0, 959, 8, 669, 0, 1166, 19, 0, 12, 1167, 324,\n", + " 109, 552, 862, 10, 8, 161, 8, 504, 167, 8, 0,\n", + " 10])...]" + ] + }, + "execution_count": 33, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nums200 = toks200.map(num)\n", + "nums200" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "and then passing that to `LMDataLoader`:" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [], + "source": [ + "dl = LMDataLoader(nums200)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's confirm that this gives the expected results, by grabbing the first batch:" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(torch.Size([64, 72]), torch.Size([64, 72]))" + ] + }, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "x,y = first(dl)\n", + "x.shape,y.shape" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "and then looking at the first row of the independent variable, which should be the start of the first text:" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'xxbos xxmaj xxunk xxmaj rickman & xxmaj xxunk xxmaj xxunk give good performances with xxunk / xxmaj new xxmaj xxunk'" + ] + }, + 
"execution_count": 36, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "' '.join(num.vocab[o] for o in x[0][:20])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The dependent variable is the same thing offset by one token:" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'xxmaj xxunk xxmaj rickman & xxmaj xxunk xxmaj xxunk give good performances with xxunk / xxmaj new xxmaj xxunk accents'" + ] + }, + "execution_count": 37, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "' '.join(num.vocab[o] for o in y[0][:20])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This concludes all the preprocessing steps we need to apply to our data. We are now ready to train our text classifier." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Training a Text Classifier" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As we saw at the beginning of this chapter, there are two steps to training a state-of-the-art text classifier using transfer learning: first we need to fine-tune our language model pretrained on Wikipedia to the corpus of IMDb reviews, and then we can use that model to train a classifier.\n", + "\n", + "As usual, let's start with assembling our data." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Language Model Using DataBlock" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "fastai handles tokenization and numericalization automatically when `TextBlock` is passed to `DataBlock`. All of the arguments that can be passed to `Tokenize` and `Numericalize` can also be passed to `TextBlock`. In the next chapter we'll discuss the easiest ways to run each of these steps separately, to ease debugging—but you can always just debug by running them manually on a subset of your data as shown in the previous sections. And don't forget about `DataBlock`'s handy `summary` method, which is very useful for debugging data issues.\n", + "\n", + "Here's how we use `TextBlock` to create a language model, using fastai's defaults:" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "get_imdb = partial(get_text_files, folders=['train', 'test', 'unsup'])\n", + "\n", + "dls_lm = DataBlock(\n", + " blocks=TextBlock.from_folder(path, is_lm=True),\n", + " get_items=get_imdb, splitter=RandomSplitter(0.1)\n", + ").dataloaders(path, path=path, bs=128, seq_len=80)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One thing that's different to previous types we've used in `DataBlock` is that we're not just using the class directly (i.e., `TextBlock(...)`, but instead are calling a *class method*. A class method is a Python method that, as the name suggests, belongs to a *class* rather than an *object*. (Be sure to search online for more information about class methods if you're not familiar with them, since they're commonly used in many Python libraries and applications; we've used them a few times previously in the book, but haven't called attention to them.) 
The reason that `TextBlock` is special is that setting up the numericalizer's vocab can take a long time (we have to read and tokenize every document to get the vocab). To be as efficient as possible it performs a few optimizations: \n", + "\n", + "- It saves the tokenized documents in a temporary folder, so it doesn't have to tokenize them more than once\n", + "- It runs multiple tokenization processes in parallel, to take advantage of your computer's CPUs\n", + "\n", + "We need to tell `TextBlock` how to access the texts, so that it can do this initial preprocessing—that's what `from_folder` does.\n", + "\n", + "`show_batch` then works in the usual way:" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
texttext_
0xxbos do n't buy this film for comedy value like i did , i did nt find it one bit funny , but so f xxrep 6 * miserable and lame it 's unbelievable . i gave it to a friend for christmas which was pretty funny ( on my side ) i recently heard that he watched it and told me what an xxunk i am ! \\n\\n xxmaj there is nothing more frustrating than watching an over -do n't buy this film for comedy value like i did , i did nt find it one bit funny , but so f xxrep 6 * miserable and lame it 's unbelievable . i gave it to a friend for christmas which was pretty funny ( on my side ) i recently heard that he watched it and told me what an xxunk i am ! \\n\\n xxmaj there is nothing more frustrating than watching an over - lit
1a truly emotional connection with these characters . the xxmaj penguin , played to perfection by xxmaj danny xxmaj devito , is a type of tragic character who was abandoned by his parents at birth and later in life seeks revenge on the world that denied him . very disgusting to watch at times , but he has some of the most classic lines a villain could ever utter in a single film . the other villain is xxmaj catwomantruly emotional connection with these characters . the xxmaj penguin , played to perfection by xxmaj danny xxmaj devito , is a type of tragic character who was abandoned by his parents at birth and later in life seeks revenge on the world that denied him . very disgusting to watch at times , but he has some of the most classic lines a villain could ever utter in a single film . the other villain is xxmaj catwoman ,
2made the brilliant xxup tv series ' the xxmaj book xxmaj group ' . xxmaj the whole film has the feeling that was a rushed affair . xxmaj potential viewers would be best advised to avoid this film , instead saving the money towards a trip to xxmaj edinburgh to visit the festival for real - a far more rewarding experience . xxbos xxmaj this movie is # 1 in the list of worst movies i have ever seen ,the brilliant xxup tv series ' the xxmaj book xxmaj group ' . xxmaj the whole film has the feeling that was a rushed affair . xxmaj potential viewers would be best advised to avoid this film , instead saving the money towards a trip to xxmaj edinburgh to visit the festival for real - a far more rewarding experience . xxbos xxmaj this movie is # 1 in the list of worst movies i have ever seen , with
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "dls_lm.show_batch(max_n=3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that our data is ready, we can fine-tune the pretrained language model." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fine-Tuning the Language Model" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To convert the integer word indices into activations that we can use for our neural network, we will use embeddings, just like we did for collaborative filtering and tabular modeling. Then we'll feed those embeddings into a *recurrent neural network* (RNN), using an architecture called *AWD-LSTM* (we will show you how to write such a model from scratch in <>). As we discussed earlier, the embeddings in the pretrained model are merged with random embeddings added for words that weren't in the pretraining vocabulary. This is handled automatically inside `language_model_learner`:" + ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "\n", + "
\n", + " \n", + " 100.00% [105070592/105067061 00:03<00:00]\n", + "
\n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn = language_model_learner(\n", + " dls_lm, AWD_LSTM, drop_mult=0.3, \n", + " metrics=[accuracy, Perplexity()]).to_fp16()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The loss function used by default is cross-entropy loss, since we essentially have a classification problem (the different categories being the words in our vocab). The *perplexity* metric used here is often used in NLP for language models: it is the exponential of the loss (i.e., `torch.exp(cross_entropy)`). We also include the accuracy metric, to see how many times our model is right when trying to predict the next word, since cross-entropy (as we've seen) is both hard to interpret, and tells us more about the model's confidence than its accuracy.\n", + "\n", + "Let's go back to the process diagram from the beginning of this chapter. The first arrow has been completed for us and made available as a pretrained model in fastai, and we've just built the `DataLoaders` and `Learner` for the second stage. Now we're ready to fine-tune our language model!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\"Diagram" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It takes quite a while to train each epoch, so we'll be saving the intermediate model results during the training process. Since `fine_tune` doesn't do that for us, we'll use `fit_one_cycle`. Just like `vision_learner`, `language_model_learner` automatically calls `freeze` when using a pretrained model (which is the default), so this will only train the embeddings (the only part of the model that contains randomly initialized weights—i.e., embeddings for words that are in our IMDb vocab, but aren't in the pretrained model vocab):" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/etienne/mambaforge/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling\n", + " warnings.warn('User provided device_type of \\'cuda\\', but CUDA is not available. Disabling')\n", + "/Users/etienne/mambaforge/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.\n", + " warnings.warn(\"torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.\")\n" + ] + }, + { + "data": { + "text/html": [ + "\n", + "\n" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "\n", + "
\n", + " \n", + " 0.00% [0/1 00:00<?]\n", + "
\n", + " \n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epochtrain_lossvalid_lossaccuracyperplexitytime

\n", + "\n", + "

\n", + " \n", + " 0.00% [0/2632 00:00<?]\n", + "
\n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "ename": "KeyboardInterrupt", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)", + "Input \u001b[0;32mIn [49]\u001b[0m, in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mlearn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfit_one_cycle\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m2e-2\u001b[39;49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/callback/schedule.py:119\u001b[0m, in \u001b[0;36mfit_one_cycle\u001b[0;34m(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt, start_epoch)\u001b[0m\n\u001b[1;32m 116\u001b[0m lr_max \u001b[38;5;241m=\u001b[39m np\u001b[38;5;241m.\u001b[39marray([h[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlr\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;28;01mfor\u001b[39;00m h \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mopt\u001b[38;5;241m.\u001b[39mhypers])\n\u001b[1;32m 117\u001b[0m scheds \u001b[38;5;241m=\u001b[39m {\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlr\u001b[39m\u001b[38;5;124m'\u001b[39m: combined_cos(pct_start, lr_max\u001b[38;5;241m/\u001b[39mdiv, lr_max, lr_max\u001b[38;5;241m/\u001b[39mdiv_final),\n\u001b[1;32m 118\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmom\u001b[39m\u001b[38;5;124m'\u001b[39m: combined_cos(pct_start, \u001b[38;5;241m*\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mmoms \u001b[38;5;28;01mif\u001b[39;00m moms \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01melse\u001b[39;00m moms))}\n\u001b[0;32m--> 119\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfit\u001b[49m\u001b[43m(\u001b[49m\u001b[43mn_epoch\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcbs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mParamScheduler\u001b[49m\u001b[43m(\u001b[49m\u001b[43mscheds\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[43mL\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcbs\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreset_opt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreset_opt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mwd\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mwd\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstart_epoch\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstart_epoch\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:256\u001b[0m, in \u001b[0;36mLearner.fit\u001b[0;34m(self, n_epoch, lr, wd, cbs, reset_opt, start_epoch)\u001b[0m\n\u001b[1;32m 254\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mopt\u001b[38;5;241m.\u001b[39mset_hypers(lr\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlr \u001b[38;5;28;01mif\u001b[39;00m lr \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01melse\u001b[39;00m lr)\n\u001b[1;32m 255\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mn_epoch \u001b[38;5;241m=\u001b[39m 
n_epoch\n\u001b[0;32m--> 256\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_with_events\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_do_fit\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mfit\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mCancelFitException\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_end_cleanup\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:193\u001b[0m, in \u001b[0;36mLearner._with_events\u001b[0;34m(self, f, event_type, ex, final)\u001b[0m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_with_events\u001b[39m(\u001b[38;5;28mself\u001b[39m, f, event_type, ex, final\u001b[38;5;241m=\u001b[39mnoop):\n\u001b[0;32m--> 193\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mbefore_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); \u001b[43mf\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ex: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_cancel_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 195\u001b[0m \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); final()\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:245\u001b[0m, in \u001b[0;36mLearner._do_fit\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 243\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m epoch \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mn_epoch):\n\u001b[1;32m 244\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mepoch\u001b[38;5;241m=\u001b[39mepoch\n\u001b[0;32m--> 245\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_with_events\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_do_epoch\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mepoch\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mCancelEpochException\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:193\u001b[0m, in \u001b[0;36mLearner._with_events\u001b[0;34m(self, f, event_type, ex, final)\u001b[0m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_with_events\u001b[39m(\u001b[38;5;28mself\u001b[39m, f, event_type, ex, final\u001b[38;5;241m=\u001b[39mnoop):\n\u001b[0;32m--> 193\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m: 
\u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mbefore_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); \u001b[43mf\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ex: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_cancel_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 195\u001b[0m \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); final()\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:239\u001b[0m, in \u001b[0;36mLearner._do_epoch\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 238\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_do_epoch\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[0;32m--> 239\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_do_epoch_train\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 240\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_do_epoch_validate()\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:231\u001b[0m, in \u001b[0;36mLearner._do_epoch_train\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 229\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_do_epoch_train\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 230\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdl \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdls\u001b[38;5;241m.\u001b[39mtrain\n\u001b[0;32m--> 231\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_with_events\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mall_batches\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mtrain\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mCancelTrainException\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:193\u001b[0m, in \u001b[0;36mLearner._with_events\u001b[0;34m(self, f, event_type, ex, final)\u001b[0m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_with_events\u001b[39m(\u001b[38;5;28mself\u001b[39m, f, event_type, ex, final\u001b[38;5;241m=\u001b[39mnoop):\n\u001b[0;32m--> 193\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mbefore_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); \u001b[43mf\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ex: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_cancel_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 195\u001b[0m 
\u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); final()\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:199\u001b[0m, in \u001b[0;36mLearner.all_batches\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 197\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mall_batches\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 198\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mn_iter \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdl)\n\u001b[0;32m--> 199\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m o \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28menumerate\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdl): \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mone_batch\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mo\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:227\u001b[0m, in \u001b[0;36mLearner.one_batch\u001b[0;34m(self, i, b)\u001b[0m\n\u001b[1;32m 225\u001b[0m b \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_set_device(b)\n\u001b[1;32m 226\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_split(b)\n\u001b[0;32m--> 227\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_with_events\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_do_one_batch\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mbatch\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mCancelBatchException\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:193\u001b[0m, in \u001b[0;36mLearner._with_events\u001b[0;34m(self, f, event_type, ex, final)\u001b[0m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_with_events\u001b[39m(\u001b[38;5;28mself\u001b[39m, f, event_type, ex, final\u001b[38;5;241m=\u001b[39mnoop):\n\u001b[0;32m--> 193\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mbefore_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); \u001b[43mf\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ex: \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_cancel_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 195\u001b[0m \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mevent_type\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m); final()\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/learner.py:208\u001b[0m, in \u001b[0;36mLearner._do_one_batch\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 206\u001b[0m 
\u001b[38;5;28mself\u001b[39m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_pred\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 207\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39myb):\n\u001b[0;32m--> 208\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mloss_grad \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mloss_func\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpred\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43myb\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 209\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mloss \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mloss_grad\u001b[38;5;241m.\u001b[39mclone()\n\u001b[1;32m 210\u001b[0m \u001b[38;5;28mself\u001b[39m(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mafter_loss\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/losses.py:54\u001b[0m, in \u001b[0;36mBaseLoss.__call__\u001b[0;34m(self, inp, targ, **kwargs)\u001b[0m\n\u001b[1;32m 52\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m targ\u001b[38;5;241m.\u001b[39mdtype \u001b[38;5;129;01min\u001b[39;00m [torch\u001b[38;5;241m.\u001b[39mint8, torch\u001b[38;5;241m.\u001b[39mint16, torch\u001b[38;5;241m.\u001b[39mint32]: targ \u001b[38;5;241m=\u001b[39m targ\u001b[38;5;241m.\u001b[39mlong()\n\u001b[1;32m 53\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mflatten: inp \u001b[38;5;241m=\u001b[39m inp\u001b[38;5;241m.\u001b[39mview(\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m,inp\u001b[38;5;241m.\u001b[39mshape[\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m]) \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mis_2d \u001b[38;5;28;01melse\u001b[39;00m inp\u001b[38;5;241m.\u001b[39mview(\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m)\n\u001b[0;32m---> 54\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfunc\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;21;43m__call__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43minp\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtarg\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mview\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m-\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mflatten\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mtarg\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/nn/modules/module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[1;32m 1127\u001b[0m \u001b[38;5;66;03m# this function, and just 
call forward.\u001b[39;00m\n\u001b[1;32m 1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[1;32m 1129\u001b[0m \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[0;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mforward_call\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[1;32m 1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/nn/modules/loss.py:1164\u001b[0m, in \u001b[0;36mCrossEntropyLoss.forward\u001b[0;34m(self, input, target)\u001b[0m\n\u001b[1;32m 1163\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Tensor, target: Tensor) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tensor:\n\u001b[0;32m-> 1164\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcross_entropy\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1165\u001b[0m \u001b[43m \u001b[49m\u001b[43mignore_index\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mignore_index\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mreduction\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mreduction\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1166\u001b[0m \u001b[43m \u001b[49m\u001b[43mlabel_smoothing\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlabel_smoothing\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/nn/functional.py:3000\u001b[0m, in \u001b[0;36mcross_entropy\u001b[0;34m(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)\u001b[0m\n\u001b[1;32m 2934\u001b[0m \u001b[38;5;124mr\u001b[39m\u001b[38;5;124;03m\"\"\"This criterion computes the cross entropy loss between input and target.\u001b[39;00m\n\u001b[1;32m 2935\u001b[0m \n\u001b[1;32m 2936\u001b[0m \u001b[38;5;124;03mSee :class:`~torch.nn.CrossEntropyLoss` for details.\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 2997\u001b[0m \u001b[38;5;124;03m >>> loss.backward()\u001b[39;00m\n\u001b[1;32m 2998\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 2999\u001b[0m 
\u001b[38;5;28;01mif\u001b[39;00m has_torch_function_variadic(\u001b[38;5;28minput\u001b[39m, target, weight):\n\u001b[0;32m-> 3000\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mhandle_torch_function\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 3001\u001b[0m \u001b[43m \u001b[49m\u001b[43mcross_entropy\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3002\u001b[0m \u001b[43m \u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3003\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3004\u001b[0m \u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3005\u001b[0m \u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3006\u001b[0m \u001b[43m \u001b[49m\u001b[43msize_average\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msize_average\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3007\u001b[0m \u001b[43m \u001b[49m\u001b[43mignore_index\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mignore_index\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3008\u001b[0m \u001b[43m \u001b[49m\u001b[43mreduce\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreduce\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3009\u001b[0m \u001b[43m \u001b[49m\u001b[43mreduction\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mreduction\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3010\u001b[0m \u001b[43m \u001b[49m\u001b[43mlabel_smoothing\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mlabel_smoothing\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3011\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 3012\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m size_average \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mor\u001b[39;00m reduce \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 3013\u001b[0m reduction \u001b[38;5;241m=\u001b[39m _Reduction\u001b[38;5;241m.\u001b[39mlegacy_get_string(size_average, reduce)\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/overrides.py:1498\u001b[0m, in \u001b[0;36mhandle_torch_function\u001b[0;34m(public_api, relevant_args, *args, **kwargs)\u001b[0m\n\u001b[1;32m 1492\u001b[0m warnings\u001b[38;5;241m.\u001b[39mwarn(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mDefining your `__torch_function__ as a plain method is deprecated and \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 1493\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mwill be an error in future, please define it as a classmethod.\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 1494\u001b[0m \u001b[38;5;167;01mDeprecationWarning\u001b[39;00m)\n\u001b[1;32m 1496\u001b[0m \u001b[38;5;66;03m# Use `public_api` instead of `implementation` so __torch_function__\u001b[39;00m\n\u001b[1;32m 1497\u001b[0m \u001b[38;5;66;03m# implementations can do equality/identity comparisons.\u001b[39;00m\n\u001b[0;32m-> 1498\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43mtorch_func_method\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpublic_api\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtypes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1500\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mNotImplemented\u001b[39m:\n\u001b[1;32m 1501\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m result\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/fastai/torch_core.py:376\u001b[0m, in \u001b[0;36mTensorBase.__torch_function__\u001b[0;34m(cls, func, types, args, kwargs)\u001b[0m\n\u001b[1;32m 374\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mdebug \u001b[38;5;129;01mand\u001b[39;00m func\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m (\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__str__\u001b[39m\u001b[38;5;124m'\u001b[39m,\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__repr__\u001b[39m\u001b[38;5;124m'\u001b[39m): \u001b[38;5;28mprint\u001b[39m(func, types, args, kwargs)\n\u001b[1;32m 375\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m _torch_handled(args, \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m_opt, func): types \u001b[38;5;241m=\u001b[39m (torch\u001b[38;5;241m.\u001b[39mTensor,)\n\u001b[0;32m--> 376\u001b[0m res \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m__torch_function__\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfunc\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtypes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mifnone\u001b[49m\u001b[43m(\u001b[49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m{\u001b[49m\u001b[43m}\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 377\u001b[0m dict_objs \u001b[38;5;241m=\u001b[39m _find_args(args) \u001b[38;5;28;01mif\u001b[39;00m args \u001b[38;5;28;01melse\u001b[39;00m _find_args(\u001b[38;5;28mlist\u001b[39m(kwargs\u001b[38;5;241m.\u001b[39mvalues()))\n\u001b[1;32m 378\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28missubclass\u001b[39m(\u001b[38;5;28mtype\u001b[39m(res),TensorBase) \u001b[38;5;129;01mand\u001b[39;00m dict_objs: res\u001b[38;5;241m.\u001b[39mset_meta(dict_objs[\u001b[38;5;241m0\u001b[39m],as_copy\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n", + "File \u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/_tensor.py:1121\u001b[0m, in \u001b[0;36mTensor.__torch_function__\u001b[0;34m(cls, func, types, args, kwargs)\u001b[0m\n\u001b[1;32m 1118\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mNotImplemented\u001b[39m\n\u001b[1;32m 1120\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m _C\u001b[38;5;241m.\u001b[39mDisableTorchFunction():\n\u001b[0;32m-> 1121\u001b[0m ret \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1122\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m func \u001b[38;5;129;01min\u001b[39;00m get_default_nowrap_functions():\n\u001b[1;32m 1123\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m ret\n", + "File 
\u001b[0;32m~/mambaforge/lib/python3.9/site-packages/torch/nn/functional.py:3014\u001b[0m, in \u001b[0;36mcross_entropy\u001b[0;34m(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)\u001b[0m\n\u001b[1;32m 3012\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m size_average \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mor\u001b[39;00m reduce \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 3013\u001b[0m reduction \u001b[38;5;241m=\u001b[39m _Reduction\u001b[38;5;241m.\u001b[39mlegacy_get_string(size_average, reduce)\n\u001b[0;32m-> 3014\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mtorch\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_C\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_nn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcross_entropy_loss\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m_Reduction\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_enum\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreduction\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mignore_index\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mlabel_smoothing\u001b[49m\u001b[43m)\u001b[49m\n", + "\u001b[0;31mKeyboardInterrupt\u001b[0m: " + ] + } + ], + "source": [ + "learn.fit_one_cycle(1, 2e-2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This model takes a while to train, so it's a good opportunity to talk about saving intermediary results. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Saving and Loading Models" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can easily save the state of your model like so:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "learn.save('1epoch')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This will create a file in `learn.path/models/` named *1epoch.pth*. 
If you want to load your model in another machine after creating your `Learner` the same way, or resume training later, you can load the content of this file with:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "learn = learn.load('1epoch')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once the initial training has completed, we can continue fine-tuning the model after unfreezing:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epochtrain_lossvalid_lossaccuracyperplexitytime
03.8934863.7728200.31710443.50254812:37
13.8204793.7171970.32379041.14888012:30
23.7356223.6597600.33032138.85199712:09
33.6770863.6247940.33396037.51698712:12
43.6366463.6013000.33701736.64585912:05
53.5536363.5842410.33935536.02600112:04
63.5076343.5718920.34135335.58386212:08
73.4441013.5659880.34219435.37437112:08
83.3985973.5662830.34264735.38481512:11
93.3755633.5681660.34252835.45150012:05
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn.unfreeze()\n", + "learn.fit_one_cycle(10, 2e-3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once this is done, we save all of our model except the final layer that converts activations to probabilities of picking each token in our vocabulary. The model not including the final layer is called the *encoder*. We can save it with `save_encoder`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "learn.save_encoder('finetuned')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> jargon: Encoder: The model not including the task-specific final layer(s). This term means much the same thing as _body_ when applied to vision CNNs, but \"encoder\" tends to be more used for NLP and generative models." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This completes the second stage of the text classification process: fine-tuning the language model. We can now use it to fine-tune a classifier using the IMDb sentiment labels." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Text Generation" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before we move on to fine-tuning the classifier, let's quickly try something different: using our model to generate random reviews. Since it's trained to guess what the next word of the sentence is, we can use the model to write new reviews:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "TEXT = \"I liked this movie because\"\n", + "N_WORDS = 40\n", + "N_SENTENCES = 2\n", + "preds = [learn.predict(TEXT, N_WORDS, temperature=0.75) \n", + " for _ in range(N_SENTENCES)]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "i liked this movie because of its story and characters . The story line was very strong , very good for a sci - fi film . The main character , Alucard , was very well developed and brought the whole story\n", + "i liked this movie because i like the idea of the premise of the movie , the ( very ) convenient virus ( which , when you have to kill a few people , the \" evil \" machine has to be used to protect\n" + ] + } + ], + "source": [ + "print(\"\\n\".join(preds))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As you can see, we add some randomness (we pick a random word based on the probabilities returned by the model) so we don't get exactly the same review twice. Our model doesn't have any programmed knowledge of the structure of a sentence or grammar rules, yet it has clearly learned a lot about English sentences: we can see it capitalizes properly (*I* is just transformed to *i* because our rules require two characters or more to consider a word as capitalized, so it's normal to see it lowercased) and is using consistent tense. The general review makes sense at first glance, and it's only if you read carefully that you can notice something is a bit off. 
Not bad for a model trained in a couple of hours! \n", + "\n", + "But our end goal wasn't to train a model to generate reviews, but to classify them... so let's use this model to do just that." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Creating the Classifier DataLoaders" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We're now moving from language model fine-tuning to classifier fine-tuning. To recap, a language model predicts the next word of a document, so it doesn't need any external labels. A classifier, however, predicts some external label—in the case of IMDb, it's the sentiment of a document.\n", + "\n", + "This means that the structure of our `DataBlock` for NLP classification will look very familiar. It's actually nearly the same as we've seen for the many image classification datasets we've worked with:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dls_clas = DataBlock(\n", + " blocks=(TextBlock.from_folder(path, vocab=dls_lm.vocab),CategoryBlock),\n", + " get_y = parent_label,\n", + " get_items=partial(get_text_files, folders=['train', 'test']),\n", + " splitter=GrandparentSplitter(valid_name='test')\n", + ").dataloaders(path, path=path, bs=128, seq_len=72)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Just like with image classification, `show_batch` shows the dependent variable (sentiment, in this case) with each independent variable (movie review text):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
textcategory
0xxbos i rate this movie with 3 skulls , only coz the girls knew how to scream , this could 've been a better movie , if actors were better , the twins were xxup ok , i believed they were evil , but the eldest and youngest brother , they sucked really bad , it seemed like they were reading the scripts instead of acting them … . spoiler : if they 're vampire 's why do they freeze the blood ? vampires ca n't drink frozen blood , the sister in the movie says let 's drink her while she is alive … .but then when they 're moving to another house , they take on a cooler they 're frozen blood . end of spoiler \\n\\n it was a huge waste of time , and that made me mad coz i read all the reviews of howneg
1xxbos i have read all of the xxmaj love xxmaj come xxmaj softly books . xxmaj knowing full well that movies can not use all aspects of the book , but generally they at least have the main point of the book . i was highly disappointed in this movie . xxmaj the only thing that they have in this movie that is in the book is that xxmaj missy 's father comes to xxunk in the book both parents come ) . xxmaj that is all . xxmaj the story line was so twisted and far fetch and yes , sad , from the book , that i just could n't enjoy it . xxmaj even if i did n't read the book it was too sad . i do know that xxmaj pioneer life was rough , but the whole movie was a downer . xxmaj the ratingneg
2xxbos xxmaj this , for lack of a better term , movie is lousy . xxmaj where do i start … … \\n\\n xxmaj cinemaphotography - xxmaj this was , perhaps , the worst xxmaj i 've seen this year . xxmaj it looked like the camera was being tossed from camera man to camera man . xxmaj maybe they only had one camera . xxmaj it gives you the sensation of being a volleyball . \\n\\n xxmaj there are a bunch of scenes , haphazardly , thrown in with no continuity at all . xxmaj when they did the ' split screen ' , it was absurd . xxmaj everything was squished flat , it looked ridiculous . \\n\\n xxmaj the color tones were way off . xxmaj these people need to learn how to balance a camera . xxmaj this ' movie ' is poorly made , andneg
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "dls_clas.show_batch(max_n=3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Looking at the `DataBlock` definition, every piece is familiar from previous data blocks we've built, with two important exceptions:\n", + "\n", + "- `TextBlock.from_folder` no longer has the `is_lm=True` parameter.\n", + "- We pass the `vocab` we created for the language model fine-tuning.\n", + "\n", + "The reason that we pass the `vocab` of the language model is to make sure we use the same correspondence of token to index. Otherwise the embeddings we learned in our fine-tuned language model won't make any sense to this model, and the fine-tuning step won't be of any use.\n", + "\n", + "By passing `is_lm=False` (or not passing `is_lm` at all, since it defaults to `False`) we tell `TextBlock` that we have regular labeled data, rather than using the next tokens as labels. There is one challenge we have to deal with, however, which is to do with collating multiple documents into a mini-batch. Let's see with an example, by trying to create a mini-batch containing the first 10 documents. First we'll numericalize them:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "nums_samp = toks200[:10].map(num)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's now look at how many tokens each of these 10 movie reviews have:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(#10) [228,238,121,290,196,194,533,124,581,155]" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nums_samp.map(len)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Remember, PyTorch `DataLoader`s need to collate all the items in a batch into a single tensor, and a single tensor has a fixed shape (i.e., it has some particular length on every axis, and all items must be consistent). This should sound familiar: we had the same issue with images. In that case, we used cropping, padding, and/or squishing to make all the inputs the same size. Cropping might not be a good idea for documents, because it seems likely we'd remove some key information (having said that, the same issue is true for images, and we use cropping there; data augmentation hasn't been well explored for NLP yet, so perhaps there are actually opportunities to use cropping in NLP too!). You can't really \"squish\" a document. So that leaves padding!\n", + "\n", + "We will expand the shortest texts to make them all the same size. To do this, we use a special padding token that will be ignored by our model. Additionally, to avoid memory issues and improve performance, we will batch together texts that are roughly the same lengths (with some shuffling for the training set). We do this by (approximately, for the training set) sorting the documents by length prior to each epoch. The result of this is that the documents collated into a single batch will tend to be of similar lengths. We won't pad every batch to the same size, but will instead use the size of the largest document in each batch as the target size. 
(It is possible to do something similar with images, which is especially useful for irregularly sized rectangular images, but at the time of writing no library provides good support for this yet, and there aren't any papers covering it. It's something we're planning to add to fastai soon, however, so keep an eye on the book's website; we'll add information about this as soon as we have it working well.)\n", + "\n", + "The sorting and padding are automatically done by the data block API for us when using a `TextBlock`, with `is_lm=False`. (We don't have this same issue for language model data, since we concatenate all the documents together first, and then split them into equally sized sections.)\n", + "\n", + "We can now create a model to classify our texts:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, \n", + " metrics=accuracy).to_fp16()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The final step prior to training the classifier is to load the encoder from our fine-tuned language model. We use `load_encoder` instead of `load` because we only have pretrained weights available for the encoder; `load` by default raises an exception if an incomplete model is loaded:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "learn = learn.load_encoder('finetuned')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fine-Tuning the Classifier" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The last step is to train with discriminative learning rates and *gradual unfreezing*. In computer vision we often unfreeze the model all at once, but for NLP classifiers, we find that unfreezing a few layers at a time makes a real difference:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epochtrain_lossvalid_lossaccuracytime
00.3474270.1844800.92932000:33
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn.fit_one_cycle(1, 2e-2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In just one epoch we get the same result as our training in <>: not too bad! We can pass `-2` to `freeze_to` to freeze all except the last two parameter groups:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epoch</th>\n",
+       "      <th>train_loss</th>\n",
+       "      <th>valid_loss</th>\n",
+       "      <th>accuracy</th>\n",
+       "      <th>time</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <td>0</td>\n",
+       "      <td>0.247763</td>\n",
+       "      <td>0.171683</td>\n",
+       "      <td>0.934640</td>\n",
+       "      <td>00:37</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn.freeze_to(-2)\n", + "learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Then we can unfreeze a bit more, and continue training:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epoch</th>\n",
+       "      <th>train_loss</th>\n",
+       "      <th>valid_loss</th>\n",
+       "      <th>accuracy</th>\n",
+       "      <th>time</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <td>0</td>\n",
+       "      <td>0.193377</td>\n",
+       "      <td>0.156696</td>\n",
+       "      <td>0.941200</td>\n",
+       "      <td>00:45</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn.freeze_to(-3)\n", + "learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And finally, the whole model!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
epoch</th>\n",
+       "      <th>train_loss</th>\n",
+       "      <th>valid_loss</th>\n",
+       "      <th>accuracy</th>\n",
+       "      <th>time</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <td>0</td>\n",
+       "      <td>0.172888</td>\n",
+       "      <td>0.153770</td>\n",
+       "      <td>0.943120</td>\n",
+       "      <td>01:01</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <td>1</td>\n",
+       "      <td>0.161492</td>\n",
+       "      <td>0.155567</td>\n",
+       "      <td>0.942640</td>\n",
+       "      <td>00:57</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>"
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "learn.unfreeze()\n", + "learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We reached 94.3% accuracy, which was state-of-the-art performance just three years ago. By training another model on all the texts read backwards and averaging the predictions of those two models, we can even get to 95.1% accuracy, which was the state of the art introduced by the ULMFiT paper. It was only beaten a few months ago, by fine-tuning a much bigger model and using expensive data augmentation techniques (translating sentences in another language and back, using another model for translation).\n", + "\n", + "Using a pretrained model let us build a fine-tuned language model that was pretty powerful, to either generate fake reviews or help classify them. This is exciting stuff, but it's good to remember that this technology can also be used for malign purposes." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Disinformation and Language Models" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Even simple algorithms based on rules, before the days of widely available deep learning language models, could be used to create fraudulent accounts and try to influence policymakers. Jeff Kao, now a computational journalist at ProPublica, analyzed the comments that were sent to the US Federal Communications Commission (FCC) regarding a 2017 proposal to repeal net neutrality. In his article [\"More than a Million Pro-Repeal Net Neutrality Comments Were Likely Faked\"](https://hackernoon.com/more-than-a-million-pro-repeal-net-neutrality-comments-were-likely-faked-e9f0e3ed36a6), he reports how he discovered a large cluster of comments opposing net neutrality that seemed to have been generated by some sort of Mad Libs-style mail merge. In <>, the fake comments have been helpfully color-coded by Kao to highlight their formulaic nature." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Kao estimated that \"less than 800,000 of the 22M+ comments… could be considered truly unique\" and that \"more than 99% of the truly unique comments were in favor of keeping net neutrality.\"\n", + "\n", + "Given advances in language modeling that have occurred since 2017, such fraudulent campaigns could be nearly impossible to catch now. You now have all the necessary tools at your disposal to create a compelling language model—that is, something that can generate context-appropriate, believable text. It won't necessarily be perfectly accurate or correct, but it will be plausible. Think about what this technology would mean when put together with the kinds of disinformation campaigns we have learned about in recent years. Take a look at the Reddit dialogue shown in <>, where a language model based on OpenAI's GPT-2 algorithm is having a conversation with itself about whether the US government should cut defense spending." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\"An" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this case, it was explicitly said that an algorithm was used, but imagine what would happen if a bad actor decided to release such an algorithm across social networks. 
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Using a pretrained model let us build a fine-tuned language model that was powerful enough either to generate fake reviews or to help classify them. This is exciting stuff, but it's good to remember that this technology can also be used for malign purposes."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Disinformation and Language Models"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Even before the days of widely available deep learning language models, simple rule-based algorithms could be used to create fraudulent accounts and try to influence policymakers. Jeff Kao, now a computational journalist at ProPublica, analyzed the comments that were sent to the US Federal Communications Commission (FCC) regarding a 2017 proposal to repeal net neutrality. In his article [\"More than a Million Pro-Repeal Net Neutrality Comments Were Likely Faked\"](https://hackernoon.com/more-than-a-million-pro-repeal-net-neutrality-comments-were-likely-faked-e9f0e3ed36a6), he reports how he discovered a large cluster of comments opposing net neutrality that seemed to have been generated by some sort of Mad Libs-style mail merge. In <>, the fake comments have been helpfully color-coded by Kao to highlight their formulaic nature."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    ""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Kao estimated that \"less than 800,000 of the 22M+ comments… could be considered truly unique\" and that \"more than 99% of the truly unique comments were in favor of keeping net neutrality.\"\n",
+    "\n",
+    "Given advances in language modeling that have occurred since 2017, such fraudulent campaigns could be nearly impossible to catch now. You now have all the necessary tools at your disposal to create a compelling language model—that is, something that can generate context-appropriate, believable text. It won't necessarily be perfectly accurate or correct, but it will be plausible. Think about what this technology would mean when put together with the kinds of disinformation campaigns we have learned about in recent years. Take a look at the Reddit dialogue shown in <>, where a language model based on OpenAI's GPT-2 algorithm is having a conversation with itself about whether the US government should cut defense spending."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\"An"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this case, it was explicitly said that an algorithm was used, but imagine what would happen if a bad actor decided to release such an algorithm across social networks. They could do it slowly and carefully, allowing the algorithm to gradually develop followers and trust over time. It would not take many resources to have literally millions of accounts doing this. In such a situation we could easily imagine getting to a point where the vast majority of discourse online was from bots, and nobody would have any idea that it was happening.\n",
+    "\n",
+    "We are already starting to see examples of machine learning being used to generate identities. For example, <> shows a LinkedIn profile for Katie Jones."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    ""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Katie Jones was connected on LinkedIn to several members of mainstream Washington think tanks. But she didn't exist. The image you see was auto-generated by a generative adversarial network, and somebody named Katie Jones has not, in fact, graduated from the Center for Strategic and International Studies.\n",
+    "\n",
+    "Many people assume or hope that algorithms will come to our defense here—that we will develop classification algorithms that can automatically recognize autogenerated content. The problem, however, is that this will always be an arms race, in which better classification (or discriminator) algorithms can be used to create better generation algorithms."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Conclusion"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this chapter we explored the last application covered out of the box by the fastai library: text. We saw two types of models: language models that can generate texts, and a classifier that determines whether a review is positive or negative. To build a state-of-the-art classifier, we used a pretrained language model, fine-tuned it to the corpus of our task, then used its body (the encoder) with a new head to do the classification.\n",
+    "\n",
+    "Before we end this section, we'll take a look at how the fastai library can help you assemble your data for your specific problems."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Questionnaire"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "1. What is \"self-supervised learning\"?\n",
+    "1. What is a \"language model\"?\n",
+    "1. Why is a language model considered self-supervised?\n",
+    "1. What are self-supervised models usually used for?\n",
+    "1. Why do we fine-tune language models?\n",
+    "1. What are the three steps to create a state-of-the-art text classifier?\n",
+    "1. How do the 50,000 unlabeled movie reviews help us create a better text classifier for the IMDb dataset?\n",
+    "1. What are the three steps to prepare your data for a language model?\n",
+    "1. What is \"tokenization\"? Why do we need it?\n",
+    "1. Name three different approaches to tokenization.\n",
+    "1. What is `xxbos`?\n",
+    "1. List four rules that fastai applies to text during tokenization.\n",
+    "1. Why are repeated characters replaced with a token showing the number of repetitions and the character that's repeated?\n",
+    "1. What is \"numericalization\"?\n",
+    "1. Why might there be words that are replaced with the \"unknown word\" token?\n",
+    "1. With a batch size of 64, the first row of the tensor representing the first batch contains the first 64 tokens for the dataset. What does the second row of that tensor contain? What does the first row of the second batch contain? 
(Careful—students often get this one wrong! Be sure to check your answer on the book's website.)\n", + "1. Why do we need padding for text classification? Why don't we need it for language modeling?\n", + "1. What does an embedding matrix for NLP contain? What is its shape?\n", + "1. What is \"perplexity\"?\n", + "1. Why do we have to pass the vocabulary of the language model to the classifier data block?\n", + "1. What is \"gradual unfreezing\"?\n", + "1. Why is text generation always likely to be ahead of automatic identification of machine-generated texts?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Further Research" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. See what you can learn about language models and disinformation. What are the best language models today? Take a look at some of their outputs. Do you find them convincing? How could a bad actor best use such a model to create conflict and uncertainty?\n", + "1. Given the limitation that models are unlikely to be able to consistently recognize machine-generated texts, what other approaches may be needed to handle large-scale disinformation campaigns that leverage deep learning?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "split_at_heading": true + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.13" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}