{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# Inference only Text Models in `arcgis.learn`"]}, {"cell_type": "markdown", "metadata": {}, "source": ["<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n", "<div class=\"toc\">\n", "<ul class=\"toc-item\">\n", "<li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1\">Introduction</a></span></li>\n", "<li><span><a href=\"#Transformer-Basics\" data-toc-modified-id=\"Transformer-Basics-2\">Transformer Basics</a></span></li>\n", "<li><span><a href=\"#Prerequisites\" data-toc-modified-id=\"Prerequisites-3\">Prerequisites</a></span></li>    \n", "<li><span><a href=\"#Inference-only-models\" data-toc-modified-id=\"Inference-only-models-4\">Inference only models</a></span></li>\n", "<ul class=\"toc-item\">\n", "    <li><span><a href=\"#ZeroShotClassifier\" data-toc-modified-id=\"ZeroShotClassifier-4.1\">ZeroShotClassifier</a></span></li>\n", "    <li><span><a href=\"#QuestionAnswering\" data-toc-modified-id=\"QuestionAnswering-4.2\">QuestionAnswering</a></span></li>\n", "    <ul class=\"toc-item\">\n", "        <li><span><a href=\"#Choosing-a-different-backbone-than-the-default\" data-toc-modified-id=\"Choosing-a-different backbone-than-the-default-4.2.1\">Choosing a different backbone than the default</a></span></li>\n", "    </ul>\n", "    <li><span><a href=\"#TextSummarizer\" data-toc-modified-id=\"TextSummarizer-4.3\">TextSummarizer</a></span></li>\n", "    <li><span><a href=\"#TextTranslator\" data-toc-modified-id=\"TextTranslator-4.4\">TextTranslator</a></span></li>\n", "    <li><span><a href=\"#TextGenerator\" data-toc-modified-id=\"TextGenerator-4.5\">TextGenerator</a></span></li>\n", "    <li><span><a href=\"#FillMask\" data-toc-modified-id=\"FillMask-4.6\">FillMask</a></span></li>\n", "</ul>\n", "<li><span><a href=\"#References\" data-toc-modified-id=\"References-5\">References</a></span></li>\n", "</ul>\n", "</div>"]}, {"cell_type": "markdown", "metadata": {}, 
"source": ["# Introduction"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The pretrained/inference-only models available in `arcgis.learn.text` submodule are based on [Hugging Face Transformers](https://huggingface.co/transformers/v3.3.0/index.html) library. This library provides transformer models like `BERT` [[1]](#References), `RoBERTa`, `XLM`, `DistilBert`, `XLNet` etc., for **Natural Language Understanding (NLU)** with over 32+ pretrained models in 100+ languages. [This page](https://huggingface.co/transformers/v3.0.2/pretrained_models.html) mentions different transformer architectures [[2]](#References) which come in different sizes (model parameters), trained on different languages/corpus, having different attention heads, etc.\n", "\n", "\n", "These inference-only classes offers a simple API dedicated to several **Natural Language Processing (NLP)** tasks including **Masked Language Modeling**, **Text Generation**, **Sentiment Analysis**, **Summarization**, **Machine Translation** and **Question Answering**.\n", "\n", "The usage of these models differs from rest of the models available in `arcgis.learn` module in the sense that these models do not need to be trained on a given dataset before they can be used for inferencing. Therefore, these model do not have methods like `fit()`, `lr_find()` etc., which are required to train an `arcgis.learn` model. 
\n", "\n", "Instead these model classes follow the following pattern:\n", "- A model constructor where user can pass a pretrained model name to initialize the model.\n", "- A `supported_backbones` attribute which tells the supported transformer architectures for that particular model.\n", "- A method where user can pass input text and appropriate arguments to generate the model inference."]}, {"cell_type": "markdown", "metadata": {}, "source": ["# Transformer Basics\n", "\n", "Transformers in **Natural Language Processing (NLP)** are novel architectures that aim to solve [sequence-to-sequence](https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346) tasks while handling [long-range dependencies](https://medium.com/tech-break/recurrent-neural-network-and-long-term-dependencies-e21773defd92) with ease. The transformers are the latest and advanced models that give state of the art results for a wide range of tasks such as **text/sequence classification**, **named entity recognition (NER)**, **question answering**, **machine translation**, **text summarization**, **text generation** etc."]}, {"cell_type": "markdown", "metadata": {}, "source": ["The Transformer architecture was proposed in the paper [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf). A transformer consists of an **encoding component**, a **decoding component** and **connections** between them.\n", "\n", "- The **Encoding component** is a stack of encoders (the paper stacks six of them on top of each other).\n", "- The **Decoding component** is a stack of decoders of the same number.\n", "\n", "<img src=\"\n", "\">\n", "<center>Figure1: High level view of a <a href=\"https://jalammar.github.io/illustrated-transformer/\">transformer</a></center>"]}, {"cell_type": "markdown", "metadata": {}, "source": ["1. The **encoders** are all identical in structure (yet they do not share weights). 
Each one is broken down into two sub-layers:\n", "\n", "- **Self-Attention Layer**\n", "    - Say the following sentence is an input sentence we want to translate:\n", "\n", "      **The animal didn't cross the street because it was too tired.**\n", "      \n", "      What does **\"it\"** in this sentence refer to? Is it referring to the **street** or to the **animal**? It's a simple question to a human, but not as simple to an algorithm. When such data is fed into a transformer model, the model processes the word **\"it\"** and the **self-attention layer** allows the model to associate **\"it\"** with **\"animal\"**. As each word in the input sequence is processed, **self-attention** looks at other words in the input sequence for clues that can lead to a better encoding for this word.\n", "\n", "- **Feed Forward Layer** \n", "    - The outputs of the self-attention layer are fed to a feed-forward neural network. The exact same feed-forward network is independently applied to each position.\n", "\n", "2. The **decoder** has both those layers (**self-attention** & **feed forward layer**), but between them is an **attention layer** (sometimes called **encoder-decoder** attention) that helps the decoder focus on relevant parts of the input sentence."]}, {"cell_type": "markdown", "metadata": {}, "source": ["<center>Figure2: Different types of layers in the encoder and decoder components of a <a href=\"https://jalammar.github.io/illustrated-transformer/\">transformer</a></center>\n", "\n", "To get a more detailed explanation of the **different forms of attention**, visit [this](https://towardsdatascience.com/attention-and-its-different-forms-7fc3674d14dc) page. Also, there is a great blog post on [visualizing attention in a machine translation model](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/) that can help in understanding the attention mechanism better. 
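\n", "\n", "The self-attention step described above boils down to *scaled dot-product attention*: each position's query vector is scored against every key vector, the scores are softmax-normalized, and the value vectors are averaged with those weights. A minimal, self-contained sketch with toy two-dimensional vectors (illustrative only, not the library's internals):\n", "\n", "```python\n", "import math\n", "\n", "def softmax(xs):\n", "    exps = [math.exp(x) for x in xs]\n", "    total = sum(exps)\n", "    return [e / total for e in exps]\n", "\n", "def attention(query, keys, values):\n", "    # score each key against the query, scaled by sqrt(d_k)\n", "    d_k = len(query)\n", "    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k) for key in keys]\n", "    weights = softmax(scores)\n", "    # weighted average of the value vectors\n", "    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]\n", "\n", "# toy example: the query lines up with the first key, so the first value dominates\n", "out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])\n", "```\n", "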
\n", "\n", "An **\u201cannotated\u201d** [[3]](#References) version of the paper is also present in the form of a line-by-line implementation of the transformer architecture."]}, {"cell_type": "markdown", "metadata": {}, "source": ["# Prerequisites"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- Inferencing workflows for pretrained text models of `arcgis.learn.text` submodule is based on [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library. \n", "- Refer to the section [Install deep learning dependencies of arcgis.learn module](https://developers.arcgis.com/python/guide/install-and-set-up/#Install-deep-learning-dependencies) for detailed explanation about deep learning dependencies.\n", "- **Choosing a pretrained model**: Depending on the task and the language of the input text, user might need to choose an appropriate transformer backbone to generate desired inference. This [link](https://huggingface.co/models) lists out all the pretrained models offered by [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library."]}, {"cell_type": "markdown", "metadata": {}, "source": ["# Inference only models"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The `arcgis.learn.text` submodule offers the following models pretrained on unstructured text:\n", "- **ZeroShotClassifier**\n", "- **QuestionAnswering**\n", "- **TextSummarizer**\n", "- **TextTranslator**\n", "- **TextGenerator**\n", "- **FillMask**\n", "\n", "These models can be imported using the below command"]}, {"cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": ["from arcgis.learn.text import ZeroShotClassifier, QuestionAnswering, TextSummarizer, \\\n", "                            TextTranslator, TextGenerator, FillMask"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## ZeroShotClassifier"]}, {"cell_type": "markdown", "metadata": {}, "source": ["[Zero-shot 
learning](https://towardsdatascience.com/applications-of-zero-shot-learning-f65bb232963f) is a specific area of machine learning where we want the model to classify data based on very few or even no training examples. In **Zero-shot learning**, the classes covered in the training data and the classes we wish to classify are completely different. \n", "\n", "The **ZeroShotClassifier** model of the `arcgis.learn.text` submodule **classifies an input sequence from a list of candidate labels**. The transformer model is trained on the task of **Natural Language Inference (NLI)**, which takes in two sequences and determines whether they contradict each other, entail each other, or neither. \n", "\n", "The model assumes by default that only one of the candidate labels is true, and returns a list of scores for each label which add up to 1. Visit [this link](https://huggingface.co/models?pipeline_tag=zero-shot-classification) to learn more about the available models for the **zero-shot-classification** task. "]}, {"cell_type": "markdown", "metadata": {}, "source": ["The command below creates a model object by calling the `ZeroShotClassifier` class."]}, {"cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": ["classifier = ZeroShotClassifier()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Sample code for performing a **single-label classification** task."]}, {"cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[{'sequence': 'Who are you voting for in 2020?',\n", "  'labels': ['politics', 'economics', 'public health'],\n", "  'scores': [0.972518801689148, 0.014584126882255077, 0.012897057458758354]}]"]}, "execution_count": 3, "metadata": {}, "output_type": "execute_result"}], "source": ["sequence = \"Who are you voting for in 2020?\"\n", "candidate_labels = [\"politics\", \"public health\", \"economics\"]\n", "\n", "classifier.predict(sequence, candidate_labels)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["For **multi-label classification**, we simply need to pass `multi_class=True` in the `predict()` method of the model. The resulting per label scores for multi-label classification are independent probabilities and fall in the (0, 1) range."]}, {"cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='2' class='' max='2' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [2/2 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"name": "stdout", "output_type": "stream", "text": ["[{'labels': ['threat', 'insult', 'toxic', 'severe_toxic', 'identity_hate'],\n", "  'scores': [0.9951153993606567,\n", "             0.9661784768104553,\n", "             0.9028339982032776,\n", "             0.7790991067886353,\n", "             0.16090862452983856],\n", "  'sequence': 'TAKE THIS MAP DOWN! YOU DO NOT OWN THIS MAP PROJECT OR DATA!'},\n", " {'labels': ['threat', 'identity_hate', 'insult', 'severe_toxic', 'toxic'],\n", "  'scores': [0.11222238838672638,\n", "             0.04374469816684723,\n", "             0.00017427862621843815,\n", "             9.843543375609443e-05,\n", "             4.655999146052636e-05],\n", "  'sequence': 'This imagery was great but is not available now'}]\n"]}], "source": ["sequence_list = [\n", "    \"TAKE THIS MAP DOWN! 
YOU DO NOT OWN THIS MAP PROJECT OR DATA!\",\n", "    \"This imagery was great but is not available now\"\n", "]\n", "\n", "candidate_labels = [\"toxic\", \"severe_toxic\", \"threat\", \"insult\", \"identity_hate\"]\n", "\n", "from pprint import pprint\n", "pprint(classifier.predict(sequence_list, candidate_labels, multi_class=True))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The **ZeroShotClassifier** model has been fine-tuned on the [XNLI](https://cims.nyu.edu/~sbowman/xnli/) corpus, which includes 15 languages: Arabic, Bulgarian, Chinese, English, French, German, Greek, Hindi, Russian, Spanish, Swahili, Thai, Turkish, Urdu, and Vietnamese. So this model can be used to classify **multi-lingual** text as well. \n", "\n", "The example below shows how this model can be used to classify an input sequence written in the Spanish language. "]}, {"cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[{'sequence': '\u00bfA qui\u00e9n vas a votar en 2020?',\n", "  'labels': ['pol\u00edtica', 'salud p\u00fablica', 'Europa'],\n", "  'scores': [0.7787874341011047, 0.14989280700683594, 0.07131969183683395]}]"]}, "execution_count": 5, "metadata": {}, "output_type": "execute_result"}], "source": ["# Classification on spanish data\n", "\n", "sequence = \"\u00bfA qui\u00e9n vas a votar en 2020?\" # translation: \"Who are you voting for in 2020?\"\n", "candidate_labels = [\"Europa\", \"salud p\u00fablica\", \"pol\u00edtica\"] # [\"Europe\", \"public health\", \"politics\"]\n", "\n", "classifier.predict(sequence, candidate_labels)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["This model can be used with any combination of languages. For example, we can classify a Russian sentence with English candidate labels:"]}, {"cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[{'sequence': '\u0417\u0430 \u043a\u043e\u0433\u043e \u0432\u044b \u0433\u043e\u043b\u043e\u0441\u0443\u0435\u0442\u0435 \u0432 2020 \u0433\u043e\u0434\u0443?',\n", "  'labels': ['politics', 'public health', 'economics'],\n", "  'scores': [0.5152668356895447, 0.2594522535800934, 0.22528085112571716]}]"]}, "execution_count": 6, "metadata": {}, "output_type": "execute_result"}], "source": ["# Russian with english candidate labels\n", "\n", "sequence = \"\u0417\u0430 \u043a\u043e\u0433\u043e \u0432\u044b \u0433\u043e\u043b\u043e\u0441\u0443\u0435\u0442\u0435 \u0432 2020 \u0433\u043e\u0434\u0443?\" # translation: \"Who are you voting for in 2020?\"\n", "candidate_labels = [\"economics\", \"public health\", \"politics\"]\n", "\n", "classifier.predict(sequence, candidate_labels)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## QuestionAnswering"]}, {"cell_type": "markdown", "metadata": {}, "source": ["**QuestionAnswering** model can be used to extract the answers for an input question from a given context. The model has been fine-tuned on a question answering task like [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/). 
SQuAD belongs to a subdivision of **question-answering** systems known as [extractive question-answering](https://medium.com/deepset-ai/going-beyond-squad-part-1-question-answering-in-different-languages-8eac6cf56f21#:~:text=SQuAD%20belongs%20to%20a%20subdivision,referred%20to%20as%20reading%20comprehension.&text=When%20an%20extractive%20QA%20system,the%20question%20(see%20diagram).), also referred to as reading comprehension. Its training data is formed from triples of question, passage and answer. When an **extractive question-answering** system is presented with a question and a passage, it is tasked with returning the string sequence from the passage which answers the question. \n", "\n", "Visit [this link](https://huggingface.co/models?pipeline_tag=question-answering) to learn more about the available models for the **question-answering** task. \n", "\n", "Run the command below to instantiate a model object."]}, {"cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": ["model = QuestionAnswering()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Sample code to **extract answers** from a given context for a list of questions. "]}, {"cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='3' class='' max='3' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [3/3 00:18<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[{'question': 'What is PointCNN?',\n", "  'answer': 'to efficiently classify and segment points from a point cloud dataset.',\n", "  'score': 0.43700891733169556},\n", " {'question': 'How is Point cloud dataset collected?',\n", "  'answer': 'Lidar sensors',\n", "  'score': 0.3490387797355652},\n", " {'question': 'What is Lidar?',\n", "  'answer': 'light detection and ranging',\n", "  'score': 0.8532344102859497}]"]}, "execution_count": 8, "metadata": {}, "output_type": "execute_result"}], "source": ["context = r\"\"\"\n", "The arcgis.learn module includes PointCNN model to efficiently classify and segment points from a point cloud dataset. \n", "Point cloud datasets are typically collected using Lidar sensors ( light detection and ranging ) \u2013 an optical \n", "remote-sensing technique that uses laser light to densely sample the surface of the earth, producing highly \n", "accurate x, y, and z measurements. 
These Lidar sensor produced points, once post-processed and spatially \n", "organized are referred to as a 'Point cloud' and are typically collected using terrestrial (both mobile or static) \n", "and airborne Lidar.\n", "\"\"\"\n", "\n", "question_list = [\"What is PointCNN?\", \"How is Point cloud dataset collected?\", \"What is Lidar?\"]\n", "\n", "model.get_answer(question_list, context=context)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Choosing a different backbone than the default"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The default backbones offered by the **Inference only Text Models** of the `arcgis.learn.text` submodule may not be suitable for every use-case. In such cases, the user can supply an appropriate transformer model by visiting the model zoo for the given **task** at hand. The `arcgis.learn.text` submodule offers inference only models for 6 different **Natural Language Processing (NLP)** tasks. These tasks are **Classification**, **Summarization**, **Machine Translation**, **Masked Language Modeling** or **Fill Mask**, **Text Generation**, and **Question Answering**.\n", "\n", "Let's pick a use-case where we want to perform the **question-answering** task on text written in the **French** language. As mentioned before, the **QuestionAnswering** model of the `arcgis.learn.text` submodule can be used to extract the answers for an input question from a given context. To know more about the available transformer models for the **question-answering** task, one can visit [this link](https://huggingface.co/models?pipeline_tag=question-answering). \n", "\n", "The page lists 300+ models finetuned on the **question-answering** task. A user can filter the models based on various search criteria such as **DataSets**, **Languages**, **Libraries** etc. We are interested in models that can work with **French** text. 
The model zoo offers models like `fmikaelian/camembert-base-fquad`, `illuin/camembert-base-fquad`, etc., that are finetuned for the **question-answering** task in the **French** language.\n", "\n", "Sample code to extract answers from a given context for an input question written in the **French** language."]}, {"cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": ["french_qa_model = QuestionAnswering(\"fmikaelian/camembert-base-fquad\")"]}, {"cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [{"data": {"text/plain": ["[{'question': 'Qui \u00e9tait Claude Monet?',\n", "  'answer': 'un peintre fran\u00e7ais',\n", "  'score': 0.02561291679739952}]"]}, "execution_count": 10, "metadata": {}, "output_type": "execute_result"}], "source": ["question = \"Qui \u00e9tait Claude Monet?\" # translation: \"Who was Claude Monet?\"\n", "\n", "# translation: \"Claude Monet, born November 14, 1840 in Paris and died December 5, 1926 in Giverny,\n", "#               was a French painter and one of the founders of Impressionism.\"\n", "context = r\"\"\"\n", "Claude Monet, n\u00e9 le 14 novembre 1840 \u00e0 Paris et mort le 5 d\u00e9cembre 1926 \u00e0 Giverny,\n", "\u00e9tait un peintre fran\u00e7ais et l'un des fondateurs de l'impressionnisme.\n", "\"\"\"\n", "\n", "french_qa_model.get_answer(question, context=context, show_progress=False)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The answer **'un peintre fran\u00e7ais'** translates to **'a French painter'**, which is the answer to the question presented to the model. Here we have demonstrated how a user can select an existing transformer model finetuned on a **question-answering** task and use the **QuestionAnswering** model to solve a use-case. 
Similar steps can be followed for other inference only models offered by the `arcgis.learn.text` submodule."]}, {"cell_type": "markdown", "metadata": {}, "source": ["## TextSummarizer"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Text summarization [[4]](#References) refers to a technique of shortening long pieces of text. The intent is to create a coherent and concise sequence of text keeping only the main points outlined in the input sentence or paragraph. It's a common problem in the **Natural Language Processing (NLP)** domain. Machine learning models are usually trained on documents to distill the useful information before outputting the required summarized texts.\n", "\n", "The **TextSummarizer** model can be used to generate a summary of a given text. These models have been fine-tuned on a **summarization** task. Visit [this link](https://huggingface.co/models?pipeline_tag=summarization) to learn more about the available models for the **summarization** task.\n", "\n", "Sample code to instantiate the model object and summarize a given text."]}, {"cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": ["summarizer = TextSummarizer()"]}, {"cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:02<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[{'summary_text': ' This deep learning model is used to extract building footprints from high resolution (30-50 cm) satellite imagery . Building footprint layers are useful in preparing base maps and analysis workflows for urban planning and development, insurance, taxation, change detection, infrastructure planning and a variety of other applications .'}]"]}, "execution_count": 12, "metadata": {}, "output_type": "execute_result"}], "source": ["summary_text = \"\"\"\n", "This deep learning model is used to extract building footprints from high resolution (30-50 cm) satellite imagery. \n", "Building footprint layers are useful in preparing base maps and analysis workflows for urban planning and development, \n", "insurance, taxation, change detection, infrastructure planning and a variety of other applications.\n", "Digitizing building footprints from imagery is a time consuming task and is commonly done by digitizing features \n", "manually. Deep learning models have a high capacity to learn these complex workflow semantics and can produce \n", "superior results. 
Use this deep learning model to automate this process and reduce the time and effort required \n", "for acquiring building footprints.\n", "\"\"\"\n", "\n", "summarizer.summarize(summary_text, max_length=100)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## TextTranslator"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Machine translation is a sub-field of computational linguistics that deals with the problem of translating an input text or speech from one language to another. The **TextTranslator** model is a class of inference only models that are fine-tuned on a translation task. Visit [this](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/) link to get a more detailed explanation of how machine translation models work. These models use a technique called **Attention**, which greatly improves the quality of machine translation systems. **Attention** allows the model to focus on the relevant parts of the input sequence as needed.\n", "\n", "<center>Figure3: The model paid attention correctly when outputting \"European Economic Area\". In French, the order of these words is reversed (\"europ\u00e9enne \u00e9conomique zone\") as compared to English.</center>"]}, {"cell_type": "markdown", "metadata": {}, "source": ["This [link](https://huggingface.co/models?pipeline_tag=translation&search=Helsinki) lists the models that allow translation from a source language to one or more target languages. \n", "\n", "Sample code to instantiate the model object and translate Spanish text into German."]}, {"cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. 
*/\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. */\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:01<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"name": "stdout", "output_type": "stream", "text": ["[{'translated_text': 'Die Bodenbedeckung beschreibt die Erdoberfl\u00e4che. Sie sind n\u00fctzlich f\u00fcr das Verst\u00e4ndnis der Stadtplanung, des Ressourcenmanagements, der Erkennung von Ver\u00e4nderungen, der Landwirtschaft und einer Vielzahl anderer Anwendungen.'}]\n"]}], "source": ["translator_german = TextTranslator(target_language=\"de\")\n", "\n", "text = \"\"\"La cobertura terrestre describe la superficie de la tierra. Son \u00fatiles para comprender la planificaci\u00f3n \n", "urbana, la gesti\u00f3n de recursos, la detecci\u00f3n de cambios, la agricultura y una variedad de otras aplicaciones.\"\"\"\n", "\n", "print(translator_german.translate(text))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Sample code for translating English language text to French language."]}, {"cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:01<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"name": "stdout", "output_type": "stream", "text": ["[{'translated_text': \"La couverture terrestre d\u00e9crit la surface de la terre.Elle est utile pour comprendre l'urbanisme, la gestion des ressources, la d\u00e9tection des changements, l'agriculture et une vari\u00e9t\u00e9 d'autres applications.\"}]\n"]}], "source": ["translator_french = TextTranslator(source_language=\"en\", target_language=\"fr\")\n", "text = \"\"\"Land cover describes the surface of the earth. They are useful for understanding urban planning, \n", "resource management, change detection, agriculture and a variety of other applications.\"\"\"\n", "\n", "print(translator_french.translate(text))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## TextGenerator"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The **TextGenerator** model can be used to generate a sequence of text that continues a given incomplete text sequence or paragraph. These models are trained with an autoregressive language modeling objective and are therefore powerful at predicting the next token in a sequence. 
Visit [this link](https://huggingface.co/models?pipeline_tag=text-generation) to learn more about the available models for the **text-generation** task.\n", "\n", "Sample code to instantiate the model object and use it to generate a sequence of text for a given incomplete sentence"]}, {"cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": ["text_gen = TextGenerator()"]}, {"cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. */\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"name": "stdout", "output_type": "stream", "text": ["[[{'generated_text': 'Hundreds of thousands of organizations in virtually '\n", "                     'every field are using GIS to make maps that can help '\n", "                     'them understand their local populations and identify '\n", "                     'potential problems. 
So'},\n", "  {'generated_text': 'Hundreds of thousands of organizations in virtually '\n", "                     'every field are using GIS to make maps that are easily '\n", "                     'portable to anyone, anytime.\\n'\n", "                     '\\n'\n", "                     'The GIS'}]]\n"]}], "source": ["text_list = [\"Hundreds of thousands of organizations in virtually every field are using GIS to make maps that\"]\n", "pprint(text_gen.generate_text(text_list, num_return_sequences=2, max_length=30))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## FillMask"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The **FillMask** model can be used to suggest a missing token/word in a sentence. These models have been trained with a [Masked Language Modeling (MLM)](https://huggingface.co/transformers/task_summary.html#masked-language-modeling) objective, a category that includes the bi-directional models in the library. Visit [this link](https://huggingface.co/models?pipeline_tag=fill-mask) to learn more about the available models for the **fill-mask** task.\n", "\n", "Sample usage to get suggestions for a missing word in a sentence"]}, {"cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": ["fill_mask = FillMask()"]}, {"cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "    <div>\n", "        <style>\n", "            /* Turns off some styling */\n", "            progress {\n", "                /* gets rid of default border in Firefox and Opera. */\n", "                border: none;\n", "                /* Needs to be in here for Safari polyfill so background images work as expected. 
*/\n", "                background-size: auto;\n", "            }\n", "            .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n", "                background: #F44336;\n", "            }\n", "        </style>\n", "      <progress value='1' class='' max='1' style='width:300px; height:20px; vertical-align: middle;'></progress>\n", "      100.00% [1/1 00:00<00:00]\n", "    </div>\n", "    "], "text/plain": ["<IPython.core.display.HTML object>"]}, "metadata": {}, "output_type": "display_data"}, {"data": {"text/plain": ["[[{'sequence': 'This deep learning model is used to extract building footprints from high resolution satellite imagery.',\n", "   'score': 0.6854187846183777,\n", "   'token_str': 'imagery'},\n", "  {'sequence': 'This deep learning model is used to extract building footprints from high resolution satellite images.',\n", "   'score': 0.24048534035682678,\n", "   'token_str': 'images'},\n", "  {'sequence': 'This deep learning model is used to extract building footprints from high resolution satellite data.',\n", "   'score': 0.010344304144382477,\n", "   'token_str': 'data'},\n", "  {'sequence': 'This deep learning model is used to extract building footprints from high resolution satellite photos.',\n", "   'score': 0.00868541095405817,\n", "   'token_str': 'photos'}]]"]}, "execution_count": 18, "metadata": {}, "output_type": "execute_result"}], "source": ["# original text - This deep learning model is used to extract building footprints from high resolution satellite imagery\n", "\n", "text_list = [\"This deep learning model is used to extract building footprints from high resolution satellite __.\"]\n", "\n", "fill_mask.predict_token(text_list, num_suggestions=4)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["# References"]}, {"cell_type": "markdown", "metadata": {}, "source": ["[1] [BERT Paper](https://arxiv.org/pdf/1810.04805.pdf)\n", "\n", "[2] [Summary of the 
models](https://huggingface.co/transformers/summary.html)\n", "\n", "[3] [The Annotated Transformer](http://nlp.seas.harvard.edu/2018/04/03/attention.html)\n", "\n", "[4] [Text Summarization with Machine Learning](https://medium.com/luisfredgs/automatic-text-summarization-with-machine-learning-an-overview-68ded5717a25)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9"}}, "nbformat": 4, "nbformat_minor": 4}