{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "iFyUOrS1fczL"
   },
   "source": [
    "# Better NLP\n",
    "\n",
     "This is a wrapper program/library that encapsulates several NLP libraries that are popular among the AI and ML communities.\n",
    "\n",
    "Examples have been used to illustrate the usage as much as possible. Not all the APIs of the underlying libraries have been covered.\n",
    "\n",
     "The idea is to keep the API language as high-level as possible, so it's easier to use and stays human-readable.\n",
    "\n",
    "Libraries / frameworks covered:\n",
    "\n",
    "- nltk [site](http://www.nltk.org/) | [docs](https://buildmedia.readthedocs.org/media/pdf/nltk/latest/nltk.pdf)\n",
    "- numpy [site](https://www.numpy.org/) | [docs](https://docs.scipy.org/doc/)\n",
    "- networkx [site](https://networkx.github.io/) | [docs](https://networkx.github.io/documentation/stable/index.html)\n",
    "\n",
    "See [https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master/examples/better-nlp](https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master/examples/better-nlp) for more details.\n",
    "\n",
     "### This notebook demonstrates the following NLP features / functionalities, using the above-mentioned libraries\n",
    "\n",
    "- Cosine similarity summarisation technique (extractive summarisation)\n",
    "- Vertex ranking algorithm summarisation technique\n",
    "- Build a simple text summarisation tool using NLTK\n",
    "- Summarisation 4 (TODO)\n",
    "- Summarisation 5 (TODO)\n",
    "\n",
     "_Summarisation can be defined as the task of producing a concise and fluent summary while preserving key information and overall meaning._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "IMTAIhz53w8G"
   },
   "source": [
    "### Resources\n",
    "\n",
    "- [Understand Text Summarization and create your own summarizer in python](https://towardsdatascience.com/understand-text-summarization-and-create-your-own-summarizer-in-python-b26a9f09fc70) or [An Introduction to Text Summarization using the TextRank Algorithm (with Python implementation)](https://www.analyticsvidhya.com/blog/2018/11/introduction-text-summarization-textrank-python/)\n",
    "- [Beyond bag of words: Using PyTextRank to find Phrases and Summarize text](https://medium.com/@aneesha/beyond-bag-of-words-using-pytextrank-to-find-phrases-and-summarize-text-f736fa3773c5)\n",
    "- [Build a simple text summarisation tool using NLTK](https://medium.com/@wilamelima/build-a-simple-text-summarisation-tool-using-nltk-ff0984fedb4f)\n",
    "- [Summarise Text with TFIDF in Python 1/2](https://towardsdatascience.com/tfidf-for-piece-of-text-in-python-43feccaa74f8) and [Summarise Text with TFIDF in Python 2/2](https://medium.com/@shivangisareen/summarise-text-with-tfidf-in-python-bc7ca10d3284)\n",
    "- [How to Make a Text Summarizer - Intro to Deep Learning #10 by Siraj Raval](https://www.youtube.com/watch?v=ogrJaOIuBx4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Lre8GErufczN"
   },
   "source": [
    "#### Setup and installation ( optional )\n",
    "\n",
     "If this notebook is running in a local environment (Linux/macOS) or a _Google Colab_ environment and it does not have the necessary dependencies installed, please execute the steps in the next section.\n",
    "\n",
    "Otherwise, please SKIP to the **Install Spacy model ( NOT optional )** section."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 5927
    },
    "colab_type": "code",
    "id": "QJuCUOMOfczO",
    "outputId": "018e9102-de25-4fbe-b5b3-919c58644f00"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "%%bash\n",
    "\n",
    "apt-get install apt-utils dselect dpkg\n",
    "\n",
    "echo \"OSTYPE=$OSTYPE\"\n",
    "if [[ \"$OSTYPE\" == \"cygwin\" ]] || [[ \"$OSTYPE\" == \"msys\" ]] ; then\n",
     "    echo \"Windows or Windows-like environment detected; this script is untested and may not work.\"\n",
     "    echo \"Try installing the components mentioned in the install-[ostype].sh scripts manually.\"\n",
     "    echo \"Or try running under CYGWIN or git-bash.\"\n",
     "    echo \"If successfully installed, please contribute back with the solution via a pull request, to https://github.com/neomatrix369/awesome-ai-ml-dl/\"\n",
     "    echo \"Please give the file a good name, e.g. install-windows.sh or install-windows.bat, depending on what kind of script you end up writing\"\n",
    "    exit 0\n",
    "elif [[ \"$OSTYPE\" == \"linux-gnu\" ]] || [[ \"$OSTYPE\" == \"linux\" ]]; then\n",
    "    TARGET_OS=\"linux\"\n",
    "else\n",
    "    TARGET_OS=\"macos\"\n",
    "fi\n",
    "\n",
    "if [[ -e ../../library/org/neomatrix369 ]]; then\n",
    "  echo \"Library source found\"\n",
    "  \n",
    "  cd ../../build\n",
    "  \n",
    "  echo \"Detected OS: ${TARGET_OS}\"\n",
    "  ./install-${TARGET_OS}.sh || true\n",
    "else\n",
    "  if [[ -e awesome-ai-ml-dl/examples/better-nlp/library ]]; then\n",
    "     echo \"Library source found\"\n",
    "  else\n",
    "     git clone \"https://github.com/neomatrix369/awesome-ai-ml-dl\"\n",
    "  fi\n",
    "\n",
    "  echo \"Library source exists\"\n",
    "  cd awesome-ai-ml-dl/examples/better-nlp/build\n",
    "\n",
    "  echo \"Detected OS: ${TARGET_OS}\"\n",
    "  ./install-${TARGET_OS}.sh || true \n",
    "fi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vPLdwvt63w8R"
   },
   "source": [
    "#### Install Spacy model ( NOT optional )\n",
    "\n",
     "Install the large English language model for spaCy; it is needed for the examples in this notebook.\n",
    "\n",
     "**Note:** from observation, it appears that the spaCy model should be installed towards the end of the installation process; this avoids errors when running programs that use the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "-dJrJ54a3w8S",
    "outputId": "3da4300b-89a8-43e3-e989-fa7859ecdfc8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting en_core_web_lg==2.2.5\n",
      "  Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-2.2.5/en_core_web_lg-2.2.5.tar.gz (827.9 MB)\n",
      "Requirement already satisfied: spacy>=2.2.2 in /usr/local/lib/python3.7/site-packages (from en_core_web_lg==2.2.5) (2.2.3)\n",
      "Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.1.3)\n",
      "Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.0)\n",
      "Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.18.1)\n",
      "Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (3.0.2)\n",
      "Requirement already satisfied: srsly<1.1.0,>=0.1.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.2)\n",
      "Requirement already satisfied: setuptools in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (45.2.0)\n",
      "Requirement already satisfied: blis<0.5.0,>=0.4.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (0.4.1)\n",
      "Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (2.23.0)\n",
      "Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (1.0.2)\n",
      "Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (2.0.3)\n",
      "Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (0.6.0)\n",
      "Requirement already satisfied: thinc<7.4.0,>=7.3.0 in /usr/local/lib/python3.7/site-packages (from spacy>=2.2.2->en_core_web_lg==2.2.5) (7.3.1)\n",
      "Requirement already satisfied: importlib-metadata>=0.20; python_version < \"3.8\" in /usr/local/lib/python3.7/site-packages (from catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->en_core_web_lg==2.2.5) (1.5.0)\n",
      "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (1.25.8)\n",
      "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (2.9)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (2019.11.28)\n",
      "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests<3.0.0,>=2.13.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (3.0.4)\n",
      "Requirement already satisfied: tqdm<5.0.0,>=4.10.0 in /usr/local/lib/python3.7/site-packages (from thinc<7.4.0,>=7.3.0->spacy>=2.2.2->en_core_web_lg==2.2.5) (4.43.0)\n",
      "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/site-packages (from importlib-metadata>=0.20; python_version < \"3.8\"->catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->en_core_web_lg==2.2.5) (3.1.0)\n",
      "Building wheels for collected packages: en-core-web-lg\n",
      "  Building wheel for en-core-web-lg (setup.py): started\n",
      "  Building wheel for en-core-web-lg (setup.py): still running...\n",
      "  Building wheel for en-core-web-lg (setup.py): finished with status 'done'\n",
      "  Created wheel for en-core-web-lg: filename=en_core_web_lg-2.2.5-py3-none-any.whl size=829180943 sha256=2a41b7485c117adaafa311ac17cefb1d1be68687597fc9795c73bd5bdce22798\n",
      "  Stored in directory: /tmp/pip-ephem-wheel-cache-43djrlnp/wheels/11/95/ba/2c36cc368c0bd339b44a791c2c1881a1fb714b78c29a4cb8f5\n",
      "Successfully built en-core-web-lg\n",
      "Installing collected packages: en-core-web-lg\n",
      "Successfully installed en-core-web-lg-2.2.5\n",
      "\u001b[38;5;2m✔ Download and installation successful\u001b[0m\n",
      "You can now load the model via spacy.load('en_core_web_lg')\n",
      "\u001b[38;5;2m✔ Linking successful\u001b[0m\n",
      "/usr/local/lib/python3.7/site-packages/en_core_web_lg -->\n",
      "/usr/local/lib/python3.7/site-packages/spacy/data/en\n",
      "You can now load the model via spacy.load('en')\n",
      "CPU times: user 6.3 ms, sys: 12.8 ms, total: 19.1 ms\n",
      "Wall time: 4min 59s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "%%bash\n",
    "\n",
    "python -m spacy download en_core_web_lg\n",
    "python -m spacy link en_core_web_lg en || true"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "kwXgEdM8oeUv"
   },
   "source": [
    "## Examples of various summarisation methods"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "AX1pZlKofczb"
   },
   "source": [
    "### 1. Cosine similarity summarisation technique (extractive summarisation)\n",
    "\n",
     "**Abstractive Summarization:** Abstractive methods select words based on semantic understanding, even words that did not appear in the source documents. They aim to present the important material in a new way, interpreting and examining the text with advanced natural language techniques in order to generate a new, shorter text that conveys the most critical information from the original.\n",
    "\n",
    "**Flow:** Input document → understand context → semantics → create own summary\n",
    "\n",
    "**Extractive Summarization:** Extractive methods attempt to summarize articles by selecting a subset of words that retain the most important points.\n",
    "\n",
    "**Flow:** Input document → sentences similarity → weight sentences → select sentences with higher rank\n",
    "\n",
     "**Cosine similarity** is a measure of similarity between two non-zero vectors of an inner product space: the cosine of the angle between them. The angle is 0 when the sentences are similar and tends towards 90 as they begin to differ.\n",
    "\n",
     "Inspired by Praveen Dubey, the author of https://towardsdatascience.com/understand-text-summarization-and-create-your-own-summarizer-in-python-b26a9f09fc70\n",
    "\n",
     "or see [An Introduction to Text Summarization using the TextRank Algorithm (with Python implementation)](https://www.analyticsvidhya.com/blog/2018/11/introduction-text-summarization-textrank-python/)"
   ]
  },
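  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the cosine-similarity step concrete, here is a minimal, self-contained sketch (an illustration, not the `BetterNLP` implementation) that builds word-count vectors for two sentences and computes their cosine similarity with `numpy`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sentence_similarity(sent1, sent2):\n",
    "    # Build a shared vocabulary and word-count vectors for both sentences\n",
    "    words1 = [w.lower() for w in sent1.split()]\n",
    "    words2 = [w.lower() for w in sent2.split()]\n",
    "    vocab = sorted(set(words1 + words2))\n",
    "    v1 = np.array([words1.count(w) for w in vocab], dtype=float)\n",
    "    v2 = np.array([words2.count(w) for w in vocab], dtype=float)\n",
    "    # Cosine similarity = cos(angle) between the two count vectors\n",
    "    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))\n",
    "\n",
    "sentence_similarity(\"AI will transform industry\", \"AI will transform education\")  # 0.75\n",
    "```\n",
    "\n",
    "Identical sentences score 1.0 (angle 0); sentences sharing no words score 0.0 (angle 90)."
   ]
  },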
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Yjan7P5_3w8Z"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "This version of Python is 64 bits.\n",
      "This version of Python is 64 bits.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[nltk_data] Downloading package stopwords to /root/nltk_data...\n",
      "[nltk_data]   Unzipping corpora/stopwords.zip.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "This version of Python is 64 bits.\n",
      "This version of Python is 64 bits.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[nltk_data] Downloading package punkt to /root/nltk_data...\n",
      "[nltk_data]   Unzipping tokenizers/punkt.zip.\n",
      "[nltk_data] Downloading package wordnet to /root/nltk_data...\n",
      "[nltk_data]   Unzipping corpora/wordnet.zip.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "This version of Python is 64 bits.\n"
     ]
    }
   ],
   "source": [
    "import sys\n",
    "sys.path.insert(0, '../../library')\n",
    "sys.path.insert(0, './awesome-ai-ml-dl/examples/better-nlp/library')\n",
    "\n",
    "from org.neomatrix369.better_nlp import BetterNLP\n",
    "\n",
    "import pprint\n",
    "pp = pprint.PrettyPrinter(indent=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "zpjkLRHe3w8c"
   },
   "outputs": [],
   "source": [
    "betterNLP = BetterNLP() ### do not re-run this unless you wish to re-initialise the object\n",
     "generic_text=\"\"\"In an attempt to build an AI-ready workforce, SmartSoft Corp. announced Smart Colab Program which has been launched to empower the next generation of students with AI-ready skills. Envisioned as a three-year collaborative program, Smart Colab Program will support around 100 institutions with AI infrastructure, course content and curriculum, developer support, development tools and give students access to cloud and AI services. As part of the program, the Palo Alto giant which wants to expand its reach and is planning to build a strong developer ecosystem in India with the program will set up the core AI infrastructure and IoT Hub for the selected campuses. The company will provide AI development tools and AI services such as SmartSoft Corp. Cognitive Services, Bot Services and Machine Learning Services. According to Mark Smith, Country AI Manager, SmartSoft Corp. India, said, \"With AI being the defining technology of our time, it is transforming lives and industry and the jobs of tomorrow will require a different skillset. This will require more collaborations and training and working with AI. That’s why it has become more critical than ever for educational institutions to integrate new cloud and AI technologies. The program is an attempt to ramp up the institutional set-up and build capabilities among the educators to educate the workforce of tomorrow.\" The program aims to build up the cognitive skills and in-depth understanding of developing intelligent cloud connected solutions for applications across industry. Earlier in April this year, the company announced SmartSoft Corp. Advanced Program In AI as a learning track open to the public. The program was developed to provide job ready skills to programmers who wanted to hone their skills in AI and data science with a series of online courses which featured hands-on labs and expert instructors as well.\n",
     "This program also included developer-focused AI school that provided a bunch of assets to help build AI skills.\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "-vkYIGs13w8f",
    "outputId": "083398af-ca33-4c1c-f892-e1567de17931"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n",
      "summarisation_processing_time_in_secs= 0.05715775489807129\n",
      "('summarised_text=As part of the program, the Palo Alto giant which wants to '\n",
      " 'expand its reach and is planning to build a strong developer ecosystem in '\n",
      " 'India with the program will set up the core AI infrastructure and IoT Hub '\n",
      " 'for the selected campuses. The company will provide AI development tools and '\n",
      " 'AI services such as SmartSoft Corp. The program is an attempt to ramp up the '\n",
      " 'institutional set-up and build capabilities among the educators to educate '\n",
      " 'the workforce of tomorrow.\" The program aims to build up the cognitive '\n",
      " 'skills and in-depth understanding of developing intelligent cloud connected '\n",
      " 'solutions for applications across industry. The program was developed to '\n",
      " 'provide job ready skills to programmers who wanted to hone their skills in '\n",
      " 'AI and data science with a series of online courses which featured hands-on '\n",
      " 'labs and expert instructors as well. Envisioned as a three-year '\n",
      " 'collaborative program, Smart Colab Program will support around 100 '\n",
      " 'institutions with AI infrastructure, course content and curriculum, '\n",
      " 'developer support, development tools and give students access to cloud and '\n",
      " 'AI services')\n",
      "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n"
     ]
    }
   ],
   "source": [
    "summarised_result = betterNLP.summarise(generic_text)\n",
    "\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")\n",
    "print(\"summarisation_processing_time_in_secs=\",summarised_result['summarisation_processing_time_in_secs'])\n",
    "pp.pprint(\"summarised_text=\" + summarised_result['summarised_text'])\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ranked_sentences=\n",
      "[   (   0.1005088411163724,\n",
      "        [   'As',\n",
      "            'part',\n",
      "            'of',\n",
      "            'the',\n",
      "            'program,',\n",
      "            'the',\n",
      "            'Palo',\n",
      "            'Alto',\n",
      "            'giant',\n",
      "            'which',\n",
      "            'wants',\n",
      "            'to',\n",
      "            'expand',\n",
      "            'its',\n",
      "            'reach',\n",
      "            'and',\n",
      "            'is',\n",
      "            'planning',\n",
      "            'to',\n",
      "            'build',\n",
      "            'a',\n",
      "            'strong',\n",
      "            'developer',\n",
      "            'ecosystem',\n",
      "            'in',\n",
      "            'India',\n",
      "            'with',\n",
      "            'the',\n",
      "            'program',\n",
      "            'will',\n",
      "            'set',\n",
      "            'up',\n",
      "            'the',\n",
      "            'core',\n",
      "            'AI',\n",
      "            'infrastructure',\n",
      "            'and',\n",
      "            'IoT',\n",
      "            'Hub',\n",
      "            'for',\n",
      "            'the',\n",
      "            'selected',\n",
      "            'campuses']),\n",
      "    (   0.0927534462515247,\n",
      "        [   'The',\n",
      "            'company',\n",
      "            'will',\n",
      "            'provide',\n",
      "            'AI',\n",
      "            'development',\n",
      "            'tools',\n",
      "            'and',\n",
      "            'AI',\n",
      "            'services',\n",
      "            'such',\n",
      "            'as',\n",
      "            'SmartSoft',\n",
      "            'Corp']),\n",
      "    (   0.09107393553087705,\n",
      "        [   'The',\n",
      "            'program',\n",
      "            'is',\n",
      "            'an',\n",
      "            'attempt',\n",
      "            'to',\n",
      "            'ramp',\n",
      "            'up',\n",
      "            'the',\n",
      "            'institutional',\n",
      "            'set-up',\n",
      "            'and',\n",
      "            'build',\n",
      "            'capabilities',\n",
      "            'among',\n",
      "            'the',\n",
      "            'educators',\n",
      "            'to',\n",
      "            'educate',\n",
      "            'the',\n",
      "            'workforce',\n",
      "            'of',\n",
      "            'tomorrow.\"',\n",
      "            'The',\n",
      "            'program',\n",
      "            'aims',\n",
      "            'to',\n",
      "            'build',\n",
      "            'up',\n",
      "            'the',\n",
      "            'cognitive',\n",
      "            'skills',\n",
      "            'and',\n",
      "            'in-depth',\n",
      "            'understanding',\n",
      "            'of',\n",
      "            'developing',\n",
      "            'intelligent',\n",
      "            'cloud',\n",
      "            'connected',\n",
      "            'solutions',\n",
      "            'for',\n",
      "            'applications',\n",
      "            'across',\n",
      "            'industry']),\n",
      "    (   0.0891243848923585,\n",
      "        [   'The',\n",
      "            'program',\n",
      "            'was',\n",
      "            'developed',\n",
      "            'to',\n",
      "            'provide',\n",
      "            'job',\n",
      "            'ready',\n",
      "            'skills',\n",
      "            'to',\n",
      "            'programmers',\n",
      "            'who',\n",
      "            'wanted',\n",
      "            'to',\n",
      "            'hone',\n",
      "            'their',\n",
      "            'skills',\n",
      "            'in',\n",
      "            'AI',\n",
      "            'and',\n",
      "            'data',\n",
      "            'science',\n",
      "            'with',\n",
      "            'a',\n",
      "            'series',\n",
      "            'of',\n",
      "            'online',\n",
      "            'courses',\n",
      "            'which',\n",
      "            'featured',\n",
      "            'hands-on',\n",
      "            'labs',\n",
      "            'and',\n",
      "            'expert',\n",
      "            'instructors',\n",
      "            'as',\n",
      "            'well']),\n",
      "    (   0.086410446263451,\n",
      "        [   'Envisioned',\n",
      "            'as',\n",
      "            'a',\n",
      "            'three-year',\n",
      "            'collaborative',\n",
      "            'program,',\n",
      "            'Smart',\n",
      "            'Colab',\n",
      "            'Program',\n",
      "            'will',\n",
      "            'support',\n",
      "            'around',\n",
      "            '100',\n",
      "            'institutions',\n",
      "            'with',\n",
      "            'AI',\n",
      "            'infrastructure,',\n",
      "            'course',\n",
      "            'content',\n",
      "            'and',\n",
      "            'curriculum,',\n",
      "            'developer',\n",
      "            'support,',\n",
      "            'development',\n",
      "            'tools',\n",
      "            'and',\n",
      "            'give',\n",
      "            'students',\n",
      "            'access',\n",
      "            'to',\n",
      "            'cloud',\n",
      "            'and',\n",
      "            'AI',\n",
      "            'services']),\n",
      "    (   0.07940482690749705,\n",
      "        [   'Advanced',\n",
      "            'Program',\n",
      "            'In',\n",
      "            'AI',\n",
      "            'as',\n",
      "            'a',\n",
      "            'learning',\n",
      "            'track',\n",
      "            'open',\n",
      "            'to',\n",
      "            'the',\n",
      "            'public']),\n",
      "    (   0.07552190239284035,\n",
      "        [   'India,',\n",
      "            'said,',\n",
      "            '\"With',\n",
      "            'AI',\n",
      "            'being',\n",
      "            'the',\n",
      "            'defining',\n",
      "            'technology',\n",
      "            'of',\n",
      "            'our',\n",
      "            'time,',\n",
      "            'it',\n",
      "            'is',\n",
      "            'transforming',\n",
      "            'lives',\n",
      "            'and',\n",
      "            'industry',\n",
      "            'and',\n",
      "            'the',\n",
      "            'jobs',\n",
      "            'of',\n",
      "            'tomorrow',\n",
      "            'will',\n",
      "            'require',\n",
      "            'a',\n",
      "            'different',\n",
      "            'skillset']),\n",
      "    (   0.06831027977170921,\n",
      "        [   'This',\n",
      "            'will',\n",
      "            'require',\n",
      "            'more',\n",
      "            'collaborations',\n",
      "            'and',\n",
      "            'training',\n",
      "            'and',\n",
      "            'working',\n",
      "            'with',\n",
      "            'AI']),\n",
      "    (   0.061347193209250084,\n",
      "        [   'announced',\n",
      "            'Smart',\n",
      "            'Colab',\n",
      "            'Program',\n",
      "            'which',\n",
      "            'has',\n",
      "            'been',\n",
      "            'launched',\n",
      "            'to',\n",
      "            'empower',\n",
      "            'the',\n",
      "            'next',\n",
      "            'generation',\n",
      "            'of',\n",
      "            'students',\n",
      "            'with',\n",
      "            'AI-ready',\n",
      "            'skills']),\n",
      "    (   0.057387957742993136,\n",
      "        [   'According',\n",
      "            'to',\n",
      "            'Mark',\n",
      "            'Smith,',\n",
      "            'Country',\n",
      "            'AI',\n",
      "            'Manager,',\n",
      "            'SmartSoft',\n",
      "            'Corp']),\n",
      "    (   0.05583463297815709,\n",
      "        [   'That’s',\n",
      "            'why',\n",
      "            'it',\n",
      "            'has',\n",
      "            'become',\n",
      "            'more',\n",
      "            'critical',\n",
      "            'than',\n",
      "            'ever',\n",
      "            'for',\n",
      "            'educational',\n",
      "            'institutions',\n",
      "            'to',\n",
      "            'integrate',\n",
      "            'new',\n",
      "            'cloud',\n",
      "            'and',\n",
      "            'AI',\n",
      "            'technologies']),\n",
      "    (   0.05409704750762544,\n",
      "        [   'Earlier',\n",
      "            'in',\n",
      "            'April',\n",
      "            'this',\n",
      "            'year,',\n",
      "            'the',\n",
      "            'company',\n",
      "            'announced',\n",
      "            'SmartSoft',\n",
      "            'Corp']),\n",
      "    (   0.0499849403541697,\n",
      "        [   'In',\n",
      "            'an',\n",
      "            'attempt',\n",
      "            'to',\n",
      "            'build',\n",
      "            'an',\n",
      "            'AI-ready',\n",
      "            'workforce,',\n",
      "            'SmartSoft',\n",
      "            'Corp']),\n",
      "    (   0.03824016508117425,\n",
      "        [   'Cognitive',\n",
      "            'Services,',\n",
      "            'Bot',\n",
      "            'Services',\n",
      "            'and',\n",
      "            'Machine',\n",
      "            'Learning',\n",
      "            'Services'])]\n"
     ]
    }
   ],
   "source": [
    "print(\"ranked_sentences=\") \n",
    "pp.pprint(summarised_result['ranked_sentences'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "T7A3p05vfcz0"
   },
   "source": [
    "### 2. Vertex ranking algorithm summarisation technique\n",
    "\n",
    "Using PyTextRank to find Phrases and Summarize text: Multi-word Phrase Extraction and Sentence Extraction for Summarization\n",
    "\n",
    "Inspired by the author of https://medium.com/@aneesha/beyond-bag-of-words-using-pytextrank-to-find-phrases-and-summarize-text-f736fa3773c5 \n",
    "(Notebook: https://github.com/DerwenAI/pytextrank/blob/master/example.ipynb)\n",
    "\n",
    "Another resource to take a look at: https://www.analyticsvidhya.com/blog/2018/11/introduction-text-summarization-textrank-python/"
   ]
  },
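  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The vertex-ranking idea can be sketched independently of PyTextRank: treat each sentence as a graph vertex, weight edges by pairwise word overlap, and rank the vertices with PageRank via `networkx`. A minimal illustration under those assumptions (not the library's implementation):\n",
    "\n",
    "```python\n",
    "import networkx as nx\n",
    "\n",
    "sentences = [\n",
    "    \"AI will transform industry and education\",\n",
    "    \"The program gives students access to AI services\",\n",
    "    \"The cloud hosts the AI services for students\",\n",
    "]\n",
    "\n",
    "def overlap_similarity(s1, s2):\n",
    "    # Crude similarity: shared words, normalised by total vocabulary size\n",
    "    w1, w2 = set(s1.lower().split()), set(s2.lower().split())\n",
    "    return len(w1 & w2) / len(w1 | w2)\n",
    "\n",
    "graph = nx.Graph()\n",
    "for i in range(len(sentences)):\n",
    "    for j in range(i + 1, len(sentences)):\n",
    "        weight = overlap_similarity(sentences[i], sentences[j])\n",
    "        if weight > 0:\n",
    "            graph.add_edge(i, j, weight=weight)\n",
    "\n",
    "# A higher PageRank score marks a more central sentence, i.e. a better summary candidate\n",
    "scores = nx.pagerank(graph, weight=\"weight\")\n",
    "ranked = sorted(scores, key=scores.get, reverse=True)\n",
    "print([sentences[i] for i in ranked])\n",
    "```"
   ]
  },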
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "R0by6FnE3w8n"
   },
   "outputs": [],
   "source": [
    "betterNLP = BetterNLP() ### do not re-run this unless you wish to re-initialise the object"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "w44pghUL3w8r",
    "outputId": "885ba416-64b3-48ef-b27c-9b66f2ca069f"
   },
   "outputs": [
    {
     "ename": "AttributeError",
     "evalue": "module 'pytextrank' has no attribute 'parse_doc'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mAttributeError\u001b[0m                            Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-7-6fee89998c79>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[0mf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mclose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 7\u001b[0;31m \u001b[0msummarised_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbetterNLP\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msummarise\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msource_file\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmethod\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"pytextrank\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      9\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/better-nlp/library/org/neomatrix369/better_nlp.py\u001b[0m in \u001b[0;36msummarise\u001b[0;34m(self, text, method, top_n_sentences)\u001b[0m\n\u001b[1;32m    467\u001b[0m             \u001b[0mresult\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"summarised_text\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"token_ranks\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    468\u001b[0m                      \u001b[0mresult\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"key_phrases\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"graph\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 469\u001b[0;31m                         \u001b[0msummariser\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate_summary\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtop_n_sentences\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    470\u001b[0m         \u001b[0;32melif\u001b[0m \u001b[0mmethod\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"tfidf\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    471\u001b[0m             \u001b[0msummariser\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mSummariserTFIDF\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/better-nlp/library/org/neomatrix369/summariser_pytextrank.py\u001b[0m in \u001b[0;36mgenerate_summary\u001b[0;34m(self, text_file, top_n_sentences)\u001b[0m\n\u001b[1;32m    207\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    208\u001b[0m         \u001b[0;31m# Stage 1: Perform statistical parsing/tagging on a document in JSON format\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 209\u001b[0;31m         \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mperform_statistical_parsing_tagging\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext_file\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mparagraph_output\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    210\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    211\u001b[0m         \u001b[0;31m# Stage 2: Collect and normalize the key phrases from a parsed document\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/better-nlp/library/org/neomatrix369/summariser_pytextrank.py\u001b[0m in \u001b[0;36mperform_statistical_parsing_tagging\u001b[0;34m(self, text_file, paragraph_output)\u001b[0m\n\u001b[1;32m     75\u001b[0m         \"\"\"\n\u001b[1;32m     76\u001b[0m         \u001b[0;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mparagraph_output\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'w'\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtemp_file\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 77\u001b[0;31m             \u001b[0;32mfor\u001b[0m \u001b[0mparagraph\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mpytextrank\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mparse_doc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpytextrank\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjson_iter\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext_file\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     78\u001b[0m                 \u001b[0mtemp_file\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwrite\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"%s\\n\"\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0mpytextrank\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpretty_print\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mparagraph\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_asdict\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     79\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mAttributeError\u001b[0m: module 'pytextrank' has no attribute 'parse_doc'"
     ]
    }
   ],
   "source": [
    "source_file='source.json'\n",
    "source_json_content='{\"id\":\"777\", \"text\":\"In an attempt to build an AI-ready workforce, SmartSoft Corp. announced Smart Colab Program which has been launched to empower the next generation of students with AI-ready skills. Envisioned as a three-year collaborative program, Smart Colab Program will support around 100 institutions with AI infrastructure, course content and curriculum, developer support, development tools and give students access to cloud and AI services. As part of the program, the Palo Alto giant which wants to expand its reach and is planning to build a strong developer ecosystem in India with the program will set up the core AI infrastructure and IoT Hub for the selected campuses. The company will provide AI development tools and AI services such as SmartSoft Corp. Cognitive Services, Bot Services and Machine Learning Services. According to Mark Smith, Country AI Manager, SmartSoft Corp. India, said, ''With AI being the defining technology of our time, it is transforming lives and industry and the jobs of tomorrow will require a different skillset. This will require more collaborations and training and working with AI. That''s why it has become more critical than ever for educational institutions to integrate new cloud and AI technologies. The program is an attempt to ramp up the institutional set-up and build capabilities among the educators to educate the workforce of tomorrow.'' The program aims to build up the cognitive skills and in-depth understanding of developing intelligent cloud connected solutions for applications across industry. Earlier in April this year, the company announced SmartSoft Corp. Advanced Program In AI as a learning track open to the public. The program was developed to provide job ready skills to programmers who wanted to hone their skills in AI and data science with a series of online courses which featured hands-on labs and expert instructors as well. This program also included developer-focused AI school that provided a bunch of assets to help build AI skills.\"}'\n",
    "f = open(source_file, 'w')\n",
    "f.write(\"%s\" % source_json_content)\n",
    "f.close()\n",
    "\n",
    "summarised_result = betterNLP.summarise(source_file, method=\"pytextrank\")\n",
    "\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")\n",
    "print(\"summarisation_processing_time_in_secs=\",summarised_result['summarisation_processing_time_in_secs'])\n",
    "print(\"summarised_text=\",summarised_result['summarised_text'])\n",
    "print(\"token_ranks=\",summarised_result['token_ranks'])\n",
    "print(\"key_phrases=\",summarised_result['key_phrases'])\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")\n",
    "betterNLP.show_graph(summarised_result[\"graph\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install --upgrade 'pytextrank>=2.0.1'  # quote the spec so the shell does not treat > as a redirection"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip list | grep pytext"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytextrank\n",
    "import sys\n",
    "\n",
    "path_stage0 = \"dat/mih.json\"\n",
    "path_stage1 = \"o1.json\"\n",
    "\n",
    "with open(path_stage1, 'w') as f:\n",
    "    for graf in pytextrank.parse_doc(pytextrank.json_iter(path_stage0)):\n",
    "        f.write(\"%s\\n\" % pytextrank.pretty_print(graf._asdict()))\n",
    "        # to view output in this notebook\n",
    "        print(pytextrank.pretty_print(graf._asdict()))"
   ]
  },
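  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Note: `parse_doc` and `json_iter` belong to the pytextrank 1.x API, which is why the earlier cell fails with an `AttributeError` once a 2.x release is installed. From 2.x onwards, pytextrank runs as a spaCy pipeline component instead. Below is a minimal sketch of that newer usage, assuming spaCy 3.x, pytextrank 3.x and the `en_core_web_sm` model are installed:_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import spacy\n",
    "import pytextrank\n",
    "\n",
    "nlp = spacy.load('en_core_web_sm')\n",
    "nlp.add_pipe('textrank')  # registers pytextrank as a spaCy pipeline component\n",
    "\n",
    "doc = nlp('Summarisation produces a concise and fluent summary while preserving key information and overall meaning.')\n",
    "for phrase in doc._.phrases[:3]:  # top-ranked key phrases\n",
    "    print(phrase.text, phrase.rank)"
   ]
  },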
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "s2kyTI6bfcz7"
   },
   "source": [
    "### 3. Build a simple text summarisation tool using NLTK\n",
    "\n",
    "Inspired by Wilame Lima Vallantin, the author of [Build a simple text summarisation tool using NLTK](https://medium.com/@wilamelima/build-a-simple-text-summarisation-tool-using-nltk-ff0984fedb4f).\n",
    "\n",
    "We break the text into sentences, tokenise the words and remove stop-words, then calculate word frequencies to determine how important each word is within the corpus, using the TF-IDF technique."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "MOqeUNkg3w82",
    "outputId": "bfe3346d-713d-4114-b0e8-1a8539895072"
   },
   "outputs": [],
   "source": [
    "summarised_result = betterNLP.summarise(generic_text, method=\"tfidf\")\n",
    "\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")\n",
    "print(\"summarisation_processing_time_in_secs=\",summarised_result['summarisation_processing_time_in_secs'])\n",
    "print(\"summarised_text=\")\n",
    "pp.pprint(summarised_result['summarised_text'])\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"important_words=\")\n",
    "pp.pprint(summarised_result['important_words'])"
   ]
  },
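  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_Under the hood, a frequency-based NLTK summariser of this kind can be sketched roughly as below. This is a simplified stand-in, not the library's actual implementation, and it assumes the NLTK `punkt` and `stopwords` data have been downloaded._"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.corpus import stopwords\n",
    "from nltk.tokenize import sent_tokenize, word_tokenize\n",
    "\n",
    "text = ('Text summarisation produces a concise summary. '\n",
    "        'A good summary preserves the key information of the text. '\n",
    "        'Summarisation tools often score sentences by word frequency. '\n",
    "        'Frequent words are a signal of important sentences.')\n",
    "stop_words = set(stopwords.words('english'))\n",
    "\n",
    "# word frequencies over non-stop-word tokens\n",
    "freq = {}\n",
    "for word in word_tokenize(text.lower()):\n",
    "    if word.isalnum() and word not in stop_words:\n",
    "        freq[word] = freq.get(word, 0) + 1\n",
    "\n",
    "# score each sentence by the frequencies of the words it contains\n",
    "scores = {}\n",
    "for sentence in sent_tokenize(text):\n",
    "    for word in word_tokenize(sentence.lower()):\n",
    "        if word in freq:\n",
    "            scores[sentence] = scores.get(sentence, 0) + freq[word]\n",
    "\n",
    "# keep the two highest-scoring sentences as the summary\n",
    "summary = ' '.join(sorted(scores, key=scores.get, reverse=True)[:2])\n",
    "print(summary)"
   ]
  },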
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "4jQLb4Sqfc0E"
   },
   "source": [
    "### 4. Summarising text in python using a variation of TF-IDF method\n",
    "\n",
    "\n",
    "Inspired by Shivangi Sareen from the posts:\n",
    "[Summarise Text with TFIDF in Python 1](https://towardsdatascience.com/tfidf-for-piece-of-text-in-python-43feccaa74f8) and [Summarise Text with TFIDF in Python 2](https://medium.com/@shivangisareen/summarise-text-with-tfidf-in-python-bc7ca10d3284)\n",
    "\n",
    "We break the text into sentences and tokens; ***we do not remove stop-words*** but we do remove special characters. We then calculate each word's TF and IDF to determine how important it is within the corpus, score the sentences, and keep only those sentences whose score meets a threshold based on the average score.\n",
    "\n",
    "We could also use (average score + 1.5 * std dev) or (average score + 3 * std dev) as the threshold, depending on the size of the documents being summarised."
   ]
  },
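  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_The average-score filter can be illustrated with a toy example; the scores below are made-up placeholders rather than real library output:_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import statistics\n",
    "\n",
    "# hypothetical per-sentence TF-IDF scores (made-up numbers)\n",
    "scores = {'sentence one': 0.42, 'sentence two': 0.17,\n",
    "          'sentence three': 0.35, 'sentence four': 0.08}\n",
    "\n",
    "threshold = statistics.mean(scores.values())  # 0.255\n",
    "# stricter variants add 1.5 or 3 standard deviations to the mean\n",
    "summary = [s for s, score in scores.items() if score >= threshold]\n",
    "print(summary)  # ['sentence one', 'sentence three']"
   ]
  },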
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "BWAPOQyt3w87"
   },
   "outputs": [],
   "source": [
    "summarised_result = betterNLP.summarise(generic_text, method=\"tfidf-ignore-stopwords\")\n",
    "\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")\n",
    "print(\"summarisation_processing_time_in_secs=\",summarised_result['summarisation_processing_time_in_secs'])\n",
    "pp.pprint(\"summarised_text=\" + summarised_result['summarised_text'])\n",
    "print(\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"scored_documents=\")\n",
    "pp.pprint(summarised_result['scored_documents'])"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "include_colab_link": true,
   "name": "better_nlp_summarisers.ipynb",
   "provenance": [],
   "version": "0.3.2"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
