{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Install and import"
      ],
      "metadata": {
        "id": "5KNsOIs5qEaa"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IF-HnHg1ayYW"
      },
      "outputs": [],
      "source": [
        "%pip install languagemodels\n",
        "import languagemodels as lm"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Summarize a list of documents"
      ],
      "metadata": {
        "id": "fOKBZ7zwqKU2"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "docs = [\n",
        "    'Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation.\\n\\nPython is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a \"batteries included\" language due to its comprehensive standard library.\\n\\nGuido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language and first released it in 1991 as Python\\xa00.9.0. Python\\xa02.0 was released in 2000. Python\\xa03.0, released in 2008, was a major revision not completely backward-compatible with earlier versions. Python\\xa02.7.18, released in 2020, was the last release of Python\\xa02.\\n\\nPython consistently ranks as one of the most popular programming languages.',\n",
        "    'Natural language processing (NLP) is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic (i.e. statistical and, most recently, neural network-based) machine learning approaches. The goal is a computer capable of \"understanding\" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.\\n\\nChallenges in natural language processing frequently involve speech recognition, natural-language understanding, and natural-language generation.',\n",
        "]\n",
        "\n",
        "# Ask the model for a one-sentence summary of each document\n",
        "for doc in docs:\n",
        "    result = lm.do(f\"Summarize the following text in one short sentence. Text: {doc}\")\n",
        "    print(result)"
      ],
      "metadata": {
        "id": "OcKsYj50a-u2"
      },
      "execution_count": null,
      "outputs": []
    }
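    ,
    {
      "cell_type": "markdown",
      "source": [
        "# Combine the summaries\n",
        "\n",
        "A sketch of a simple map-reduce pattern using only `lm.do`: summarize each document individually, then ask the model to condense those sentences into one overall summary. The prompt wording here is an assumption, not a library convention."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Map step: one short summary per document\n",
        "summaries = [\n",
        "    lm.do(f\"Summarize the following text in one short sentence. Text: {doc}\")\n",
        "    for doc in docs\n",
        "]\n",
        "\n",
        "# Reduce step: condense the per-document summaries into a single sentence\n",
        "combined = lm.do(\n",
        "    \"Summarize the following sentences in one short sentence. Text: \"\n",
        "    + \" \".join(summaries)\n",
        ")\n",
        "print(combined)"
      ]
    }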
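    ,
    {
      "cell_type": "markdown",
      "source": [
        "# Summarize longer documents by chunking\n",
        "\n",
        "The default models used by `languagemodels` are small, so a very long document may not fit in the model's context window. One rough workaround: split the text on sentence boundaries into chunks and summarize each chunk separately. The `chunk_text` helper below is not part of the library; it is a hypothetical illustration, and the 800-character limit is an arbitrary assumption."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical helper (not part of languagemodels): split long text into\n",
        "# chunks of at most max_chars, breaking only on sentence boundaries.\n",
        "def chunk_text(text, max_chars=1000):\n",
        "    chunks, current = [], \"\"\n",
        "    for sentence in text.replace(\"\\n\", \" \").split(\". \"):\n",
        "        sentence = sentence.strip()\n",
        "        if not sentence:\n",
        "            continue\n",
        "        if not sentence.endswith(\".\"):\n",
        "            sentence += \".\"\n",
        "        if current and len(current) + len(sentence) + 1 > max_chars:\n",
        "            chunks.append(current.strip())\n",
        "            current = \"\"\n",
        "        current += sentence + \" \"\n",
        "    if current.strip():\n",
        "        chunks.append(current.strip())\n",
        "    return chunks\n",
        "\n",
        "# Summarize each chunk of each document separately\n",
        "for doc in docs:\n",
        "    for chunk in chunk_text(doc, max_chars=800):\n",
        "        print(lm.do(f\"Summarize the following text in one short sentence. Text: {chunk}\"))"
      ]
    }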
  ]
}