{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "# 🎇 LangChain\n",
    "\n",
    "\\[[Chinese Version](./LangChainTutorial.ipynb)\\]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
     "How can we create our own dedicated chatbot? How can we teach large language models (LLMs) domain-specific knowledge through simple manipulation? How can we unlock more of their potential in our daily workflows?\n",
     ">Imagine the following scenario: you have several e-books, dozens of text files, or a database you use for specific tasks. We want the LLM to learn from the data the user provides and answer only questions that fall within that data's scope; if a question goes beyond it, the model should inform the user that the question is out of scope and cannot be answered. In other words, we want to constrain the LLM rather than let it speak freely. How can we accomplish this on top of a large model? LangChain can help you achieve it. Click 👉[here](https://python.langchain.com/en/latest/index.html)👈 to access the official LangChain documentation."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
     "LangChain is a higher-level tool for large models: an application development framework that uses composability to build applications with LLMs. The emphasis is on \"composability\". It provides a series of interfaces that are easy to integrate into real-world applications, lowering the barrier to deploying large language models in practical scenarios. LangChain can be used for chatbots, generative question answering (GQA), text summarization, and more.\n",
     "The goals of LangChain are:\n",
     "- To allow large language models to handle data from different sources.\n",
     "- To enable large language models to interact with the environment in which they are deployed.\n",
    "\n",
    "<img src=\"./langchain.png\" align=center width=100% />\n",
    "\n",
    "As shown in the figure above, the LangChain library consists of six main components:\n",
    "- **Models**: Provides pre-packaged large models based on the OpenAI API, including common OpenAI models, and also supports custom encapsulation of large models.\n",
    "- **Prompt**: Supports rapid implementation of custom prompt projects and integration with LLMs.\n",
    "- **Index**: Accepts user queries and returns the most relevant content.\n",
    "- **Memory**: A standard interface for storing state between chains/calls.\n",
     "- **Chains**: Sequences of calls (to LLMs or to other resources such as the network or the operating system). Chains provide standard interfaces and settings for combining these calls: information is first obtained from external sources and then fed to the LLM, which executes the logical chain for a series of tasks in order.\n",
     "- **Agents**: Agents decide which actions to take with an LLM and how to perform them. Typically, the capabilities in Utils and the various logical chains in Chains are wrapped as tools that Agents can invoke intelligently.\n",
    "\n",
    "🌟 We will primarily focus on code explanations and demonstrations using the OpenAI provider. So let's begin this journey together! ✊"
   ]
  },
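  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick taste of what follows, here is a minimal sketch of calling an LLM through LangChain. It assumes the `langchain` and `openai` packages are installed and `OPENAI_API_KEY` is set (see the \"Before Start\" section below); the import path follows the LangChain 0.0.x API that this tutorial targets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: wrap an OpenAI completion model and call it directly.\n",
    "# Requires a valid OPENAI_API_KEY in the environment.\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "llm = OpenAI(temperature=0.9)\n",
    "print(llm(\"Suggest a good name for a coffee shop.\"))"
   ]
  },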
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# 📜 CATALOG\n",
    "\n",
    "- [Before Start](#before-start)\n",
    "- [Models](#models)\n",
    "- [Prompt](#prompt)\n",
    "- [Index](#index)\n",
    "- [Memory](#memory)\n",
    "- [Chains](#chains)\n",
    "- [Agents](#agents)\n",
     "- [Coding Examples](#coding-examples)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Before Start\n",
    "\n",
     "The environment configuration and proxy settings below are for users in mainland China (excluding Hong Kong, Macau, and Taiwan), who need a VPN to access OpenAI. The proxy must be set up before running the program locally to avoid network-related access and invocation issues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
     "# Set up the HTTP proxy: local IP address and port number.\n",
     "# You can find the IP address with `ipconfig` (Windows) or `ifconfig` (Linux),\n",
     "# or use the default 127.0.0.1 and specify the port of your running VPN.\n",
     "os.environ['HTTP_PROXY'] = 'http://127.0.0.1:XXX'\n",
     "# Set up the HTTPS proxy, same as above.\n",
     "os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:XXX'\n",
     "\n",
     "# Check whether the proxy works by fetching a page through it.\n",
     "def check_proxy():\n",
     "    import urllib.request\n",
     "    url = \"https://www.google.com\"\n",
     "    # url = \"https://www.baidu.com\"\n",
     "    filename = \"google.html\"\n",
     "    urllib.request.urlretrieve(url, filename)  # saved in the current folder\n",
     "\n",
     "check_proxy()\n",
     "\n",
     "# OpenAI API key\n",
     "os.environ[\"OPENAI_API_KEY\"] = \"Fill in your own OpenAI API key\"\n",
     "\n",
     "# Search API key, used only by Agents; this step is optional.\n",
     "os.environ['SERPAPI_API_KEY'] = 'Fill in your own SerpAPI API key'"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
     "Check whether the Google Search API is available. This step is optional: it is needed only if you use the Google Search API in the later examples, in which case you must register and obtain credentials before making the calls. You can register by clicking [here](https://serpapi.com/dashboard)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'search_metadata': {'id': '643802169d158690e4963190', 'status': 'Success', 'json_endpoint': 'https://serpapi.com/searches/11aa25012aaee788/643802169d158690e4963190.json', 'created_at': '2023-04-13 13:22:30 UTC', 'processed_at': '2023-04-13 13:22:30 UTC', 'google_url': 'https://www.google.com/search?q=coffee&oq=coffee&uule=w+CAIQICIdQXVzdGluLFRYLFRleGFzLFVuaXRlZCBTdGF0ZXM&sourceid=chrome&ie=UTF-8', 'raw_html_file': 'https://serpapi.com/searches/11aa25012aaee788/643802169d158690e4963190.html', 'total_time_taken': 6.54}, 'search_parameters': {'engine': 'google', 'q': 'coffee', 'location_requested': 'Austin,Texas', 'location_used': 'Austin,TX,Texas,United States', 'google_domain': 'google.com', 'device': 'desktop'}, 'search_information': {'organic_results_state': 'Results for exact spelling', 'query_displayed': 'coffee', 'total_results': 4660000000, 'time_taken_displayed': 0.68, 'menu_items': [{'position': 1, 'title': 'Images', 'link': 'https://www.google.com/search?q=coffee&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAC', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&tbm=isch'}, {'position': 2, 'title': 'Maps', 'link': 'https://maps.google.com/maps?q=coffee&uule=w+CAIQICIdQXVzdGluLFRYLFRleGFzLFVuaXRlZCBTdGF0ZXM&um=1&ie=UTF-8&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAE'}, {'position': 3, 'title': 'Shopping', 'link': 'https://www.google.com/search?q=coffee&source=lnms&tbm=shop&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAG', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&tbm=shop'}, {'position': 4, 'title': 'Videos', 'link': 'https://www.google.com/search?q=coffee&source=lnms&tbm=vid&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAI', 'serpapi_link': 
'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&tbm=vid'}, {'position': 5, 'title': 'News', 'link': 'https://www.google.com/search?q=coffee&source=lnms&tbm=nws&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAK', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&tbm=nws'}, {'position': 6, 'title': 'Books', 'link': 'https://www.google.com/search?q=coffee&source=lnms&tbm=bks&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAM'}, {'position': 7, 'title': 'Flights', 'link': 'https://www.google.com/flights?q=coffee&source=lnms&tbm=flm&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAO'}, {'position': 8, 'title': 'Finance', 'link': 'https://www.google.com/finance?sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ0pQJegQIBRAQ'}]}, 'local_map': {'link': 'https://www.google.com/search?q=coffee&npsic=0&rflfq=1&rldoc=1&rllag=32070028,-99860352,38230&tbm=lcl&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQtgN6BAggEAE', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/2ceeb353df0a2b7dbacfb30245d6b608.png', 'gps_coordinates': {'latitude': 32.070028, 'longitude': -99.860352, 'altitude': 38230}}, 'local_results': {'places': [{'position': 1, 'title': 'Cup of Joe', 'rating': 4.6, 'reviews_original': '(28)', 'reviews': 28, 'type': 'Cafe', 'address': 'Winters, TX', 'description': '\"Great coffee at a good value\"', 'place_id': '6645020494937232090', 'place_id_search': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&ludocid=6645020494937232090&q=coffee', 'lsig': 'AB86z5WMhEZFuTAJ4GDnJFa6q6QI', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/0705f5877861a41dca5d37ea99dd97ef89f977fc0b365c8cf27ec04d7e618086c09b1b2ed1363660.jpeg', 'gps_coordinates': {'latitude': 31.957932, 'longitude': -99.96268}}, {'position': 2, 'title': 'The 
Coffee Haus on Main', 'rating': 4.8, 'reviews_original': '(36)', 'reviews': 36, 'type': 'Coffee shop', 'address': 'Ballinger, TX', 'place_id': '7920778704432590090', 'place_id_search': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&ludocid=7920778704432590090&q=coffee', 'lsig': 'AB86z5UZD5xj713sXU0khIy4IiH8', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/0705f5877861a41dca5d37ea99dd97ef0a78d9eb720337ce35a1440d44674901de89cb271256be32.jpeg', 'gps_coordinates': {'latitude': 31.737326, 'longitude': -99.94827}, 'service_options': {'dine_in': True, 'takeout': True, 'no_delivery': True}}, {'position': 3, 'title': 'Starbucks', 'rating': 4.1, 'reviews_original': '(899)', 'reviews': 899, 'price': '$$', 'type': 'Coffee shop', 'address': 'Abilene, TX', 'description': 'Iconic Seattle-based coffeehouse chain', 'place_id': '12667355487137518257', 'place_id_search': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&ludocid=12667355487137518257&q=coffee', 'lsig': 'AB86z5UP_UZ7Xmy5pKn1KyEN_jln', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/0705f5877861a41dca5d37ea99dd97efa22f666b4763d1bb8847ef0f5b7d937e6e1dbdec5b00b43d.jpeg', 'gps_coordinates': {'latitude': 32.40273, 'longitude': -99.75803}}], 'more_locations_link': 'https://www.google.com/search?tbs=lf:1,lf_ui:9&tbm=lcl&q=coffee&rflfq=1&num=10&uule=w+CAIQICIdQXVzdGluLFRYLFRleGFzLFVuaXRlZCBTdGF0ZXM&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQjGp6BAgjEAI'}, 'knowledge_graph': {'title': 'Coffee', 'type': 'Beverage', 'kgmid': '/m/02vqfm', 'knowledge_graph_search_link': 'https://www.google.com/search?kgmid=/m/02vqfm&hl=en-US&q=Coffee&kgs=b20ee92310a81805&shndl=0&source=sh/x/kp/1', 'serpapi_knowledge_graph_search_link': 
'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&hl=en-US&kgmid=%2Fm%2F02vqfm&location=Austin%2CTexas&q=Coffee', 'header_images': [{'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199679b67c7e38cfee5d269be063747a036a99fba652bbfbf7afe.jpeg', 'source': 'https://en.wikipedia.org/wiki/Coffee'}, {'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199679b67c7e38cfee5d2494a19d1b8b89f6366a3aec97ea21ede.jpeg', 'source': 'https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/'}, {'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199679b67c7e38cfee5d2c856230e8a98377b70dca233c2119e9d.jpeg', 'source': 'https://www.rush.edu/news/health-benefits-coffee'}, {'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199679b67c7e38cfee5d255d27c8f4046047c96132e92d1c347c3.jpeg', 'source': 'https://www.healthline.com/nutrition/top-evidence-based-health-benefits-of-coffee'}, {'image': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTfWpPUDBL9AWrANOJSorna-dMqhtYRtK8VhPYgoFfF3g&s', 'source': 'https://www.tastingtable.com/794355/different-types-of-coffee-explained/'}], 'description': 'Coffee is a beverage prepared from roasted coffee beans. Darkly colored, bitter, and slightly acidic, coffee has a stimulating effect on humans, primarily due to its caffeine content. 
It has the highest sales in the world market for hot drinks.', 'source': {'name': 'Wikipedia', 'link': 'https://en.wikipedia.org/wiki/Coffee'}, 'acidity_level': '4.85 to 5.10', 'acidity_level_links': [{'text': 'Acidity level', 'link': 'https://www.google.com/search?q=coffee+acidity+level&stick=H4sIAAAAAAAAAOPgE-LUz9U3MCorTMvVksjPttIvzsgvKklLTC6xSkzOTInPSS1LzVnEKpKcn5aWmqoAEsssqVQACwMA6FvyLz4AAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ6BMoAHoFCI8BEAI'}, {'text': 'healthline.com', 'link': 'https://www.healthline.com/nutrition/is-coffee-acidic'}], 'buttons': [{'text': 'Price', 'subtitle': 'Price Of Coffee', 'title': 'Two tablespoons', 'link': 'https://twochimpscoffee.com/guides/how-to-use-coffee-syrup/#:~:text=Two%20tablespoons%20(30ml%20or%20one,for%20a%20regular%20coffee%20cup.', 'displayed_link': 'https://twochimpscoffee.com › guides › how-to-use-coff...', 'snippet': \"Two tablespoons (30ml or one ounce) of syrup is a good go-to if you're wondering how much coffee syrup to put in coffee. 
This is for a regular coffee cup.\", 'snippet_highlighted_words': ['Two tablespoons (30ml or one ounce)'], 'answer': 'Two tablespoons', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199675fd122d14c827c9e3e8fed4a470f851e3e84adff4a23eb4f.png', 'search_link': 'https://www.google.com/search?q=price+of+coffee&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQrooIegUIjAEQBA', 'serpapi_search_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=price+of+coffee'}, {'text': 'Energy Amount', 'subtitle': 'How Many Calories In Coffee', 'table': [['Amount Per 1 fl oz (29.6 g)100 grams6 fl oz (178 g)1 cup (8 fl oz) (237 g)1 cup (8 fl oz) (237 g)'], ['Calories 1']], 'search_link': 'https://www.google.com/search?q=how+many+calories+in+coffee&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQrooIegUIjAEQCA', 'serpapi_search_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=how+many+calories+in+coffee'}, {'text': 'Protein Amount', 'subtitle': 'How Much Protein Is In Coffee', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQixTzjUfKnje6rtTLWUrleiMFyY4UusMCuTGsIgABnHjJ3&s', 'title': 'Protein Coffee: How it works & 8 benefits you should be aware of', 'link': 'https://healthcareweekly.com/protein-coffee-benefits/#:~:text=Is%20Coffee%20On%20Its%20Own,meaningful%2C%20to%20say%20the%20least.', 'displayed_link': 'https://healthcareweekly.com › protein-coffee-benefits', 'snippet': 'Is Coffee On Its Own a Good Source of Protein? The short answer: no, coffee is not a good source of protein. However, protein content depends on the type of coffee. 
For instance, one cup (about 6 fluid ounces) of black coffee contains approximately 0.21 grams of protein, which is not meaningful, to say the least.', 'snippet_highlighted_words': ['no, coffee is not a good source of protein'], 'search_link': 'https://www.google.com/search?q=how+much+protein+is+in+coffee&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQrooIegUIjAEQDg', 'serpapi_search_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=how+much+protein+is+in+coffee'}, {'text': 'Ph level', 'subtitle': 'Ph Of Coffee', 'title': 'about 5', 'link': 'https://byjus.com/question-answer/ph-of-black-coffee/#:~:text=Black%20coffee%20typically%20has%20a,beans%2C%20roasting%2C%20and%20brewing.', 'displayed_link': 'https://byjus.com › question-answer › ph-of-black-coffee', 'snippet': 'Black coffee typically has a pH of about 5 and is thus slightly acidic. It is acidic, with pH between 4 and 5, depending on the beans, roasting, and brewing.', 'snippet_highlighted_words': ['about 5'], 'answer': 'about 5', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba7199675fd122d14c827c9ec36fd87acab3f7ae25c98ca17a4aace0.png', 'search_link': 'https://www.google.com/search?q=ph+of+coffee&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQrooIegUIjAEQEg', 'serpapi_search_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=ph+of+coffee'}], 'compounds_in_coffee': [{'name': 'Chlorogenic acid', 'link': 'https://www.google.com/search?q=Chlorogenic+acid&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwL87IiddiWMQq4JyRk1-Un56al5mskJicmbKDlREA-TI5fCcAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiIARAF', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Chlorogenic+acid&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwL87IiddiWMQq4JyRk1-Un56al5mskJicmbKDlREA-TI5fCcAAAA', 'image': 
'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967401e973bf2272dfc16e2c8cfafa7c4cc39c553489d2631b66274c812e7446727.png'}, {'name': 'Quinic acid', 'link': 'https://www.google.com/search?q=Quinic+acid&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwL042tdBiWMTKHViamZeZrJCYnJmyg5URAI7HIvkiAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiIARAH', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Quinic+acid&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwL042tdBiWMTKHViamZeZrJCYnJmyg5URAI7HIvkiAAAA', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967401e973bf2272dfc16e2c8cfafa7c4cc2362001c04c2f1d2044b09e83d185675.png'}, {'name': 'Trigonelline', 'link': 'https://www.google.com/search?q=Trigonelline&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwrTKojNdiWMTKE1KUmZ6fl5qTk5mXuoOVEQBGl_DeIwAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiIARAJ', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Trigonelline&stick=H4sIAAAAAAAAAONgFmJQ4tTP1TcwrTKojNdiWMTKE1KUmZ6fl5qTk5mXuoOVEQBGl_DeIwAAAA', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967401e973bf2272dfc16e2c8cfafa7c4cc8a99acca8218dd6ae69d6e62b343a898.png'}, {'name': 'Melanoidin', 'link': 'https://www.google.com/search?q=Melanoidin&stick=H4sIAAAAAAAAAONgFmJQ4tLP1TcwKso2M8zQYljEyuWbmpOYl5-Zkpm3g5URAEiv4kMiAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiIARAL', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Melanoidin&stick=H4sIAAAAAAAAAONgFmJQ4tLP1TcwKso2M8zQYljEyuWbmpOYl5-Zkpm3g5URAEiv4kMiAAAA', 'image': 
'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967401e973bf2272dfc16e2c8cfafa7c4ccfcc2b1e8a30cf831eff946b2458c1940.png'}], 'compounds_in_coffee_link': 'https://www.google.com/search?q=compounds+in+coffee&stick=H4sIAAAAAAAAAONgFuLUz9U3MCorTMtVQjC1RLKTrfST83Nz8_OsUvLL88oTi1KKVzEKOmek5mYmJ-Y45-cW5JfmpRQvYhVOhrEVMvMUkvPT0lJTd7AyAgA94efZWwAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQMSgAegUIiAEQAQ', 'compounds_in_coffee_stick': 'H4sIAAAAAAAAAONgFuLUz9U3MCorTMtVQjC1RLKTrfST83Nz8_OsUvLL88oTi1KKVzEKOmek5mYmJ-Y45-cW5JfmpRQvYhVOhrEVMvMUkvPT0lJTd7AyAgA94efZWwAAAA', 'people_also_search_for': [{'name': 'Tea', 'link': 'https://www.google.com/search?q=Tea&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMgjPPDr-bVSwS7IwMGJZrTdb97oset9qkWSGGc8Wrx0cWPU0xfD2y8UwLcWHQST1bjafkmRjiQwYRe_3Itgz9qVSCKdZzbGbuLy9QXEb0Z8fNkExbIqMPt_n-TOdD4t0ZhyGHDkRXkCD3stHPYlkI9F6NqruSkOP8q19qSCJwA-F-iIzZCw4sk6NwHFknXQGz29sgmuriUCUg3JKKEr9yO6rD_j0gO7sFpA_vyMkVnlD2E-eVQ%3D%3D&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiJARAF', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Tea&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMgjPPDr-bVSwS7IwMGJZrTdb97oset9qkWSGGc8Wrx0cWPU0xfD2y8UwLcWHQST1bjafkmRjiQwYRe_3Itgz9qVSCKdZzbGbuLy9QXEb0Z8fNkExbIqMPt_n-TOdD4t0ZhyGHDkRXkCD3stHPYlkI9F6NqruSkOP8q19qSCJwA-F-iIzZCw4sk6NwHFknXQGz29sgmuriUCUg3JKKEr9yO6rD_j0gO7sFpA_vyMkVnlD2E-eVQ%3D%3D', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967e72d5e765855f08c845d5d1bff276fdeeaa0922bc08dd464936d4e065df28d61.jpeg'}, {'name': 'Espresso', 'link': 
'https://www.google.com/search?q=Espresso&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMjyaN62njrr4Y2vVD-0W98vYZcIInyX3v-g5zZZpHjjG-Poxv3XLD0drTfOgpVcCf-zFRIPvsy9oT9UP_w35iIRi9Dl7PJ4-ldCIJDe667xhyIl72gjAFfOKZBu3XGinHDe096zuvjykLeb_xXU3DhU-1QmnhQUqLZaEFgHJmWWQuA0wE2cEjLHtjh10JWDgzqQv2KOWtaNv-dvOLwEiFloEH3I1N1rrm0NY4CZcMZcqugn660gEsBi5JGVH9mmfP4wv-CI%3D&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiJARAH', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Espresso&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMjyaN62njrr4Y2vVD-0W98vYZcIInyX3v-g5zZZpHjjG-Poxv3XLD0drTfOgpVcCf-zFRIPvsy9oT9UP_w35iIRi9Dl7PJ4-ldCIJDe667xhyIl72gjAFfOKZBu3XGinHDe096zuvjykLeb_xXU3DhU-1QmnhQUqLZaEFgHJmWWQuA0wE2cEjLHtjh10JWDgzqQv2KOWtaNv-dvOLwEiFloEH3I1N1rrm0NY4CZcMZcqugn660gEsBi5JGVH9mmfP4wv-CI%3D', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967e72d5e765855f08c845d5d1bff276fde5336e38b7da39c82f50576790ba2d2e1.jpeg'}, {'name': 'Cappuccino', 'link': 'https://www.google.com/search?q=Cappuccino&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMklZ6vtN6-8AY4RsUb9j1aMcn5_Fx0RVtguKTSjYOzJGH7WEnPl_82g6G4ZmQ47a0aQn5OaDfWitFqB1UyoD-3DPX9OrIzCD9nADAxnBhkOFxJrA8pSHWbkRwBQLlUjMBh218i1GQwoy1lyiRUKtITtVX99LqKB4Vnw74JG_9e88ZJr1WoMkOQoqE67xGs1BZHT4Q8YggMDNYAvtEZClZcNRxN3i715e-oP_ZO0WK7VtSx7RbP8PO9dSe9qaWJAVAfAFW7k%3D&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiJARAJ', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Cappuccino&si=AMnBZoFk_ppfOKgdccwTD_PVhdkg37dbl-p8zEtOPijkCaIHMklZ6vtN6-8AY4RsUb9j1aMcn5_Fx0RVtguKTSjYOzJGH7WEnPl_82g6G4ZmQ47a0aQn5OaDfWitFqB1UyoD-3DPX9OrIzCD9nADAxnBhkOFxJrA8pSHWbkRwBQLlUjMBh218i1GQwoy1lyiRUKtITtVX99LqKB4Vnw74JG_9e88ZJr1WoMkOQoqE67xGs1BZHT4Q8YggMDNYAvtEZClZcNRxN3i715e-oP_ZO0WK7VtSx7RbP8PO9dSe9qaWJAVAfAFW7k%3D', 'image': 
'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967e72d5e765855f08c845d5d1bff276fde30bbff2f88cf8f3a124c859ded15a3a2.jpeg'}, {'name': 'Latte', 'link': 'https://www.google.com/search?q=Latte&si=AMnBZoEofOODruSEFWFjdccePwMH96ZlZt3bOiKSR9t4pqlu2E2Y95jVypGw5nHfEGbzdy_B-mJrk7TV3R0_l7ZUBiUkk1BiloRi17mH8ufqGqbkUhe0c8_KO5vzQNZ-nrEeAaA8my_qJpcAz8qE92rIWhWLdi2-Y-wRjvz59t6q5db5z6-tVBvk31oUge5WvKN_KY-abBJwuAmRcDVTFFkzJExr2ij50aEqL8xoQy48LtRLUoNcARf8G9RPDtEL8wYJDyK5-OlqfXP74-MHX3KmEH_oT5JEeAsCl9zNCisFeyvLawCXTaY%3D&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQxA16BQiJARAL', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=Latte&si=AMnBZoEofOODruSEFWFjdccePwMH96ZlZt3bOiKSR9t4pqlu2E2Y95jVypGw5nHfEGbzdy_B-mJrk7TV3R0_l7ZUBiUkk1BiloRi17mH8ufqGqbkUhe0c8_KO5vzQNZ-nrEeAaA8my_qJpcAz8qE92rIWhWLdi2-Y-wRjvz59t6q5db5z6-tVBvk31oUge5WvKN_KY-abBJwuAmRcDVTFFkzJExr2ij50aEqL8xoQy48LtRLUoNcARf8G9RPDtEL8wYJDyK5-OlqfXP74-MHX3KmEH_oT5JEeAsCl9zNCisFeyvLawCXTaY%3D', 'image': 'https://serpapi.com/searches/643802169d158690e4963190/images/56f3044ba4f6836a67196996ba719967e72d5e765855f08c845d5d1bff276fde3abe05036e61cb692c62b7db1ef7bd3f.jpeg'}], 'people_also_search_for_link': 'https://www.google.com/search?q=Coffee&stick=H4sIAAAAAAAAAONgFuLUz9U3MCorTMtVQjC1BIMzU1LLEyuL_VIrSoJLUguKF7GyOeenpaWm7mBlBABkIv_mNwAAAA&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQMSgAegUIiQEQAQ', 'people_also_search_for_stick': 'H4sIAAAAAAAAAONgFuLUz9U3MCorTMtVQjC1BIMzU1LLEyuL_VIrSoJLUguKF7GyOeenpaWm7mBlBABkIv_mNwAAAA', 'see_results_about': [{'name': 'Coffee bean', 'extensions': ['A coffee bean is a seed of the Coffea plant and the source for ...'], 'link': 
'https://www.google.com/search?q=Coffee+bean&si=AMnBZoGn39e0tI_t2dCPKQ2j8QBKDHso0GZnCtfIMxO7bEcZSZ7sNGHzwtf8qdnwn3tXQ0AGsm2Ng73UcXfebGimZRXPLuphz7aw2b9GplBsHHhxGd0xEKJ8XvF1_wM5SNp0hs8oNfrzJQdeFjaOKy4f1Y1HsVo-vZbsVBt1pibEO5fKPf-Urjg%3D&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ6RN6BAhlEAE', 'image': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQlwgD-6s3Vk872C9OaOBm_9QUtkchzWBoJWCdLswp_BSHUV8pNHndshQ&s=0'}], 'list': {'total_fat': ['0 g', '0%'], 'saturated_fat': ['0 g', '0%'], 'trans_fat_regulation': ['0 g'], 'cholesterol': ['0 mg', '0%'], 'sodium': ['5 mg', '0%'], 'potassium': ['116 mg', '3%'], 'total_carbohydrate': ['0 g', '0%'], 'dietary_fiber': ['0 g', '0%'], 'sugar': ['0 g'], 'protein': ['0.3 g', '0%'], 'caffeine': ['95 mg'], 'vitamin_c': ['0%'], 'calcium': ['0%'], 'iron': ['0%'], 'vitamin_d': ['0%'], 'vitamin_b6': ['0%'], 'cobalamin': ['0%'], 'magnesium': ['1%']}}, 'inline_images': [{'link': 'https://www.google.com/search?q=coffee&tbm=isch&source=iu&ictx=1&vet=1&fir=cHhAmJrw8EtbWM%252CU6oJMnF-eeVTAM%252C%252Fm%252F02vqfm%253B9M1X2EDxsYNTZM%252CO0p2m8H_t7E6nM%252C_%253B35LBrLe6iLMgNM%252CLe_shlToZ2_z7M%252C_%253BYe55hwurmsyIDM%252Cr1UW6FGz3F41UM%252C_%253Be1xARNWS4v0NdM%252CxWwjaFHTd_fBIM%252C_&usg=AI4_-kSeRZX4tRmBDC5WzqJ72knpK80xKw&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ_B16BQiGARAB#imgrc=cHhAmJrw8EtbWM', 'source': 'https://en.wikipedia.org/wiki/Coffee', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/9965b78b2d80b21d3f5db41f337bfeec902227f017d9dd1735e3765fd889c51e.jpeg', 'original': 'https://upload.wikimedia.org/wikipedia/commons/e/e4/Latte_and_dark_coffee.jpg', 'title': 'upload.wikimedia.org/wikipedia/commons/e/e4/Latte_...', 'source_name': 'en.wikipedia.org'}, {'link': 
'https://www.google.com/search?q=coffee&tbm=isch&source=iu&ictx=1&vet=1&fir=cHhAmJrw8EtbWM%252CU6oJMnF-eeVTAM%252C_%253B9M1X2EDxsYNTZM%252CO0p2m8H_t7E6nM%252C_%253B35LBrLe6iLMgNM%252CLe_shlToZ2_z7M%252C_%253BYe55hwurmsyIDM%252Cr1UW6FGz3F41UM%252C_%253Be1xARNWS4v0NdM%252CxWwjaFHTd_fBIM%252C_&usg=AI4_-kQVRu5edShja3G23xcn5YB3AJ_0Lg&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ_h16BQiHARAB#imgrc=9M1X2EDxsYNTZM', 'source': 'https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/9965b78b2d80b21d094f8ec1d3c2855e858ac1ef5bb628283bdffe67e0109546.jpeg', 'original': 'https://www.tastingtable.com/img/gallery/coffee-brands-ranked-from-worst-to-best/l-intro-1645231221.jpg', 'title': '31 Coffee Brands, Ranked From Worst To Best', 'source_name': 'Tasting Table'}, {'link': 'https://www.google.com/search?q=coffee&tbm=isch&source=iu&ictx=1&vet=1&fir=cHhAmJrw8EtbWM%252CU6oJMnF-eeVTAM%252C_%253B9M1X2EDxsYNTZM%252CO0p2m8H_t7E6nM%252C_%253B35LBrLe6iLMgNM%252CLe_shlToZ2_z7M%252C_%253BYe55hwurmsyIDM%252Cr1UW6FGz3F41UM%252C_%253Be1xARNWS4v0NdM%252CxWwjaFHTd_fBIM%252C_&usg=AI4_-kQVRu5edShja3G23xcn5YB3AJ_0Lg&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ_h16BQiFARAB#imgrc=35LBrLe6iLMgNM', 'source': 'https://www.rush.edu/news/health-benefits-coffee', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/9965b78b2d80b21d295a96d9e9320dc43b3ec3c1d728d8e2afa4eb919701e379.jpeg', 'original': 'https://www.rush.edu/sites/default/files/styles/386x217/public/media-images/Coffee_WebFeature.png?itok=gxteJ01c', 'title': 'Health Benefits of Coffee | Rush System', 'source_name': 'Rush University Medical Center'}, {'link': 
'https://www.google.com/search?q=coffee&tbm=isch&source=iu&ictx=1&vet=1&fir=cHhAmJrw8EtbWM%252CU6oJMnF-eeVTAM%252C_%253B9M1X2EDxsYNTZM%252CO0p2m8H_t7E6nM%252C_%253B35LBrLe6iLMgNM%252CLe_shlToZ2_z7M%252C_%253BYe55hwurmsyIDM%252Cr1UW6FGz3F41UM%252C_%253Be1xARNWS4v0NdM%252CxWwjaFHTd_fBIM%252C_&usg=AI4_-kQVRu5edShja3G23xcn5YB3AJ_0Lg&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ_h16BQiEARAB#imgrc=Ye55hwurmsyIDM', 'source': 'https://www.healthline.com/nutrition/top-evidence-based-health-benefits-of-coffee', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/9965b78b2d80b21dc4a259fd912f9bcc346c5506f77fea62694c644a3ab00fa5.jpeg', 'original': 'https://post.healthline.com/wp-content/uploads/2020/08/coffee-worlds-biggest-source-of-antioxidants-1296x728-feature_0-732x549.jpg', 'title': '9 Health Benefits of Coffee, Based on Science', 'source_name': 'Healthline'}, {'link': 'https://www.google.com/search?q=coffee&tbm=isch&source=iu&ictx=1&vet=1&fir=cHhAmJrw8EtbWM%252CU6oJMnF-eeVTAM%252C_%253B9M1X2EDxsYNTZM%252CO0p2m8H_t7E6nM%252C_%253B35LBrLe6iLMgNM%252CLe_shlToZ2_z7M%252C_%253BYe55hwurmsyIDM%252Cr1UW6FGz3F41UM%252C_%253Be1xARNWS4v0NdM%252CxWwjaFHTd_fBIM%252C_&usg=AI4_-kQVRu5edShja3G23xcn5YB3AJ_0Lg&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ_h16BQiDARAB#imgrc=e1xARNWS4v0NdM', 'source': 'https://www.tastingtable.com/794355/different-types-of-coffee-explained/', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTfWpPUDBL9AWrANOJSorna-dMqhtYRtK8VhPYgoFfF3g&s', 'original': 'https://www.tastingtable.com/img/gallery/20-different-types-of-coffee-explained/l-intro-1659544996.jpg', 'title': '35 Different Types Of Coffee Explained', 'source_name': 'Tasting Table'}], 'related_questions': [{'question': 'Is coffee good for health?', 'snippet': \"“For most people, moderate coffee consumption can be incorporated into a healthy diet.” Hu said that moderate coffee intake—about 2–5 cups a day—is linked to a lower likelihood of type 2 diabetes, heart disease, 
liver and endometrial cancers, Parkinson's disease, and depression.\", 'title': 'Is coffee good or bad for your health? | News', 'link': 'https://www.hsph.harvard.edu/news/hsph-in-the-news/is-coffee-good-or-bad-for-your-health/', 'displayed_link': 'https://www.hsph.harvard.edu › news › hsph-in-the-news', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/581680f060c7d368510705ded46f2c9ed01360eca462a179580c31dd0f6541e0.png', 'next_page_token': 'eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiSXMgY29mZmVlIGdvb2QgZm9yIGhlYWx0aD8iLCJsayI6IkdoNWtiMlZ6SUdOdlptWmxaU0JwY3lCbmIyOWtJR1p2Y2lCb1pXRnNkR2ciLCJicyI6ImMtT1M1NUx5TEZaSXprOUxTMDFWU01fUFQxRkl5eTlTeUVoTnpDbkpzSmU0WThDbHdDVWRucEdabktHUVZKU1lsd0pUbVZtc2tKUmFYR0l2TWR1UVM0VkxQandqc1VRaHNTaFZvU1FqVmNGRW9hU3lJTFZZSVQ4TnF0cGU0ZzA3bHlxWEFvcXFwTlM4MUxUTUVoUmxtLVVFR0FFIiwiaWQiOiJmY18xIn0=', 'serpapi_link': 
'https://serpapi.com/search.json?device=desktop&engine=google_related_questions&google_domain=google.com&next_page_token=eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiSXMgY29mZmVlIGdvb2QgZm9yIGhlYWx0aD8iLCJsayI6IkdoNWtiMlZ6SUdOdlptWmxaU0JwY3lCbmIyOWtJR1p2Y2lCb1pXRnNkR2ciLCJicyI6ImMtT1M1NUx5TEZaSXprOUxTMDFWU01fUFQxRkl5eTlTeUVoTnpDbkpzSmU0WThDbHdDVWRucEdabktHUVZKU1lsd0pUbVZtc2tKUmFYR0l2TWR1UVM0VkxQandqc1VRaHNTaFZvU1FqVmNGRW9hU3lJTFZZSVQ4TnF0cGU0ZzA3bHlxWEFvcXFwTlM4MUxUTUVoUmxtLVVFR0FFIiwiaWQiOiJmY18xIn0%3D'}, {'question': 'Which brand coffee is best?', 'title': 'List of Top 11 Coffee Brands in India', 'link': 'https://cashkaro.com/blog/best-coffee-brands/167279', 'list': ['Nescafe.', 'Rage Coffee.', 'Bru.', 'Davidoff.', 'Blue Tokai.', 'Starbucks.', 'Continental Coffee.', 'Country Bean.'], 'displayed_link': 'https://cashkaro.com › blog › best-coffee-brands', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcT4WFZ6iwvyFQiZ2wM5iLAEmFEfdSqje27fd4amMxSsFQ&s', 'next_page_token': 
'eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hpY2ggYnJhbmQgY29mZmVlIGlzIGJlc3Q/IiwibGsiOiJHaHAzYUdsamFDQmljbUZ1WkNCamIyWm1aV1VnYVhNZ1ltVnpkQSIsImJzIjoiYy1PUzU1THlMRlpJems5TFMwMVZTTV9QVDFGSXl5OVN5RWhOekNuSnNKZTRZOENsd0NVZG5wR1puS0dRVkpTWWx3SlRtVm1za0pSYVhHSXZNZHVRUzRWTFBqd2pzVVFoc1NoVm9TUWpWY0ZFb2FTeUlMVllJVDhOcXRwZTRnMDdseXFYQW9xcXBOUzgxTFRNRWhSbG0tVUVHQUUiLCJpZCI6ImZjXzEifQ==', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google_related_questions&google_domain=google.com&next_page_token=eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hpY2ggYnJhbmQgY29mZmVlIGlzIGJlc3Q%2FIiwibGsiOiJHaHAzYUdsamFDQmljbUZ1WkNCamIyWm1aV1VnYVhNZ1ltVnpkQSIsImJzIjoiYy1PUzU1THlMRlpJems5TFMwMVZTTV9QVDFGSXl5OVN5RWhOekNuSnNKZTRZOENsd0NVZG5wR1puS0dRVkpTWWx3SlRtVm1za0pSYVhHSXZNZHVRUzRWTFBqd2pzVVFoc1NoVm9TUWpWY0ZFb2FTeUlMVllJVDhOcXRwZTRnMDdseXFYQW9xcXBOUzgxTFRNRWhSbG0tVUVHQUUiLCJpZCI6ImZjXzEifQ%3D%3D'}, {'question': 'What are the 4 types of coffee?', 'snippet': 'There are 4 types of coffee bean. 
Arabica, Robusta, Excelsa and Liberica.', 'title': 'Coffee beans guide: Origin & different types | NESCAFÉ MENA', 'link': 'https://www.nescafe.com/mena/en-ae/understanding-coffee/coffee-beans-guide', 'displayed_link': 'https://www.nescafe.com › en-ae › understanding-coffee', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/581680f060c7d368510705ded46f2c9ec6ef6e60144d6ff6bcfe79a9b6526622.png', 'next_page_token': 'eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hhdCBhcmUgdGhlIDQgdHlwZXMgb2YgY29mZmVlPyIsImxrIjoiR2g1M2FHRjBJR0Z5WlNCMGFHVWdOQ0IwZVhCbGN5QnZaaUJqYjJabVpXVSIsImJzIjoiYy1PUzU1THlMRlpJems5TFMwMVZTTV9QVDFGSXl5OVN5RWhOekNuSnNKZTRZOENsd0NVZG5wR1puS0dRVkpTWWx3SlRtVm1za0pSYVhHSXZNZHVRUzRWTFBqd2pzVVFoc1NoVm9TUWpWY0ZFb2FTeUlMVllJVDhOcXRwZTRnMDdseXFYQW9xcXBOUzgxTFRNRWhSbG0tVUVHQUUiLCJpZCI6ImZjXzEifQ==', 'serpapi_link': 
'https://serpapi.com/search.json?device=desktop&engine=google_related_questions&google_domain=google.com&next_page_token=eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hhdCBhcmUgdGhlIDQgdHlwZXMgb2YgY29mZmVlPyIsImxrIjoiR2g1M2FHRjBJR0Z5WlNCMGFHVWdOQ0IwZVhCbGN5QnZaaUJqYjJabVpXVSIsImJzIjoiYy1PUzU1THlMRlpJems5TFMwMVZTTV9QVDFGSXl5OVN5RWhOekNuSnNKZTRZOENsd0NVZG5wR1puS0dRVkpTWWx3SlRtVm1za0pSYVhHSXZNZHVRUzRWTFBqd2pzVVFoc1NoVm9TUWpWY0ZFb2FTeUlMVllJVDhOcXRwZTRnMDdseXFYQW9xcXBOUzgxTFRNRWhSbG0tVUVHQUUiLCJpZCI6ImZjXzEifQ%3D%3D'}, {'question': 'What are the benefits of coffee?', 'title': 'Here are the top ways coffee can positively impact your health:', 'link': 'https://www.hopkinsmedicine.org/health/wellness-and-prevention/9-reasons-why-the-right-amount-of-coffee-is-good-for-you', 'list': ['You could live longer. ... ', 'Your body may process glucose (or sugar) better. ... ', \"You're less likely to develop heart failure. ... \", \"You are less likely to develop Parkinson's disease. ... \", 'Your liver will thank you. ... 
', 'Your DNA will be stronger.'], 'displayed_link': 'https://www.hopkinsmedicine.org › health › 9-reasons...', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSDV1_KHRKIUNXbIJYeb7G3jkEJsXv-MlZnSnAT4Qh-iw&s', 'next_page_token': 'eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hhdCBhcmUgdGhlIGJlbmVmaXRzIG9mIGNvZmZlZT8iLCJsayI6IkdoOTNhR0YwSUdGeVpTQjBhR1VnWW1WdVpXWnBkSE1nYjJZZ1kyOW1abVZsIiwiYnMiOiJjLU9TNTVMeUxGWkl6azlMUzAxVlNNX1BUMUZJeXk5U3lFaE56Q25Kc0plNFk4Q2x3Q1VkbnBHWm5LR1FWSlNZbHdKVG1WbXNrSlJhWEdJdk1kdVFTNFZMUGp3anNVUWhzU2hWb1NRalZjRkVvYVN5SUxWWUlUOE5xdHBlNGcwN2x5cVhBb3FxcE5TODFMVE1FaFJsbS1VRUdBRSIsImlkIjoiZmNfMSJ9', 'serpapi_link': 'https://serpapi.com/search.json?device=desktop&engine=google_related_questions&google_domain=google.com&next_page_token=eyJvbnMiOiIxMDA0MSIsImZjIjoiRW9zQkNreEJSWE0zYWs1UlIwTldTVTVvTTJ3NWJHMVRibXQzZEVod1JuZFdabEpMWm5ObExXSnNPSFZrYjJwbldFOHdMVXRmZG1OeVRFMWlWa3BPTlVSTWFtcGxMV3hRTkc0eVVtbHBMVTl4RWhkSGQwazBXazVITTBjMUxXRndkRkZRTW1ReU1uVkJSUm9pUVU4dE1ISnNOVzVIT0dKaWRWcEhRelJmVm1KVk0yRmlYekYwU1hSRE5teHdkdyIsImZjdiI6IjMiLCJlaSI6Ikd3STRaTkczRzUtYXB0UVAyZDIydUFFIiwicWMiOiJDZ1pqYjJabVpXVVFBSDFSTkMwXyIsInF1ZXN0aW9uIjoiV2hhdCBhcmUgdGhlIGJlbmVmaXRzIG9mIGNvZmZlZT8iLCJsayI6IkdoOTNhR0YwSUdGeVpTQjBhR1VnWW1WdVpXWnBkSE1nYjJZZ1kyOW1abVZsIiwiYnMiOiJjLU9TNTVMeUxGWkl6azlMUzAxVlNNX1BUMUZJeXk5U3lFaE56Q25Kc0plNFk4Q2x3Q1VkbnBHWm5LR1FWSlNZbHdKVG1WbXNrSlJhWEdJdk1kdVFTNFZMUGp3anNVUWhzU2hWb1NRalZjRkVvYVN5SUxWWUlUOE5xdHBlNGcwN2x5cVhBb3FxcE5TODFMVE1FaFJsbS1VRUdBRSIsImlkIjoiZmNfMSJ9'}], 'organic_results': [{'position': 1, 'title': 'Coffee', 'link': 'https://en.wikipedia.org/wiki/Coffee', 'displayed_link': 
'https://en.wikipedia.org › wiki › Coffee', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a41370235e3abd98c1947069c6e9748089046bd.jpeg', 'snippet': 'Coffee is a beverage prepared from roasted coffee beans. Darkly colored, bitter, and slightly acidic, coffee has a stimulating effect on humans, ...', 'snippet_highlighted_words': ['Coffee', 'coffee', 'coffee'], 'sitelinks': {'inline': [{'title': 'List of countries by coffee...', 'link': 'https://en.wikipedia.org/wiki/List_of_countries_by_coffee_production'}, {'title': 'Coffee production', 'link': 'https://en.wikipedia.org/wiki/Coffee_production'}, {'title': 'Coffee preparation', 'link': 'https://en.wikipedia.org/wiki/Coffee_preparation'}, {'title': 'Brewed coffee', 'link': 'https://en.wikipedia.org/wiki/Brewed_coffee'}]}, 'rich_snippet': {'bottom': {'extensions': ['Region of origin: Kaffa in Horn of Africa\\u200e', 'Ingredients: Roasted coffee beans', 'Introduced: 15th century', 'Flavor: Distinctive, somewhat bitter'], 'detected_extensions': {'introduced_th_century': 15}}}, 'about_this_result': {'source': {'description': 'Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. 
Wikipedia is the largest and most-read reference work in history.', 'source_info_link': 'https://en.wikipedia.org/wiki/Coffee', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a413702aa62bef54786db18713ef6486dccfb265a7385357ccdd522556c3f303ed26557.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://en.wikipedia.org/wiki/Coffee&tbm=ilp&ilps=ADJL0izANxNmAZazzpMAeGlkd2tXrw-aIQ', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0izANxNmAZazzpMAeGlkd2tXrw-aIQ&q=About+https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FCoffee', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:U6oJMnF-eeUJ:https://en.wikipedia.org/wiki/Coffee&cd=33&hl=en&ct=clnk&gl=us', 'related_pages_link': 'https://www.google.com/search?q=related:https://en.wikipedia.org/wiki/Coffee+coffee'}, {'position': 2, 'title': 'The Coffee Bean & Tea Leaf | CBTL', 'link': 'https://www.coffeebean.com/', 'displayed_link': 'https://www.coffeebean.com', 'snippet': 'Born and brewed in Southern California since 1963, The Coffee Bean & Tea Leaf® is passionate about connecting loyal customers with carefully handcrafted ...', 'snippet_highlighted_words': ['Coffee'], 'about_this_result': {'source': {'description': 'The Coffee Bean & Tea Leaf is an American coffee shop chain founded in 1963. Since 2019, it is a trade name of Ireland-based Super Magnificent Coffee Company Ireland Limited. 
Its 80% stake is by multinational company Jollibee Foods Corporation.', 'source_info_link': 'https://www.coffeebean.com/', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a41370236dec1b6bf726a3914d6a9d5028d25c59578b20700b175de3b92316a2e9db711.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.coffeebean.com/&tbm=ilp&ilps=ADJL0iyEMfWcc_F0sQp68evlFpMNONzA7w', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0iyEMfWcc_F0sQp68evlFpMNONzA7w&q=About+https%3A%2F%2Fwww.coffeebean.com%2F', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:WpQxSYo2c6AJ:https://www.coffeebean.com/&cd=34&hl=en&ct=clnk&gl=us', 'related_pages_link': 'https://www.google.com/search?q=related:https://www.coffeebean.com/+coffee'}, {'position': 3, 'title': 'What is Coffee?', 'link': 'https://www.ncausa.org/About-Coffee/What-is-Coffee', 'displayed_link': 'https://www.ncausa.org › About Coffee', 'snippet': 'cof·fee /ˈkôfē,ˈkäfē/ noun The berries harvested from species of Coffea plants. 
Everyone recognizes a roasted coffee bean, but you might not recognize an actual ...', 'snippet_highlighted_words': ['coffee'], 'about_this_result': {'source': {'description': 'The National Coffee Association or, is the main market research, consumer information, and lobbying association for the coffee industry in the United States.', 'source_info_link': 'https://www.ncausa.org/About-Coffee/What-is-Coffee', 'security': 'secure'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.ncausa.org/About-Coffee/What-is-Coffee&tbm=ilp&ilps=ADJL0ixxUUvfx6Ju5WRyyjiP05--z_0mTg', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0ixxUUvfx6Ju5WRyyjiP05--z_0mTg&q=About+https%3A%2F%2Fwww.ncausa.org%2FAbout-Coffee%2FWhat-is-Coffee', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:ENqpL6s3VPIJ:https://www.ncausa.org/About-Coffee/What-is-Coffee&cd=35&hl=en&ct=clnk&gl=us'}, {'position': 4, 'title': 'Coffee | Origin, Types, Uses, History, & Facts', 'link': 'https://www.britannica.com/topic/coffee', 'displayed_link': 'https://www.britannica.com › ... › Food', 'thumbnail': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a413702cdf1deddb67d451859d5294688e08a58.jpeg', 'date': 'Mar 22, 2023', 'snippet': 'coffee, beverage brewed from the roasted and ground seeds of the tropical evergreen coffee plants of African origin. 
Coffee is one of the ...', 'snippet_highlighted_words': ['coffee', 'coffee', 'Coffee'], 'rich_snippet': {'bottom': {'questions': ['What is coffee?', 'Where did coffee originate?']}}, 'about_this_result': {'source': {'description': 'britannica.com was first indexed by Google more than 10 years ago', 'source_info_link': 'https://www.britannica.com/topic/coffee', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a413702c76d8a9622e3a3b5b18646a9438a950d1c3d93c649733c1e50bf8159b64d21a1.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.britannica.com/topic/coffee&tbm=ilp&ilps=ADJL0ixB8MrWhyEfLpjZI0CfZRgB9XA7wQ', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0ixB8MrWhyEfLpjZI0CfZRgB9XA7wQ&q=About+https%3A%2F%2Fwww.britannica.com%2Ftopic%2Fcoffee', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:Wikbu4ipU28J:https://www.britannica.com/topic/coffee&cd=36&hl=en&ct=clnk&gl=us', 'related_pages_link': 'https://www.google.com/search?q=related:https://www.britannica.com/topic/coffee+coffee'}, {'position': 5, 'title': 'Coffee', 'link': 'https://www.amazon.com/coffee/s?k=coffee', 'displayed_link': 'https://www.amazon.com › coffee › k=coffee', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTX9ofuNsZqYoWn7Q955M3hXmXDY5UnwxG3LEpCMlpVyuH9d6XEpGSFfSRSoU-2AgxkvTs&s', 'snippet': \"The Original Donut Shop Regular, Single-Serve Keurig K-Cup Pods, Medium Roast Coffee Pods, 24 Count (Pack of 4) ... 
Chock Full o'Nuts Original Roast Ground Coffee ...\", 'snippet_highlighted_words': ['Coffee', 'Coffee'], 'images': ['https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTX9ofuNsZqYoWn7Q955M3hXmXDY5UnwxG3LEpCMlpVyuH9d6XEpGSFfSRSoU-2AgxkvTs&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQHRrxTFVWH2fwyGJIwUzVkOHtqToQPGSVY3-Okffd-S2Ynzx5Cl2KTnrUunIMysfWPsns&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTPMicqxvC9FsqHIU5UmLz_gtZf2_VEsz2z7KeOGarDH639rrLKOTlOHCCpe-HiSbOFM5o&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS2fO2Mak3Y7USbiGvN4UdFgZXDn0DouDuRMgyUfPr29ANmsZGDk_zgeU_wsP6-E_9vN4c&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQZvUs5K8EeSa2U1SEwUNglwDYsb0NvxL8ZSyIHcghiIHx3o1byE_rBSzg31CuZnG-szA&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcR1Qee1OmG2Yu9yS9AUYwfwAEzKH6oyYeSIqESDOSTidszlgdJOUezTH6OiszoBD5aZGBo&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTJz-mmbLw9CbQzPjj7jHU5ZZetu-XJqb5POLfDrQlpHgD3Ky9Sd65QDTyY5kJ48LUmh7A&s', 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQIYAvQmQ_BbowG0CwmsnKm67ayf1kTWa0aueBTjjRzSvvLmrYoMCV6gZM30NJwMhA1mw&s'], 'rich_snippet': {'top': {'detected_extensions': {'free': 30}, 'extensions': ['Free 30', 'day returns']}}, 'about_this_result': {'source': {'description': 'Amazon.com, Inc. 
is an American multinational technology company focusing on e-commerce, cloud computing, online advertising, digital streaming, and artificial intelligence.', 'source_info_link': 'https://www.amazon.com/coffee/s?k=coffee', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a41370271de3cc16a260186c3b4cb3a7fd7c1ed2f6dcfc37c91066409b0b45459219d02.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.amazon.com/coffee/s?k%3Dcoffee&tbm=ilp&ilps=ADJL0iz4yaPjFE5YbA0TnbXqmN1j8Qqr0g', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0iz4yaPjFE5YbA0TnbXqmN1j8Qqr0g&q=About+https%3A%2F%2Fwww.amazon.com%2Fcoffee%2Fs%3Fk%3Dcoffee', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:wfQ5Et9Ni-kJ:https://www.amazon.com/coffee/s%3Fk%3Dcoffee&cd=37&hl=en&ct=clnk&gl=us'}, {'position': 6, 'title': 'Coffee | The Nutrition Source | Harvard T.H. Chan School of ...', 'link': 'https://www.hsph.harvard.edu/nutritionsource/food-features/coffee/', 'displayed_link': 'https://www.hsph.harvard.edu › ... › Food Features', 'snippet': 'Coffee beans are the seeds of a fruit called a coffee cherry. Coffee cherries grow on coffee trees from a genus of plants called Coffea. There are a wide ...', 'snippet_highlighted_words': ['Coffee', 'coffee', 'Coffee', 'coffee'], 'about_this_result': {'source': {'description': 'The Harvard T.H. 
Chan School of Public Health is the public health school of Harvard University, located in the Longwood Medical Area of Boston, Massachusetts.', 'source_info_link': 'https://www.hsph.harvard.edu/nutritionsource/food-features/coffee/', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a4137026dbd26df06a2dbaf014d13f33e6a3b6a920533e702cc436894e9cb1a14359630.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.hsph.harvard.edu/nutritionsource/food-features/coffee/&tbm=ilp&ilps=ADJL0ix24UjkA35TUhg8KpNFoGqXA4X1pg', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0ix24UjkA35TUhg8KpNFoGqXA4X1pg&q=About+https%3A%2F%2Fwww.hsph.harvard.edu%2Fnutritionsource%2Ffood-features%2Fcoffee%2F', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:aCQFR0EWgPwJ:https://www.hsph.harvard.edu/nutritionsource/food-features/coffee/&cd=38&hl=en&ct=clnk&gl=us'}, {'position': 7, 'title': \"Peet's Coffee: The Original Craft Coffee\", 'link': 'https://www.peets.com/', 'displayed_link': 'https://www.peets.com', 'snippet': \"Since 1966, Peet's Coffee has offered superior coffees and teas by sourcing the best quality coffee beans and tea leaves in the world and adhering to strict ...\", 'snippet_highlighted_words': ['Coffee', 'coffees', 'coffee'], 'about_this_result': {'source': {'description': \"Peet's Coffee is a San Francisco Bay Area-based specialty coffee roaster and retailer owned by JAB Holding Company via JDE Peet's.\", 'source_info_link': 'https://www.peets.com/', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a4137024be2581d9638efd0771f573c952f60c8149f094c13f4611da3180decb846ea6a.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.peets.com/&tbm=ilp&ilps=ADJL0iyi0Ke6jkQwj42VpDqECgl1WRdfeQ', 
'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0iyi0Ke6jkQwj42VpDqECgl1WRdfeQ&q=About+https%3A%2F%2Fwww.peets.com%2F', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:BCjzno6zP6wJ:https://www.peets.com/&cd=39&hl=en&ct=clnk&gl=us', 'related_pages_link': 'https://www.google.com/search?q=related:https://www.peets.com/+coffee'}, {'position': 8, 'title': '31 Coffee Brands, Ranked From Worst To Best', 'link': 'https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/', 'displayed_link': 'https://www.tastingtable.com › coffee-brands-ranked-f...', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQJbKwc_XPeJDMpYB72QQ9gDtxMu3fnuYp3dVPBKNRH58VXOyFxTPIK&usqp=CAE&s', 'date': 'Mar 2, 2023', 'snippet': \"From café chains to retail roasters, we've ranked some of the most popular coffee brands from worst to first.\", 'snippet_highlighted_words': ['coffee'], 'about_this_result': {'source': {'description': \"Tasting Table is a digital media company focused on food and drink. 
The brand's website and email newsletter report on food and drink trends in the categories of dining, wine, cocktails, cooking and food travel.\", 'source_info_link': 'https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a41370255d9fd38631e86b30b8ba5a6270a8f8f8d3fdf473f4298db2e8f3de80302c3c1.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/&tbm=ilp&ilps=ADJL0iwgbb7uiNB1U30anBQOfSid1twx2w', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0iwgbb7uiNB1U30anBQOfSid1twx2w&q=About+https%3A%2F%2Fwww.tastingtable.com%2F718678%2Fcoffee-brands-ranked-from-worst-to-best%2F', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:O0p2m8H_t7EJ:https://www.tastingtable.com/718678/coffee-brands-ranked-from-worst-to-best/&cd=40&hl=en&ct=clnk&gl=us'}, {'position': 9, 'title': 'Starbucks Coffee Company', 'link': 'https://www.starbucks.com/', 'displayed_link': 'https://www.starbucks.com', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTUIjRcZRdCIqaOwWV7l1lcn8E-1M1YZNoG2_Kh_UJ8snRuD8oenfX-&usqp=CAE&s', 'snippet': 'More than just great coffee. Explore the menu, sign up for Starbucks® Rewards, manage your gift card and more.', 'snippet_highlighted_words': ['coffee'], 'about_this_result': {'source': {'description': \"Starbucks Corporation is an American multinational chain of coffeehouses and roastery reserves headquartered in Seattle, Washington. 
It is the world's largest coffeehouse chain.\\nAs of November 2021, the company had 33,833 stores in 80 countries, 15,444 of which were located in the United States.\", 'source_info_link': 'https://www.starbucks.com/', 'security': 'secure', 'icon': 'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a41370201fd9153a994c33cca23ce212f7b632caa714a5c58d1a195fdcc3b69b71b230c.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.starbucks.com/&tbm=ilp&ilps=ADJL0iz2L53LZLU_48M30C3XaAwABDTi8g', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0iz2L53LZLU_48M30C3XaAwABDTi8g&q=About+https%3A%2F%2Fwww.starbucks.com%2F', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:1vGXgo_FlHkJ:https://www.starbucks.com/&cd=41&hl=en&ct=clnk&gl=us', 'related_pages_link': 'https://www.google.com/search?q=related:https://www.starbucks.com/+coffee'}, {'position': 10, 'title': '9 Reasons Why (the Right Amount of) Coffee Is Good for You', 'link': 'https://www.hopkinsmedicine.org/health/wellness-and-prevention/9-reasons-why-the-right-amount-of-coffee-is-good-for-you', 'displayed_link': 'https://www.hopkinsmedicine.org › health › 9-reasons...', 'thumbnail': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRsnUcaHtehFB7MvyuymVeeL7PLkII_xVLL_v6YkOtwxYZV-rzP0RBL&usqp=CAE&s', 'snippet': 'But coffee also contains antioxidants and other active substances that may reduce internal inflammation and protect against disease, say nutrition experts from ...', 'snippet_highlighted_words': ['coffee'], 'about_this_result': {'source': {'description': 'hopkinsmedicine.org was first indexed by Google more than 10 years ago', 'source_info_link': 'https://www.hopkinsmedicine.org/health/wellness-and-prevention/9-reasons-why-the-right-amount-of-coffee-is-good-for-you', 'security': 'secure', 'icon': 
'https://serpapi.com/searches/643802169d158690e4963190/images/f8a7b34e2b85a11c15dcd57b4a413702a84bcbfb2440423ae0a0c5fb59f87e224f8c64115e43db3dd8cb98600b9b54aa.png'}}, 'about_page_link': 'https://www.google.com/search?q=About+https://www.hopkinsmedicine.org/health/wellness-and-prevention/9-reasons-why-the-right-amount-of-coffee-is-good-for-you&tbm=ilp&ilps=ADJL0izDwqyqs6boUzNyhlVyPUkW3-dJ3Q', 'about_page_serpapi_link': 'https://serpapi.com/search.json?engine=google_about_this_result&google_domain=google.com&ilps=ADJL0izDwqyqs6boUzNyhlVyPUkW3-dJ3Q&q=About+https%3A%2F%2Fwww.hopkinsmedicine.org%2Fhealth%2Fwellness-and-prevention%2F9-reasons-why-the-right-amount-of-coffee-is-good-for-you', 'cached_page_link': 'https://webcache.googleusercontent.com/search?q=cache:MdKXyZO_8uQJ:https://www.hopkinsmedicine.org/health/wellness-and-prevention/9-reasons-why-the-right-amount-of-coffee-is-good-for-you&cd=42&hl=en&ct=clnk&gl=us'}], 'related_searches': [{'query': 'coffee near me', 'link': 'https://www.google.com/search?q=Coffee+near+me&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh1EAE'}, {'query': 'coffee types', 'link': 'https://www.google.com/search?q=Coffee+types&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh6EAE'}, {'query': 'coffee beans', 'link': 'https://www.google.com/search?q=Coffee+beans&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAhzEAE'}, {'query': 'coffee machine', 'link': 'https://www.google.com/search?q=Coffee+machine&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh3EAE'}, {'query': 'coffee benefits', 'link': 'https://www.google.com/search?q=Coffee+benefits&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh7EAE'}, {'query': 'coffee starbucks', 'link': 'https://www.google.com/search?q=Coffee+Starbucks&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh4EAE'}, {'query': 'coffee origin', 'link': 'https://www.google.com/search?q=Coffee+origin&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAhyEAE'}, {'query': 'coffee plant', 'link': 
'https://www.google.com/search?q=Coffee+plant&sa=X&ved=2ahUKEwiRsZ3x-ab-AhUfjYkEHdmuDRcQ1QJ6BAh0EAE'}], 'pagination': {'current': 1, 'next': 'https://www.google.com/search?q=coffee&oq=coffee&start=10&sourceid=chrome&ie=UTF-8', 'other_pages': {'2': 'https://www.google.com/search?q=coffee&oq=coffee&start=10&sourceid=chrome&ie=UTF-8', '3': 'https://www.google.com/search?q=coffee&oq=coffee&start=20&sourceid=chrome&ie=UTF-8', '4': 'https://www.google.com/search?q=coffee&oq=coffee&start=30&sourceid=chrome&ie=UTF-8', '5': 'https://www.google.com/search?q=coffee&oq=coffee&start=40&sourceid=chrome&ie=UTF-8'}}, 'serpapi_pagination': {'current': 1, 'next_link': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=10', 'next': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=10', 'other_pages': {'2': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=10', '3': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=20', '4': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=30', '5': 'https://serpapi.com/search.json?device=desktop&engine=google&google_domain=google.com&location=Austin%2CTexas&q=coffee&start=40'}}}\n"
     ]
    }
   ],
   "source": [
    "from serpapi import GoogleSearch\n",
    "\n",
    "# A valid SerpApi key is required: pass it in the search params dict\n",
    "# under the \"api_key\" key before running this cell.\n",
    "search = GoogleSearch({\n",
    "    \"q\": \"coffee\",\n",
    "    \"location\": \"Austin,Texas\",\n",
    "})\n",
    "result = search.get_dict()\n",
    "print(result)"
   ]
  },
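  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The full response dictionary printed above is very large. As a minimal sketch (assuming the response schema shown above, where 'organic_results' is a list of entries carrying 'title' and 'link' keys), you can pull out just the fields you need instead of printing everything:\n",
    "\n",
    "```python\n",
    "# A hypothetical, trimmed-down response dict standing in for the real one:\n",
    "result = {'organic_results': [\n",
    "    {'position': 1, 'title': 'Coffee', 'link': 'https://en.wikipedia.org/wiki/Coffee'},\n",
    "]}\n",
    "# Collect only the titles of the organic search results.\n",
    "titles = [item['title'] for item in result.get('organic_results', [])]\n",
    "print(titles)\n",
    "```"
   ]
  },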
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Check whether the OpenAI API key is valid."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "response : {\n",
      "  \"choices\": [\n",
      "    {\n",
      "      \"finish_reason\": \"length\",\n",
      "      \"index\": 0,\n",
      "      \"logprobs\": null,\n",
      "      \"text\": \" Kaitlin. I\"\n",
      "    }\n",
      "  ],\n",
      "  \"created\": 1682500219,\n",
      "  \"id\": \"cmpl-79VSFbbtZ9ekxWgW9LTDSvimSJwd3\",\n",
      "  \"model\": \"davinci\",\n",
      "  \"object\": \"text_completion\",\n",
      "  \"usage\": {\n",
      "    \"completion_tokens\": 5,\n",
      "    \"prompt_tokens\": 5,\n",
      "    \"total_tokens\": 10\n",
      "  }\n",
      "}\n",
      "--------\n",
      " Kaitlin. I\n"
     ]
    }
   ],
   "source": [
    "import openai\n",
    "\n",
    "# The openai package reads the key from the OPENAI_API_KEY environment\n",
    "# variable by default; alternatively, set openai.api_key explicitly first.\n",
    "response = openai.Completion.create(\n",
    "    engine=\"davinci\",\n",
    "    prompt=\"Hello, my name is\",\n",
    "    max_tokens=5,\n",
    "    n=1,\n",
    "    stop=None,\n",
    "    temperature=0.5,\n",
    ")\n",
    "print(\"response : {}\".format(response))\n",
    "print(\"--------\")\n",
    "print(response.choices[0].text)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Models\n",
    "\n",
    "Now we begin the first section. The code examples here are adapted from the official LangChain tutorial website. For detailed instructions and documentation on Models, click 👉[here](https://python.langchain.com/en/latest/modules/models.html)👈 for direct access.\n",
    "\n",
    "- [LLMs](#llms)\n",
    "- [Chat Models](#chat-models)\n",
    "- [Text Embedding Models](#text-embedding-models)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### LLMs\n",
    "\n",
    "LangChain itself does not provide large language models; instead, it offers interfaces and utilities for accessing and calling them. There are many LLM providers, such as OpenAI, Cohere, and Hugging Face, and LangChain exposes a unified interface for working with all of these models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
    "import os"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Here is the usual way to declare an LLM. There are many models to choose from; you can browse [here](https://platform.openai.com/docs/models/overview) to decide which specific model to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "\"\"\"model_name is the name of the selected OpenAI model.\n",
    "best_of=2 generates two candidate completions server-side and keeps the highest-scoring ones;\n",
    "n=2 is the number of completions returned per prompt (n must not exceed best_of).\n",
    "\"\"\"\n",
    "llm = OpenAI(model_name=\"text-ada-001\", n=2, best_of=2)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So let's take a look at the specific parameters of this LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1mOpenAI\u001b[0m\n",
      "Params: {'model_name': 'text-ada-001', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 2, 'best_of': 2, 'request_timeout': None, 'logit_bias': {}}\n"
     ]
    }
   ],
   "source": [
    "llm"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Text generation: the most basic use of an LLM is to call it directly, passing in a string and getting a string back. Here, \"Tell me ... story\" is the query passed to the model. The query is simply a string; there is no restriction that it must be phrased as a question rather than a statement, since large language models such as ChatGPT will parse and interpret the content either way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"\\n\\nOnce upon a time there was a princess who was very scared. She was sitting in her bedroom when she saw a scary scene in a movie. The princess was scared and started to cry. Her parents tried to soothe her and eventually she fell asleep. When she woke up, she found herself in a dark place. She couldn't make head or tail of what was going on and started to scream. A big part of her population felt sorry for her and helped her, but she wasn't alone. The people who were supposed to be helping her started to leave without her. The princess was scared and started to run around. She didn't know where she was going or what she was doing. Eventually she got her hands on a knife and started to kill people. She was scared and full of Syndicate, but she didn't care. The princess died in the dark, but her story is still shareable online.\""
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm(\"Tell me a scared story\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "An LLM can also accept a list of prompts and generate an answer for each, along with the return information from the API provider."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "llm_result = llm.generate([\"How to study English\", \"How to generate a new idea\"]*3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "generations=[[Generation(text='\\n\\nThere are a few ways to study English:\\n\\n1. Use online resources.\\n\\n2. Take classes from an English teacher.\\n\\n3. Use English-speaking books and articles.\\n\\n4. Take English-speaking courses at your school.', generation_info={'finish_reason': 'stop', 'logprobs': None}), Generation(text=' in a foreign country\\n\\nThere are a number of ways to study English in a foreign country. One way is to go to a training course offered by a English speaking teacher. Another way is to use online resources.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\n1. Take a step back. What is the best idea you've heard?\\n2. Ask yourself how it would be used. Is the idea unique to you, or is it something that is commonly used?\\n3. How would you create it? This might be a difficult question to answer. Try to think of ways to create the idea that is most unique to you.\\n4. What ingredients would you need? The ingredients for the idea might be different for you, but the process will be the same.\\n5. What resources do you have? You might be able to find the idea through reading articles or Watch this video on how to generate a new idea.\\n6. How do you want to create it? The idea might be something that ismopolitan or consumer-based, so it might be a challenge to create a business around it.\\n7. What resources do you have? The resources for creating a new idea are same, but the process might be different.\", generation_info={'finish_reason': 'stop', 'logprobs': None}), Generation(text='\\n\\n1. Create a problem\\n\\n2. rescued a kitten from a euthanasia\\n\\n3. created a Pawtucket Mystic\\n\\n4. designed a one mile run\\n\\n5. written a story for the blog\\n\\n6. created a product for a product\\n\\n7. created a company in town\\n\\n8. 
designed a campaign', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThere is no one-size-fits-all answer to this question, as the best way to study English will vary depending on your level of English proficientness. However, some tips on how to study English include using an online English course option, studying at a frequency that is comfortable for you, and using online resources.', generation_info={'finish_reason': 'stop', 'logprobs': None}), Generation(text='\\n\\nThere are a few ways to study English:\\n\\n1. Use online resources.\\n\\n2. Use online textbooks.\\n\\n3. Use local resources.\\n\\n4. Use online lectures.\\n\\n5. Use online vocabulary.\\n\\n6. Use online grammar.\\n\\n7. Use online writing.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThere is no one way to generate a new idea. You will likely have to consider what people want, what might be appealing to them, and what you think will be a strong signal to investors. You will also need to create a plan for achieving your new idea.', generation_info={'finish_reason': None, 'logprobs': None}), Generation(text='\\n\\n1. brainstorm\\n2. create a scenario\\n3. think about what it does\\n4. come up with specific steps\\n5. test out the scenario\\n6. keep track of how it goes\\n7. Jr. Album', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\" in a foreign country\\n\\nThere is no one-size-fits-all answer to this question, as the best way to study English in a foreign country will vary depending on the individual's individual skills and knowledge. However, some ways to study English in a foreign country include attending English-language classes, taking classes from a English-speaking tutor, or studying English content in absence-friendly websites.\", generation_info={'finish_reason': 'stop', 'logprobs': None}), Generation(text='\\n\\nThere are a few ways to study English:\\n\\n1. 
Online courses: There are many online courses that teach English, including some free courses and some paid courses.\\n\\n2. Meeting English-speaking people: meeting English-speaking people can help you learn more about English culture and the language.\\n\\n3. Online groups and websites: there are many online groups and websites that teach English, and they offer resources about different topics.\\n\\n4. classroom learning: classroom learning is one of the most common ways to study English, because it is a more direct way to learning the language.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThere is no one way to generate a new idea. You will need to consider what is important to you and what is comfortable for you to talk about. You also need to create a plan in how you will go about creating the idea.', generation_info={'finish_reason': 'stop', 'logprobs': None}), Generation(text='\\n\\nThere is no one way to generate a new idea. You will need to have some kind of metaphor or metaphor with people, things, or something. For example, you could say that you are creating a new kind of product, but be specific about how you think this new product might be used. You could also use a metaphor with people or something to mean different things.\\n\\nFor example, you could create a new type of product that uses people as the main character. This could be a type of product that people use to find new things to do. The product might be called \"People\\'s Way.\" People use the product to find new ways to do things. The new way people are using the product is called \"People\\'s Way\\'s new way.\" People\\'s Way is different from the old way and people using the old way is called \"The old way.\"', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 1074, 'prompt_tokens': 30, 'completion_tokens': 1044}, 'model_name': 'text-ada-001'}\n"
     ]
    }
   ],
   "source": [
    "llm_result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "6"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(llm_result.generations)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Generation(text='\\n\\n1. Open a English course in your local university or department.\\n\\n2. Find a course on English you want to take and take it in your local university.\\n\\n3. Find a tutor or course Tutor English for you to learn from.\\n\\n4. Finish your coursework and you will have completed English studies in your local university.', generation_info={'finish_reason': 'stop', 'logprobs': None}),\n",
       " Generation(text='\\n\\nThere are a few ways to study English:\\n\\n1. Online courses: Take online courses from various English schools.\\n\\n2. Practice tests: Performance test materials and practice questions in real time so you can become comfortable with the material.\\n\\n3. Print books: Read print books and learn key concepts in the same way as if they were video lessons.\\n\\n4. Location: Take classes and workshops near you at a nearby school or organization.\\n\\n5. Online courses: Take online courses that interest you. Check out free courses beforeempying to sink in for real.', generation_info={'finish_reason': 'stop', 'logprobs': None})]"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_result.generations[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Generation(text='\\n\\n\\nThere are a few ways to generate new ideas:\\n\\n1. Look at what others have done in their line of work and experiment with different ideas.\\n\\n2. Take a step back and look at the whole work process from a different perspective.\\n\\n3. Think about different ways to do a task that is popular or efficient in another job market.\\n\\n4.Solution:\\n\\nTake a step back and think about different ways to do a task that is popular or efficient in another job market. For example, think about how to automate task completion times in a specific industry.', generation_info={'finish_reason': 'stop', 'logprobs': None}),\n",
       " Generation(text=' for a game\\n\\nThere is no one definitive way to generate a new idea for a game. However, some methods you may consider include searching online forums for various ideas, creating a list of specific ideas you have, or searching Google for \" game idea\" resources. Additionally, if you have any specific ideas you would like to see included in this resource, be sure to include them in a comment!', generation_info={'finish_reason': 'stop', 'logprobs': None})]"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_result.generations[1]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The provider-specific return information is available on the result:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'token_usage': {'prompt_tokens': 42,\n",
       "  'completion_tokens': 1407,\n",
       "  'total_tokens': 1449},\n",
       " 'model_name': 'text-ada-001'}"
      ]
     },
     "execution_count": 102,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_result.llm_output"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "You can also estimate the cost of a prompt by counting its tokens:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "29"
      ]
     },
     "execution_count": 103,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.get_num_tokens(\"假设你是个教师，如何教英语\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Chat Models\n",
    "Chat models such as gpt-3.5-turbo (the model behind the ChatGPT interface) are built around three roles: system, human, and AI. The image below shows how the chatbot captures and processes information:\n",
    "\n",
    "![Chatbot Information Capture and Processing](./three.png)\n",
    "\n",
    "As depicted in the image, the three roles are defined as follows:\n",
    "- System: Describes the context of the conversation. The system message determines what the AI ultimately needs to do by providing a clear setting or role, such as a translator or a banker.\n",
    "- Human: Specifies the concrete task or request, such as a sentence for the AI to translate or a request to generate lyrics in the style of Fang Wenshan.\n",
    "- AI: The large model's side of the conversation, i.e. the content it decides to return.\n",
    "\n",
    "In code, we express these roles using the SystemMessage, HumanMessage, and AIMessage classes imported from langchain.schema, as shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    AIMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ChatOpenAI(verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fad12836580>, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key=None, openai_organization=None, request_timeout=60, max_retries=6, streaming=False, n=1, max_tokens=None)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat = ChatOpenAI(temperature=0)\n",
    "chat"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The response is returned as an AIMessage:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='我喜欢编程。', additional_kwargs={})"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat([HumanMessage(content=\"Translate this sentence from English to Chinese. I love programming.\")])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "As shown in the image above, we typically use the SystemMessage to assign the model a role so that it answers within a specific domain of knowledge, which tends to make its answers more accurate. Note that the OpenAI chat model supports passing multiple messages at once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='我喜欢编程。', additional_kwargs={})"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = [\n",
    "    SystemMessage(content=\"You are a helpful assistant that translates English to Chinese.\"),\n",
    "    HumanMessage(content=\"Translate this sentence from English to Chinese. I love programming.\")\n",
    "]\n",
    "chat(messages)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "You can use ```chat.generate``` to handle multiple conversations at once; it returns an LLMResult that also includes the provider return information."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMResult(generations=[[ChatGeneration(text=\"J'aime programmer.\", generation_info=None, message=AIMessage(content=\"J'aime programmer.\", additional_kwargs={}))], [ChatGeneration(text='我喜欢人工智能。', generation_info=None, message=AIMessage(content='我喜欢人工智能。', additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 73, 'completion_tokens': 16, 'total_tokens': 89}, 'model_name': 'gpt-3.5-turbo'})"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_messages = [\n",
    "    [\n",
    "        SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
    "        HumanMessage(content=\"Translate this sentence from English to French. I love programming.\")\n",
    "    ],\n",
    "    [\n",
    "        SystemMessage(content=\"You are a helpful assistant that translates English to Chinese.\"),\n",
    "        HumanMessage(content=\"Translate this sentence from English to Chinese. I love artificial intelligence.\")\n",
    "    ],\n",
    "]\n",
    "result = chat.generate(batch_messages)\n",
    "result"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "\n",
    "We can obtain this information from the result as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 109,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'token_usage': {'prompt_tokens': 73,\n",
       "  'completion_tokens': 16,\n",
       "  'total_tokens': 89},\n",
       " 'model_name': 'gpt-3.5-turbo'}"
      ]
     },
     "execution_count": 109,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result.llm_output"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Using Prompt Templates with Chat Models\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Use templates to construct the SystemMessage and HumanMessage:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "template=\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
    "system_message_prompt = SystemMessagePromptTemplate.from_template(template)\n",
    "human_template=\"{text}\"\n",
    "human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'adore la programmation.\", additional_kwargs={})"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])\n",
    "chat(chat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_messages())"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "\n",
    "You can also use the approach below to construct the ```system_message_prompt``` from a PromptTemplate directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "PromptTemplate(input_variables=['input_language', 'output_language'], output_parser=None, partial_variables={}, template='You are a helpful assistant that translates {input_language} to {output_language}.', template_format='f-string', validate_template=True)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt=PromptTemplate(\n",
    "    template=\"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
    "    input_variables=[\"input_language\", \"output_language\"],\n",
    ")\n",
    "prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input_language', 'output_language'], output_parser=None, partial_variables={}, template='You are a helpful assistant that translates {input_language} to {output_language}.', template_format='f-string', validate_template=True), additional_kwargs={})"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)\n",
    "system_message_prompt"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### LLMChain\n",
    "\n",
    "Combining LLMChain with a prompt template and a chat model makes it more convenient to carry out conversations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMChain(memory=None, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fad12836580>, verbose=False, prompt=ChatPromptTemplate(input_variables=['text', 'output_language', 'input_language'], output_parser=None, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input_language', 'output_language'], output_parser=None, partial_variables={}, template='You are a helpful assistant that translates {input_language} to {output_language}.', template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template='{text}', template_format='f-string', validate_template=True), additional_kwargs={})]), llm=ChatOpenAI(verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fad12836580>, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key=None, openai_organization=None, request_timeout=60, max_retries=6, streaming=False, n=1, max_tokens=None), output_key='text')"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = LLMChain(llm=chat, prompt=chat_prompt)\n",
    "chain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'我喜欢编程和游泳。'"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.run(input_language=\"English\", output_language=\"Chinese\", text=\"I love programming and swimming\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Streaming Conversations\n",
    "Streaming responses are handled through callback functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "一加一等于二是因为这是我们所接受的基本数学原理之一。在十进制数系统中，我们将数字1表示为一个单位，数字2表示为两个单位。当我们将一个单位与另一个单位相加时，我们得到两个单位，即数字2。这是因为加法是一种基本的算术运算，它表示将两个或多个数值相加以得到它们的总和。因此，一加一等于二是数学中的基本原理之一，它被广泛接受并被用于各种数学和科学应用中。"
     ]
    }
   ],
   "source": [
    "from langchain.callbacks.base import CallbackManager\n",
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "\"\"\"\n",
    "streaming=True enables streaming mode: the response is returned incrementally,\n",
    "chunk by chunk, as it is generated. callback_manager manages the callbacks invoked\n",
    "during the chat; with a streaming handler the content is emitted as it arrives rather\n",
    "than all at once when the response completes. If callback_manager is not passed, the\n",
    "chat runs in the default mode and the response is returned in one piece at the end.\n",
    "verbose toggles printing of detailed information, and temperature controls randomness.\n",
    "\"\"\"\n",
    "chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)\n",
    "resp = chat([HumanMessage(content=\"帮我分析一下为什么一加一等于二？\")])"
   ]
  },
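  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of printing tokens to stdout, a callback handler can do anything with each chunk. Here is a dependency-free sketch of the pattern (a real handler would subclass langchain's BaseCallbackHandler and implement its other hooks; the loop at the end simulates the streaming model):\n",
    "\n",
    "```python\n",
    "class CollectTokensHandler:\n",
    "    # Minimal sketch of a streaming callback: the chat model calls\n",
    "    # on_llm_new_token once per generated chunk.\n",
    "    def __init__(self):\n",
    "        self.tokens = []\n",
    "\n",
    "    def on_llm_new_token(self, token, **kwargs):\n",
    "        self.tokens.append(token)\n",
    "\n",
    "    def text(self):\n",
    "        return ''.join(self.tokens)\n",
    "\n",
    "# Simulate what a streaming run would do with the handler:\n",
    "handler = CollectTokensHandler()\n",
    "for chunk in ['Hello', ', ', 'world']:\n",
    "    handler.on_llm_new_token(chunk)\n",
    "```\n"
   ]
  },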
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "As an AI language model, I cannot provide instructions on how to become a fictional character like Superman. However, I can suggest some ways to become a better version of yourself:\n",
      "\n",
      "1. Set goals and work towards them consistently.\n",
      "2. Exercise regularly to improve physical strength and endurance.\n",
      "3. Learn new skills and knowledge to enhance mental abilities.\n",
      "4. Practice empathy and kindness towards others.\n",
      "5. Be courageous and stand up for what is right.\n",
      "6. Develop a positive mindset and attitude towards life.\n",
      "7. Surround yourself with supportive and positive people.\n",
      "8. Take care of your health by eating a balanced diet and getting enough rest.\n",
      "9. Continuously challenge yourself to grow and improve.\n",
      "10. Believe in yourself and your abilities."
     ]
    }
   ],
   "source": [
    "resp_another = chat([HumanMessage(content=\"tell me how to become a superman?\")])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Text Embedding Models\n",
    "Text embeddings map natural language into vectors. The two most important embedding methods in LangChain are embed_documents and embed_query. Because embedding a batch of documents is quite different from embedding a single query, LangChain separates them into two methods."
   ]
  },
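  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shape of that interface can be sketched with a toy embedder (hash-based and purely illustrative; a real model such as OpenAIEmbeddings produces semantically meaningful vectors): embed_query maps one string to one vector, while embed_documents maps a list of strings to a list of vectors.\n",
    "\n",
    "```python\n",
    "import hashlib\n",
    "\n",
    "class ToyEmbeddings:\n",
    "    # Illustrative stand-in for an embeddings class like OpenAIEmbeddings.\n",
    "    def embed_query(self, text, dim=4):\n",
    "        digest = hashlib.sha256(text.encode()).digest()\n",
    "        # Map the first 'dim' bytes to floats in [0, 1).\n",
    "        return [b / 256 for b in digest[:dim]]\n",
    "\n",
    "    def embed_documents(self, texts, dim=4):\n",
    "        return [self.embed_query(t, dim) for t in texts]\n",
    "\n",
    "emb = ToyEmbeddings()\n",
    "vec = emb.embed_query('hello')          # one vector\n",
    "vecs = emb.embed_documents(['a', 'b'])  # one vector per document\n",
    "```\n"
   ]
  },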
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "LangChain integrates embedding APIs from many different platforms; here is an example of doing text embedding with OpenAI."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# OpenAI's embeddings are also the ones we use most often\n",
    "from langchain.embeddings import OpenAIEmbeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "embeddings = OpenAIEmbeddings()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "text = \"This is a test document.\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[-0.0031265460635144116,\n",
       " 0.011133635493097378,\n",
       " -0.004037691773639618,\n",
       " -0.011746618046080886,\n",
       " -0.000993598608605281,\n",
       " 0.01080715570571065,\n",
       " -0.010440697965211392,\n",
       " -0.005253663322651083,\n",
       " -0.009874355115762394,\n",
       " -0.026171704933991537,\n",
       " 0.020348368352341755,\n",
       " 0.02257376179702522,\n",
       " -0.00752236652698065,\n",
       " 0.017230149476854816,\n",
       " -0.005986577872327018,\n",
       " 0.01914905397335298,\n",
       " 0.021254515421344024,\n",
       " -0.015644390243455687,\n",
       " 0.007642298058011785,\n",
       " -0.018402813128865346,\n",
       " -0.0006866907980891692,\n",
       " -0.006416332020722187,\n",
       " -0.010967063792870444,\n",
       " 0.0180030410483207,\n",
       " -0.022173991579125737,\n",
       " -0.0030682459684349844,\n",
       " 0.014405098842676966,\n",
       " -0.029982859474719296,\n",
       " 0.018496092535934365,\n",
       " -0.007935463505353126,\n",
       " 0.01004758949773389,\n",
       " -0.019442219420694323,\n",
       " -0.003694554583399551,\n",
       " -0.024279453027116777,\n",
       " -0.005540166088247863,\n",
       " 0.0038311432844533796,\n",
       " -0.00566676076668485,\n",
       " -0.029449831896638266,\n",
       " 0.018576046579514262,\n",
       " -0.016643818582172135,\n",
       " 0.007529029208725211,\n",
       " 0.012452880937455994,\n",
       " -0.0009711115338484352,\n",
       " -0.010693886390762786,\n",
       " 0.010034264134244767,\n",
       " -0.012566149321081278,\n",
       " 0.03358080261168561,\n",
       " -0.026957921868946542,\n",
       " -0.007935463505353126,\n",
       " 0.028970105772513726,\n",
       " 0.019735384868035666,\n",
       " -0.002010517301808462,\n",
       " -0.027397671902603716,\n",
       " -0.008768321075165222,\n",
       " -0.00623976629787841,\n",
       " 0.0019855315467744218,\n",
       " -0.007389109166799102,\n",
       " 0.027930699480684745,\n",
       " 0.00680944048819969,\n",
       " -0.009288023289757131,\n",
       " -0.022533785706557855,\n",
       " 0.010460686941767657,\n",
       " -0.013612218294654818,\n",
       " -0.011120309198285674,\n",
       " -0.004873880684323994,\n",
       " -0.008301921245852384,\n",
       " -0.0058333320012504955,\n",
       " 0.011120309198285674,\n",
       " 0.006299731830563333,\n",
       " 0.004460783240290227,\n",
       " 0.010627257710672012,\n",
       " 0.012452880937455994,\n",
       " 0.005370263512809938,\n",
       " 0.005436891727239422,\n",
       " 0.008941555457136718,\n",
       " 0.00428421751744645,\n",
       " -0.022693693793717648,\n",
       " -0.013212447145432755,\n",
       " 0.01123357851323354,\n",
       " -0.005710069129347079,\n",
       " 0.00980772643567162,\n",
       " -0.024439362976921734,\n",
       " -0.008555109671403776,\n",
       " 0.007875497507006914,\n",
       " 0.0025502087257872805,\n",
       " 0.007562343548770598,\n",
       " 0.005740052128520186,\n",
       " 0.028223864928026088,\n",
       " -0.022960208514080743,\n",
       " -0.015564436199875789,\n",
       " -0.008215303589205346,\n",
       " 0.02005520104235525,\n",
       " 0.003584617540646548,\n",
       " 0.012779361656165304,\n",
       " -0.01647058513152322,\n",
       " 0.031555295207274466,\n",
       " 0.013532264251074922,\n",
       " 0.030382631555263943,\n",
       " -0.006206451957833024,\n",
       " -0.049678266389642435,\n",
       " -0.0037212060088697317,\n",
       " -0.001316747287950374,\n",
       " -0.007055966232006522,\n",
       " 0.0035246517751309804,\n",
       " -0.03486007103425428,\n",
       " -0.009028172182461176,\n",
       " 0.021960778312719132,\n",
       " -0.007275840783173818,\n",
       " 0.015431178839694241,\n",
       " -0.008281932269296119,\n",
       " -0.016457257905388937,\n",
       " 0.005083760281551867,\n",
       " -0.006366360510654107,\n",
       " -0.03603273654890997,\n",
       " 0.01561773858515486,\n",
       " 0.00566676076668485,\n",
       " 0.006486292041685242,\n",
       " -0.021907476858762643,\n",
       " -0.010260800901495335,\n",
       " 0.0010419043318218987,\n",
       " 0.0006621215188718254,\n",
       " 0.016883680712911825,\n",
       " 0.026504848334445406,\n",
       " -0.02753092926278526,\n",
       " 0.010580618007137502,\n",
       " 0.013292401189012652,\n",
       " -0.00880829809695517,\n",
       " -0.04184274776707063,\n",
       " -0.006469634871662549,\n",
       " -0.005530172065631021,\n",
       " 0.019109076020240454,\n",
       " 0.021454401461616344,\n",
       " 0.01419188743891552,\n",
       " 0.017509989560707036,\n",
       " -0.023333328005001982,\n",
       " 0.037312004971478636,\n",
       " -0.04040357125734217,\n",
       " 0.024772504514730443,\n",
       " -0.0432286228228426,\n",
       " -0.008435177674711351,\n",
       " 0.008308583927596945,\n",
       " 0.027397671902603716,\n",
       " -0.019162379336842107,\n",
       " -0.016936984029513477,\n",
       " 0.010294115241540723,\n",
       " 0.01395202437685325,\n",
       " 0.008255280610995293,\n",
       " -0.024812482467842973,\n",
       " 0.012506184254057646,\n",
       " -0.021147910650785882,\n",
       " 0.0031082232230555777,\n",
       " 0.001852274419423519,\n",
       " 0.017763178917581012,\n",
       " 0.002528554544456166,\n",
       " 0.015551109905064087,\n",
       " 0.03144868857407117,\n",
       " 0.0038577947099235604,\n",
       " -0.009514560988330279,\n",
       " -0.018029693637944107,\n",
       " -0.01961545287134324,\n",
       " 0.0030682459684349844,\n",
       " 0.02218731694261486,\n",
       " 0.00956786337360935,\n",
       " 0.0018689315894462124,\n",
       " 0.023466585365183527,\n",
       " 0.034700162947094486,\n",
       " 0.004007708774466511,\n",
       " 0.007648960739756346,\n",
       " -0.0016307344306507275,\n",
       " -0.015404527181393415,\n",
       " -0.025039019235093538,\n",
       " 0.02475917915124132,\n",
       " -0.02537216263554741,\n",
       " 0.007655623421500907,\n",
       " -0.0201884584025368,\n",
       " 0.014271841482495417,\n",
       " 0.007202549421338483,\n",
       " 0.0034780118387658255,\n",
       " -0.0044374633885229725,\n",
       " -0.004037691773639618,\n",
       " -0.033873968059026954,\n",
       " 0.016883680712911825,\n",
       " 0.01480486999189903,\n",
       " 0.03179515547536899,\n",
       " -0.017056916026205904,\n",
       " -0.006936035166636677,\n",
       " 0.024226151573160288,\n",
       " -0.009534549033563962,\n",
       " 0.001379211559120152,\n",
       " -0.00461736045223903,\n",
       " 0.023200070644820433,\n",
       " 0.021880824269139235,\n",
       " -0.0063430406588868525,\n",
       " -0.01480486999189903,\n",
       " -0.7023184943753603,\n",
       " -0.016363978498319918,\n",
       " 0.004297543346596862,\n",
       " -0.017230149476854816,\n",
       " 0.022173991579125737,\n",
       " 0.03136873266784611,\n",
       " 0.024599271064081527,\n",
       " 0.002137111747414804,\n",
       " -0.0060098977240942725,\n",
       " 0.02185417354216099,\n",
       " -0.0226670430667394,\n",
       " -0.012706069363007388,\n",
       " -0.009201406564432673,\n",
       " -0.013212447145432755,\n",
       " -0.014111932464013044,\n",
       " -0.023160094554353066,\n",
       " -0.016963636619136885,\n",
       " -0.003323100297253163,\n",
       " -0.01761659619391034,\n",
       " 0.013085852466995769,\n",
       " 0.014924801057268873,\n",
       " 0.015457830497995068,\n",
       " -0.015711018923546458,\n",
       " -0.01672377262575203,\n",
       " 0.000518870077865774,\n",
       " 0.00890824111709133,\n",
       " 0.01309251514874033,\n",
       " -0.006133160595997688,\n",
       " 0.017483338833728792,\n",
       " 0.01689700793904611,\n",
       " -0.026211682887104064,\n",
       " -0.003571291711496135,\n",
       " -0.0009452929435963248,\n",
       " 0.008548446989659215,\n",
       " 0.033260986437366025,\n",
       " -0.0020588231414404023,\n",
       " -0.014485052886256863,\n",
       " -0.025039019235093538,\n",
       " 0.016590515265570482,\n",
       " 0.0008794972151389435,\n",
       " -0.016830379258955336,\n",
       " -0.000344802889780038,\n",
       " 0.0048205778333836324,\n",
       " -0.0017423372602551933,\n",
       " -0.004604034623088617,\n",
       " -0.0025218916298809595,\n",
       " 0.018829235936388234,\n",
       " 0.0003429289523317189,\n",
       " 0.027450973356560205,\n",
       " 0.017656572284377708,\n",
       " -0.016590515265570482,\n",
       " -0.0005492693200249316,\n",
       " 0.006656195082784458,\n",
       " -0.003491337435085593,\n",
       " 0.0010210829857088546,\n",
       " -0.010434035283466831,\n",
       " 0.021640962138399545,\n",
       " 0.007728915248997534,\n",
       " 0.010833806432688895,\n",
       " 0.016563864538592238,\n",
       " -0.0011426800707608075,\n",
       " 0.011879875406262435,\n",
       " -0.005626783279233612,\n",
       " -0.016883680712911825,\n",
       " -0.028943455045535478,\n",
       " -0.007495714868679824,\n",
       " -0.020841419839955418,\n",
       " 0.022547111070046977,\n",
       " 0.0010219158209269247,\n",
       " -0.019002471249682313,\n",
       " -0.003724537582572657,\n",
       " 0.012512846935802208,\n",
       " -0.010254138219750774,\n",
       " -0.01970873414105742,\n",
       " 0.020761463933730358,\n",
       " 0.015497807519785016,\n",
       " 0.00976108673213711,\n",
       " -0.009987623499387678,\n",
       " -0.00933466392461422,\n",
       " 0.016790401305842806,\n",
       " 0.00371454332712517,\n",
       " 0.003584617540646548,\n",
       " -0.014338470162586192,\n",
       " -0.015431178839694241,\n",
       " 0.01085379540924516,\n",
       " -0.02024176171913845,\n",
       " -0.04610697584229952,\n",
       " -0.01299923574167131,\n",
       " -0.0066761835936794325,\n",
       " 0.013385681527404252,\n",
       " 0.021547682731330526,\n",
       " 0.0014358458673481166,\n",
       " -0.007935463505353126,\n",
       " -0.0004426636181442092,\n",
       " 0.008928229162325016,\n",
       " -0.0027784116291352783,\n",
       " 0.0011585044055654307,\n",
       " 0.003967731752676563,\n",
       " 0.030142769424524252,\n",
       " -0.0019239002272380366,\n",
       " 0.0014741573351172471,\n",
       " 0.0024169515984363774,\n",
       " 0.0006317222185050064,\n",
       " 0.011733292682591764,\n",
       " 0.021161236014275005,\n",
       " 0.005247000640906522,\n",
       " 0.003637920391586909,\n",
       " 0.024452688340410856,\n",
       " 0.018256230405194675,\n",
       " -0.008488480991313004,\n",
       " -0.007169235081293096,\n",
       " -0.008988195160671227,\n",
       " -0.009554538010120227,\n",
       " 0.0015707686651351598,\n",
       " -0.016057487687489453,\n",
       " -0.034220438685615105,\n",
       " 0.008035406525489288,\n",
       " 0.011246903876722662,\n",
       " 0.0023653144179321566,\n",
       " 0.0033880630740771513,\n",
       " 0.01404530378392227,\n",
       " 0.017723200964468482,\n",
       " 0.013339040892547161,\n",
       " -0.0016365643935925412,\n",
       " 0.004580714771321362,\n",
       " -0.002518560289008679,\n",
       " 0.003941080560037028,\n",
       " -0.019415566831070916,\n",
       " -0.01719017338638745,\n",
       " -0.009707783415535458,\n",
       " -0.0194288940572052,\n",
       " -0.0017639915580016305,\n",
       " 0.03371405997186716,\n",
       " -0.01419188743891552,\n",
       " 0.007722252101591682,\n",
       " 0.002328668737014489,\n",
       " 0.0038078231998554795,\n",
       " -0.0052803149809519095,\n",
       " 0.021361122054547325,\n",
       " -0.009727772392091723,\n",
       " -0.013019223786904994,\n",
       " 0.013312390165568917,\n",
       " -0.006969349041020773,\n",
       " -0.009108126226041072,\n",
       " -0.011913189746307822,\n",
       " -0.02232057430279641,\n",
       " -0.017989715684831577,\n",
       " -0.0030299346170811763,\n",
       " 0.0009786072254340506,\n",
       " 0.00995430915934229,\n",
       " -0.022107362899034963,\n",
       " 0.0013617215538793885,\n",
       " -0.010100892814335542,\n",
       " -0.005843326489528627,\n",
       " -0.00813534954562545,\n",
       " -0.011106983834796552,\n",
       " -0.017070241389695026,\n",
       " -0.02627831156719484,\n",
       " 0.0010627257943502657,\n",
       " -0.0014475059096470666,\n",
       " -0.008601749374938286,\n",
       " 0.017549967513819566,\n",
       " -0.006196457935216181,\n",
       " 0.0011410144003246672,\n",
       " -0.02651817369793453,\n",
       " -0.018242905041705552,\n",
       " -0.03302112244398117,\n",
       " 0.033181030531140965,\n",
       " -0.009974298135898554,\n",
       " -0.0339006187860052,\n",
       " -0.018309533721796327,\n",
       " 0.005167046131665335,\n",
       " -0.013492287229284973,\n",
       " -0.0020704830673240297,\n",
       " 0.005017132067122384,\n",
       " -0.010527315621858432,\n",
       " -0.015737669650524706,\n",
       " 0.007842183166961527,\n",
       " 0.010354081239886934,\n",
       " -0.02133447132756908,\n",
       " -0.0049904804088215585,\n",
       " 0.0034580230950402063,\n",
       " 0.012279646555484499,\n",
       " 0.0020305058127034364,\n",
       " 0.01181324672617166,\n",
       " 0.015084710075751248,\n",
       " 0.013465635570984147,\n",
       " 0.007075954742901496,\n",
       " -0.012646104295983757,\n",
       " 0.0025885203099717336,\n",
       " 0.008042069207233849,\n",
       " 0.0033064431272304693,\n",
       " -0.01461831024643841,\n",
       " 0.009274697926268009,\n",
       " -0.007648960739756346,\n",
       " 0.01985531686472809,\n",
       " -0.029689694027377957,\n",
       " 0.010447360646955953,\n",
       " 0.012872641063234323,\n",
       " 0.02385303021959389,\n",
       " 0.02257376179702522,\n",
       " 0.008708355076819009,\n",
       " 0.0038544631362206344,\n",
       " -0.022720344520695893,\n",
       " 0.00016542626076378224,\n",
       " -0.03541975120195872,\n",
       " -0.01313915578359742,\n",
       " -0.0027151145227474302,\n",
       " 0.013552252296308605,\n",
       " 0.006339709318014572,\n",
       " 0.012885967358046027,\n",
       " -0.017643246920888585,\n",
       " -0.02470587583463967,\n",
       " 0.017390059426659773,\n",
       " 0.02152103014170712,\n",
       " 0.005450217556389835,\n",
       " -0.010134206223058348,\n",
       " -0.002833380383342425,\n",
       " -0.006746143614642487,\n",
       " -0.0020538258973013365,\n",
       " 0.0066895094228298455,\n",
       " 0.004837234537745035,\n",
       " 0.012559486639336717,\n",
       " -0.007282503464918379,\n",
       " -0.01624404650162749,\n",
       " 0.011493429620529494,\n",
       " 0.028596986281592487,\n",
       " 0.03259469963645829,\n",
       " 0.02300018460454811,\n",
       " -0.024159522893069513,\n",
       " -0.018362835175752816,\n",
       " 0.002605177479994427,\n",
       " 0.0008570100821744363,\n",
       " 0.0012476201022053896,\n",
       " 0.0031781832440186328,\n",
       " 0.010460686941767657,\n",
       " 0.02242717907335455,\n",
       " -0.01938891610409267,\n",
       " 0.028730243641774036,\n",
       " 0.00035979433115892995,\n",
       " 0.011486766938784932,\n",
       " 0.018829235936388234,\n",
       " 0.028650287735548976,\n",
       " -0.008501806354802126,\n",
       " 0.02290690519747909,\n",
       " -9.301974269190048e-05,\n",
       " 0.028197214201047843,\n",
       " 0.0085684350348929,\n",
       " -0.004021034603616924,\n",
       " 0.03147533930104941,\n",
       " -0.010707212685574488,\n",
       " 0.0018039686962069014,\n",
       " 0.001005258650904231,\n",
       " 0.004180943156438008,\n",
       " 0.03246144227627674,\n",
       " -0.026917945778479172,\n",
       " -0.013412332254382497,\n",
       " 0.0018822573021813028,\n",
       " 0.014338470162586192,\n",
       " 0.03123547717030972,\n",
       " 0.022227293033082227,\n",
       " 0.005023794748866945,\n",
       " -0.0026301630021978223,\n",
       " 0.0012159715490114653,\n",
       " 0.023213396008309555,\n",
       " -0.004550731772148256,\n",
       " -0.013205784463688194,\n",
       " 0.003389728977343937,\n",
       " -0.020015224951887883,\n",
       " -0.01304587544520582,\n",
       " -0.0056067947683386375,\n",
       " -0.0014141915696016796,\n",
       " 0.018082995091900596,\n",
       " 0.0024502659384817646,\n",
       " 0.0246658997441723,\n",
       " 0.010580618007137502,\n",
       " 0.02590519021362844,\n",
       " 0.012039784424744808,\n",
       " 0.01647058513152322,\n",
       " 0.001490814388724618,\n",
       " -0.009081475499062828,\n",
       " -0.022373875756752898,\n",
       " 0.0006258921973555314,\n",
       " 0.01218636714841548,\n",
       " 0.012292972850296202,\n",
       " -0.011719966387780062,\n",
       " -0.010247475538006213,\n",
       " 0.004657337474028978,\n",
       " -0.007442412017739463,\n",
       " 0.006479629359940681,\n",
       " 0.00400104609272195,\n",
       " -4.937281880562186e-05,\n",
       " 0.01709689397931843,\n",
       " -0.023160094554353066,\n",
       " -5.1793308003778376e-05,\n",
       " 0.01596420641777527,\n",
       " 0.008888252140535067,\n",
       " -0.0055368347473755825,\n",
       " -0.010147532517870051,\n",
       " -0.007815532439983282,\n",
       " 0.011180275196631888,\n",
       " -0.0075756693779210115,\n",
       " 0.008761658393420661,\n",
       " 0.00623643495700613,\n",
       " 0.02009517899546778,\n",
       " 0.0003102393260113764,\n",
       " -0.0026085088208667078,\n",
       " -0.012959258719881362,\n",
       " -0.01341899586744964,\n",
       " -0.010300777923285284,\n",
       " 0.01804301900143323,\n",
       " 0.003874451879946254,\n",
       " -0.014005326762132321,\n",
       " -0.005956594873153911,\n",
       " 0.015604413221665737,\n",
       " -0.005650103596662157,\n",
       " 0.0015449501912983721,\n",
       " 0.0133323782108026,\n",
       " 0.016830379258955336,\n",
       " 0.012819338677955252,\n",
       " -0.01537787552309259,\n",
       " -0.006882732315696315,\n",
       " -0.01809632231803488,\n",
       " 0.010727200730808172,\n",
       " 0.0489586781347782,\n",
       " 0.04320196837057403,\n",
       " 0.0025801917249603868,\n",
       " -0.010893772431035108,\n",
       " 0.0037012174979747575,\n",
       " -0.019308962060512774,\n",
       " -0.02694459650545742,\n",
       " -0.0135189388875858,\n",
       " 0.006366360510654107,\n",
       " -0.002205405865111073,\n",
       " -0.007882161120074055,\n",
       " -0.004354177538409504,\n",
       " 0.01624404650162749,\n",
       " 0.001379211559120152,\n",
       " 0.016630493218683012,\n",
       " 0.007122594912097296,\n",
       " -0.0013892058145676392,\n",
       " -0.0024735860230796648,\n",
       " -0.009487909330029453,\n",
       " -0.012279646555484499,\n",
       " 0.002900008830602554,\n",
       " 0.007708926272441269,\n",
       " 0.007688937761546295,\n",
       " 0.003371406136885103,\n",
       " 0.01209974949176844,\n",
       " -0.02133447132756908,\n",
       " 0.007722252101591682,\n",
       " 0.0016107459197557533,\n",
       " 0.0002640157490475955,\n",
       " -0.020361693715830877,\n",
       " -0.01419188743891552,\n",
       " 0.019215682653443755,\n",
       " 0.01014086983612549,\n",
       " 0.018109647681524003,\n",
       " -0.0006242264687117299,\n",
       " 0.03288786508379963,\n",
       " -0.0044308002411171206,\n",
       " -0.015790972967126355,\n",
       " 0.007755566441637069,\n",
       " 0.004847229026023168,\n",
       " 0.009254709881034323,\n",
       " 0.004580714771321362,\n",
       " -0.003394725988652358,\n",
       " -0.008261943292739854,\n",
       " -0.007149246570398122,\n",
       " 0.00025672823716266707,\n",
       " -0.016657143945661257,\n",
       " 0.003386397403641011,\n",
       " -0.008635063714983673,\n",
       " -0.007555680867026037,\n",
       " 0.012079761446534757,\n",
       " -0.01770987560097936,\n",
       " -0.0012184701710809985,\n",
       " -0.015417852544882538,\n",
       " 0.02875689436875228,\n",
       " 0.023306677278023737,\n",
       " 0.004390823219327172,\n",
       " 0.006816103635605542,\n",
       " -0.0022070715355472135,\n",
       " -0.017536642150330444,\n",
       " -0.029876254704161154,\n",
       " -0.006792783318176996,\n",
       " -0.011373498555159648,\n",
       " -0.009301349584568833,\n",
       " -0.011413475576949597,\n",
       " -0.03043593487186559,\n",
       " -0.023506561455650898,\n",
       " -0.013345704505614304,\n",
       " -0.020161807675558554,\n",
       " 0.0037212060088697317,\n",
       " 0.010980389156359566,\n",
       " -0.01760327083042122,\n",
       " -0.024785831740864725,\n",
       " -0.016217395774649247,\n",
       " 0.02899675649949197,\n",
       " -0.013938698082041547,\n",
       " 0.0005355271896808063,\n",
       " 0.01304587544520582,\n",
       " 0.0014974773032998246,\n",
       " 0.022533785706557855,\n",
       " -0.010840470045756037,\n",
       " -0.022507133116934447,\n",
       " 0.005899960681341269,\n",
       " -0.02413287030344611,\n",
       " -0.005899960681341269,\n",
       " 0.016843704622444458,\n",
       " -0.003987720263571537,\n",
       " 0.0067761261481543026,\n",
       " -0.0027351030336424044,\n",
       " 0.01047401230525678,\n",
       " -0.006076526404185046,\n",
       " -0.002423614513011584,\n",
       " -0.004863886196045861,\n",
       " -0.02936987785305837,\n",
       " 0.016350653134830796,\n",
       " -0.008768321075165222,\n",
       " 0.0019888631204773477,\n",
       " 0.007482389505190702,\n",
       " -0.00866837805502906,\n",
       " -0.004130971646369927,\n",
       " -0.015537784541574963,\n",
       " -0.001499975808954035,\n",
       " -0.009128115202597337,\n",
       " 0.0016174087179156372,\n",
       " 0.020788116523353765,\n",
       " -0.019055772703638802,\n",
       " -0.013379018845659691,\n",
       " 0.01689700793904611,\n",
       " -0.023293350051889452,\n",
       " -0.007715589419847121,\n",
       " -0.010460686941767657,\n",
       " -0.018162950998125656,\n",
       " 0.009587852350165614,\n",
       " -0.020015224951887883,\n",
       " 0.002370311662071223,\n",
       " 0.017443360880616262,\n",
       " -0.007422423506844489,\n",
       " 0.022014081629320784,\n",
       " -0.010260800901495335,\n",
       " -0.030169420151502497,\n",
       " 6.792991963538974e-05,\n",
       " -0.009801063753927058,\n",
       " 0.011086995789562869,\n",
       " 0.03563296446836532,\n",
       " 0.022080710309411555,\n",
       " 0.029209967903253413,\n",
       " -0.008768321075165222,\n",
       " -0.0030016175211748557,\n",
       " -0.014218538165893765,\n",
       " 0.007915475460119442,\n",
       " -0.00509375476983,\n",
       " 0.0011701644478643808,\n",
       " -0.007782218099937894,\n",
       " -0.0041976003264607015,\n",
       " -0.018882539252989886,\n",
       " -0.01143346362218328,\n",
       " -0.001524128728770005,\n",
       " 0.013672184293001032,\n",
       " -0.005263657810929216,\n",
       " -0.0106605720507174,\n",
       " -0.015084710075751248,\n",
       " 0.013658857998189328,\n",
       " 0.011393486600393332,\n",
       " -0.009168092224387286,\n",
       " -0.02567865344637787,\n",
       " -0.029822951387559502,\n",
       " -0.02170759081849032,\n",
       " 0.010107555496080103,\n",
       " -0.020574905119592323,\n",
       " 0.0008586758108182379,\n",
       " 0.021147910650785882,\n",
       " 0.0003445946809755205,\n",
       " -0.011626686980711043,\n",
       " -0.00490719455870809,\n",
       " 0.015830950920238885,\n",
       " -0.029396528580036614,\n",
       " -0.004813914685977781,\n",
       " 0.006193126128682611,\n",
       " 0.016217395774649247,\n",
       " 0.02710450459261721,\n",
       " 0.02270701915720677,\n",
       " -0.011613360685899339,\n",
       " 0.011773269704381712,\n",
       " 0.019788688184637318,\n",
       " -0.0034213774141225385,\n",
       " 0.009634492053700124,\n",
       " -0.009401292604704995,\n",
       " -0.013845418674972527,\n",
       " -0.016390629225298162,\n",
       " 0.0025635345549376936,\n",
       " 0.010027600521177625,\n",
       " 0.006729486444619793,\n",
       " 0.014511704544557687,\n",
       " -0.00942794333168324,\n",
       " 0.01547115586148419,\n",
       " 0.00895488082062584,\n",
       " -0.0016707115688559984,\n",
       " 0.001046901575960965,\n",
       " -0.0047073089840970585,\n",
       " -0.023466585365183527,\n",
       " -0.011933178722864087,\n",
       " 0.017416710153638017,\n",
       " -0.0086617153732845,\n",
       " 0.01647058513152322,\n",
       " -0.013685509656490154,\n",
       " -0.020428322395921652,\n",
       " 0.024559293110968997,\n",
       " 0.004660668814901258,\n",
       " 0.015724344287035584,\n",
       " 0.015071384712262126,\n",
       " 0.018589373805648544,\n",
       " -0.005963257554898472,\n",
       " 0.0007462400877880405,\n",
       " 0.001171830118300521,\n",
       " 0.005750046151137027,\n",
       " -0.01237958957562066,\n",
       " 0.017483338833728792,\n",
       " -0.022173991579125737,\n",
       " -0.015031406759149596,\n",
       " 0.025145625868296842,\n",
       " 0.0010419043318218987,\n",
       " 0.011326857920302559,\n",
       " -0.008468492014756739,\n",
       " 0.014671612631717481,\n",
       " 0.011773269704381712,\n",
       " 0.004297543346596862,\n",
       " 0.010194172221404561,\n",
       " 0.005030457430611506,\n",
       " -0.01076717775259812,\n",
       " 0.0067128292745971,\n",
       " -0.023693122132434095,\n",
       " -0.03152864448029622,\n",
       " -0.0027684173736877912,\n",
       " -0.008035406525489288,\n",
       " 0.01595088105428615,\n",
       " 0.012292972850296202,\n",
       " -0.012146390126625531,\n",
       " 0.012233006851949989,\n",
       " -0.02113458528729676,\n",
       " -0.013359029869103426,\n",
       " 0.0044874348985910525,\n",
       " -0.0006883564685253096,\n",
       " 0.04965161566266419,\n",
       " 0.007555680867026037,\n",
       " 0.005316960661869576,\n",
       " 0.017496664197217914,\n",
       " 0.000522617894554751,\n",
       " -0.024692550471150546,\n",
       " 0.017270127429967346,\n",
       " -0.018416138492354468,\n",
       " 0.0014724915482657842,\n",
       " 0.04035026607809535,\n",
       " 0.025878539486650195,\n",
       " -0.017016938073093374,\n",
       " -0.013598892931165696,\n",
       " -0.007768892270787481,\n",
       " 0.024559293110968997,\n",
       " 0.013299063870757213,\n",
       " -0.017936412368229925,\n",
       " -0.017829807597671783,\n",
       " 0.0194288940572052,\n",
       " -0.0027401002777814707,\n",
       " -0.005286977662696471,\n",
       " -0.0075690062305151596,\n",
       " -0.0401104020847105,\n",
       " 0.015071384712262126,\n",
       " -0.0008078715237397484,\n",
       " -0.003851131795348354,\n",
       " -0.02204073421894419,\n",
       " -0.0016540543988333048,\n",
       " -0.02260041438664863,\n",
       " 0.007275840783173818,\n",
       " -0.03619264463606976,\n",
       " 0.019975246998775356,\n",
       " 0.011933178722864087,\n",
       " -0.029023409089115375,\n",
       " -0.007902149165307739,\n",
       " 0.001842280163976032,\n",
       " -0.032861214356821385,\n",
       " 0.0014425086655080004,\n",
       " -0.00016022090968360585,\n",
       " 0.02843707633178753,\n",
       " -0.010753852389108998,\n",
       " -0.0016290687602145873,\n",
       " 0.01123357851323354,\n",
       " 0.008548446989659215,\n",
       " -0.01728345279345647,\n",
       " 0.007015989210216574,\n",
       " -0.0024302774275867904,\n",
       " 0.026264986203705716,\n",
       " -0.013992001398643199,\n",
       " -0.011027029791216655,\n",
       " -0.0004108068270419369,\n",
       " 0.006269749297051517,\n",
       " 0.008455166651267616,\n",
       " -0.003757851689787399,\n",
       " -0.002618503076314195,\n",
       " -0.0022070715355472135,\n",
       " -0.0085684350348929,\n",
       " 0.0039277547308866146,\n",
       " -0.002267037301062781,\n",
       " 0.018749281892808337,\n",
       " -0.0012909286976982635,\n",
       " -0.022826951153899194,\n",
       " -0.011393486600393332,\n",
       " -0.006266417956179236,\n",
       " -0.013965349740342373,\n",
       " -0.008748332098608957,\n",
       " -0.014764892970109082,\n",
       " -0.02629163693068396,\n",
       " -0.01428516684598454,\n",
       " 0.0058833035113185755,\n",
       " 0.001676541531797812,\n",
       " 0.021720916181979442,\n",
       " 0.010014275157688502,\n",
       " -0.014405098842676966,\n",
       " 0.02157433345830877,\n",
       " 0.027824094710126604,\n",
       " -0.02365314604196673,\n",
       " -0.0009569529276876134,\n",
       " -0.006299731830563333,\n",
       " -7.870500903152122e-05,\n",
       " -0.007608983717966398,\n",
       " 0.0029150003301891074,\n",
       " 0.0068960576791854385,\n",
       " -0.00980772643567162,\n",
       " 0.027477625946183613,\n",
       " -0.014165235780614694,\n",
       " -0.024119544939956987,\n",
       " -0.03677897553075245,\n",
       " -0.005889966193063137,\n",
       " 0.015484481224973312,\n",
       " -0.012792687019654426,\n",
       " -0.0025768601512574612,\n",
       " -0.005413571875472167,\n",
       " 0.0006854414870544027,\n",
       " 0.018682653212717563,\n",
       " 0.02494573982802452,\n",
       " -0.012219681488460867,\n",
       " 0.019402241467581793,\n",
       " -0.004590708793938204,\n",
       " -0.001802303025770761,\n",
       " 0.005670092107557131,\n",
       " -0.02403959089637709,\n",
       " -0.007195886273932631,\n",
       " 0.008988195160671227,\n",
       " -0.001668213063201788,\n",
       " 0.009041497545950298,\n",
       " -0.017976390321342454,\n",
       " -0.01794973959436421,\n",
       " 0.007062629379412374,\n",
       " 0.022720344520695893,\n",
       " 0.0026035115767276415,\n",
       " -0.015177990414142847,\n",
       " -0.030355980828285695,\n",
       " -0.0004659836154306172,\n",
       " -0.00495050338703161,\n",
       " -0.0018622687912863286,\n",
       " 0.007555680867026037,\n",
       " -0.008588424011449164,\n",
       " -0.021054631243716863,\n",
       " 0.00041455467283474455,\n",
       " -0.011833235702727925,\n",
       " 0.005690080618452105,\n",
       " -0.008641726396728234,\n",
       " 0.005213686300861135,\n",
       " -0.016883680712911825,\n",
       " -0.011879875406262435,\n",
       " -0.03347419597848231,\n",
       " -0.02098800256362609,\n",
       " 0.0015932559145149896,\n",
       " 0.0076689492506513204,\n",
        " 0.015497807519785016,\n",
        " -0.015191315777631971,\n",
        " -0.027211111225820515,\n",
        " -0.007442412017739463,\n",
        " -0.040510174165255146,\n",
        " ...]"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query_result = embeddings.embed_query(text)\n",
    "query_result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[-0.0031584087512342572,\n",
       "  0.011094410212838685,\n",
       "  -0.004001317166816525,\n",
        "  -0.011747414500761147,\n",
        "  -0.0010153218392504927,\n",
        "  0.010781234363886399,\n",
        "  ...\n",
       "  0.0026203464353164135,\n",
       "  -0.015232325861617933,\n",
       "  -0.030358037637873918,\n",
       "  -0.00045768606187696233,\n",
       "  -0.0049475073955165556,\n",
       "  -0.0018623950130613664,\n",
       "  0.007576182648798236,\n",
       "  -0.008589006215188186,\n",
       "  -0.021042731115230175,\n",
       "  0.000430616384855757,\n",
       "  -0.011854027654800498,\n",
       "  0.005647154647292924,\n",
       "  -0.00862898603153762,\n",
       "  0.005214039814953638,\n",
       "  -0.016831519446917832,\n",
       "  -0.011880681176140986,\n",
       "  -0.03347646517577485,\n",
       "  -0.020962771482531308,\n",
       "  0.0015800372108475257,\n",
       "  0.007689458950342062,\n",
       "  0.015458877533382988,\n",
       "  -0.015179018818936959,\n",
       "  -0.027199628420978363,\n",
       "  -0.007422926530904979,\n",
       "  -0.04048626957648305,\n",
       "  -0.02004323570649436,\n",
       "  -0.017217992246725808,\n",
       "  -0.0010144888876047714,\n",
       "  0.016938133532279777,\n",
       "  0.016458373873441397,\n",
       "  -0.012160539890587014,\n",
       "  0.03203719178719528,\n",
       "  0.006903188452701057,\n",
       "  0.033316545910377124,\n",
       "  -0.02052299350268755,\n",
       "  -0.01943021123492133,\n",
       "  0.024534306089252732,\n",
       "  -0.010594662508470777,\n",
       "  0.007263007265507248,\n",
       "  0.004221206622399701,\n",
       "  -0.010461395833090938,\n",
       "  -0.021349244282339284,\n",
       "  0.022468679140123397,\n",
       "  0.0023488165388356524,\n",
       "  0.009235346889944878,\n",
       "  0.021042731115230175,\n",
       "  0.004810909379215129,\n",
       "  -0.02822577828237726,\n",
       "  -0.008682292142895998,\n",
       "  0.018590633228938055,\n",
       "  0.001141091716988191,\n",
       "  0.0033066673920838356,\n",
       "  -0.0005917851941373937,\n",
       "  -0.0071230779082842345,\n",
       "  0.0035015692296179683,\n",
       "  0.003654825347511226,\n",
       "  0.0003721041801924649,\n",
       "  -0.002355479919170774,\n",
       "  -0.004674311828575001,\n",
       "  0.0047309497465162645,\n",
       "  -0.004387789500963202,\n",
       "  0.002200557897986074,\n",
       "  -0.03382295815923339,\n",
       "  -0.006120249761642939,\n",
       "  -0.010001627945072464,\n",
       "  -0.016458373873441397,\n",
       "  0.009895014791033111,\n",
       "  -0.014139542662529116,\n",
       "  -0.005670476362050526,\n",
       "  0.03355642480847372,\n",
       "  0.003901367625942943,\n",
       "  -0.0124737157395393,\n",
       "  -0.011887343857984161,\n",
       "  0.020123195339193226,\n",
       "  -0.013873010243092034,\n",
       "  -0.005813737525856425,\n",
       "  0.003534885665632279,\n",
       "  -0.011414248743634146,\n",
       "  0.001613353763277161,\n",
       "  -0.000857068186105894,\n",
       "  -0.0008274997414590181,\n",
       "  -0.01511238548124704,\n",
       "  0.0023571458224622165,\n",
       "  -0.004654321454738987,\n",
       "  -0.006696626223449422,\n",
       "  -0.018297447288160486,\n",
       "  0.002287180911020061,\n",
       "  0.014486034714665065,\n",
       "  -0.02573370057973571,\n",
       "  -0.012666951208120692,\n",
       "  -0.02062960572540431,\n",
       "  0.02017650051922901,\n",
       "  0.0005101596581470859,\n",
       "  -0.005104094854331401,\n",
       "  -0.00988168756470157,\n",
       "  -0.010221516934994342,\n",
       "  0.014779220655442633,\n",
       "  0.02390795439134816,\n",
       "  -0.013513191895947142,\n",
       "  0.0008241681094991194,\n",
       "  0.01356649800730552,\n",
       "  0.014419401376975146,\n",
       "  -0.003648161967176104,\n",
       "  0.008382443520275253,\n",
       "  0.23198977615478417,\n",
       "  0.005717119325904431,\n",
       "  0.017337931695774106,\n",
       "  0.0324369899506896,\n",
       "  -0.006416767043342096,\n",
       "  0.033663038893835665,\n",
       "  0.007542866445614574,\n",
       "  -0.010101577485946044,\n",
       "  -0.008415759723458916,\n",
       "  0.02138922409868872,\n",
       "  0.016245149428007883,\n",
       "  0.01456599434736393,\n",
       "  -0.005157401431351077,\n",
       "  0.005254019631303071,\n",
       "  -0.0020373068260054585,\n",
       "  -0.0019356913818404343,\n",
       "  -0.02314833788070894,\n",
       "  -0.021709062629484178,\n",
       "  -0.035208926367777185,\n",
       "  -0.016711579997869534,\n",
       "  0.0011485879325537096,\n",
       "  -0.022002248570261746,\n",
       "  0.01913702529414376,\n",
       "  -0.010961144468781442,\n",
       "  0.014605974163713363,\n",
       "  -0.01620516774901326,\n",
       "  -0.019989928663813387,\n",
       "  -0.0016983109894350295,\n",
       "  0.02161577577045377,\n",
       "  0.020882811849832445,\n",
       "  -0.013593151528646007,\n",
       "  0.0012893503578377693,\n",
       "  0.012740248158976382,\n",
       "  -0.00397133230455445,\n",
       "  -0.03131755509290549,\n",
       "  -0.005627164739118207,\n",
       "  0.03051795690327165,\n",
       "  -0.0037581064621370437,\n",
       "  0.02557378131433798,\n",
       "  -0.008429086949790457,\n",
       "  0.006250184164778595,\n",
       "  -0.024334407007505568,\n",
       "  0.00644675190560417,\n",
       "  -0.014326115449267334,\n",
       "  0.004461085520496297,\n",
       "  0.026946424159195417,\n",
       "  ...]]"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc_result = embeddings.embed_documents([text])\n",
    "doc_result"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面这个例子是使用HuggingFace Hub的embeddings模型，让我们看看它的一个代码样例："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings import HuggingFaceEmbeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#使用前需要保证库中有sentence_transformers\n",
    "%pip install sentence_transformers"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "于此同时将会下载安装相关依赖文件以及众多json格式的解析处理文件，安装之后就可以正常使用HuggingFaceEmbeddings了"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "embeddings_hf = HuggingFaceEmbeddings()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "text_hf = \"I just test the embeddings method in huggingface\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[-0.02886896708001464,\n",
       " 0.0044634718471356595,\n",
       " 0.006389421280895015,\n",
       " -0.009913570977266759,\n",
       " -0.0027808003958931854,\n",
       " 0.023057329930755516,\n",
       " 0.018016073486322687,\n",
       " -0.007974105724918213,\n",
       " -0.02111110793326557,\n",
       " -0.030896281195072053,\n",
       " -0.003899202602766403,\n",
       " 0.009474319213141963,\n",
       " -0.002067861337994345,\n",
       " -0.0010989737706240251,\n",
       " -0.008920186017807362,\n",
       " 0.00898100603730552,\n",
       " 0.01804310419217851,\n",
       " 0.010190636643611502,\n",
       " 0.007784889852604894,\n",
       " -0.026733528278888818,\n",
       " 0.019691986562609287,\n",
       " -0.01051500790784905,\n",
       " -0.010170363614219635,\n",
       " -0.010170363614219635,\n",
       " -0.016096882194688564,\n",
       " 0.011109686230644828,\n",
       " 0.03597808556093373,\n",
       " -0.027652577865922144,\n",
       " -0.006450240834731895,\n",
       " -0.009197252615474662,\n",
       " 0.014083082501236506,\n",
       " -0.012393653140699437,\n",
       " -0.005676482222996175,\n",
       " -0.025692840515504287,\n",
       " -0.014461514245863142,\n",
       " -0.002672677106808614,\n",
       " 0.0069536905252642715,\n",
       " -0.025300891555304628,\n",
       " 0.025963147574062522,\n",
       " -0.0059940948794472105,\n",
       " -0.00814980624430362,\n",
       " 0.0155832950581377,\n",
       " -0.010940743387723377,\n",
       " -0.035086064817111115,\n",
       " -0.01600227286154807,\n",
       " 0.01851614340730053,\n",
       " -0.006511060388568775,\n",
       " -0.01908379195556304,\n",
       " -0.008311991410761116,\n",
       " 0.002961569405457198,\n",
       " 0.0017958632305056508,\n",
       " 0.0202596351108718,\n",
       " -0.019475740915762704,\n",
       " 0.003885687249838491,\n",
       " -0.0010710981223795684,\n",
       " 0.017353817467564793,\n",
       " 0.009433773154358227,\n",
       " 0.00968380811484715,\n",
       " -0.00799437875431008,\n",
       " 0.0015576537381673744,\n",
       " -0.011535422641841713,\n",
       " -0.0015263992516909397,\n",
       " -0.003103481542522826,\n",
       " 0.010325791104213175,\n",
       " -0.015096739558765213,\n",
       " -0.014110113207092328,\n",
       " 0.023516855655594734,\n",
       " 0.01372492285467918,\n",
       " 0.03797836897013532,\n",
       " 0.026922745082524693,\n",
       " 0.0035444224929329715,\n",
       " 0.014218236030515622,\n",
       " -0.022219372461289657,\n",
       " -0.005862319257077516,\n",
       " 0.04646606276292695,\n",
       " 0.007460519519689903,\n",
       " 0.009501349918997785,\n",
       " 0.0052304727824074235,\n",
       " 0.00515951671387461,\n",
       " 0.016448283233459378,\n",
       " -0.0019310175746920033,\n",
       " -0.011900339033540439,\n",
       " -0.009285103340828644,\n",
       " 0.0047675691506587825,\n",
       " 0.010961016417115246,\n",
       " 0.00806871412673615,\n",
       " -0.00456821676364953,\n",
       " 0.015029162794125655,\n",
       " -0.000812193129455108,\n",
       " 0.0037606695367633915,\n",
       " 0.01984065730746143,\n",
       " 0.006865840265571568,\n",
       " 0.005848803904149604,\n",
       " -0.0006136851609655513,\n",
       " -0.00512572833155483,\n",
       " 0.01959737909211391,\n",
       " -0.019178401288703534,\n",
       " 0.02278702100955217,\n",
       " 0.0033805479073594873,\n",
       " -0.025219799437737157,\n",
       " -0.026692982220105084,\n",
       " 0.013826288932961072,\n",
       " -0.03714041336331458,\n",
       " -0.017894435309971483,\n",
       " -0.012400410817163393,\n",
       " 0.00270984456019101,\n",
       " 0.0026692982685766363,\n",
       " -0.027666093218850055,\n",
       " -0.006565121800280422,\n",
       " 0.014475029598791055,\n",
       " -0.009210767968402573,\n",
       " 0.019489256268690615,\n",
       " -0.024111533047067958,\n",
       " -0.023314123499030948,\n",
       " -0.016150943606400212,\n",
       " -0.010596100025416519,\n",
       " -0.0046695823762701455,\n",
       " -0.02040830585572394,\n",
       " -0.008136290891375709,\n",
       " 0.0027284281704668886,\n",
       " 0.03189642289967285,\n",
       " 0.02681462039645629,\n",
       " 0.010913712681867554,\n",
       " -0.013393794845300233,\n",
       " 0.035572621247806156,\n",
       " 0.0007868516680922945,\n",
       " -0.008413357489043009,\n",
       " -0.04895290260282359,\n",
       " -0.016177974312256035,\n",
       " -0.018070134898034333,\n",
       " 0.01638070646881982,\n",
       " 0.019191916641631446,\n",
       " 0.017678187800479785,\n",
       " -0.012434199199483171,\n",
       " -0.003780942566155259,\n",
       " 0.04800682044728933,\n",
       " -0.001748559262427321,\n",
       " 0.010663677721378634,\n",
       " -0.016150943606400212,\n",
       " 0.0017620747317705521,\n",
       " 0.017948496721683128,\n",
       " 0.0215976625013155,\n",
       " 0.007102360338793855,\n",
       " 0.005504160541842746,\n",
       " -0.025827994044783406,\n",
       " 0.03197751501724032,\n",
       " -0.003186263544867563,\n",
       " 0.015691417881560994,\n",
       " 0.034112955681011255,\n",
       " -0.024895429104822165,\n",
       " 0.010393368800175289,\n",
       " -0.010244698986645704,\n",
       " -0.019637925150897642,\n",
       " -0.0008624536391940781,\n",
       " -0.006497545035640864,\n",
       " 0.016867261036869752,\n",
       " 0.0005528657235439905,\n",
       " -0.010190636643611502,\n",
       " -0.04438468351086767,\n",
       " -0.014610184059392727,\n",
       " -0.002968327081921154,\n",
       " 0.026490250063541297,\n",
       " -0.015718450450061927,\n",
       " -0.014569638000608991,\n",
       " 0.01455612264768108,\n",
       " 0.026165879730626306,\n",
       " 0.01952980232747435,\n",
       " -0.014353390491117293,\n",
       " -0.003211605064438036,\n",
       " -0.003503876201318598,\n",
       " -0.020367757934295093,\n",
       " 0.015434625244608115,\n",
       " -0.02513870732016969,\n",
       " 0.009954117036050493,\n",
       " 0.012981573787031261,\n",
       " -0.024016925576572576,\n",
       " 0.02293569175440431,\n",
       " -0.015434625244608115,\n",
       " -0.009663535085455281,\n",
       " -0.022408589264925532,\n",
       " -0.01804310419217851,\n",
       " 0.029301460236352925,\n",
       " 0.030517849450445417,\n",
       " 0.001139520062238399,\n",
       " -0.013961443393562745,\n",
       " -0.02736875359179089,\n",
       " -0.0066833820697222044,\n",
       " -0.022516712088348826,\n",
       " 0.017826856682686812,\n",
       " -0.020719160835711018,\n",
       " 0.015353533127040646,\n",
       " 0.02382771063558181,\n",
       " 0.008318749087225071,\n",
       " -0.004422925788351924,\n",
       " -0.6366309990135384,\n",
       " -0.023760132008297144,\n",
       " 0.032572194271358657,\n",
       " -0.020840799012062223,\n",
       " 0.03200454572309614,\n",
       " 0.013143758953488757,\n",
       " 0.0010922159777447499,\n",
       " -0.021867971422518843,\n",
       " -0.0345184162688486,\n",
       " 0.04233033868995443,\n",
       " -0.01884051560286063,\n",
       " -0.0013912450239199066,\n",
       " -0.0193676162296943,\n",
       " -0.003106860380754804,\n",
       " 0.025544169770652148,\n",
       " -0.005872455771773449,\n",
       " 0.030571910862157065,\n",
       " -0.03286953576106293,\n",
       " 0.010102786849580077,\n",
       " 0.02389528740022137,\n",
       " -0.016488829292243115,\n",
       " 0.027585001101282588,\n",
       " -0.010217668280789882,\n",
       " -0.005287913498012326,\n",
       " -0.01890809236750019,\n",
       " 0.00484866173388753,\n",
       " 0.006926659819408448,\n",
       " -0.014880492980596072,\n",
       " 0.013610041423469376,\n",
       " 0.002333101769019084,\n",
       " -0.025936116868206696,\n",
       " 0.01661046746859432,\n",
       " 0.032410010036223715,\n",
       " -0.012535565277765065,\n",
       " 0.03197751501724032,\n",
       " -0.015380563832896469,\n",
       " -0.020840799012062223,\n",
       " 0.01884051560286063,\n",
       " -0.008217383940265733,\n",
       " 0.03970834532077868,\n",
       " -0.035788870619942965,\n",
       " -0.013839804285888985,\n",
       " -0.008197110910873866,\n",
       " 0.011528664965377758,\n",
       " -0.009440530830822183,\n",
       " 0.02681462039645629,\n",
       " 0.0022722822151822033,\n",
       " -0.003649167176616204,\n",
       " -0.009068855831336946,\n",
       " 0.025544169770652148,\n",
       " 0.0028112101728116254,\n",
       " 0.011697607808299208,\n",
       " 0.0019394647866872677,\n",
       " 0.007811920558460717,\n",
       " 0.0180295888392506,\n",
       " -0.009947359359586537,\n",
       " 0.020421821208651852,\n",
       " -0.004277634813054318,\n",
       " 0.026246971848193777,\n",
       " 0.021962580755659336,\n",
       " 0.013339733433588586,\n",
       " 0.010589342348952563,\n",
       " -0.008663393380854485,\n",
       " -0.016475313939315204,\n",
       " -0.025963147574062522,\n",
       " 0.035572621247806156,\n",
       " 0.01095425874065129,\n",
       " 0.018205290289958563,\n",
       " -0.005727165262137121,\n",
       " -0.04727698766389188,\n",
       " 0.008467418900754656,\n",
       " 0.013042393806529419,\n",
       " -0.021205715403760952,\n",
       " -0.0064907868935156305,\n",
       " 0.014623699412320638,\n",
       " 0.03200454572309614,\n",
       " 0.024881913751894254,\n",
       " -0.014610184059392727,\n",
       " -0.01516431632340477,\n",
       " 0.025530654417724237,\n",
       " 0.011454330524274242,\n",
       " -0.0027216704940029326,\n",
       " -0.00803492574441637,\n",
       " 0.022178826402505923,\n",
       " 0.008339023047939494,\n",
       " 0.018070134898034333,\n",
       " -0.022976237813188045,\n",
       " -0.004439819979511814,\n",
       " -0.029625830569267913,\n",
       " -0.006000852555911166,\n",
       " 0.02208421893201054,\n",
       " 0.04184378225925939,\n",
       " -0.02430074985070383,\n",
       " -0.04330345155134451,\n",
       " 0.015623841116921434,\n",
       " -0.010460945564814847,\n",
       " 0.008933701370735275,\n",
       " -0.001609181370450996,\n",
       " 0.016718592154662724,\n",
       " -0.007845708940780497,\n",
       " 0.007014510079101152,\n",
       " -0.0022689033769502254,\n",
       " 0.017759279918047256,\n",
       " 0.004950027812169423,\n",
       " 0.01143405656355982,\n",
       " -0.004095176617204955,\n",
       " -0.006565121800280422,\n",
       " 0.0051324860080187865,\n",
       " -0.003689713468230578,\n",
       " 0.0006715481149341314,\n",
       " -0.03381561419130697,\n",
       " -0.019651440503825553,\n",
       " 0.0052946711744762814,\n",
       " 0.015488687587642318,\n",
       " -0.010933985711259421,\n",
       " -0.04922320966138182,\n",
       " 0.0037302597598449514,\n",
       " 0.0053284595567960604,\n",
       " 0.01321133664945087,\n",
       " -0.01031903342774922,\n",
       " -0.012163890278279828,\n",
       " -0.0022689033769502254,\n",
       " 0.01546165595046394,\n",
       " -0.005646072678908374,\n",
       " 0.002725049332234911,\n",
       " 0.001511194479647039,\n",
       " -0.01482643063756187,\n",
       " -0.021881486775446754,\n",
       " -0.019962295483812634,\n",
       " 0.02153008573667594,\n",
       " 0.019421677641405944,\n",
       " -0.015623841116921434,\n",
       " 0.009913570977266759,\n",
       " -0.013711407501751267,\n",
       " 0.019489256268690615,\n",
       " 0.028463502766887066,\n",
       " 0.02513870732016969,\n",
       " -0.009663535085455281,\n",
       " 0.012603142042404623,\n",
       " -0.008257929999049469,\n",
       " 0.0010001420538467544,\n",
       " 0.01716460066392892,\n",
       " 0.0034396778092497396,\n",
       " 0.0031609220252970895,\n",
       " -0.02967989198097956,\n",
       " -0.028139132433972078,\n",
       " -0.015218378666438973,\n",
       " 0.02048939797329141,\n",
       " 0.005862319257077516,\n",
       " 0.009339164752540291,\n",
       " -0.014231751383443533,\n",
       " -0.004977058518025246,\n",
       " -0.013170790590667135,\n",
       " -0.00022828412592801136,\n",
       " -0.010663677721378634,\n",
       " -0.014177689971731886,\n",
       " -0.009487834566069874,\n",
       " -0.025530654417724237,\n",
       " 0.0027402541042788114,\n",
       " -0.027166021435227103,\n",
       " 0.006835430721483767,\n",
       " 0.043492666492335276,\n",
       " 0.01875942162264805,\n",
       " -0.0048114945133357736,\n",
       " -0.012724781150078383,\n",
       " 0.001553430190377402,\n",
       " -0.01828638240752603,\n",
       " 0.031436900900123854,\n",
       " -0.004007326357512252,\n",
       " -0.02619291043648213,\n",
       " 0.004159374543612536,\n",
       " -0.04235736939581025,\n",
       " 0.01178545853365319,\n",
       " 0.01977308054282187,\n",
       " -0.002116854725188664,\n",
       " 0.021935548187158403,\n",
       " -0.02753093782692583,\n",
       " -0.0025831371951692828,\n",
       " -0.01008251382018821,\n",
       " -0.026057756907203015,\n",
       " -0.004517533724508583,\n",
       " 0.0040478724162959865,\n",
       " 0.0058690769335414715,\n",
       " 0.016799684272230192,\n",
       " 0.032653286388926124,\n",
       " 0.008737727821957999,\n",
       " 0.004125586161292757,\n",
       " 0.00819035323440991,\n",
       " -0.010487976270670671,\n",
       " 0.021084077227409747,\n",
       " 0.006889492598856691,\n",
       " -0.026368611887190092,\n",
       " -0.04135722582856435,\n",
       " 0.006544848770888555,\n",
       " -0.006078566300907935,\n",
       " 0.009149948880226971,\n",
       " 0.00794707501906239,\n",
       " 0.01171112316122712,\n",
       " 0.004774326827122738,\n",
       " 0.018178257721457626,\n",
       " 0.0056426938406763955,\n",
       " 0.017218662075640566,\n",
       " 0.0036930923064625557,\n",
       " -0.007967348048454257,\n",
       " 0.0029767744103317375,\n",
       " 0.001376884845018681,\n",
       " 0.002973395339269121,\n",
       " -0.013434340904083969,\n",
       " -0.003350138130441047,\n",
       " 0.010609615378344431,\n",
       " 0.01952980232747435,\n",
       " 0.005581874286839515,\n",
       " -0.045168581431266994,\n",
       " -0.013589768394077507,\n",
       " -0.0015424488497081544,\n",
       " -0.004328317852195265,\n",
       " -0.011021836436613404,\n",
       " -7.813609795677769e-06,\n",
       " -0.007507823720598872,\n",
       " 0.0021354385682951812,\n",
       " -0.013684376795895445,\n",
       " 0.010068998467260297,\n",
       " 0.008197110910873866,\n",
       " 0.020854314364990138,\n",
       " 0.0009004658021344685,\n",
       " -0.00526426163038848,\n",
       " 0.012373380111307569,\n",
       " -0.008920186017807362,\n",
       " 0.015367048479968557,\n",
       " -0.008095744832591973,\n",
       " -0.002578068937821316,\n",
       " 0.049169148249670176,\n",
       " 0.010427157182495069,\n",
       " 0.00421343642098546,\n",
       " 0.014623699412320638,\n",
       " 0.006115733987120971,\n",
       " 0.020854314364990138,\n",
       " 0.011528664965377758,\n",
       " 0.0025172496168150743,\n",
       " 0.0053723849194730515,\n",
       " -0.012555838307156932,\n",
       " 0.0020932028575648184,\n",
       " 0.006355632898575236,\n",
       " -0.025287376202376716,\n",
       " 0.02350334030266682,\n",
       " 0.020921891129629694,\n",
       " 0.02792288678712549,\n",
       " -0.0026963289744324596,\n",
       " -0.02205718822615472,\n",
       " 0.0052743981450844146,\n",
       " -0.014461514245863142,\n",
       " -0.0026895712979685036,\n",
       " 0.005537948924162525,\n",
       " 0.013704649825287312,\n",
       " 0.021084077227409747,\n",
       " -0.01606985148883274,\n",
       " -0.014515575657574789,\n",
       " 0.02816616500247301,\n",
       " 0.021462508972036384,\n",
       " 0.05149380385443187,\n",
       " 0.009609473673743634,\n",
       " 0.008278203028441336,\n",
       " 0.012163890278279828,\n",
       " 0.010143332908363813,\n",
       " 0.03224782580108878,\n",
       " -0.018962153779211836,\n",
       " -0.024800821634326786,\n",
       " -0.015610325763993523,\n",
       " -0.02335466955781468,\n",
       " -0.018408020583877235,\n",
       " -0.016583436762738497,\n",
       " -0.001876955930149718,\n",
       " 0.007149664539702824,\n",
       " -0.013711407501751267,\n",
       " 0.03235594862451207,\n",
       " 0.014718307814138576,\n",
       " 0.028193195708328834,\n",
       " 0.012224710297777986,\n",
       " 0.01961089444504182,\n",
       " 0.027895856081269665,\n",
       " -0.011035351789541315,\n",
       " -0.029869108784615437,\n",
       " 0.05025038393448355,\n",
       " 0.0008147272581290915,\n",
       " 0.0026490250063541296,\n",
       " -0.024179111674352625,\n",
       " -0.027409299650574624,\n",
       " -0.0005786294814781415,\n",
       " -0.02198961146151516,\n",
       " 0.016191489665183946,\n",
       " 0.014975100451091452,\n",
       " 0.016691561448806902,\n",
       " 0.028490535335388003,\n",
       " -0.0024851504207806453,\n",
       " -0.0034515039758923013,\n",
       " 0.03635651544291525,\n",
       " 0.014353390491117293,\n",
       " 0.006389421280895015,\n",
       " 0.019205431994559357,\n",
       " -0.001809378699848882,\n",
       " -0.0023685798032854906,\n",
       " -0.021651725775672256,\n",
       " -0.010217668280789882,\n",
       " -0.008663393380854485,\n",
       " 0.045898414214664444,\n",
       " -0.01067043539784259,\n",
       " 0.0056122838309273166,\n",
       " -0.014083082501236506,\n",
       " -0.006487408055283653,\n",
       " -0.004493881856884738,\n",
       " 0.005608904992695339,\n",
       " 0.01763764174169605,\n",
       " -0.018543175975801466,\n",
       " 0.0009291861017292599,\n",
       " -0.009609473673743634,\n",
       " 0.04343860508062363,\n",
       " -0.01750248634977182,\n",
       " -0.002000284107693509,\n",
       " 0.016286097135679328,\n",
       " 0.01344785625701188,\n",
       " -0.02185445606959093,\n",
       " -0.027111960023515458,\n",
       " 0.0026017210382758,\n",
       " -0.0032301886747139147,\n",
       " 0.028923028491726285,\n",
       " 0.028003978904692958,\n",
       " -0.0055784954486075376,\n",
       " -0.0003112773347660765,\n",
       " -0.011684092455371296,\n",
       " -0.007825435911388628,\n",
       " -0.018380989878021413,\n",
       " 0.007838951264316541,\n",
       " -0.005250745811799291,\n",
       " 0.0028635826310685606,\n",
       " -0.03781618473500038,\n",
       " -0.036653856932619536,\n",
       " 0.023692555243657584,\n",
       " -0.010967774093579201,\n",
       " 0.01661046746859432,\n",
       " 0.0018786453492657068,\n",
       " 0.004392515778602845,\n",
       " -0.010623130731272343,\n",
       " 0.013326218080660675,\n",
       " -0.0022317359235678297,\n",
       " -0.005923138810914396,\n",
       " -0.007325365059088231,\n",
       " -0.006625941354117302,\n",
       " 0.007210484093539704,\n",
       " 0.030031294882395487,\n",
       " -0.0020847557619848734,\n",
       " 0.008987763713769477,\n",
       " 0.01773224921219143,\n",
       " 0.013961443393562745,\n",
       " -0.032707351525928,\n",
       " 0.00794707501906239,\n",
       " 0.016961870370010245,\n",
       " -0.010481218594206714,\n",
       " 0.013069424512385241,\n",
       " -0.010636646084200254,\n",
       " 0.036383546148771075,\n",
       " 0.004017462872208185,\n",
       " 0.016326643194463062,\n",
       " 0.019881203366245163,\n",
       " -0.009244556350722353,\n",
       " 0.008764758527813823,\n",
       " 0.017178116016856832,\n",
       " -0.0015990447393397426,\n",
       " -0.003845140958224117,\n",
       " -0.018543175975801466,\n",
       " -0.011603000337803827,\n",
       " -0.01356949536468564,\n",
       " 0.008170080205018042,\n",
       " -0.0009663534969039962,\n",
       " -0.01898918448506766,\n",
       " 0.007284819000304496,\n",
       " 0.008088987156128017,\n",
       " -0.030653002979724533,\n",
       " -0.0145290910105027,\n",
       " 0.01621852037103977,\n",
       " 0.021097592580337658,\n",
       " -0.024706212301186293,\n",
       " 0.002817967849275581,\n",
       " -0.0073388804120161424,\n",
       " -0.006173174237064595,\n",
       " -0.01811068095681807,\n",
       " -0.013258640384698562,\n",
       " -0.007872740577958875,\n",
       " -0.017934981368755216,\n",
       " -0.0067610958147189745,\n",
       " -0.015623841116921434,\n",
       " -0.014137143912948152,\n",
       " 0.005673103384764197,\n",
       " -0.010346064133605042,\n",
       " 0.005723786423905143,\n",
       " -0.0008666771869840505,\n",
       " -0.007386184612925111,\n",
       " -0.04416843786402108,\n",
       " 0.00806871412673615,\n",
       " 0.009102645144979282,\n",
       " 0.024152080968496803,\n",
       " 0.01020415292786197,\n",
       " 0.009954117036050493,\n",
       " -0.010771800544801926,\n",
       " -0.015488687587642318,\n",
       " -0.0032065365742594302,\n",
       " -0.0101298175554359,\n",
       " 0.009954117036050493,\n",
       " -0.005075045292413884,\n",
       " -0.01031903342774922,\n",
       " 0.012603142042404623,\n",
       " -0.018921607720428103,\n",
       " 0.0068590825891076115,\n",
       " -0.013001847747745683,\n",
       " 0.0010744769606115465,\n",
       " 0.01600227286154807,\n",
       " 0.008311991410761116,\n",
       " 0.007676766563520323,\n",
       " -0.038410863989118714,\n",
       " 0.016421252527603555,\n",
       " 0.02581447869185549,\n",
       " 0.014569638000608991,\n",
       " 0.009886540271410935,\n",
       " -0.01515080097047686,\n",
       " -0.010981290377829668,\n",
       " 0.02281405171540799,\n",
       " -0.022408589264925532,\n",
       " 0.00032204743983816584,\n",
       " -0.0037403962745408852,\n",
       " 0.0018144469571968486,\n",
       " 0.0125761113365488,\n",
       " -0.00642658896710805,\n",
       " 0.020178542993304332,\n",
       " -0.017664672447551873,\n",
       " 0.0018617510416904979,\n",
       " 0.0008962421961368364,\n",
       " 4.919802804552858e-06,\n",
       " -0.008055198773808237,\n",
       " -0.00022638351487060885,\n",
       " 0.01135296444599235,\n",
       " -0.019448708347261767,\n",
       " -0.01977308054282187,\n",
       " 0.033842644897162794,\n",
       " -0.02731469218007924,\n",
       " 0.004781084503586695,\n",
       " -0.02705789861180381,\n",
       " 0.004419546950119946,\n",
       " 0.035653713365373624,\n",
       " 0.011001563407221535,\n",
       " -0.0034802241008641136,\n",
       " 0.03005832558825131,\n",
       " -0.0019158126862327833,\n",
       " -0.0037708060514593253,\n",
       " -0.03632948473705943,\n",
       " 0.014894008333523983,\n",
       " -0.017705218506335607,\n",
       " 0.03668088763847536,\n",
       " 0.0007898081515452752,\n",
       " 0.007778132176140938,\n",
       " -0.020313696522583448,\n",
       " -0.03251813285964701,\n",
       " -0.006044777918588156,\n",
       " -0.007061814280010121,\n",
       " 0.005169653228570543,\n",
       " 0.0022722822151822033,\n",
       " -0.005696755252388042,\n",
       " -0.015515718293498142,\n",
       " 0.0008523170662904846,\n",
       " -0.0030798294420683416,\n",
       " -0.0016801373225684905,\n",
       " -0.02311139134246716,\n",
       " -0.035572621247806156,\n",
       " 0.035788870619942965,\n",
       " 0.000670703405376137,\n",
       " 0.021219230756688863,\n",
       " 0.010575826996024652,\n",
       " 0.022030157520298896,\n",
       " -0.029544738451700445,\n",
       " -0.01877293697557596,\n",
       " 0.0017147706472769029,\n",
       " -0.0049905738709531584,\n",
       " 0.016326643194463062,\n",
       " 0.03797836897013532,\n",
       " 0.013468130217726302,\n",
       " 0.020448851914507675,\n",
       " 0.055521401378690875,\n",
       " -0.024152080968496803,\n",
       " 0.039275850301795284,\n",
       " -0.0026659194303446584,\n",
       " -0.0006204428956371669,\n",
       " 0.007318607382624275,\n",
       " -0.004095176617204955,\n",
       " -0.012792357914717941,\n",
       " -0.00013821643088599762,\n",
       " 0.0036525460148481817,\n",
       " 0.001841477895883311,\n",
       " -0.03295062787863041,\n",
       " 0.012258498680097764,\n",
       " 0.003155853767949123,\n",
       " 0.0031169968954507378,\n",
       " 0.005615662669159294,\n",
       " -0.016096882194688564,\n",
       " -0.008257929999049469,\n",
       " -0.020381273287223004,\n",
       " -0.015299470784006444,\n",
       " -0.014447998892935231,\n",
       " 0.02086782971791805,\n",
       " -0.011379995151848173,\n",
       " 0.01938113158262221,\n",
       " -0.029463646334132974,\n",
       " 0.0014596668473634176,\n",
       " 0.004743917283034937,\n",
       " 0.0022368044137464353,\n",
       " 0.017367332820492704,\n",
       " 0.02730117682715133,\n",
       " 0.02959879986341209,\n",
       " -0.028058040316404607,\n",
       " -0.00565283035537233,\n",
       " -0.0014148970079590714,\n",
       " -0.02690922786695167,\n",
       " -0.01130566071074466,\n",
       " -0.012359863827057102,\n",
       " -0.010481218594206714,\n",
       " -0.02603072620134719,\n",
       " 0.018637783446296845,\n",
       " -0.012562595983620887,\n",
       " 0.009109402821443237,\n",
       " 0.0015770821744165667,\n",
       " 0.017461940290988087,\n",
       " -0.025273860849448805,\n",
       " 0.018610752740441022,\n",
       " 0.008122775538447797,\n",
       " -0.024476451301411795,\n",
       " -0.00028445766078060255,\n",
       " -0.02388177204729346,\n",
       " -0.016096882194688564,\n",
       " -0.05235879016710844,\n",
       " -0.017826856682686812,\n",
       " -0.004598626307737331,\n",
       " -0.00894045904719923,\n",
       " -0.0035140127160145315,\n",
       " -0.024881913751894254,\n",
       " 0.008406599812579053,\n",
       " -0.0036694402060080712,\n",
       " -0.008643120351462618,\n",
       " -0.033166873525476995,\n",
       " 0.0036525460148481817,\n",
       " 0.03970834532077868,\n",
       " 0.007284819000304496,\n",
       " 0.0210300158156981,\n",
       " 0.01463721476524855,\n",
       " -0.006281297991810444,\n",
       " -0.02461160483069091,\n",
       " 0.01923246270041518,\n",
       " -0.0008012117887858603,\n",
       " -0.010771800544801926,\n",
       " 0.0047337807683390035,\n",
       " 0.029220368118785454,\n",
       " -0.019354100876766388,\n",
       " -0.032896566466918756,\n",
       " 0.011123201583572741,\n",
       " -0.005946790678538242,\n",
       " -0.004311423195374097,\n",
       " -0.023692555243657584,\n",
       " 0.020151512287448506,\n",
       " -0.008359296077331362,\n",
       " -0.004422925788351924,\n",
       " -0.012488260611194818,\n",
       " 0.0005790518944647985,\n",
       " -0.031436900900123854,\n",
       " 0.004095176617204955,\n",
       " -0.02278702100955217,\n",
       " -0.011102928554180873,\n",
       " 0.001799242185152948,\n",
       " -0.0029666376628051648,\n",
       " 0.0062002054085816966,\n",
       " 0.02722008284693875,\n",
       " -0.0016801373225684905,\n",
       " 0.02280053636248008,\n",
       " -0.005831909712989715,\n",
       " 0.009582442967887812,\n",
       " -0.003747153951004841,\n",
       " 0.0034498145567763124,\n",
       " -0.010305518074821307,\n",
       " 0.0155832950581377,\n",
       " 0.019327070170910562,\n",
       " 0.03500497269954364,\n",
       " -0.011697607808299208,\n",
       " 0.00016155166339282948,\n",
       " 0.021895002128374665,\n",
       " -0.008994521390233432,\n",
       " 0.0069536905252642715,\n",
       " -0.011447572847810287,\n",
       " -0.005744059453297011,\n",
       " 0.029625830569267913,\n",
       " -0.039275850301795284,\n",
       " -0.021205715403760952,\n",
       " -0.0036593036913121373,\n",
       " 0.024260203791920096,\n",
       " 0.014623699412320638,\n",
       " -0.008636362674998663,\n",
       " -0.016488829292243115,\n",
       " -0.022922176401476396,\n",
       " -0.004618899337129199,\n",
       " 0.0014115181697270936,\n",
       " 0.03822164532283773,\n",
       " -0.010521765584313005,\n",
       " 0.008899912988415495,\n",
       " -0.003493739686622664,\n",
       " -0.011535422641841713,\n",
       " 0.006838809559715745,\n",
       " -0.011548937994769625,\n",
       " 0.0025932739426958555,\n",
       " 0.012285529385953588,\n",
       " -0.012738296503006296,\n",
       " -0.017786310623903078,\n",
       " -0.002476703325200701,\n",
       " -0.006622562515885324,\n",
       " 0.04138425653442017,\n",
       " -0.016015788214475985,\n",
       " -0.0016936527919117217,\n",
       " 0.005054772263022016,\n",
       " 0.016988901075866068,\n",
       " -0.0013836425214826368,\n",
       " -0.002929470209422769,\n",
       " -0.0022384938328624242,\n",
       " -0.009278345664364689,\n",
       " -0.016583436762738497,\n",
       " -0.0193676162296943,\n",
       " -0.01835395917216559,\n",
       " 0.000876813759887644,\n",
       " 0.0016041129966877096,\n",
       " 0.006629320192349281,\n",
       " -0.010433914858959024,\n",
       " -0.01709702389928936,\n",
       " 0.005193305561855667,\n",
       " 0.01779982597683099,\n",
       " 0.01684023033101393,\n",
       " -0.0016852056963317767,\n",
       " -0.009744628134345307,\n",
       " -0.011623273367195694,\n",
       " 0.007778132176140938,\n",
       " -0.029977231608038727,\n",
       " -0.0003412646986978597,\n",
       " 0.009764901163737174,\n",
       " -0.006419831290644094,\n",
       " 0.00020188679216961967,\n",
       " 0.0183674745250935,\n",
       " -0.007237514799395528,\n",
       " -0.016421252527603555,\n",
       " -0.013866834991744808,\n",
       " -0.006375905927967104,\n",
       " -0.0019597378160791353,\n",
       " -0.031247682233842868,\n",
       " 0.0003847675028690441,\n",
       " -0.0106839507507705,\n",
       " 0.018083650250962247,\n",
       " -0.0011276940120111569,\n",
       " -0.01527244007815062,\n",
       " -0.00676785349118293,\n",
       " 0.004220194097449416,\n",
       " -0.01163003104365965,\n",
       " 0.03292359717277458,\n",
       " 0.0018059998616169039,\n",
       " -0.011994947435358377,\n",
       " 0.004659445861574211,\n",
       " 0.0042539829454304725,\n",
       " 0.026949775788380516,\n",
       " 0.015123770264621037,\n",
       " -0.026003695495491367,\n",
       " -0.009312134046684467,\n",
       " -0.03322093493718864,\n",
       " 0.003360274645136981,\n",
       " -0.024152080968496803,\n",
       " 0.010075756143724253,\n",
       " 0.01734030211463688,\n",
       " 0.023597947773162202,\n",
       " 0.024341295909487567,\n",
       " -0.026787589690600466,\n",
       " -0.01961089444504182,\n",
       " -0.006146143531208772,\n",
       " -0.008278203028441336,\n",
       " -0.006710412775578028,\n",
       " -0.021705787187383904,\n",
       " -0.0013067734860438614,\n",
       " 0.0018347201030040356,\n",
       " 0.0033264862628172016,\n",
       " -0.01582657327348522,\n",
       " 0.022449135323709266,\n",
       " 0.006473892702355741,\n",
       " -0.00018953284212437575,\n",
       " -0.0022165311515239287,\n",
       " -0.002208084055943984,\n",
       " -0.015921180743980603,\n",
       " -0.015218378666438973,\n",
       " -0.025746901927215935,\n",
       " -0.0028703403075325167,\n",
       " -0.03678901046189865,\n",
       " 0.013366764139444409,\n",
       " 0.027760700689345438,\n",
       " 0.004321560175731309,\n",
       " 0.011880066004148572,\n",
       " 0.051061308835448474,\n",
       " -0.0005718718050141857,\n",
       " -0.004757432635962849,\n",
       " -0.02715250608229919,\n",
       " 0.004558080248953596,\n",
       " 0.032410010036223715,\n",
       " 0.002426020286059754,\n",
       " 0.020948921835485516,\n",
       " -0.007940317342598433,\n",
       " -0.02057049009085888,\n",
       " 0.012805873267645854,\n",
       " -0.01654289070395476,\n",
       " 0.002365200965053513,\n",
       " -0.03357233783860456,\n",
       " 0.01071773913309028,\n",
       " 0.013441098580547924,\n",
       " 0.010041967761404475,\n",
       " -0.03668088763847536,\n",
       " 0.024016925576572576,\n",
       " -0.011481361230130067,\n",
       " -0.011156990897215075,\n",
       " 0.0275985164542105,\n",
       " -0.013589768394077507,\n",
       " -0.011724638514155032,\n",
       " 0.03381561419130697,\n",
       " 0.010055483114332386,\n",
       " 0.007325365059088231,\n",
       " -0.005409552605686086,\n",
       " 0.0330587507020537,\n",
       " 0.007142906863238868,\n",
       " 0.0005000710269233772,\n",
       " 0.032653286388926124,\n",
       " -0.021949065402731425,\n",
       " -0.02792288678712549,\n",
       " -0.019097307308490952,\n",
       " -0.0021337491491791923,\n",
       " -0.03424810921029037,\n",
       " -0.013231609678842737,\n",
       " -0.009704081144239016,\n",
       " -0.0028416199497300655,\n",
       " 0.0023516853792949623,\n",
       " -0.0024125049331318424,\n",
       " -0.0007420817704802885,\n",
       " -0.028003978904692958,\n",
       " 0.004456714170671703,\n",
       " -0.016745622860518547,\n",
       " 0.022759990303696347,\n",
       " -0.0014351701537662583,\n",
       " -0.0120219781412142,\n",
       " 0.017124054605145184,\n",
       " -0.008332265371475538,\n",
       " 0.013880350344672719,\n",
       " 0.010785315897729839,\n",
       " -0.011494876583057978,\n",
       " -0.026990321847164253,\n",
       " -0.019016215190923485,\n",
       " 0.035707774777085276,\n",
       " 0.02182742536373511,\n",
       " 0.01215713260181587,\n",
       " 0.2147332088184969,\n",
       " -0.03968131461492286,\n",
       " 0.019637925150897642,\n",
       " 0.0018516144105792446,\n",
       " -0.0027284281704668886,\n",
       " -0.02428723449777592,\n",
       " -0.020056902954308016,\n",
       " 0.011582726377089403,\n",
       " 0.002005352597872115,\n",
       " 0.009987905418370273,\n",
       " 0.024503482007267617,\n",
       " -0.0070482989270822085,\n",
       " -0.006909765628248558,\n",
       " -0.0023297229307871056,\n",
       " -0.004625657013593154,\n",
       " -0.014502060304646878,\n",
       " -0.035572621247806156,\n",
       " -0.011961159053038597,\n",
       " -0.019340585523838473,\n",
       " -0.008899912988415495,\n",
       " 0.008102502509055929,\n",
       " -0.007088844985865943,\n",
       " -0.008845851576703848,\n",
       " -0.016326643194463062,\n",
       " 0.018637783446296845,\n",
       " -0.01494806974523563,\n",
       " -0.012103071190104226,\n",
       " -0.015840088626413132,\n",
       " 0.014407452834151496,\n",
       " 0.025679325162576375,\n",
       " -0.00992708633019467,\n",
       " 0.02008393552280895,\n",
       " 0.02255725814713256,\n",
       " -0.009109402821443237,\n",
       " -0.012272014033025675,\n",
       " -0.014461514245863142,\n",
       " 0.019691986562609287,\n",
       " 0.026936260435452605,\n",
       " 0.020137996934520595,\n",
       " -0.0038620351493840067,\n",
       " 0.03140986646897781,\n",
       " -0.035302314189247924,\n",
       " 0.010893439652475688,\n",
       " 0.0013642140852334443,\n",
       " -0.01811068095681807,\n",
       " 0.020746191541566844,\n",
       " ...]"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query_result = embeddings_hf.embed_query(text_hf)\n",
    "query_result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[-0.02886896708001464,\n",
       "  0.0044634718471356595,\n",
       "  0.006389421280895015,\n",
       "  -0.009913570977266759,\n",
       "  -0.0027808003958931854,\n",
       "  0.023057329930755516,\n",
       "  0.018016073486322687,\n",
       "  -0.007974105724918213,\n",
       "  -0.02111110793326557,\n",
       "  -0.030896281195072053,\n",
       "  -0.003899202602766403,\n",
       "  0.009474319213141963,\n",
       "  -0.002067861337994345,\n",
       "  -0.0010989737706240251,\n",
       "  -0.008920186017807362,\n",
       "  0.00898100603730552,\n",
       "  0.01804310419217851,\n",
       "  0.010190636643611502,\n",
       "  0.007784889852604894,\n",
       "  -0.026733528278888818,\n",
       "  0.019691986562609287,\n",
       "  -0.01051500790784905,\n",
       "  -0.010170363614219635,\n",
       "  -0.010170363614219635,\n",
       "  -0.016096882194688564,\n",
       "  0.011109686230644828,\n",
       "  0.03597808556093373,\n",
       "  -0.027652577865922144,\n",
       "  -0.006450240834731895,\n",
       "  -0.009197252615474662,\n",
       "  0.014083082501236506,\n",
       "  -0.012393653140699437,\n",
       "  -0.005676482222996175,\n",
       "  -0.025692840515504287,\n",
       "  -0.014461514245863142,\n",
       "  -0.002672677106808614,\n",
       "  0.0069536905252642715,\n",
       "  -0.025300891555304628,\n",
       "  0.025963147574062522,\n",
       "  -0.0059940948794472105,\n",
       "  -0.00814980624430362,\n",
       "  0.0155832950581377,\n",
       "  -0.010940743387723377,\n",
       "  -0.035086064817111115,\n",
       "  -0.01600227286154807,\n",
       "  0.01851614340730053,\n",
       "  -0.006511060388568775,\n",
       "  -0.01908379195556304,\n",
       "  -0.008311991410761116,\n",
       "  0.002961569405457198,\n",
       "  0.0017958632305056508,\n",
       "  0.0202596351108718,\n",
       "  -0.019475740915762704,\n",
       "  0.003885687249838491,\n",
       "  -0.0010710981223795684,\n",
       "  0.017353817467564793,\n",
       "  0.009433773154358227,\n",
       "  0.00968380811484715,\n",
       "  -0.00799437875431008,\n",
       "  0.0015576537381673744,\n",
       "  -0.011535422641841713,\n",
       "  -0.0015263992516909397,\n",
       "  -0.003103481542522826,\n",
       "  0.010325791104213175,\n",
       "  -0.015096739558765213,\n",
       "  -0.014110113207092328,\n",
       "  0.023516855655594734,\n",
       "  0.01372492285467918,\n",
       "  0.03797836897013532,\n",
       "  0.026922745082524693,\n",
       "  0.0035444224929329715,\n",
       "  0.014218236030515622,\n",
       "  -0.022219372461289657,\n",
       "  -0.005862319257077516,\n",
       "  0.04646606276292695,\n",
       "  0.007460519519689903,\n",
       "  0.009501349918997785,\n",
       "  0.0052304727824074235,\n",
       "  0.00515951671387461,\n",
       "  0.016448283233459378,\n",
       "  -0.0019310175746920033,\n",
       "  -0.011900339033540439,\n",
       "  -0.009285103340828644,\n",
       "  0.0047675691506587825,\n",
       "  0.010961016417115246,\n",
       "  0.00806871412673615,\n",
       "  -0.00456821676364953,\n",
       "  0.015029162794125655,\n",
       "  -0.000812193129455108,\n",
       "  0.0037606695367633915,\n",
       "  0.01984065730746143,\n",
       "  0.006865840265571568,\n",
       "  0.005848803904149604,\n",
       "  -0.0006136851609655513,\n",
       "  -0.00512572833155483,\n",
       "  0.01959737909211391,\n",
       "  -0.019178401288703534,\n",
       "  0.02278702100955217,\n",
       "  0.0033805479073594873,\n",
       "  -0.025219799437737157,\n",
       "  -0.026692982220105084,\n",
       "  0.013826288932961072,\n",
       "  -0.03714041336331458,\n",
       "  -0.017894435309971483,\n",
       "  -0.012400410817163393,\n",
       "  0.00270984456019101,\n",
       "  0.0026692982685766363,\n",
       "  -0.027666093218850055,\n",
       "  -0.006565121800280422,\n",
       "  0.014475029598791055,\n",
       "  -0.009210767968402573,\n",
       "  0.019489256268690615,\n",
       "  -0.024111533047067958,\n",
       "  -0.023314123499030948,\n",
       "  -0.016150943606400212,\n",
       "  -0.010596100025416519,\n",
       "  -0.0046695823762701455,\n",
       "  -0.02040830585572394,\n",
       "  -0.008136290891375709,\n",
       "  0.0027284281704668886,\n",
       "  0.03189642289967285,\n",
       "  0.02681462039645629,\n",
       "  0.010913712681867554,\n",
       "  -0.013393794845300233,\n",
       "  0.035572621247806156,\n",
       "  0.0007868516680922945,\n",
       "  -0.008413357489043009,\n",
       "  -0.04895290260282359,\n",
       "  -0.016177974312256035,\n",
       "  -0.018070134898034333,\n",
       "  0.01638070646881982,\n",
       "  0.019191916641631446,\n",
       "  0.017678187800479785,\n",
       "  -0.012434199199483171,\n",
       "  -0.003780942566155259,\n",
       "  0.04800682044728933,\n",
       "  -0.001748559262427321,\n",
       "  0.010663677721378634,\n",
       "  -0.016150943606400212,\n",
       "  0.0017620747317705521,\n",
       "  0.017948496721683128,\n",
       "  0.0215976625013155,\n",
       "  0.007102360338793855,\n",
       "  0.005504160541842746,\n",
       "  -0.025827994044783406,\n",
       "  0.03197751501724032,\n",
       "  -0.003186263544867563,\n",
       "  0.015691417881560994,\n",
       "  0.034112955681011255,\n",
       "  -0.024895429104822165,\n",
       "  0.010393368800175289,\n",
       "  -0.010244698986645704,\n",
       "  -0.019637925150897642,\n",
       "  -0.0008624536391940781,\n",
       "  -0.006497545035640864,\n",
       "  0.016867261036869752,\n",
       "  0.0005528657235439905,\n",
       "  -0.010190636643611502,\n",
       "  -0.04438468351086767,\n",
       "  -0.014610184059392727,\n",
       "  -0.002968327081921154,\n",
       "  0.026490250063541297,\n",
       "  -0.015718450450061927,\n",
       "  -0.014569638000608991,\n",
       "  0.01455612264768108,\n",
       "  0.026165879730626306,\n",
       "  0.01952980232747435,\n",
       "  -0.014353390491117293,\n",
       "  -0.003211605064438036,\n",
       "  -0.003503876201318598,\n",
       "  -0.020367757934295093,\n",
       "  0.015434625244608115,\n",
       "  -0.02513870732016969,\n",
       "  0.009954117036050493,\n",
       "  0.012981573787031261,\n",
       "  -0.024016925576572576,\n",
       "  0.02293569175440431,\n",
       "  -0.015434625244608115,\n",
       "  -0.009663535085455281,\n",
       "  -0.022408589264925532,\n",
       "  -0.01804310419217851,\n",
       "  0.029301460236352925,\n",
       "  0.030517849450445417,\n",
       "  0.001139520062238399,\n",
       "  -0.013961443393562745,\n",
       "  -0.02736875359179089,\n",
       "  -0.0066833820697222044,\n",
       "  -0.022516712088348826,\n",
       "  0.017826856682686812,\n",
       "  -0.020719160835711018,\n",
       "  0.015353533127040646,\n",
       "  0.02382771063558181,\n",
       "  0.008318749087225071,\n",
       "  -0.004422925788351924,\n",
       "  -0.6366309990135384,\n",
       "  -0.023760132008297144,\n",
       "  0.032572194271358657,\n",
       "  -0.020840799012062223,\n",
       "  0.03200454572309614,\n",
       "  0.013143758953488757,\n",
       "  0.0010922159777447499,\n",
       "  -0.021867971422518843,\n",
       "  -0.0345184162688486,\n",
       "  0.04233033868995443,\n",
       "  -0.01884051560286063,\n",
       "  -0.0013912450239199066,\n",
       "  -0.0193676162296943,\n",
       "  -0.003106860380754804,\n",
       "  0.025544169770652148,\n",
       "  -0.005872455771773449,\n",
       "  0.030571910862157065,\n",
       "  -0.03286953576106293,\n",
       "  0.010102786849580077,\n",
       "  0.02389528740022137,\n",
       "  -0.016488829292243115,\n",
       "  0.027585001101282588,\n",
       "  -0.010217668280789882,\n",
       "  -0.005287913498012326,\n",
       "  -0.01890809236750019,\n",
       "  0.00484866173388753,\n",
       "  0.006926659819408448,\n",
       "  -0.014880492980596072,\n",
       "  0.013610041423469376,\n",
       "  0.002333101769019084,\n",
       "  -0.025936116868206696,\n",
       "  0.01661046746859432,\n",
       "  0.032410010036223715,\n",
       "  -0.012535565277765065,\n",
       "  0.03197751501724032,\n",
       "  -0.015380563832896469,\n",
       "  -0.020840799012062223,\n",
       "  0.01884051560286063,\n",
       "  -0.008217383940265733,\n",
       "  0.03970834532077868,\n",
       "  -0.035788870619942965,\n",
       "  -0.013839804285888985,\n",
       "  -0.008197110910873866,\n",
       "  0.011528664965377758,\n",
       "  -0.009440530830822183,\n",
       "  0.02681462039645629,\n",
       "  0.0022722822151822033,\n",
       "  -0.003649167176616204,\n",
       "  -0.009068855831336946,\n",
       "  0.025544169770652148,\n",
       "  0.0028112101728116254,\n",
       "  0.011697607808299208,\n",
       "  0.0019394647866872677,\n",
       "  0.007811920558460717,\n",
       "  0.0180295888392506,\n",
       "  -0.009947359359586537,\n",
       "  0.020421821208651852,\n",
       "  -0.004277634813054318,\n",
       "  0.026246971848193777,\n",
       "  0.021962580755659336,\n",
       "  0.013339733433588586,\n",
       "  0.010589342348952563,\n",
       "  -0.008663393380854485,\n",
       "  -0.016475313939315204,\n",
       "  -0.025963147574062522,\n",
       "  0.035572621247806156,\n",
       "  0.01095425874065129,\n",
       "  0.018205290289958563,\n",
       "  -0.005727165262137121,\n",
       "  -0.04727698766389188,\n",
       "  0.008467418900754656,\n",
       "  0.013042393806529419,\n",
       "  -0.021205715403760952,\n",
       "  -0.0064907868935156305,\n",
       "  0.014623699412320638,\n",
       "  0.03200454572309614,\n",
       "  0.024881913751894254,\n",
       "  -0.014610184059392727,\n",
       "  -0.01516431632340477,\n",
       "  0.025530654417724237,\n",
       "  0.011454330524274242,\n",
       "  -0.0027216704940029326,\n",
       "  -0.00803492574441637,\n",
       "  0.022178826402505923,\n",
       "  0.008339023047939494,\n",
       "  0.018070134898034333,\n",
       "  -0.022976237813188045,\n",
       "  -0.004439819979511814,\n",
       "  -0.029625830569267913,\n",
       "  -0.006000852555911166,\n",
       "  0.02208421893201054,\n",
       "  0.04184378225925939,\n",
       "  -0.02430074985070383,\n",
       "  -0.04330345155134451,\n",
       "  0.015623841116921434,\n",
       "  -0.010460945564814847,\n",
       "  0.008933701370735275,\n",
       "  -0.001609181370450996,\n",
       "  0.016718592154662724,\n",
       "  -0.007845708940780497,\n",
       "  0.007014510079101152,\n",
       "  -0.0022689033769502254,\n",
       "  0.017759279918047256,\n",
       "  0.004950027812169423,\n",
       "  0.01143405656355982,\n",
       "  -0.004095176617204955,\n",
       "  -0.006565121800280422,\n",
       "  0.0051324860080187865,\n",
       "  -0.003689713468230578,\n",
       "  0.0006715481149341314,\n",
       "  -0.03381561419130697,\n",
       "  -0.019651440503825553,\n",
       "  0.0052946711744762814,\n",
       "  0.015488687587642318,\n",
       "  -0.010933985711259421,\n",
       "  -0.04922320966138182,\n",
       "  0.0037302597598449514,\n",
       "  0.0053284595567960604,\n",
       "  0.01321133664945087,\n",
       "  -0.01031903342774922,\n",
       "  -0.012163890278279828,\n",
       "  -0.0022689033769502254,\n",
       "  0.01546165595046394,\n",
       "  -0.005646072678908374,\n",
       "  0.002725049332234911,\n",
       "  0.001511194479647039,\n",
       "  -0.01482643063756187,\n",
       "  -0.021881486775446754,\n",
       "  -0.019962295483812634,\n",
       "  0.02153008573667594,\n",
       "  0.019421677641405944,\n",
       "  -0.015623841116921434,\n",
       "  0.009913570977266759,\n",
       "  -0.013711407501751267,\n",
       "  0.019489256268690615,\n",
       "  0.028463502766887066,\n",
       "  0.02513870732016969,\n",
       "  -0.009663535085455281,\n",
       "  0.012603142042404623,\n",
       "  -0.008257929999049469,\n",
       "  0.0010001420538467544,\n",
       "  0.01716460066392892,\n",
       "  0.0034396778092497396,\n",
       "  0.0031609220252970895,\n",
       "  -0.02967989198097956,\n",
       "  -0.028139132433972078,\n",
       "  -0.015218378666438973,\n",
       "  0.02048939797329141,\n",
       "  0.005862319257077516,\n",
       "  0.009339164752540291,\n",
       "  -0.014231751383443533,\n",
       "  -0.004977058518025246,\n",
       "  -0.013170790590667135,\n",
       "  -0.00022828412592801136,\n",
       "  -0.010663677721378634,\n",
       "  -0.014177689971731886,\n",
       "  -0.009487834566069874,\n",
       "  -0.025530654417724237,\n",
       "  0.0027402541042788114,\n",
       "  -0.027166021435227103,\n",
       "  0.006835430721483767,\n",
       "  0.043492666492335276,\n",
       "  0.01875942162264805,\n",
       "  -0.0048114945133357736,\n",
       "  -0.012724781150078383,\n",
       "  0.001553430190377402,\n",
       "  -0.01828638240752603,\n",
       "  0.031436900900123854,\n",
       "  -0.004007326357512252,\n",
       "  -0.02619291043648213,\n",
       "  0.004159374543612536,\n",
       "  -0.04235736939581025,\n",
       "  0.01178545853365319,\n",
       "  0.01977308054282187,\n",
       "  -0.002116854725188664,\n",
       "  0.021935548187158403,\n",
       "  -0.02753093782692583,\n",
       "  -0.0025831371951692828,\n",
       "  -0.01008251382018821,\n",
       "  -0.026057756907203015,\n",
       "  -0.004517533724508583,\n",
       "  0.0040478724162959865,\n",
       "  0.0058690769335414715,\n",
       "  0.016799684272230192,\n",
       "  0.032653286388926124,\n",
       "  0.008737727821957999,\n",
       "  0.004125586161292757,\n",
       "  0.00819035323440991,\n",
       "  -0.010487976270670671,\n",
       "  0.021084077227409747,\n",
       "  0.006889492598856691,\n",
       "  -0.026368611887190092,\n",
       "  -0.04135722582856435,\n",
       "  0.006544848770888555,\n",
       "  -0.006078566300907935,\n",
       "  0.009149948880226971,\n",
       "  0.00794707501906239,\n",
       "  0.01171112316122712,\n",
       "  0.004774326827122738,\n",
       "  0.018178257721457626,\n",
       "  0.0056426938406763955,\n",
       "  0.017218662075640566,\n",
       "  0.0036930923064625557,\n",
       "  -0.007967348048454257,\n",
       "  0.0029767744103317375,\n",
       "  0.001376884845018681,\n",
       "  0.002973395339269121,\n",
       "  -0.013434340904083969,\n",
       "  -0.003350138130441047,\n",
       "  0.010609615378344431,\n",
       "  0.01952980232747435,\n",
       "  0.005581874286839515,\n",
       "  -0.045168581431266994,\n",
       "  -0.013589768394077507,\n",
       "  -0.0015424488497081544,\n",
       "  -0.004328317852195265,\n",
       "  -0.011021836436613404,\n",
       "  -7.813609795677769e-06,\n",
       "  -0.007507823720598872,\n",
       "  0.0021354385682951812,\n",
       "  -0.013684376795895445,\n",
       "  0.010068998467260297,\n",
       "  0.008197110910873866,\n",
       "  0.020854314364990138,\n",
       "  0.0009004658021344685,\n",
       "  -0.00526426163038848,\n",
       "  0.012373380111307569,\n",
       "  -0.008920186017807362,\n",
       "  0.015367048479968557,\n",
       "  -0.008095744832591973,\n",
       "  -0.002578068937821316,\n",
       "  0.049169148249670176,\n",
       "  0.010427157182495069,\n",
       "  0.00421343642098546,\n",
       "  0.014623699412320638,\n",
       "  0.006115733987120971,\n",
       "  0.020854314364990138,\n",
       "  0.011528664965377758,\n",
       "  0.0025172496168150743,\n",
       "  0.0053723849194730515,\n",
       "  -0.012555838307156932,\n",
       "  0.0020932028575648184,\n",
       "  0.006355632898575236,\n",
       "  -0.025287376202376716,\n",
       "  0.02350334030266682,\n",
       "  0.020921891129629694,\n",
       "  0.02792288678712549,\n",
       "  -0.0026963289744324596,\n",
       "  -0.02205718822615472,\n",
       "  0.0052743981450844146,\n",
       "  -0.014461514245863142,\n",
       "  -0.0026895712979685036,\n",
       "  0.005537948924162525,\n",
       "  0.013704649825287312,\n",
       "  0.021084077227409747,\n",
       "  -0.01606985148883274,\n",
       "  -0.014515575657574789,\n",
       "  0.02816616500247301,\n",
       "  0.021462508972036384,\n",
       "  0.05149380385443187,\n",
       "  0.009609473673743634,\n",
       "  0.008278203028441336,\n",
       "  0.012163890278279828,\n",
       "  0.010143332908363813,\n",
       "  0.03224782580108878,\n",
       "  -0.018962153779211836,\n",
       "  -0.024800821634326786,\n",
       "  -0.015610325763993523,\n",
       "  -0.02335466955781468,\n",
       "  -0.018408020583877235,\n",
       "  -0.016583436762738497,\n",
       "  -0.001876955930149718,\n",
       "  0.007149664539702824,\n",
       "  -0.013711407501751267,\n",
       "  0.03235594862451207,\n",
       "  0.014718307814138576,\n",
       "  0.028193195708328834,\n",
       "  0.012224710297777986,\n",
       "  0.01961089444504182,\n",
       "  0.027895856081269665,\n",
       "  -0.011035351789541315,\n",
       "  -0.029869108784615437,\n",
        "  0.05025038393448355,\n",
        "  0.0008147272581290915,\n",
        "  0.0026490250063541296,\n",
        "  ...]]"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc_result = embeddings_hf.embed_documents([text_hf])\n",
    "doc_result"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The examples above cover the basic operations in models. Many more embedding integrations are available -- for example AzureOpenAI for Microsoft Azure, Cohere, and Llama-cpp; click [here](https://python.langchain.com/en/latest/modules/models/text_embedding.html) to jump straight to the relevant documentation."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Prompt\n",
    "A prompt is extra text added to a model's input so as to make better use of the pretrained model's knowledge. A well-designed prompt can unlock a surprising amount of a large model's potential, enabling tasks we might not even imagine. Let's look at what prompts are and how to build and refine them in LangChain, which provides several classes and functions to make constructing prompt templates easy.\n",
    "\n",
    "- [Prompt Templates](#prompt-templates)\n",
    "- [Chat Prompt Template](#chat-model-prompt-template)\n",
    "- [Example Selectors](#example-selectors)\n",
    "- [Output Parser](#output-parser)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Prompt Templates\n",
    "The text string fed to a language model is itself a prompt. A prompt typically consists of a template, a few examples (if available), and the user's input -- conceptually it is not complicated."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### What is a prompt template?\n",
    "A prompt template is a reusable way to generate prompts: it turns the prompt instructions into a template, consisting of a text string plus parameters supplied by the user.\n",
    "A prompt template may contain:\n",
    "- instructions for the language model\n",
    "- a few examples to help the language model answer better\n",
    "- a question for the language model to answer\n",
    "- ..."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Let's look at an example of creating a prompt template:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# Import PromptTemplate, which we use to templatize prompts\n",
    "from langchain import PromptTemplate\n",
    "\n",
    "# The template text itself. The spots to fill in (variables standing in for the\n",
    "# concrete question and requirements) go in curly braces, just like placeholders\n",
    "# formatted with str.format\n",
    "template = \"\"\"\n",
    "I want you to act as a naming consultant for new companies.\n",
    "Here are some examples of good company names:\n",
    "- search engine, Google\n",
    "- social media, Facebook\n",
    "- video sharing, YouTube\n",
    "The name should be short, catchy and easy to remember.\n",
    "What is a good name for a company that makes {product}?\n",
    "\"\"\"\n",
    "\n",
    "# input_variables lists the variables the template expects; when formatting,\n",
    "# PromptTemplate combines these variables with the template text\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=template,\n",
    ")"
   ]
  },
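  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `template_format` defaults to `f-string`, filling in the template behaves like ordinary Python `str.format`. A minimal plain-Python sketch of what the substitution does (no LangChain required; the input `colorful socks` is just an illustrative value):\n",
    "\n",
    "```python\n",
    "# Plain-Python sketch: with template_format='f-string', filling a\n",
    "# PromptTemplate is equivalent to str.format on the raw template string.\n",
    "template = (\n",
    "    \"I want you to act as a naming consultant for new companies.\\n\"\n",
    "    \"What is a good name for a company that makes {product}?\"\n",
    ")\n",
    "prompt_text = template.format(product=\"colorful socks\")\n",
    "print(prompt_text)\n",
    "```"
   ]
  },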
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see what the prompt looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "PromptTemplate(input_variables=['product'], output_parser=None, partial_variables={}, template='\\nI want you to act as a naming consultant for new companies.\\nHere are some examples of good company names:\\n- search engine, Google\\n- social media, Facebook\\n- video sharing, YouTube\\nThe name should be short, catchy and easy to remember.\\nWhat is a good name for a company that makes {product}?\\n', template_format='f-string', validate_template=True)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Creating prompt templates\n",
    "As the example above shows, LangChain's PromptTemplate class generates concrete prompts. Let's use it to create a few more. A prompt template accepts any number of input variables, which are used to format the final prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Tell me a joke.'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import PromptTemplate\n",
    "\n",
    "# input_variables may be an empty list, meaning the template has no variables\n",
    "no_input_prompt = PromptTemplate(input_variables=[], template=\"Tell me a joke.\")\n",
    "no_input_prompt.format()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Tell me a funny joke.'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# An example with one input variable: the templated prompt takes adjective as a parameter\n",
    "one_input_prompt = PromptTemplate(input_variables=[\"adjective\"], template=\"Tell me a {adjective} joke.\")\n",
    "one_input_prompt.format(adjective=\"funny\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Tell me a funny joke about chickens.'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# An example with multiple input variables: adjective and content are passed as parameters\n",
    "multiple_input_prompt = PromptTemplate(\n",
    "    input_variables=[\"adjective\", \"content\"],\n",
    "    template=\"Tell me a {adjective} joke about {content}.\"\n",
    ")\n",
    "multiple_input_prompt.format(adjective=\"funny\", content=\"chickens\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Saving and loading a local prompt template"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import load_prompt\n",
    "# Prompts written out in JSON format can be loaded back from disk\n",
    "test_prompt = PromptTemplate(\n",
    "    input_variables=[\"input\"],\n",
    "    template=\"{input}, tell me the answer by using Chinese.\"\n",
    ")\n",
    "# Format the template; the variable filled in here is \"what is 1+1?\"\n",
    "test_prompt.format(input=\"what is 1+1?\")\n",
    "# Save the template to the given path\n",
    "test_prompt.save(\"test_prompt.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template='{input}, tell me the answer by using Chinese.', template_format='f-string', validate_template=True)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Loading a local template always works the same way: pass the file path to load_prompt\n",
    "prompt = load_prompt(\"./test_prompt.json\")\n",
    "prompt"
   ]
  },
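  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`load_prompt` reads a JSON file whose fields mirror the `PromptTemplate` constructor. A rough plain-Python sketch of that round trip (the exact on-disk schema is version-dependent, so treat the field names here as illustrative only):\n",
    "\n",
    "```python\n",
    "import json, os, tempfile\n",
    "\n",
    "# Illustrative schema only -- real files are written by PromptTemplate.save\n",
    "# and may contain additional fields.\n",
    "prompt_dict = {\n",
    "    \"_type\": \"prompt\",\n",
    "    \"input_variables\": [\"input\"],\n",
    "    \"template\": \"{input}, tell me the answer by using Chinese.\",\n",
    "}\n",
    "path = os.path.join(tempfile.mkdtemp(), \"test_prompt.json\")\n",
    "with open(path, \"w\") as f:\n",
    "    json.dump(prompt_dict, f)\n",
    "\n",
    "with open(path) as f:\n",
    "    loaded = json.load(f)\n",
    "print(loaded[\"template\"].format(input=\"what is 1+1?\"))\n",
    "```"
   ]
  },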
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Adding a few examples to a prompt template\n",
    "What if we want to generate a template that includes a few examples? We could hard-code the examples into the template via the PromptTemplate class, but LangChain provides a dedicated class for this purpose: FewShotPromptTemplate generates prompts containing a few examples, which help the model answer more accurately and appropriately."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Below we create a prompt template that asks the large language model for antonyms:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "\n",
      "Word: happy\n",
      "Antonym: sad\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Word: tall\n",
      "Antonym: short\n",
      "\n",
      "\n",
      "\n",
      "Word: big\n",
      "Antonym:\n"
     ]
    }
   ],
   "source": [
    "from langchain import PromptTemplate, FewShotPromptTemplate\n",
    "# A few examples, so the model can answer by analogy\n",
    "few_examples = [\n",
    "    {\"word\": \"happy\", \"antonym\": \"sad\"},\n",
    "    {\"word\": \"tall\", \"antonym\": \"short\"},\n",
    "]\n",
    "\n",
    "# Define the template used to format each example\n",
    "example_formatter_template = \"\"\"\n",
    "Word: {word}\n",
    "Antonym: {antonym}\\n\n",
    "\"\"\"\n",
    "# Turn it into a PromptTemplate\n",
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"word\", \"antonym\"],\n",
    "    template=example_formatter_template,\n",
    ")\n",
    "\n",
    "# Now build a few-shot prompt template object: few_shot_prompt\n",
    "'''\n",
    "Its parameters are examples, example_prompt, prefix, suffix, input_variables and example_separator:\n",
    "    examples: the examples written above, which show the model the format to follow\n",
    "    example_prompt: the PromptTemplate used to format each example; its template\n",
    "already contains the variables waiting to be filled in\n",
    "    prefix: placed first, typically stating the role the model should take and the task ahead\n",
    "    suffix: placed after the examples, forming the last part of the prompt\n",
    "    input_variables: the variables to fill in\n",
    "    example_separator: the string separating the prefix, examples and suffix\n",
    "'''\n",
    "few_shot_prompt = FewShotPromptTemplate(\n",
    "    examples=few_examples,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=\"Give the antonym of every input\",\n",
    "    suffix=\"Word: {input}\\nAntonym:\",\n",
    "    input_variables=[\"input\"],\n",
    "    example_separator=\"\\n\\n\",\n",
    ")\n",
    "\n",
    "print(few_shot_prompt.format(input=\"big\"))"
   ]
  },
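  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The structure FewShotPromptTemplate produces can be sketched in plain Python: the prefix, each formatted example, and the filled-in suffix are joined by the example separator. This is a conceptual sketch, not LangChain's actual implementation:\n",
    "\n",
    "```python\n",
    "# Conceptual sketch: prefix + formatted examples + filled-in suffix,\n",
    "# joined by the example separator.\n",
    "prefix = \"Give the antonym of every input\"\n",
    "suffix = \"Word: {input}\\nAntonym:\"\n",
    "example_template = \"Word: {word}\\nAntonym: {antonym}\"\n",
    "examples = [\n",
    "    {\"word\": \"happy\", \"antonym\": \"sad\"},\n",
    "    {\"word\": \"tall\", \"antonym\": \"short\"},\n",
    "]\n",
    "pieces = (\n",
    "    [prefix]\n",
    "    + [example_template.format(**ex) for ex in examples]\n",
    "    + [suffix.format(input=\"big\")]\n",
    ")\n",
    "prompt_text = \"\\n\\n\".join(pieces)\n",
    "print(prompt_text)\n",
    "```"
   ]
  },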
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Selecting examples for a prompt template\n",
    "The previous code example used only a handful of examples. When there are many candidate examples for the LLM to learn from, the ExampleSelector classes pick the best few in a controlled way."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Below is an example that uses LengthBasedExampleSelector to select examples based on input length -- an automatic selection method. **This is useful when you worry that the prompt might exceed the context window: for long user inputs it selects fewer examples, and for short inputs it selects more.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 127,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "\n",
      "Word: happy\n",
      "Antonym: sad\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Word: tall\n",
      "Antonym: short\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Word: energetic\n",
      "Antonym: lethargic\n",
      "\n",
      "\n",
      "\n",
      "Word: big\n",
      "Antonym:\n"
     ]
    }
   ],
   "source": [
    "from langchain.prompts.example_selector import LengthBasedExampleSelector\n",
    "# As above, word/antonym pairs -- this time a longer list:\n",
    "few_examples = [\n",
    "    {\"word\": \"beautiful\", \"antonym\": \"ugly\"},\n",
    "    {\"word\": \"outgoing\", \"antonym\": \"incoming\"},\n",
    "    {\"word\": \"happy\", \"antonym\": \"sad\"},\n",
    "    {\"word\": \"tall\", \"antonym\": \"short\"},\n",
    "    {\"word\": \"energetic\", \"antonym\": \"lethargic\"},\n",
    "    {\"word\": \"sunny\", \"antonym\": \"gloomy\"},\n",
    "    {\"word\": \"windy\", \"antonym\": \"calm\"},\n",
    "]\n",
    "# Define the template used to format each example\n",
    "example_formatter_template = \"\"\"\n",
    "Word: {word}\n",
    "Antonym: {antonym}\\n\n",
    "\"\"\"\n",
    "# Turn it into a PromptTemplate\n",
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"word\", \"antonym\"],\n",
    "    template=example_formatter_template,\n",
    ")\n",
    "\n",
    "'''\n",
    "Select examples with LengthBasedExampleSelector:\n",
    "    examples: same role as in the previous example\n",
    "    example_prompt: same role as in the previous example\n",
    "    max_length: the maximum length of the formatted examples,\n",
    "as measured by the selector's get_text_length function.\n",
    "'''\n",
    "example_selector = LengthBasedExampleSelector(\n",
    "    examples=few_examples,\n",
    "    example_prompt=example_prompt,\n",
    "    max_length=25,\n",
    ")\n",
    "\n",
    "# Now use example_selector to create the FewShotPromptTemplate\n",
    "dynamic_prompt = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=\"Give the antonym of every input\",\n",
    "    suffix=\"Word: {input}\\nAntonym:\",\n",
    "    input_variables=[\"input\"],\n",
    "    example_separator=\"\\n\\n\",\n",
    ")\n",
    "\n",
    "print(dynamic_prompt.format(input=\"big\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "As the example above shows, when the input question is long, LengthBasedExampleSelector selects fewer examples:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 128,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\n",
      "Antonym:\n"
     ]
    }
   ],
   "source": [
    "long_string = \"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\"\n",
    "print(dynamic_prompt.format(input=long_string))"
   ]
  },
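  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The selection rule can be sketched in plain Python: subtract the length of the user input from `max_length`, then keep adding examples while the budget lasts. This mirrors the word-count heuristic `LengthBasedExampleSelector` uses by default, though the real class delegates length measurement to a configurable `get_text_length` function:\n",
    "\n",
    "```python\n",
    "def select_examples(examples, user_input, max_length=25):\n",
    "    # Word-count heuristic, similar in spirit to LengthBasedExampleSelector:\n",
    "    # keep adding examples while the running total stays within max_length.\n",
    "    remaining = max_length - len(user_input.split())\n",
    "    selected = []\n",
    "    for ex in examples:\n",
    "        cost = len(ex[\"word\"].split()) + len(ex[\"antonym\"].split())\n",
    "        if cost > remaining:\n",
    "            break\n",
    "        selected.append(ex)\n",
    "        remaining -= cost\n",
    "    return selected\n",
    "\n",
    "examples = [\n",
    "    {\"word\": \"happy\", \"antonym\": \"sad\"},\n",
    "    {\"word\": \"tall\", \"antonym\": \"short\"},\n",
    "    {\"word\": \"energetic\", \"antonym\": \"lethargic\"},\n",
    "]\n",
    "print(len(select_examples(examples, \"big\")))  # short input keeps more examples\n",
    "print(len(select_examples(examples, \"big and huge and massive \" * 8)))  # long input keeps fewer\n",
    "```"
   ]
  },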
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### chat model prompt template\n",
    "The following shows prompt templates used with chat models: a chat model generates a reply in a single call that can draw on previous history, so a single input may combine templates, examples, and user questions from earlier turns of the conversation. LangChain provides classes and methods that make such prompts easier to build and use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import (\n",
    "    ChatPromptTemplate,\n",
    "    PromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    AIMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "ChatPromptTemplate builds a prompt from one or more MessagePromptTemplate objects. Its format_prompt method returns a prompt value, which can then be converted to a string or to a list of message objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "system_template=\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
    "system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)\n",
    "human_template=\"{text}\"\n",
    "human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 131,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),\n",
       " HumanMessage(content='I love programming.', additional_kwargs={})]"
      ]
     },
     "execution_count": 131,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])\n",
    "# Get the complete chat messages from the formatted prompt\n",
    "chat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_messages()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Here is a quicker way to construct a MessagePromptTemplate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "prompt=PromptTemplate(\n",
    "    template=\"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
    "    input_variables=[\"input_language\", \"output_language\"],\n",
    ")\n",
    "system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Example Selectors\n",
    "Example selectors dynamically choose prompt examples based on what the model needs.\n",
    "LangChain's BaseExampleSelector class is the base for selecting examples; its select_examples method takes the input variables and returns a list of examples."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### The base interface is defined as follows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 133,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from abc import ABC, abstractmethod\n",
    "from typing import Dict, List\n",
    "\n",
    "class BaseExampleSelector(ABC):\n",
    "    \"\"\"An abstract base class: it cannot be instantiated directly and instead\n",
    "    defines the interface and contract for concrete selector classes.\"\"\"\n",
    "\n",
    "    @abstractmethod\n",
    "    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n",
    "        \"\"\"Select which examples to include in the prompt based on the input\n",
    "        variables, returning a list of dicts. Implemented by derived classes.\"\"\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "An ExampleSelector class must implement two methods:\n",
    "- 1. add_example, which takes an example and appends it to the example list\n",
    "- 2. select_examples, which decides how examples are chosen and returns them\n",
    "\n",
    "Both methods can then be called on any example selector. Below we implement a custom example selector."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 134,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts.example_selector.base import BaseExampleSelector\n",
    "from typing import Dict, List\n",
    "import numpy as np\n",
    "\n",
    "class CustomExampleSelector(BaseExampleSelector):\n",
    "\n",
    "    def __init__(self, examples: List[Dict[str, str]]):\n",
    "        self.examples = examples\n",
    "\n",
    "    def add_example(self, example: Dict[str, str]) -> None:\n",
    "        \"\"\"Add a new example to the store.\"\"\"\n",
    "        self.examples.append(example)\n",
    "\n",
    "    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n",
    "        \"\"\"Select which examples to use based on the input.\"\"\"\n",
    "        return np.random.choice(self.examples, size=2, replace=False)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Use the custom example selector"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 135,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([{'foo': '1'}, {'foo': '3'}], dtype=object)"
      ]
     },
     "execution_count": 135,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\n",
    "examples = [\n",
    "    {\"foo\": \"1\"},\n",
    "    {\"foo\": \"2\"},\n",
    "    {\"foo\": \"3\"}\n",
    "]\n",
    "\n",
    "# Initialize the example selector\n",
    "example_selector = CustomExampleSelector(examples)\n",
    "# Select examples\n",
    "example_selector.select_examples({\"foo\": \"foo\"})\n",
    "# Add a new example to the example set\n",
    "example_selector.add_example({\"foo\": \"4\"})\n",
    "example_selector.examples\n",
    "# Select examples again\n",
    "example_selector.select_examples({\"foo\": \"foo\"})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### LengthBasedExampleSelector\n",
    "The LengthBasedExampleSelector automatically selects as many examples as fit alongside the user input, keeping the total length within the LLM's input window. We already ran this example earlier; now that we have come this far, does the code below make more sense?"
   ]
  },
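  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The selection logic can be sketched in plain Python (a simplified illustration of the idea, not LangChain's actual implementation): examples are kept in order only while the running word count, including the new input, stays within max_length.\n",
    "```python\n",
    "def select_by_length(examples, new_input, max_length=25):\n",
    "    # Start the budget with the words already used by the new input\n",
    "    count = len(new_input.split())\n",
    "    selected = []\n",
    "    for ex in examples:\n",
    "        rendered = \"Input: {}\\nOutput: {}\".format(ex[\"input\"], ex[\"output\"])\n",
    "        ex_len = len(rendered.split())\n",
    "        if count + ex_len > max_length:\n",
    "            break\n",
    "        selected.append(ex)\n",
    "        count += ex_len\n",
    "    return selected\n",
    "\n",
    "examples = [\n",
    "    {\"input\": \"happy\", \"output\": \"sad\"},\n",
    "    {\"input\": \"tall\", \"output\": \"short\"},\n",
    "    {\"input\": \"energetic\", \"output\": \"lethargic\"},\n",
    "]\n",
    "\n",
    "print(len(select_by_length(examples, \"big\")))      # short input: all 3 fit\n",
    "print(len(select_by_length(examples, \"a \" * 20)))  # long input: only 1 fits\n",
    "```"
   ]
  },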
  {
   "cell_type": "code",
   "execution_count": 136,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.prompts import FewShotPromptTemplate\n",
    "from langchain.prompts.example_selector import LengthBasedExampleSelector\n",
    "# These are many examples of creating antonyms.\n",
    "examples = [\n",
    "    {\"input\": \"happy\", \"output\": \"sad\"},\n",
    "    {\"input\": \"tall\", \"output\": \"short\"},\n",
    "    {\"input\": \"energetic\", \"output\": \"lethargic\"},\n",
    "    {\"input\": \"sunny\", \"output\": \"gloomy\"},\n",
    "    {\"input\": \"windy\", \"output\": \"calm\"},\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 137,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"input\", \"output\"],\n",
    "    template=\"Input: {input}\\nOutput: {output}\",\n",
    ")\n",
    "example_selector = LengthBasedExampleSelector(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    max_length=25,\n",
    ")\n",
    "dynamic_prompt = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=\"Give the antonym of every input\",\n",
    "    suffix=\"Input: {adjective}\\nOutput:\",\n",
    "    input_variables=[\"adjective\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 138,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: tall\n",
      "Output: short\n",
      "\n",
      "Input: energetic\n",
      "Output: lethargic\n",
      "\n",
      "Input: sunny\n",
      "Output: gloomy\n",
      "\n",
      "Input: windy\n",
      "Output: calm\n",
      "\n",
      "Input: big\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "print(dynamic_prompt.format(adjective=\"big\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 139,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "long_string = \"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\"\n",
    "print(dynamic_prompt.format(adjective=long_string))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 140,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: tall\n",
      "Output: short\n",
      "\n",
      "Input: energetic\n",
      "Output: lethargic\n",
      "\n",
      "Input: sunny\n",
      "Output: gloomy\n",
      "\n",
      "Input: windy\n",
      "Output: calm\n",
      "\n",
      "Input: big\n",
      "Output: small\n",
      "\n",
      "Input: enthusiastic\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "# You can add an example to an example selector as well.\n",
    "new_example = {\"input\": \"big\", \"output\": \"small\"}\n",
    "dynamic_prompt.example_selector.add_example(new_example)\n",
    "print(dynamic_prompt.format(adjective=\"enthusiastic\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Similarity Example Selector\n",
    "The Similarity Example Selector selects examples based on their similarity to the input. It works by computing the cosine similarity between the embedding vectors of each example and of the input."
   ]
  },
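  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Because the selector ranks examples by how close their embeddings are to the input's embedding, it helps to see what cosine similarity computes. Below is a standalone sketch using tiny hand-made 3-d vectors in place of real embeddings:\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Cosine of the angle between two vectors: dot product over norms\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(y * y for y in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "# Toy \"embeddings\": similar words point in similar directions\n",
    "happy = [0.9, 0.1, 0.0]\n",
    "joyful = [0.8, 0.2, 0.1]\n",
    "windy = [0.0, 0.1, 0.9]\n",
    "\n",
    "print(cosine_similarity(happy, joyful))  # close to 1: very similar\n",
    "print(cosine_similarity(happy, windy))   # close to 0: unrelated\n",
    "```"
   ]
  },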
  {
   "cell_type": "code",
   "execution_count": 141,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts.example_selector import SemanticSimilarityExampleSelector\n",
    "from langchain.vectorstores import Chroma\n",
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.prompts import FewShotPromptTemplate, PromptTemplate\n",
    "\n",
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"input\", \"output\"],\n",
    "    template=\"Input: {input}\\nOutput: {output}\",\n",
    ")\n",
    "\n",
    "# These are many examples of creating antonyms.\n",
    "few_examples = [\n",
    "    {\"input\": \"happy\", \"output\": \"sad\"},\n",
    "    {\"input\": \"tall\", \"output\": \"short\"},\n",
    "    {\"input\": \"energetic\", \"output\": \"lethargic\"},\n",
    "    {\"input\": \"sunny\", \"output\": \"gloomy\"},\n",
    "    {\"input\": \"windy\", \"output\": \"calm\"},\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 142,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using embedded DuckDB without persistence: data will be transient\n"
     ]
    }
   ],
   "source": [
    "example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
    "    few_examples,\n",
    "    OpenAIEmbeddings(),\n",
    "    Chroma,\n",
    "    k=1\n",
    ")\n",
    "similar_prompt = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=\"Give the antonym of every input\",\n",
    "    suffix=\"Input: {adjective}\\nOutput:\",\n",
    "    input_variables=[\"adjective\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 143,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: worried\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "print(similar_prompt.format(adjective=\"worried\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 144,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: fat\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "print(similar_prompt.format(adjective=\"fat\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 145,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Give the antonym of every input\n",
      "\n",
      "Input: happy\n",
      "Output: sad\n",
      "\n",
      "Input: joyful\n",
      "Output:\n"
     ]
    }
   ],
   "source": [
    "similar_prompt.example_selector.add_example({\"input\": \"enthusiastic\", \"output\": \"apathetic\"})\n",
    "print(similar_prompt.format(adjective=\"joyful\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Output Parser\n",
    "Output parsers, as the name implies, parse the model's output. They do two things: instruct the model on how to format its output, and parse that output into the desired structure. For example, suppose we specify in advance that the answer should contain properties A and B. The model will not split its answer into those properties for us, so we parse the answer ourselves and hand users structured results they can apply directly."
   ]
  },
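  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The core idea can be shown without LangChain: tell the model which labeled fields to emit, then parse its free-form reply back into a structured object. A minimal sketch (the field labels A and B are arbitrary, matching the description above):\n",
    "```python\n",
    "def parse_fields(raw):\n",
    "    # Parse lines of the form \"Label: value\" into a dict\n",
    "    fields = {}\n",
    "    for line in raw.splitlines():\n",
    "        if \":\" in line:\n",
    "            key, _, value = line.partition(\":\")\n",
    "            fields[key.strip()] = value.strip()\n",
    "    return fields\n",
    "\n",
    "# Pretend this came back from the model after asking for \"A: ...\" / \"B: ...\" lines\n",
    "raw_output = \"A: Why did the chicken cross the road?\\nB: To get to the other side!\"\n",
    "print(parse_fields(raw_output))\n",
    "```"
   ]
  },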
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Below is an example of PydanticOutputParser, which instructs the LLM to produce output in JSON format matching a schema; how well this works depends on the capabilities of the underlying LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate\n",
    "from langchain.llms import OpenAI\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "\n",
    "from langchain.output_parsers import PydanticOutputParser\n",
    "from pydantic import BaseModel, Field, validator\n",
    "from typing import List"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 147,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "model_name = 'text-davinci-003'\n",
    "temperature = 0.0\n",
    "model = OpenAI(model_name=model_name, temperature=temperature)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 148,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# Define the data structure you want\n",
    "\"\"\"\n",
    "The following code defines a Pydantic model class called Joke, \n",
    "which is used to represent a joke. After instantiation, it is used to \n",
    "prompt the large model to fill this data structure.\n",
    "\"\"\"\n",
    "class Joke(BaseModel):\n",
    "    setup: str = Field(description=\"question to set up a joke\")\n",
    "    punchline: str = Field(description=\"answer to resolve the joke\")\n",
    "    @validator('setup')\n",
    "    def question_ends_with_question_mark(cls, field):\n",
    "        if field[-1] != '?':\n",
    "            raise ValueError(\"Badly formed question!\")\n",
    "        return field"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 149,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# The parameter value is the previously defined joke class, and the parser object's role is to parse the output results into a joke object\n",
    "parser = PydanticOutputParser(pydantic_object=Joke)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 150,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "'''\n",
    "The two placeholders for template parameters in prompttemplate are format_instructions \n",
    "and query. However, the specified input_variables is query, while partial_variables is \n",
    "a dictionary that contains variable information with keys as format_instructions. This \n",
    "value is obtained by calling the get_format_instructions method on the previously defined \n",
    "parser object and returning a format specification string.\n",
    "'''\n",
    "prompt = PromptTemplate(\n",
    "    template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n",
    "    input_variables=[\"query\"],\n",
    "    partial_variables={\"format_instructions\": parser.get_format_instructions()}\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 151,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "joke_query = \"Tell me a joke.\"\n",
    "_input = prompt.format_prompt(query=joke_query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 152,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "output = model(_input.to_string())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 153,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')"
      ]
     },
     "execution_count": 153,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "parser.parse(output)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Here is another example following the same pattern."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 154,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])"
      ]
     },
     "execution_count": 154,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\"\"\"\n",
    "The following code defines a data structure called Actor, again built on the BaseModel class. It has two attributes, name and film_names, representing the actor's name and the list of films they starred in; both are declared with Field and given description strings explaining their roles.\n",
    "Here are some details:\n",
    "actor_query: the user's question\n",
    "parser: a PydanticOutputParser instance that parses the output into an Actor object\n",
    "prompt: a PromptTemplate instance containing a string template and some variables\n",
    "\"\"\"\n",
    "class Actor(BaseModel):\n",
    "    name: str = Field(description=\"name of an actor\")\n",
    "    film_names: List[str] = Field(description=\"list of names of films they starred in\")\n",
    "\n",
    "actor_query = \"Generate the filmography for a random actor.\"\n",
    "\n",
    "parser = PydanticOutputParser(pydantic_object=Actor)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"Answer the user query.\\n{format_instructions}\\n{query}\\n\",\n",
    "    input_variables=[\"query\"],\n",
    "    partial_variables={\"format_instructions\": parser.get_format_instructions()}\n",
    ")\n",
    "\n",
    "_input = prompt.format_prompt(query=actor_query)\n",
    "\n",
    "output = model(_input.to_string())\n",
    "\n",
    "parser.parse(output)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Index\n",
    "This module accepts user queries and returns the most relevant documents.\n",
    "LangChain builds an index over the user-provided documents and combines it into a retriever; a question-answering chain can then be built on top for querying the model.\n",
    "LangChain uses chromadb to build a vector store (vectorstore) used for retrieval and embedding lookups.\n",
    "For more detailed documentation, click 👉[here](https://python.langchain.com/en/latest/modules/indexes.html) 👈 directly."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Understanding the basic index concepts is also important. Below is the Retriever interface; users can implement their own logic for returning relevant documents. LangChain focuses on the vectorstore-backed retriever for fetching related documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 155,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from abc import ABC, abstractmethod\n",
    "from typing import List\n",
    "from langchain.schema import Document\n",
    "\n",
    "class BaseRetriever(ABC):\n",
    "    @abstractmethod\n",
    "    def get_relevant_documents(self, query: str) -> List[Document]:\n",
    "        \"\"\"Get texts relevant for a query.\n",
    "\n",
    "        Args:\n",
    "            query: string to find relevant texts for\n",
    "\n",
    "        Returns:\n",
    "            List of relevant documents\n",
    "        \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 156,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.chains import RetrievalQA\n",
    "from langchain.llms import OpenAI\n",
    "from langchain.document_loaders import TextLoader\n",
    "loader = TextLoader('state_of_the_union.txt',encoding='utf-8')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "create the index"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Use the VectorstoreIndexCreator to create the index directly"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 157,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using embedded DuckDB without persistence: data will be transient\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "The VectorstoreIndexCreator class creates a vector store index. Calling its\n",
    "from_loaders method with a list containing the TextLoader defined above loads\n",
    "the contents of state_of_the_union.txt, splits them into chunks, embeds the\n",
    "chunks, and stores them in a vectorstore, returning the result as the index\n",
    "object. In other words, from_loaders packages one or more document loaders\n",
    "into a queryable vector store index.\n",
    "\"\"\"\n",
    "from langchain.indexes import VectorstoreIndexCreator\n",
    "index = VectorstoreIndexCreator().from_loaders([loader])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "With the index constructed, we can now answer questions over the provided documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 158,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
      ]
     },
     "execution_count": 158,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query = \"What did the president say about Ketanji Brown Jackson\"\n",
    "index.query(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 159,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'question': 'What did the president say about Ketanji Brown Jackson',\n",
       " 'answer': \" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\\n\",\n",
       " 'sources': 'state_of_the_union.txt'}"
      ]
     },
     "execution_count": 159,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query = \"What did the president say about Ketanji Brown Jackson\"\n",
    "index.query_with_sources(query)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "To inspect just the underlying vectorstore, you can use the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 160,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<langchain.vectorstores.chroma.Chroma at 0x18fa13c05b0>"
      ]
     },
     "execution_count": 160,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "index.vectorstore"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Or you may want to see the VectorStoreRetriever that wraps it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 161,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x0000018FA13C05B0>, search_type='similarity', search_kwargs={})"
      ]
     },
     "execution_count": 161,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "index.vectorstore.as_retriever()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The above example shows how to use an index for querying. So, how is the index created? There are three steps:\n",
    "\n",
    "- Divide the documents into chunks\n",
    "- Create word embeddings for each chunk\n",
    "- Store both the documents and word embeddings in a vector pool"
   ]
  },
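  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The three steps can be sketched end to end in plain Python, with a word-overlap score standing in for real embedding similarity (an illustration of the principle only, not how Chroma works internally):\n",
    "```python\n",
    "def split_into_chunks(text):\n",
    "    # Step 1: divide the document into chunks (here, one sentence per chunk)\n",
    "    return [s.strip() + \".\" for s in text.split(\".\") if s.strip()]\n",
    "\n",
    "def embed(chunk):\n",
    "    # Step 2: a toy \"embedding\" -- the set of lowercase words, punctuation stripped\n",
    "    return {w.strip(\".,?!\").lower() for w in chunk.split()}\n",
    "\n",
    "def build_index(text):\n",
    "    # Step 3: store each chunk alongside its embedding\n",
    "    return [(chunk, embed(chunk)) for chunk in split_into_chunks(text)]\n",
    "\n",
    "def query(index, question, k=1):\n",
    "    # Retrieval: rank chunks by word overlap with the question\n",
    "    q = embed(question)\n",
    "    ranked = sorted(index, key=lambda item: len(q & item[1]), reverse=True)\n",
    "    return [chunk for chunk, _ in ranked[:k]]\n",
    "\n",
    "doc = \"The president spoke about the economy. The president nominated a judge to the court.\"\n",
    "index = build_index(doc)\n",
    "print(query(index, \"Who was nominated to the court?\"))\n",
    "```"
   ]
  },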
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "load the docs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 162,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "documents = loader.load()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "split the docs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 163,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "\"\"\"\n",
    "The CharacterTextSplitter class chunks text data into pieces of a specified\n",
    "size. Here we create a text_splitter with chunk_size=1000 (the size of each\n",
    "chunk) and chunk_overlap=0 (the overlap between adjacent chunks). Calling its\n",
    "split_documents method on the documents list then splits the documents and\n",
    "returns a list containing all the resulting chunks.\n",
    "\"\"\"\n",
    "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
    "texts = text_splitter.split_documents(documents)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Select an appropriate embedding model as needed for embedding the text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 164,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "embeddings = OpenAIEmbeddings()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Next, use the embeddings and the text chunks to create a vector store"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 165,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using embedded DuckDB without persistence: data will be transient\n"
     ]
    }
   ],
   "source": [
    "from langchain.vectorstores import Chroma\n",
    "\"\"\"\n",
    "Chroma is a vector store: it indexes text by its embedding so the text can\n",
    "be searched by similarity. The from_documents method takes two parameters:\n",
    "texts, the list of text chunks, and embeddings, the embedding function\n",
    "created above. It embeds each chunk and returns a Chroma instance holding\n",
    "the resulting vectors.\n",
    "\"\"\"\n",
    "db = Chroma.from_documents(texts, embeddings)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Then, expose the vector store as a retriever"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 166,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "The as_retriever method wraps the vector store in a retriever object.\n",
    "Given a query, the retriever embeds it and returns the documents whose\n",
    "vectors are most similar to the query vector.\n",
    "\"\"\"\n",
    "retriever = db.as_retriever()"
   ]
  },
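  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To sanity-check retrieval before building a QA chain, you can query the vector store directly. A minimal sketch (assuming the `db` object created above):\n",
    "\n",
    "```python\n",
    "# similarity_search embeds the query and returns the most similar chunks\n",
    "docs = db.similarity_search(\"What did the president say about Ketanji Brown Jackson\")\n",
    "print(docs[0].page_content[:200])\n",
    "```"
   ]
  },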
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Next, you can construct a chain to answer questions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 167,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\" The President said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
      ]
     },
     "execution_count": 167,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import RetrievalQA\n",
    "from langchain.llms import OpenAI\n",
    "\"\"\"\n",
    "RetrievalQA implements question answering (QA) over a retriever. The\n",
    "from_chain_type method takes three parameters: llm, the language model used\n",
    "to generate the answer; chain_type, which controls how the retrieved\n",
    "documents are combined (\"stuff\" simply stuffs them all into the prompt);\n",
    "and retriever, the retriever created above.\n",
    "\"\"\"\n",
    "qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=retriever)\n",
    "query = \"What did the president say about Ketanji Brown Jackson\"\n",
    "\"\"\"\n",
    "The run method retrieves the documents most relevant to the query, passes\n",
    "them to the language model together with the question, and returns the\n",
    "generated answer as a string.\n",
    "\"\"\"\n",
    "qa.run(query)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "To summarize the above process: VectorstoreIndexCreator bundles the chunking, embedding, and index-creation steps into a single helper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 168,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.indexes import VectorstoreIndexCreator\n",
    "index_creator = VectorstoreIndexCreator(\n",
    "    vectorstore_cls=Chroma,\n",
    "    embedding=OpenAIEmbeddings(),\n",
    "    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
    ")"
   ]
  },
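  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of using it (assuming the same `loader` and an OpenAI API key are available): `from_loaders` runs loading, splitting, embedding, and indexing in one step, and the resulting index can answer queries directly.\n",
    "\n",
    "```python\n",
    "# build the index from the document loader in one call\n",
    "index = index_creator.from_loaders([loader])\n",
    "# query performs retrieval plus generation and returns the answer string\n",
    "index.query(\"What did the president say about Ketanji Brown Jackson\", llm=OpenAI())\n",
    "```"
   ]
  },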
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Memory\n",
    "Memory is about maintaining state across the interaction between the user and the language model. That interaction is captured as ChatMessages, so the problem boils down to ingesting, transforming, and extracting knowledge from a sequence of chat messages. There are many ways to do this, each with its own memory type. For each type, there are generally two ways to use it: as standalone functions that extract information from a sequence of messages, and as a component used inside a chain. A memory can return multiple pieces of information (e.g., the N most recent messages and a summary of all earlier messages), returned either as a string or as a list of messages. We will introduce the simplest form of memory: \"buffer\" memory, which simply keeps a buffer of all previous messages. We will demonstrate the modular utility functions and show how to use this memory in chains (returning both strings and message lists).\n",
    "\n",
    "LLMs and chat models are stateless; every input request is independent; Chains and Agents are developed based on underlying modules and are also stateless. In some applications, such as chatbots, it is important for the language model to know about the previous chat content. This is why the Memory module exists.\n",
    "\n",
    "LangChain provides memory components in two forms: helper utilities for managing and manipulating previous chat messages, and ways to incorporate those utilities into Chains.\n",
    "\n",
    "To view the full documentation and example instructions, click 👉[LangChain Official Website](https://python.langchain.com/en/latest/index.html).\n",
    "\n",
    "The following are two examples of adding Memory to quickly understand:"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [Add Memory to an LLMChain](#add-memory-to-an-llmchain)\n",
    "- [Add Memory to an Agent](#add-memory-to-an-agent)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use the ConversationBufferMemory as an example, so let's take a look at what this class looks like first:"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### ConversationBufferMemory\n",
    "The ConversationBufferMemory can help users easily create a conversation history, as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "\"\"\"\n",
    "ConversationBufferMemory stores the conversation history between the user\n",
    "and the AI. Here we create it with its default configuration, then use its\n",
    "chat_memory attribute to record two messages: add_user_message appends a\n",
    "message from the user (\"hi!\") and add_ai_message appends a reply from the\n",
    "AI (\"whats up?\"). In this way the chat history is added to the buffer.\n",
    "\"\"\"\n",
    "memory = ConversationBufferMemory()\n",
    "memory.chat_memory.add_user_message(\"hi!\")\n",
    "memory.chat_memory.add_ai_message(\"whats up?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'history': 'Human: hi!\\nAI: whats up?'}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Loading variables stored in the conversation buffer. The method takes an empty \n",
    "# dictionary as a parameter, indicating that no variables need to be loaded.\n",
    "memory.load_memory_variables({})"
   ]
  },
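  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Equivalently, the memory object itself exposes `save_context`, which records one input/output turn at a time; a sketch:\n",
    "\n",
    "```python\n",
    "memory = ConversationBufferMemory()\n",
    "# save_context records one turn: the user input and the AI output\n",
    "memory.save_context({\"input\": \"hi!\"}, {\"output\": \"whats up?\"})\n",
    "memory.load_memory_variables({})\n",
    "```"
   ]
  },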
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The history can also be returned as a list of messages by setting return_messages=True:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "memory = ConversationBufferMemory(return_messages=True)\n",
    "memory.chat_memory.add_user_message(\"hi!\")\n",
    "memory.chat_memory.add_ai_message(\"whats up?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'history': [HumanMessage(content='hi!', additional_kwargs={}),\n",
       "  AIMessage(content='whats up?', additional_kwargs={})]}"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "memory.load_memory_variables({})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a simple, direct way of recording and retrieving conversation history. Next, let's look at how to add it to a Chain through several examples:"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Add Memory to an LLMChain\n",
    "\n",
    "The following shows how to add ConversationBufferMemory. For other types of Memory, you can click [here](https://python.langchain.com/en/latest/modules/memory/how_to_guides.html) to view and use them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
    "from langchain.chains import ConversationChain\n",
    "from langchain import LLMChain, PromptTemplate\n",
    "\n",
    "template = \"\"\"You are a chatbot having a conversation with a human.\n",
    "{chat_history}\n",
    "Human: {human_input}\n",
    "Chatbot:\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"chat_history\", \"human_input\"], \n",
    "    template=template\n",
    ")\n",
    "memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
    "\n",
    "llm_chain = LLMChain(\n",
    "    llm=OpenAI(), \n",
    "    prompt=prompt, \n",
    "    verbose=True, \n",
    "    memory=memory,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "\n",
      "Human: Hi there!\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" Hi there! It's nice to meet you. My name is AI. What's your name?\""
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.predict(human_input=\"Hi there!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "Human: Hi there!\n",
      "AI:  Hi there! It's nice to meet you. My name is AI. What's your name?\n",
      "Human: I'm doing well! Just having a conversation with an AI.\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\""
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.predict(human_input=\"I'm doing well! Just having a conversation with an AI.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "Human: Hi there!\n",
      "AI:  Hi there! It's nice to meet you. My name is AI. What's your name?\n",
      "Human: I'm doing well! Just having a conversation with an AI.\n",
      "AI:  That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\n",
      "Human: Tell me about yourself.\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also able to learn from my conversations and experiences, so I'm constantly growing and evolving. What else would you like to know?\""
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.predict(human_input=\"Tell me about yourself.\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "In another scenario, users need to persist the conversation history so that it can be loaded and reused later. LangChain's schema module can conveniently convert the history into Python data structures such as dictionaries, or into JSON, and then load the history back from those dictionaries or JSON. As shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "from langchain.memory import ChatMessageHistory\n",
    "from langchain.schema import messages_from_dict, messages_to_dict\n",
    "\n",
    "history = ChatMessageHistory()\n",
    "history.add_user_message(\"hi!\")\n",
    "history.add_ai_message(\"whats up?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "dicts = messages_to_dict(history.messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},\n",
       " {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dicts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "new_messages = messages_from_dict(dicts)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[HumanMessage(content='hi!', additional_kwargs={}),\n",
       " AIMessage(content='whats up?', additional_kwargs={})]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "new_messages"
   ]
  },
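  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To actually persist the history as JSON (using the `json` module imported above), a sketch that writes the dicts to an illustrative `history.json` file and loads them back:\n",
    "\n",
    "```python\n",
    "# serialize the message dicts to a JSON file (the file name is illustrative)\n",
    "with open(\"history.json\", \"w\") as f:\n",
    "    json.dump(dicts, f)\n",
    "\n",
    "# later: load the JSON and rebuild the message objects\n",
    "with open(\"history.json\") as f:\n",
    "    restored = messages_from_dict(json.load(f))\n",
    "```"
   ]
  },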
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following is a case study on adding Memory to an Agent:"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Add Memory to an Agent\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to add Memory to an Agent, we will follow these steps:\n",
    "\n",
    "- We will create an LLMChain with Memory.\n",
    "- We will use this LLMChain to create a custom agent.\n",
    "- We will create a simple custom agent that can access search tools and use the ConversationBufferMemory class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain import OpenAI, LLMChain\n",
    "from langchain.utilities import GoogleSearchAPIWrapper\n",
    "\n",
    "search = GoogleSearchAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name = \"Search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events\"\n",
    "    )\n",
    "]\n",
    "\n",
    "\n",
    "prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
    "suffix = \"\"\"Begin!\n",
    "\n",
    "{chat_history}\n",
    "Question: {input}\n",
    "{agent_scratchpad}\"\"\"\n",
    "\n",
    "prompt = ZeroShotAgent.create_prompt(\n",
    "    tools, \n",
    "    prefix=prefix, \n",
    "    suffix=suffix, \n",
    "    input_variables=[\"input\", \"chat_history\", \"agent_scratchpad\"]\n",
    ")\n",
    "memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
    "\n",
    "llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
    "agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
    "agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)\n",
    "\n",
    "agent_chain.run(input=\"How many people live in canada?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To test the agent's memory, we can ask a follow-up question that depends on the information from previous conversations in order to provide an accurate answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_chain.run(input=\"what is their national anthem called?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will compare it with an agent without memory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
    "suffix = \"\"\"Begin!\n",
    "\n",
    "Question: {input}\n",
    "{agent_scratchpad}\"\"\"\n",
    "\n",
    "prompt = ZeroShotAgent.create_prompt(\n",
    "    tools, \n",
    "    prefix=prefix, \n",
    "    suffix=suffix, \n",
    "    input_variables=[\"input\", \"agent_scratchpad\"]\n",
    ")\n",
    "llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
    "agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
    "agent_without_memory = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)\n",
    "agent_without_memory.run(\"How many people live in canada?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_without_memory.run(\"what is their national anthem called?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of course, the examples above are meant to help readers get started quickly. To use Memory proficiently in development, it is still worth working through the official tutorials. Click [here](https://python.langchain.com/en/latest/modules/memory.html) to go directly to the Memory tutorial."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Chains\n",
    "For simple applications, a single large language model is often enough on its own. In more complex cases, however, we need to combine different LLMs, or combine LLMs with other modules, to accomplish larger and more complex tasks. LangChain provides a standard interface for Chains, along with a number of common implementations.\n",
    "\n",
    "For example, as we saw earlier, we can create a chain that receives user input, formats it using the Prompt module, and passes the formatted content to the LLM. In more complex scenarios, we can combine multiple chains together, or add other modules within a chain so that they work collaboratively.\n",
    "\n",
    "This tutorial covers the following topics:\n",
    "\n",
    "- Using a simple LLM chain\n",
    "- Creating sequential chains\n",
    "- Creating custom chains\n",
    "\n",
    "If you want to learn more, you can click [here](https://python.langchain.com/en/latest/modules/chains.html) to go directly to the official Chains tutorial of LangChain."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [LLMChain Basic Usage](#llmchain-usage)\n",
    "- [Using SequentialChain to combine multiple chain](#using-sequentialchain-to-combine-with-multiple-chains)\n",
    "- [Customize the Chain](#customize-the-chain-class)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### LLMChain usage\n",
    "\n",
    "LLMChain is the simplest and most commonly used chain. It takes a prompt template, formats the user input with that template, passes the result to the LLM, and returns the model's response. Here is an example of its use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.llms import OpenAI\n",
    "llm = OpenAI(temperature=0.9)\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good name for a company that makes {product}?\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "Brightly Socks!\n"
     ]
    }
   ],
   "source": [
    "from langchain.chains import LLMChain\n",
    "chain = LLMChain(llm=llm, prompt=prompt)\n",
    "\n",
    "print(chain.run(\"colorful socks\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "LLMChain can also be used with a chat model to drive a chatbot interaction. Here is an example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "RainbowSocks\n"
     ]
    }
   ],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "human_message_prompt = HumanMessagePromptTemplate(\n",
    "        prompt=PromptTemplate(\n",
    "            template=\"What is a good name for a company that makes {product}?\",\n",
    "            input_variables=[\"product\"],\n",
    "        )\n",
    "    )\n",
    "chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
    "chat = ChatOpenAI(temperature=0.9)\n",
    "chain = LLMChain(llm=chat, prompt=chat_prompt_template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Rainbow Socks Co.\n"
     ]
    }
   ],
   "source": [
    "print(chain.run(\"colorful socks\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Using SequentialChain to combine with multiple chains\n",
    "\n",
    "Sequential chains combine multiple chains: they take a list of chains and execute them in order, passing the output of the first chain to the second, and so on. Here is an example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "human_message_prompt = HumanMessagePromptTemplate(\n",
    "        prompt=PromptTemplate(\n",
    "            template=\"What is a good name for a company that makes {product}?\",\n",
    "            input_variables=[\"product\"],\n",
    "        )\n",
    "    )\n",
    "second_prompt = PromptTemplate(\n",
    "    input_variables=[\"company_name\"],\n",
    "    template=\"Write a catchphrase for the following company: {company_name}\",\n",
    ")\n",
    "chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
    "chat = ChatOpenAI(temperature=0.9)\n",
    "chain_one = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
    "chain_two = LLMChain(llm=llm, prompt=second_prompt)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Next, we combine the two LLMChains using SimpleSequentialChain, first naming the company and then writing a catchphrase for the newly named company."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
      "\u001b[36;1m\u001b[1;3mRainbow Sock Co.\u001b[0m\n",
      "\u001b[33;1m\u001b[1;3m\n",
      "\n",
      "\"Walk on the wild side with Rainbow Socks!\"\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\"Walk on the wild side with Rainbow Socks!\"\n"
     ]
    }
   ],
   "source": [
    "from langchain.chains import SimpleSequentialChain\n",
    "overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)\n",
    "catchphrase = overall_chain.run(\"colorful socks\")\n",
    "print(catchphrase)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Customize the Chain class\n",
    "Using already-packaged LangChains is certainly not a problem because they provide many pre-built chains that can be used out of the box. However, sometimes users may want to create their own custom classes for special purposes.\n",
    "\n",
    "The process of creating a custom class follows these steps:\n",
    "\n",
    "Inherit from the Chain class\n",
    "Fill in the input_keys and output_keys attributes\n",
    "Implement the private method _call() that demonstrates how to execute the chain\n",
    "Here is the method for creating a custom chain class."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Fistly, you need to define a ConcatenateChain class, which can input a query and return the results by two LLMChain simultaneously。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain.chains.base import Chain\n",
    "from typing import Dict, List\n",
    "\"\"\"\n",
    "The following code defines a class named ConcatenateChain that inherits from Chain. \n",
    "This class represents a chain that concatenates two other chains together.First, we\n",
    "import the LLMChain and Chain classes from the langchain.chains module, as well as the \n",
    "Dict and List type annotations from the typing module.\n",
    "\n",
    "In the ConcatenateChain class, there are two properties: chain_1 and chain_2, which \n",
    "represent the two chains to be connected. These properties are instances of the LLMChain type.\n",
    "\n",
    "The class also defines two methods: input_keys() and output_keys(). The input_keys() \n",
    "method returns a set of input variables after concatenation, which is the union of the \n",
    "input variables of the two original chains; while the output_keys() method returns a \n",
    "list of strings, indicating that there is only one output variable with the name \n",
    "concat_output after concatenation.\n",
    "\n",
    "Finally, the class implements a private method _call(), which is used to perform \n",
    "the actual concatenation operation. This method takes a dictionary-type parameter inputs, \n",
    "representing the input variables and their values.\n",
    "\n",
    "This method first calls the run() method of each chain to obtain their output results, \n",
    "then adds them together to get a new output result, which is stored in a dictionary and \n",
    "returned.\n",
    "\"\"\"\n",
    "\n",
    "class ConcatenateChain(Chain):\n",
    "    chain_1: LLMChain\n",
    "    chain_2: LLMChain\n",
    "\n",
    "    @property\n",
    "    def input_keys(self) -> List[str]:\n",
    "        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))\n",
    "        return list(all_input_vars)\n",
    "\n",
    "    @property\n",
    "    def output_keys(self) -> List[str]:\n",
    "        return ['concat_output']\n",
    "\n",
    "    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:\n",
    "        output_1 = self.chain_1.run(inputs)\n",
    "        output_2 = self.chain_2.run(inputs)\n",
    "        return {'concat_output': output_1 + output_2}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Concatenated output:\n",
      "\n",
      "\n",
      "Vivid Sockery.\n",
      "\n",
      "\"Step Into Colorful Comfort!\"\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "The prompt_1 variable is of type PromptTemplate, which represents a prompt template \n",
    "containing an input variable \"product\" with the template being \"What is a good name \n",
    "for a company that makes {product}?\".\n",
    "\n",
    "The chain_1 variable is of type LLMChain and represents an instance of LLMChain with \n",
    "the llm parameter set to the previously defined llm object and the prompt parameter \n",
    "set to prompt_1.\n",
    "\n",
    "The chain_2 variable is also of type LLMChain and represents another instance of LLMChain \n",
    "with the llm parameter set to the previously defined llm object and the prompt parameter \n",
    "set to prompt_2.\n",
    "\n",
    "The concat_chain variable is of type ConcatenateChain and represents an object that connects \n",
    "two chains. The chain_1 and chain_2 properties are respectively corresponding to the two \n",
    "chain instances defined above.\n",
    "\n",
    "The concat_output variable is of type string and represents the result obtained by calling \n",
    "the run() method on the concat_chain object. This result is obtained by concatenating the \n",
    "output results of chain_1 and chain_2.\n",
    "\n",
    "Finally, the connection result is displayed through printing.\n",
    "\"\"\"\n",
    "prompt_1 = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good name for a company that makes {product}?\",\n",
    ")\n",
    "chain_1 = LLMChain(llm=llm, prompt=prompt_1)\n",
    "\n",
    "prompt_2 = PromptTemplate(\n",
    "    input_variables=[\"product\"],\n",
    "    template=\"What is a good slogan for a company that makes {product}?\",\n",
    ")\n",
    "chain_2 = LLMChain(llm=llm, prompt=prompt_2)\n",
    "\n",
    "concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)\n",
    "concat_output = concat_chain.run(\"colorful socks\")\n",
    "print(f\"Concatenated output:\\n{concat_output}\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Agents\n",
    "In practical applications, not only pre-defined chains may be required, but also hidden chains can be generated based on user input requests. The agent acts as a proxy and determines the actions to be taken, tools to be used, LLM outputs, observation results, or returns LLM results to users based on user input.\n",
    "\n",
    "Before using agents, the following concepts need to be understood:\n",
    "\n",
    "Tools: Functions that implement specific functions; can be Google search, database lookup, or Python interaction interfaces. The interface of a tool is a function that takes a string and returns a string.\n",
    "LLM: The large language model that drives the agent.\n",
    "Agent: The agent used.\n",
    "\n",
    "The relationship between the three is as follows:\n",
    "\n",
    "<img src=\"./agents.png\" align=center width=100% />\n",
    "\n",
    "LangChain will drive the agent to process based on user requirements, during which the large model will be called to complete the request. This request can further drive tools to perform specific tasks according to specific tasks, and then return the specific operation results of the tools to users. In this process, if the user does not need to call tools to complete the processing, they will directly get feedback from the large model.\n",
    "\n",
    "For more detailed content, click 👉[here](https://python.langchain.com/en/latest/modules/agents.html)👈 to jump to the official Agents tutorial."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [The simple example of using Agent](#a-simple-example-with-using-agent)\n",
    "- [Tools](#tools)\n",
    "- [Agent](#agent)\n",
    "- [Toolkits](#toolkits)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### A simple example with using Agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.agents import load_tools\n",
    "from langchain.agents import initialize_agent\n",
    "from langchain.agents import AgentType\n",
    "from langchain.llms import OpenAI\n",
    "llm = OpenAI(temperature=0)\n",
    "tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
    "agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
      "Action: Search\n",
      "Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mNina Agdal: 2016 to 2017 ... Leo and Nina were together for almost exactly a year until a source confirmed their breakup with a very familiar ...\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I need to find out Nina Agdal's age\n",
      "Action: Search\n",
      "Action Input: \"Nina Agdal age\"\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m31 years\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I need to calculate 31 raised to the 0.43 power\n",
      "Action: Calculator\n",
      "Action Input: 31^0.43\u001b[0m\n",
      "Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.378098500976803\n",
      "\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: Nina Agdal is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 4.378098500976803.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\"Nina Agdal is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 4.378098500976803.\""
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Tools\n",
    "Tools are the way that agents interact with their external environment. Tools can be utilities, chains, other agents, etc.\n",
    "\n",
    "Most of LangChain's tools are related to search, and here are just a few examples of unique tools."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [Bing Search](#bing-search)\n",
    "- [Google Search](#google-search)\n",
    "- [Google Serper API](#google-serper-api)\n",
    "- [Python REPL](#python-repl)\n",
    "- [Bash](#bash)\n",
    "- [Wikipedia API](#wikipedia-api)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Tools are typically loaded in the following way, and for chains and agents that are used as tools, initialization is required."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import load_tools\n",
    "tool_names = [...]\n",
    "tools = load_tools(tool_names)\n",
    "llm = ...\n",
    "tools = load_tools(tool_names, llm=llm)"
   ]
  },
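  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As noted above, a tool's interface is simply a function that takes a string and returns a string. The following standalone sketch (plain Python, not LangChain itself; the function and registry names here are illustrative assumptions) mimics how an agent dispatches to a named tool:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A tool is just a string-in, string-out function.\n",
    "def reverse_text(query: str) -> str:\n",
    "    return query[::-1]\n",
    "\n",
    "def word_count(query: str) -> str:\n",
    "    return str(len(query.split()))\n",
    "\n",
    "# A registry mapping tool names to their functions,\n",
    "# mimicking how an agent looks up a tool by name.\n",
    "TOOLS = {'reverse': reverse_text, 'word-count': word_count}\n",
    "\n",
    "def dispatch(tool_name: str, tool_input: str) -> str:\n",
    "    return TOOLS[tool_name](tool_input)\n",
    "\n",
    "print(dispatch('word-count', 'colorful socks'))  # prints 2"
   ]
  },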
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Bing Search"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following example demonstrates how to use the Bing Web Search API component:\n",
    "\n",
    "First, you need to set appropriate API keys and environment variables. Click [here](https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e) to read the instructions and complete the configuration.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ[\"BING_SUBSCRIPTION_KEY\"] = \"\"\n",
    "os.environ[\"BING_SEARCH_URL\"] = \"\"\n",
    "\n",
    "from langchain.utilities import BingSearchAPIWrapper\n",
    "search = BingSearchAPIWrapper()\n",
    "search.run(\"python\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "search = BingSearchAPIWrapper(k=1)\n",
    "search.run(\"python\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Running the code below will return three items: snippet, title, and link.\n",
    "\n",
    "Snippet: Description of the result.\n",
    "\n",
    "Title: Title of the result.\n",
    "\n",
    "Link: Link to the result."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "search = BingSearchAPIWrapper()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "search.results(\"apples\", 5)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Google Search"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following example demonstrates how to use the Google Search component. You need to set appropriate API keys and environment variables. To set them up, create a GOOGLE_API_KEY in this [console](https://console.cloud.google.com/apis/credentials) and a GOOGLE_CSE_ID [here](https://programmablesearchengine.google.com/)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ[\"GOOGLE_CSE_ID\"] = \"\"\n",
    "os.environ[\"GOOGLE_API_KEY\"] = \"\"\n",
    "from langchain.utilities import GoogleSearchAPIWrapper\n",
    "search = GoogleSearchAPIWrapper()\n",
    "search.run(\"Obama's first name?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "First, create a GoogleSearchAPIWrapper object search and pass in the parameter k=1 to \n",
    "return only the first result in the search results.\n",
    "Then call the run() method of the search object and pass in the parameter \"python\" to \n",
    "search for information about Python on the Google search engine. Next, create another \n",
    "GoogleSearchAPIWrapper object search and pass in an empty value to clear the previous \n",
    "search results. Finally, call the results() method of the search object and pass in the \n",
    "parameters \"apples\" and 5 to search for information about apples on the Google search \n",
    "engine and return the top 5 results.\n",
    "\"\"\"\n",
    "search = GoogleSearchAPIWrapper(k=1)\n",
    "search.run(\"python\")\n",
    "search = GoogleSearchAPIWrapper()\n",
    "search.results(\"apples\", 5)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Google Serper API"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to better use the Google Web Search API, you need to register a free account to obtain an API_KEY. The registration location can be found by clicking [here](https://serper.dev/). Below is an example code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ[\"SERPER_API_KEY\"] = \"\"\n",
    "\n",
    "from langchain.utilities import GoogleSerperAPIWrapper\n",
    "search = GoogleSerperAPIWrapper()\n",
    "search.run(\"Obama's first name?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "os.environ['OPENAI_API_KEY'] = \"YOUR_OPENAI_KEY\"\n",
    "from langchain.utilities import GoogleSerperAPIWrapper\n",
    "from langchain.llms.openai import OpenAI\n",
    "from langchain.agents import initialize_agent, Tool\n",
    "from langchain.agents import AgentType\n",
    "\n",
    "\"\"\"\n",
    "First, create an OpenAI LLM model with a temperature parameter set to 0.\n",
    "Then create a GoogleSearchAPIWrapper object search for conducting searches on the Google \n",
    "search engine. Next, define a list of tools named \"Intermediate Answer\" which includes a \n",
    "function that calls the run() method of the search object and returns the search results \n",
    "as an answer. Afterwards, initialize an agent called self_ask_with_search that uses the \n",
    "tools in the list to interact with the LLM model and supports asking questions using the \"self ask with search\" method.\n",
    "Finally, call the run() method of the self_ask_with_search object and pass in the question \n",
    "to obtain the corresponding answer.\n",
    "\"\"\"\n",
    "llm = OpenAI(temperature=0)\n",
    "search = GoogleSerperAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name=\"Intermediate Answer\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to ask with search\"\n",
    "    )\n",
    "]\n",
    "\n",
    "self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)\n",
    "self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "#### Python REPL"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'27\\n'"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\"\"\"\n",
    "First, import the PythonREPL class, which is used to execute interactive code in the Python \n",
    "interpreter. Next, create a PythonREPL object called python_repl for executing code in the \n",
    "Python interpreter. Finally, call the run() method of the python_repl object and pass in a \n",
    "simple Python statement \"print(3**3)\" to print the value of the number 27.\n",
    "\"\"\"\n",
    "from langchain.utilities import PythonREPL\n",
    "python_repl = PythonREPL()\n",
    "python_repl.run(\"print(3**3)\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "#### Bash"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\"My name is name\" \r\n",
      "\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "First, import the BashProcess class, which is used to execute command line operations in the Linux system.\n",
    "Next, create a BashProcess object called bash for executing command line operations in the Linux system.\n",
    "Finally, call the run() method of the bash object and pass in a simple command \"echo 'My name is name'\" \n",
    "to print the string \"My name is name\" on the terminal.\n",
    "\"\"\"\n",
    "from langchain.utilities import BashProcess\n",
    "bash = BashProcess()\n",
    "print(bash.run(' echo \"My name is name\" '))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Wikipedia API"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is an example of how to use Wikipedia:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: wikipedia in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (1.4.0)\n",
      "Requirement already satisfied: requests<3.0.0,>=2.0.0 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from wikipedia) (2.28.1)\n",
      "Requirement already satisfied: beautifulsoup4 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from wikipedia) (4.12.0)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from requests<3.0.0,>=2.0.0->wikipedia) (3.4)\n",
      "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from requests<3.0.0,>=2.0.0->wikipedia) (1.26.13)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2022.12.7)\n",
      "Requirement already satisfied: charset-normalizer<3,>=2 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2.1.1)\n",
      "Requirement already satisfied: soupsieve>1.2 in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (from beautifulsoup4->wikipedia) (2.4)\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install wikipedia"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.utilities import WikipediaAPIWrapper"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "wikipedia = WikipediaAPIWrapper()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Page: Hunter × Hunter\\nSummary: Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. 
Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\nPage: Hunter × Hunter (2011 TV series)\\nSummary: Hunter × Hunter is an anime television series that aired from 2011 to 2014 based on Yoshihiro Togashi\\'s manga series Hunter × Hunter. The story begins with a young boy named Gon Freecss, who one day discovers that the father who he thought was dead, is in fact alive and well. He learns that his father, Ging, is a legendary \"Hunter\", an individual who has proven themselves an elite member of humanity. Despite the fact that Ging left his son with his relatives in order to pursue his own dreams, Gon becomes determined to follow in his father\\'s footsteps, pass the rigorous \"Hunter Examination\", and eventually find his father to become a Hunter in his own right.\\nThis new Hunter × Hunter anime was announced on July 24, 2011. It is a complete reboot of the anime adaptation starting from the beginning of the manga, with no connections to the first anime from 1999. Produced by Nippon TV, VAP, Shueisha and Madhouse, the series is directed by Hiroshi Kōjina, with Atsushi Maekawa and Tsutomu Kamishiro handling series composition, Takahiro Yoshimatsu designing the characters and Yoshihisa Hirano composing the music. Instead of having the old cast reprise their roles for the new adaptation, the series features an entirely new cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide Nippon News Network from October 2, 2011.  The series started to be collected in both DVD and Blu-ray format on January 25, 2012. 
Viz Media has licensed the anime for a DVD/Blu-ray release in North America with an English dub. On television, the series began airing on Adult Swim\\'s Toonami programming block on April 17, 2016, and ended on June 23, 2019.The anime series\\' opening theme is alternated between the song \"Departure!\" and an alternate version titled \"Departure! -Second Version-\" both sung by Galneryus\\' vocalist Masatoshi Ono. Five pieces of music were used as the ending theme; \"Just Awake\" by the Japanese band Fear, and Loathing in Las Vegas in episodes 1 to 26, \"Hunting for Your Dream\" by Galneryus in episodes 27 to 58, \"Reason\" sung by Japanese duo Yuzu in episodes 59 to 75, \"Nagareboshi Kirari\" also sung by Yuzu from episode 76 to 98, which was originally from the anime film adaptation, Hunter × Hunter: Phantom Rouge, and \"Hyōri Ittai\" by Yuzu featuring Hyadain from episode 99 to 146, which was also used in the film Hunter × Hunter: The Last Mission. The background music and soundtrack for the series was composed by Yoshihisa Hirano.\\n\\n\\n\\nPage: List of Hunter × Hunter characters\\nSummary: The Hunter × Hunter manga series, created by Yoshihiro Togashi, features an extensive cast of characters. It takes place in a fictional universe where licensed specialists known as Hunters travel the world taking on special jobs ranging from treasure hunting to assassination. The story initially focuses on Gon Freecss and his quest to become a Hunter in order to find his father, Ging, who is himself a famous Hunter. On the way, Gon meets and becomes close friends with Killua Zoldyck, Kurapika and Leorio Paradinight.\\nAlthough most characters are human, most possess superhuman strength and/or supernatural abilities due to Nen, the ability to control one\\'s own life energy or aura. The world of the series also includes fantastical beasts such as the Chimera Ants or the Five great calamities.'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wikipedia.run('HUNTER X HUNTER')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Agent"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now move on to discuss the agent-related content. The agent uses the LLM to determine what actions to take and in what order. An action can be either using tools and observing their output or returning a response to the user."
   ]
  },
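  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decision loop just described can be sketched in plain Python (illustrative only; in a real LangChain agent the decide step below is delegated to the LLM, and the tools are real ones):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of an agent loop: decide on an action, run the tool,\n",
    "# observe the result, and repeat until a final answer can be returned.\n",
    "def decide(question, observations):\n",
    "    # Stand-in for the LLM: search first, then answer from the observation.\n",
    "    if not observations:\n",
    "        return ('tool', 'search', question)\n",
    "    return ('finish', 'Answer based on: ' + observations[-1])\n",
    "\n",
    "def fake_search(query):\n",
    "    return 'search results for ' + query\n",
    "\n",
    "TOOLS = {'search': fake_search}\n",
    "\n",
    "def run_agent(question):\n",
    "    observations = []\n",
    "    while True:\n",
    "        action = decide(question, observations)\n",
    "        if action[0] == 'finish':\n",
    "            return action[1]\n",
    "        _, tool_name, tool_input = action\n",
    "        observations.append(TOOLS[tool_name](tool_input))\n",
    "\n",
    "print(run_agent('What is LangChain?'))"
   ]
  },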
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [Type](#type)\n",
    "- [Customize the Agent](#custom-agent)\n",
    "- [Customize LLMs of Agent](#custom-llm-agent)\n",
    "- [Self Ask With Search](#self-ask-with-search)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Type"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a professional translator, my goal is to translate Chinese into English. Now, please translate the following passage: [The following are the Agent types available in LangChain:\n",
    "\n",
    "We use the ReAct framework for this.\n",
    "\n",
    "**zero-shot-react-description**: This Agent determines which tools to use based solely on their descriptions. It can provide any number of tools, and requires descriptions for each tool.\n",
    "\n",
    "**react-docstore**: This Agent uses ReAct to interact with a document library. It must provide two tools: a search tool and a find tool (with the same name). The search tool searches for documents, while the find tool looks up terms in the most recently found document. This Agent is equivalent to the original ReAct paper, particularly the Wikipedia example.\n",
    "\n",
    "**self-ask-with-search**: This Agent uses an Intermediate Answer tool that should be able to find factual answers to questions. This Agent is equivalent to the original self ask with search paper, which provided a Google Search API as the tool.\n",
    "\n",
    "**conversational-react-description**: This Agent is designed for conversational settings. Its prompts are designed to help the agent provide assistance and facilitate conversation. It uses ReAct to decide which tool to use and uses memory to remember previous interactions in a conversation."
   ]
  },
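  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One practical difference among these types is the conversational agent's use of memory. The sketch below (plain Python, not LangChain's implementation; the class names are illustrative) contrasts a stateless agent with one that remembers previous turns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: a zero-shot agent is stateless, while a conversational agent\n",
    "# keeps a memory of previous interactions and can refer back to them.\n",
    "class ZeroShotAgentSketch:\n",
    "    def run(self, query):\n",
    "        # Each call stands alone; only tool descriptions guide the choice.\n",
    "        return 'fresh answer to: ' + query\n",
    "\n",
    "class ConversationalAgentSketch:\n",
    "    def __init__(self):\n",
    "        self.memory = []  # previous interactions\n",
    "    def run(self, query):\n",
    "        self.memory.append(query)\n",
    "        return 'answer to: ' + query + ' (context: ' + str(len(self.memory)) + ' turns)'\n",
    "\n",
    "agent = ConversationalAgentSketch()\n",
    "agent.run('hi')\n",
    "print(agent.run('what did I just say?'))"
   ]
  },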
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Custom Agent\n",
    "\n",
    "How to construct Agent："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import Tool,AgentExecutor\n",
    "from langchain.agents import BaseSingleActionAgent\n",
    "from langchain import OpenAI, SerpAPIWrapper\n",
    "from typing import List, Tuple, Any, Union\n",
    "from langchain.schema import AgentAction, AgentFinish\n",
    "\"\"\"\n",
    "First, create a SerpAPIWrapper object named search. Then, define a list called tools \n",
    "which contains a Tool object named \"Search\" that can be called to search for relevant \n",
    "information. Next, define a class called FakeAgent that inherits from BaseSingleActionAgent. \n",
    "The purpose of this class is to simulate a fake intelligent agent with an input_keys \n",
    "attribute that receives user input. In the plan() method, based on the current state \n",
    "and user input, the agent decides what action to take and returns an AgentAction or \n",
    "AgentFinish object. Finally, create a FakeAgent object named agent and create an AgentExecutor \n",
    "object using AgentExecutor.from_agent_and_tools(). Then call the run() method and pass in \n",
    "a query statement \"How many people live in canada as of 2023?\", which will be executed by \n",
    "the agent.\n",
    "\"\"\"\n",
    "search = SerpAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name = \"Search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events\",\n",
    "        return_direct=True\n",
    "    )\n",
    "]\n",
    "class FakeAgent(BaseSingleActionAgent):\n",
    "    @property\n",
    "    def input_keys(self):\n",
    "        return [\"input\"]\n",
    "    \n",
    "    def plan(\n",
    "        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
    "    ) -> Union[AgentAction, AgentFinish]:\n",
    "        return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")\n",
    "\n",
    "    async def aplan(\n",
    "        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
    "    ) -> Union[AgentAction, AgentFinish]:\n",
    "        return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")\n",
    "\n",
    "agent = FakeAgent()\n",
    "agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)\n",
    "agent_executor.run(\"How many people live in canada as of 2023?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Custom LLM Agent"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The LLM agent consists of several components:\n",
    "\n",
    "- PromptTemplate: This is the prompt template that instructs the language model what to do.\n",
    "\n",
    "- LLM: This is the language model that provides support for the agent.\n",
    "\n",
    "- StopSequence: This tells the LLM to stop generating output as soon as it produces this string.\n",
    "\n",
    "- OutputParser: This determines how the LLM output is parsed into an AgentAction or AgentFinish object.\n",
    "\n",
    "The LLM agent is used in an AgentExecutor, which can be largely considered a loop:\n",
    "\n",
    "- Pass user input and any previous steps to the agent (in this case, the LLMAgent)\n",
    "\n",
    "- If the agent returns an AgentFinish, return it directly to the user.\n",
    "\n",
    "- If the agent returns an AgentAction, use it to call a tool and obtain an observation.\n",
    "\n",
    "- Repeat until an AgentFinish is issued by the agent.\n",
    "\n",
    "An AgentAction is a response consisting of an action and an action_input: action refers to the tool to use, and action_input refers to the input to that tool. A log can also be provided for additional context (useful for logging, tracing, etc.).\n",
    "\n",
    "An AgentFinish is a response that contains the final message to send back to the user. It should be used to end the agent's operation."
   ]
  },
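  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The executor loop described above can be sketched in plain Python. This is a minimal, stdlib-only sketch with stand-in action/finish classes and an invented agent and tool; the real AgentExecutor adds prompt handling, callbacks, and error handling:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from dataclasses import dataclass\n",
    "\n",
    "# Stand-ins for langchain.schema.AgentAction / AgentFinish\n",
    "@dataclass\n",
    "class FakeAction:\n",
    "    tool: str\n",
    "    tool_input: str\n",
    "    log: str = \"\"\n",
    "\n",
    "@dataclass\n",
    "class FakeFinish:\n",
    "    return_values: dict\n",
    "    log: str = \"\"\n",
    "\n",
    "def run_executor(plan, tools, user_input, max_iterations=5):\n",
    "    \"\"\"Minimal executor loop: plan -> act -> observe, until a finish.\"\"\"\n",
    "    intermediate_steps = []\n",
    "    for _ in range(max_iterations):\n",
    "        decision = plan(intermediate_steps, input=user_input)\n",
    "        if isinstance(decision, FakeFinish):\n",
    "            return decision.return_values[\"output\"]\n",
    "        observation = tools[decision.tool](decision.tool_input)\n",
    "        intermediate_steps.append((decision, observation))\n",
    "    raise RuntimeError(\"Agent did not finish\")\n",
    "\n",
    "# An invented agent: search once, then finish with the last observation.\n",
    "def fake_plan(intermediate_steps, **kwargs):\n",
    "    if not intermediate_steps:\n",
    "        return FakeAction(tool=\"Search\", tool_input=kwargs[\"input\"])\n",
    "    return FakeFinish(return_values={\"output\": intermediate_steps[-1][1]})\n",
    "\n",
    "tools = {\"Search\": lambda q: f\"(fake search result for: {q})\"}\n",
    "run_executor(fake_plan, tools, \"How many people live in canada as of 2023?\")"
   ]
  },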
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
    "from langchain.prompts import StringPromptTemplate\n",
    "from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
    "from typing import List, Union\n",
    "from langchain.schema import AgentAction, AgentFinish\n",
    "import re\n",
    "search = SerpAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name = \"Search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events\"\n",
    "    )\n",
    "]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This part instructs the agent what to do. Generally, the template should include:\n",
    "\n",
    "- tools: The agent can choose which tools to access and how and when to call them.\n",
    "\n",
    "- intermediate_steps: These are tuple pairs corresponding to previous (AgentAction, Observation). These are generally not directly passed to the model, but are formatted by the prompt template in a specific way before being sent to the LLM.\n",
    "\n",
    "- input: The general user input content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\\n\\n{tools}\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\\n\\nQuestion: {input}\\n{agent_scratchpad}'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
    "\n",
    "{tools}\n",
    "\n",
    "Use the following format:\n",
    "\n",
    "Question: the input question you must answer\n",
    "Thought: you should always think about what to do\n",
    "Action: the action to take, should be one of [{tool_names}]\n",
    "Action Input: the input to the action\n",
    "Observation: the result of the action\n",
    "... (this Thought/Action/Action Input/Observation can repeat N times)\n",
    "Thought: I now know the final answer\n",
    "Final Answer: the final answer to the original input question\n",
    "\n",
    "Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
    "\n",
    "Question: {input}\n",
    "{agent_scratchpad}\"\"\"\n",
    "template"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a class called CustomPromptTemplate that inherits from StringPromptTemplate.\n",
    "\n",
    "template is a string property that represents the template of the prompt message;\n",
    "\n",
    "tools is a list property that represents the list of available tools;\n",
    "\n",
    "format is a method that takes in parameters and formats them into a prompt message. In the \n",
    "format method, first pop a keyword parameter named intermediate_steps from the passed-in \n",
    "parameters. Then, iterate through this intermediate step list, concatenating the operation \n",
    "log and observation results of each step into a string thoughts. Next, store thoughts in \n",
    "agent_scratchpad and add the available tool information to tools and tool_names. Finally, \n",
    "call the parent class's format method and return the formatted prompt message.\n",
    "\"\"\"\n",
    "class CustomPromptTemplate(StringPromptTemplate):\n",
    "    template: str\n",
    "    tools: List[Tool]\n",
    "    def format(self, **kwargs) -> str:\n",
    "        intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
    "        thoughts = \"\"\n",
    "        for action, observation in intermediate_steps:\n",
    "            thoughts += action.log\n",
    "            thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
    "        kwargs[\"agent_scratchpad\"] = thoughts\n",
    "        kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
    "        kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
    "        return self.template.format(**kwargs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a variable named prompt, of type CustomPromptTemplate. \n",
    "It is initialized by calling the CustomPromptTemplate constructor, \n",
    "passing in three parameters: template, tools, and input_variables. The template parameter \n",
    "is of string type and represents the template for the prompt message. The tools parameter \n",
    "is of list type and represents the list of available tools. The input_variables parameter \n",
    "is also of list type and represents the list of names for input variables. In this case, \n",
    "the names of the input variables are input and intermediate_steps.\n",
    "\"\"\"\n",
    "prompt = CustomPromptTemplate(\n",
    "    template=template,\n",
    "    tools=tools,\n",
    "    input_variables=[\"input\", \"intermediate_steps\"]\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "OutputParser is responsible for parsing the output of an LLM into AgentAction and AgentFinish. This typically depends largely on the prompts used.\n",
    "\n",
    "At this point, you can modify the parsing to handle retries, blanks, or other scenarios.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "A variable named output_parser of type CustomOutputParser is defined. This variable is \n",
    "created by instantiating CustomOutputParser, a subclass of AgentOutputParser.\n",
    "\n",
    "Within the CustomOutputParser class, a method named parse is defined to parse the LLM output. \n",
    "This method takes a string parameter llm_output and returns an object of type AgentAction or \n",
    "AgentFinish.\n",
    "\n",
    "In the parse method, the input LLM output is first checked for the presence of the string \n",
    "\"Final Answer:\". If this string is found, it indicates that the task has been completed and \n",
    "an AgentFinish object is returned. Otherwise, regular expressions are used to extract action \n",
    "and input information from the LLM output. If either action or input information cannot be \n",
    "extracted, an exception is thrown.\n",
    "\n",
    "Finally, an AgentAction object is created based on the extracted information and returned.\n",
    "\"\"\"\n",
    "class CustomOutputParser(AgentOutputParser):\n",
    "    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
    "        if \"Final Answer:\" in llm_output:\n",
    "            return AgentFinish(\n",
    "                return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
    "                log=llm_output,\n",
    "            )\n",
    "        regex = r\"Action: (.*?)[\\n]*Action Input:[\\s]*(.*)\"\n",
    "        match = re.search(regex, llm_output, re.DOTALL)\n",
    "        if not match:\n",
    "            raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
    "        action = match.group(1).strip()\n",
    "        action_input = match.group(2)\n",
    "        return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)\n",
    "\n",
    "output_parser = CustomOutputParser()"
   ]
  },
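  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the regex in CustomOutputParser extracts, here is a stdlib-only check on an invented ReAct-style LLM output (the sample text is made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Same regex as in CustomOutputParser, applied to an invented sample output.\n",
    "regex = r\"Action: (.*?)[\\n]*Action Input:[\\s]*(.*)\"\n",
    "sample = \"Thought: I should search for this\\nAction: Search\\nAction Input: population of Canada 2023\"\n",
    "match = re.search(regex, sample, re.DOTALL)\n",
    "match.group(1).strip(), match.group(2).strip()"
   ]
  },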
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(temperature=0)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define a StopSequence, which is important because it informs the LLM when to stop generating.\n",
    "\n",
    "This largely depends on the prompts and model you are using. Typically, you want this to be any token in your prompt that represents the start of an Observation (otherwise, the LLM may generate a virtual Observation for you).\n",
    "\n"
   ]
  },
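  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The effect of a stop sequence can be shown with a stdlib-only sketch. The generated text below is invented; in practice the model provider applies the cut during generation, before any hallucinated Observation reaches you:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_stop(text: str, stop: list) -> str:\n",
    "    \"\"\"Truncate generated text at the first stop sequence found.\"\"\"\n",
    "    for s in stop:\n",
    "        idx = text.find(s)\n",
    "        if idx != -1:\n",
    "            text = text[:idx]\n",
    "    return text\n",
    "\n",
    "# Without the stop, the model might invent its own Observation line.\n",
    "generated = (\n",
    "    \"Thought: I should look this up\\n\"\n",
    "    \"Action: Search\\n\"\n",
    "    \"Action Input: population of Canada\\n\"\n",
    "    \"Observation: 40 million (hallucinated!)\"\n",
    ")\n",
    "print(apply_stop(generated, stop=[\"\\nObservation:\"]))"
   ]
  },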
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
    "tool_names = [tool.name for tool in tools]\n",
    "\n",
    "\"\"\"\n",
    "The following parameters are passed in: llm_chain, an instance of the LLMChain \n",
    "class representing the LLM chain; output_parser, an instance of the CustomOutputParser \n",
    "class representing the class for parsing the LLM output; and allowed_tools, a list \n",
    "representing the list of available tools.\n",
    "\"\"\"\n",
    "agent = LLMSingleActionAgent(\n",
    "    llm_chain=llm_chain, \n",
    "    output_parser=output_parser,\n",
    "    stop=[\"\\nObservation:\"], \n",
    "    allowed_tools=tool_names\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "A variable named agent_executor of type AgentExecutor is defined. It is created \n",
    "by calling the static method AgentExecutor.from_agent_and_tools().\n",
    "\n",
    "Three parameters are passed in: agent, an LLMSingleActionAgent object \n",
    "representing the LLM agent; tools, a list of the available tools; and verbose, \n",
    "a boolean value indicating whether detailed information should be output.\n",
    "\n",
    "The method returns a new AgentExecutor object for running the agent loop.\n",
    "\"\"\"\n",
    "agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"How many people live in canada as of 2023?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Self Ask With Search"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain import OpenAI, SerpAPIWrapper\n",
    "from langchain.agents import initialize_agent, Tool\n",
    "from langchain.agents import AgentType\n",
    "\"\"\"\n",
    "This code imports the OpenAI and SerpAPIWrapper classes, along with \n",
    "initialize_agent, Tool, and AgentType.\n",
    "\n",
    "Then, an OpenAI object named llm is defined for executing LLM tasks, and a SerpAPIWrapper \n",
    "object named search is defined for searching for answers.\n",
    "\n",
    "Next, a list named tools is defined, which contains a Tool named \"Intermediate \n",
    "Answer\". Its func property specifies that the search.run() method is used \n",
    "to search for answers.\n",
    "\"\"\"\n",
    "llm = OpenAI(temperature=0)\n",
    "search = SerpAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name=\"Intermediate Answer\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to ask with search\"\n",
    "    )\n",
    "]\n",
    "self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)\n",
    "self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Toolkits\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [CSV Agent](#csv-agent)\n",
    "- [Python Agent](#python-agent)\n",
    "- [Vectorstore Agent](#vectorstore-agent)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "#### CSV Agent\n",
    "Answer user questions based on the content of the given csv file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.agents import create_csv_agent\n",
    "from langchain.llms import OpenAI\n",
    "agent = create_csv_agent(OpenAI(temperature=0), 'titanic.csv', verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to count the number of rows\n",
      "Action: python_repl_ast\n",
      "Action Input: len(df)\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m891\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: There are 891 rows in the dataframe.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'There are 891 rows in the dataframe.'"
      ]
     },
     "execution_count": 58,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"how many rows are there?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to count the number of people with more than 3 siblings\n",
      "Action: python_repl_ast\n",
      "Action Input: df[df['SibSp'] > 3].shape[0]\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m30\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: 30 people have more than 3 siblings.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'30 people have more than 3 siblings.'"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"how many people have more than 3 siblings\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to calculate the average age first\n",
      "Action: python_repl_ast\n",
      "Action Input: df['Age'].mean()\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m29.69911764705882\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now need to calculate the square root of this\n",
      "Action: python_repl_ast\n",
      "Action Input: math.sqrt(df['Age'].mean())\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mname 'math' is not defined\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I need to import the math library\n",
      "Action: python_repl_ast\n",
      "Action Input: import math\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now need to calculate the square root of the average age\n",
      "Action: python_repl_ast\n",
      "Action Input: math.sqrt(df['Age'].mean())\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m5.449689683556195\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: 5.449689683556195\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'5.449689683556195'"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent.run(\"whats the square root of the average age?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "#### Python Agent\n",
    "This agent is used to generate or execute a piece of Python code based on user requirements."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from langchain.agents.agent_toolkits import create_python_agent\n",
    "from langchain.tools.python.tool import PythonREPLTool\n",
    "from langchain.python import PythonREPL\n",
    "from langchain.llms.openai import OpenAI\n",
    "\"\"\"\n",
    "This code defines a variable named agent_executor of type AgentExecutor. \n",
    "The variable is initialized by calling the create_python_agent function.\n",
    "\n",
    "Three parameters are passed in: llm, an OpenAI object used for performing \n",
    "LLM tasks; tool, a PythonREPLTool object providing an interactive Python \n",
    "interpreter; and verbose, a boolean value indicating whether detailed \n",
    "information should be output.\n",
    "\n",
    "create_python_agent returns a new AgentExecutor object which can be used to \n",
    "perform LLM tasks with access to the Python REPL.\n",
    "\"\"\"\n",
    "agent_executor = create_python_agent(\n",
    "    llm=OpenAI(temperature=0, max_tokens=1000),\n",
    "    tool=PythonREPLTool(),\n",
    "    verbose=True\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Compute the 10th Fibonacci number"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m I need to calculate the 10th fibonacci number\n",
      "Action: Python REPL\n",
      "Action Input: def fibonacci(n):\n",
      "    if n == 0:\n",
      "        return 0\n",
      "    elif n == 1:\n",
      "        return 1\n",
      "    else:\n",
      "        return fibonacci(n-1) + fibonacci(n-2)\n",
      "\n",
      "print(fibonacci(10))\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3m55\n",
      "\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: 55\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'55'"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.run(\"What is the 10th fibonacci number?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "Train a simple single-neuron network"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m I need to write a neural network in PyTorch and train it on the given data.\n",
      "Action: Python REPL\n",
      "Action Input: \n",
      "import torch\n",
      "\n",
      "# Define the model\n",
      "model = torch.nn.Sequential(\n",
      "    torch.nn.Linear(1, 1)\n",
      ")\n",
      "\n",
      "# Define the loss\n",
      "loss_fn = torch.nn.MSELoss()\n",
      "\n",
      "# Define the optimizer\n",
      "optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n",
      "\n",
      "# Define the data\n",
      "x_data = torch.tensor([[1.0], [2.0], [3.0], [4.0]])\n",
      "y_data = torch.tensor([[2.0], [4.0], [6.0], [8.0]])\n",
      "\n",
      "# Train the model\n",
      "for epoch in range(1000):\n",
      "    # Forward pass\n",
      "    y_pred = model(x_data)\n",
      "\n",
      "    # Compute and print loss\n",
      "    loss = loss_fn(y_pred, y_data)\n",
      "    if (epoch+1) % 100 == 0:\n",
      "        print(f'Epoch {epoch+1}: loss = {loss.item():.4f}')\n",
      "\n",
      "    # Zero the gradients\n",
      "    optimizer.zero_grad()\n",
      "\n",
      "    # Backward pass\n",
      "    loss.backward()\n",
      "\n",
      "    # Update the weights\n",
      "    optimizer.step()\n",
      "\n",
      "# Make a prediction\n",
      "x_pred = torch.tensor([[5.0]])\n",
      "y_pred = model(x_pred)\n",
      "\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mEpoch 100: loss = 0.2032\n",
      "Epoch 200: loss = 0.1116\n",
      "Epoch 300: loss = 0.0613\n",
      "Epoch 400: loss = 0.0336\n",
      "Epoch 500: loss = 0.0185\n",
      "Epoch 600: loss = 0.0101\n",
      "Epoch 700: loss = 0.0056\n",
      "Epoch 800: loss = 0.0031\n",
      "Epoch 900: loss = 0.0017\n",
      "Epoch 1000: loss = 0.0009\n",
      "\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: The prediction for x = 5 is y = 10.00.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'The prediction for x = 5 is y = 10.00.'"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.run(\"\"\"Understand, write a single neuron neural network in PyTorch.\n",
    "Take synthetic data for y=2x. Train for 1000 epochs and print every 100 epochs.\n",
    "Return prediction for x = 5\"\"\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Vectorstore Agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings.openai import OpenAIEmbeddings\n",
    "from langchain.vectorstores import Chroma\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "from langchain import OpenAI, VectorDBQA\n",
    "llm = OpenAI(temperature=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders import TextLoader\n",
    "loader = TextLoader('../../../state_of_the_union.txt')\n",
    "documents = loader.load()\n",
    "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
    "texts = text_splitter.split_documents(documents)\n",
    "embeddings = OpenAIEmbeddings()\n",
    "\"\"\"\n",
    "The following code builds a vector store from the documents using the Chroma \n",
    "library and stores it in an object named state_of_union_store. Specifically, \n",
    "it performs the following tasks:\n",
    "Takes the list of text chunks named texts.\n",
    "Computes an embedding vector for each chunk; these embedding vectors are \n",
    "generated via the OpenAIEmbeddings object passed to Chroma.\n",
    "Indexes all chunks and their embeddings into a single collection using the \n",
    "Chroma.from_documents() method. This method takes three parameters: texts, \n",
    "embeddings, and collection_name, which represent the list of chunks to index, \n",
    "the embedding function, and the name of the collection, respectively.\n",
    "Finally, this code keeps the resulting vector store in the \n",
    "state_of_union_store object for future use.\n",
    "\"\"\"\n",
    "state_of_union_store = Chroma.from_documents(texts, embeddings, collection_name=\"state-of-union\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders import WebBaseLoader\n",
    "loader = WebBaseLoader(\"https://beta.ruff.rs/docs/faq/\")\n",
    "docs = loader.load()\n",
    "ruff_texts = text_splitter.split_documents(docs)\n",
    "\"\"\"\n",
    "The Chroma.from_documents() method is used to build a second vector store \n",
    "from all the text chunks, which is then stored in an object named ruff_store. \n",
    "The method takes three parameters: ruff_texts, embeddings, and collection_name, \n",
    "which represent the list of text chunks, the embedding function, and the name \n",
    "of the collection, respectively. In this example, the collection is named \"ruff\".\n",
    "\"\"\"\n",
    "ruff_store = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents.agent_toolkits import (\n",
    "    create_vectorstore_agent,\n",
    "    VectorStoreToolkit,\n",
    "    VectorStoreInfo,\n",
    ")\n",
    "vectorstore_info = VectorStoreInfo(\n",
    "    name=\"state_of_union_address\",\n",
    "    description=\"the most recent state of the Union address\",\n",
    "    vectorstore=state_of_union_store\n",
    ")\n",
    "toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)\n",
    "\"\"\"\n",
    "The create_vectorstore_agent() method is used to create an agent. \n",
    "The method takes three parameters: llm, which represents the language model; \n",
    "toolkit, which represents the VectorStoreToolkit object; and verbose, which \n",
    "indicates whether detailed output mode should be enabled. In this example, \n",
    "the agent will use llm as its language model and toolkit as its toolkit. \n",
    "An executor for the agent is returned at the end, which can be used to \n",
    "answer questions against the vector store.\n",
    "\"\"\"\n",
    "agent_executor = create_vectorstore_agent(\n",
    "    llm=llm,\n",
    "    toolkit=toolkit,\n",
    "    verbose=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"What did biden say about ketanji brown jackson in the state of the union address?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"What did biden say about ketanji brown jackson in the state of the union address? List the source.\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Multiple Vectorstores\n",
    "\n",
    "With multiple vector stores, we can easily initialize a proxy with multiple vector storage and connect them using the agent. To do this, the agent is optimized for connecting to one another, so it uses a different toolkit and initializer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents.agent_toolkits import (\n",
    "    create_vectorstore_router_agent,\n",
    "    VectorStoreRouterToolkit,\n",
    "    VectorStoreInfo,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ruff_vectorstore_info = VectorStoreInfo(\n",
    "    name=\"ruff\",\n",
    "    description=\"Information about the Ruff python linting library\",\n",
    "    vectorstore=ruff_store\n",
    ")\n",
    "router_toolkit = VectorStoreRouterToolkit(\n",
    "    vectorstores=[vectorstore_info, ruff_vectorstore_info],\n",
    "    llm=llm\n",
    ")\n",
    "agent_executor = create_vectorstore_router_agent(\n",
    "    llm=llm,\n",
    "    toolkit=router_toolkit,\n",
    "    verbose=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"What did biden say about ketanji brown jackson is the state of the union address?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"What tool does ruff use to run over Jupyter Notebooks?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_executor.run(\"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Coding Exampls\n",
    "\n",
    "- [Question answering in documents](#question-answering-in-documents)\n",
    "- [BabyAGI with Tools](#babyagi-with-tools)\n",
    "- [Auto-GPT Assistant](#auto-gpt-assistant)\n",
    "\n",
    "### Question answering in documents"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Question-answering can be performed using different chains, such as stuff, map_reduce, refine, and map_rerank. Stuff is the most commonly used method. Map_reduce splits the input data into small chunks, executes calculations independently on each chunk, and then aggregates the results to obtain the final output. Refine provides a preliminary answer based on context information and a question, and iteratively improves the answer based on the context and previous responses until it finds the best final answer. For detailed documentation, please click 👉[here](https://python.langchain.com/en/latest/use_cases/question_answering.html)👈. \n",
    "\n",
    "Below is an explanation of the code snippet in this example line by line."
   ]
  },
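  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch (assuming the `docs` and `query` defined in the following cells, and a valid OpenAI key), switching strategies only requires changing the `chain_type` argument:\n",
    "\n",
    "```python\n",
    "from langchain.chains.qa_with_sources import load_qa_with_sources_chain\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "# map_reduce answers each chunk independently, then combines the partial answers\n",
    "chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"map_reduce\")\n",
    "chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)\n",
    "```"
   ]
  },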
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['HTTP_PROXY'] = 'http://127.0.0.1:your port'\n",
    "os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:your port'\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI Key\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings.openai import OpenAIEmbeddings\n",
    "from langchain.embeddings.cohere import CohereEmbeddings\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
    "from langchain.vectorstores import Chroma\n",
    "from langchain.docstore.document import Document\n",
    "from langchain.prompts import PromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open(\"./state_of_the_union.txt\") as f:\n",
    "    state_of_the_union = f.read()\n",
    "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
    "texts = text_splitter.split_text(state_of_the_union)\n",
    "\n",
    "embeddings = OpenAIEmbeddings()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using embedded DuckDB without persistence: data will be transient\n"
     ]
    }
   ],
   "source": [
    "docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{\"source\": str(i)} for i in range(len(texts))])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "query = \"What did the president say about Justice Breyer\"\n",
    "docs = docsearch.similarity_search(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains.qa_with_sources import load_qa_with_sources_chain\n",
    "from langchain.llms import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'output_text': \" The president thanked Justice Breyer for his service and mentioned that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy of excellence.\\nSOURCES: 31-pl\"}"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"stuff\")\n",
    "query = \"What did the president say about Justice Breyer\"\n",
    "chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'output_text': '\\n總統沒有對司法大法官布雷耶發表評論。\\nSOURCES: 31, 32, 34'}"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "template = \"\"\"Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \n",
    "If you don't know the answer, just say that you don't know. Don't try to make up an answer.\n",
    "ALWAYS return a \"SOURCES\" part in your answer.\n",
    "Respond in Chinese.\n",
    "\n",
    "QUESTION: {question}\n",
    "=========\n",
    "{summaries}\n",
    "=========\n",
    "FINAL ANSWER IN ITALIAN:\"\"\"\n",
    "PROMPT = PromptTemplate(template=template, input_variables=[\"summaries\", \"question\"])\n",
    "chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"stuff\", prompt=PROMPT)\n",
    "query = \"What did the president say about Justice Breyer\"\n",
    "chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BabyAGI with Tools\n",
    "\n",
    "<img src=\"./aa.png\"/>\n",
    "\n",
    "BabyAGI is an autonomous artificial intelligence agent that generates and pretends to perform tasks based on given goals. (Is this function familiar? In fact, the recent popular AutoGPT was designed based on this functionality).\n",
    "\n",
    "Through this example, we will help you understand the components of creating your own recursive agent.\n",
    "\n",
    "Although BabyAGI uses specific vector storage/model providers (Pinecone, OpenAI), one of the benefits of implementing it using LangChain is that you can easily switch to different options. In this implementation, we use the FAISS vectorstore because it runs locally and is free.\n",
    "\n",
    "For more detailed information about BabyAGI, please click [here](https://github.com/yoheinakajima/babyagi) to access the repository.\n",
    "\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we start with this case, let's first understand what BabyAGI is, right? Here is a user guide for BabyAGI. Once we are familiar with it, we can move on to the related code examples mentioned above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['HTTP_PROXY'] = 'http://127.0.0.1:port'\n",
    "os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:port'\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"your openai key like :sk-xxxxxxxxxxxxxxxx\"\n",
    "os.environ['SERPAPI_API_KEY']='your search API KEY'\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from collections import deque\n",
    "from typing import Dict, List, Optional, Any\n",
    "from langchain import LLMChain, OpenAI, PromptTemplate\n",
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.llms import BaseLLM\n",
    "from langchain.vectorstores.base import VectorStore\n",
    "from pydantic import BaseModel, Field\n",
    "from langchain.chains.base import Chain"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "connect to vectorstore"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.vectorstores import FAISS\n",
    "from langchain.docstore import InMemoryDocstore"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If faiss library is not installed, please allow the code below. Otherwise, you can skip the pip step below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: faiss-cpu in /Users/yfcao/Anaconda/anaconda3/envs/egovlp/lib/python3.8/site-packages (1.7.3)\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install faiss-cpu"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define your embedding model:text-embedding-ada-002\n",
    "embeddings_model = OpenAIEmbeddings()\n",
    "# Initialize the vectorstore as empty\n",
    "import faiss\n",
    "embedding_size = 1536\n",
    "index = faiss.IndexFlatL2(embedding_size)\n",
    "vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define Chains\n",
    "\n",
    "- BabyAGI depends on three LLM chains:\n",
    "\n",
    "- Task Creation Chain, which selects new tasks to be added to the list.\n",
    "\n",
    "- Task Priority Chain, used to re-prioritize tasks.\n",
    "\n",
    "- Execution Chain, which executes tasks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a class named TaskCreationChain, which is a subclass of LLMChain. The \n",
    "purpose of this class is to generate tasks and return an llmchain object.\n",
    "\n",
    "In the class definition, there is a static method from_llm() that takes a BaseLLM object \n",
    "and a boolean verbose parameter as input and returns an LLMChain object.\n",
    "\n",
    "The method uses a task_creation_template string to generate a prompt (prompt) that guides \n",
    "AI in creating tasks. Next, it defines a PromptTemplate class that binds templates (templates) \n",
    "with input variables (input variables). In this example, the template includes five input \n",
    "variables: \"result\", \"task_description\", \"incomplete_tasks\", \"objective\", and \n",
    "\"previous_task_result\". Finally, it creates an LLMChain object using the cls() method and \n",
    "saves it in the task_creation_chain variable.\n",
    "\"\"\"\n",
    "class TaskCreationChain(LLMChain):\n",
    "    @classmethod\n",
    "    def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
    "        \"\"\"Get the response parser.\n",
    "        Pay close attention to the design of the template, as it is important in \n",
    "        how it is designed: When setting a goal, you should also provide the result \n",
    "        that will be achieved by completing this goal, as well as the task description \n",
    "        information corresponding to this result. At this point, a complete task list \n",
    "        will continue to be listed based on the above-mentioned result.\n",
    "        \n",
    "        In other words, there are four parameters:\n",
    "        - objective - The goal\n",
    "        - result - The previous task's result\n",
    "        - task_description - Task description\n",
    "        - incomplete_tasks - Current task list]\n",
    "        \"\"\"\n",
    "        task_creation_template = (\n",
    "            \"You are an task creation AI that uses the result of an execution agent\"\n",
    "            \" to create new tasks with the following objective: {objective},\"\n",
    "            \" The last completed task has the result: {result}.\"\n",
    "            \" This result was based on this task description: {task_description}.\"\n",
    "            \" These are incomplete tasks: {incomplete_tasks}.\"\n",
    "            \" Based on the result, create new tasks to be completed\"\n",
    "            \" by the AI system that do not overlap with incomplete tasks.\"\n",
    "            \" Return the tasks as an array.\"\n",
    "        )\n",
    "        prompt = PromptTemplate(\n",
    "            template=task_creation_template,\n",
    "            input_variables=[\n",
    "                \"result\",\n",
    "                \"task_description\",\n",
    "                \"incomplete_tasks\",\n",
    "                \"objective\",\n",
    "            ],\n",
    "        )\n",
    "        return cls(prompt=prompt, llm=llm, verbose=verbose)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a class named TaskPrioritizationChain, which is a subclass of LLMChain. \n",
    "The purpose of this class is to determine task priorities and return an llmchain object.\n",
    "\n",
    "In the class definition, there is a static method from_llm() that takes a BaseLLM object \n",
    "and a boolean verbose parameter as input and returns an LLMChain object. The method uses \n",
    "a task_prioritization_template string to generate a prompt (prompt) that guides AI in \n",
    "prioritizing tasks. Next, a PromptTemplate class is defined, which binds templates \n",
    "(templates) with input variables (input variables). We can see that the template includes \n",
    "three input variables: \"task_names\", \"next_task_id\", and \"objective\". Finally, it creates \n",
    "an LLMChain object using the cls() method and saves it in the task_prioritization_chain \n",
    "variable.\n",
    "\"\"\"\n",
    "class TaskPrioritizationChain(LLMChain):\n",
    "    @classmethod\n",
    "    def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
    "        task_prioritization_template = (\n",
    "            \"You are an task prioritization AI tasked with cleaning the formatting of and reprioritizing\"\n",
    "            \" the following tasks: {task_names}.\"\n",
    "            \" Consider the ultimate objective of your team: {objective}.\"\n",
    "            \" Do not remove any tasks. Return the result as a numbered list, like:\"\n",
    "            \" #. First task\"\n",
    "            \" #. Second task\"\n",
    "            \" Start the task list with number {next_task_id}.\"\n",
    "        )\n",
    "        prompt = PromptTemplate(\n",
    "            template=task_prioritization_template,\n",
    "            input_variables=[\"task_names\", \"next_task_id\", \"objective\"],\n",
    "        )\n",
    "        return cls(prompt=prompt, llm=llm, verbose=verbose)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a class named ExecutionChain, which is a subclass of LLMChain. \n",
    "The purpose of this class is to execute tasks and return an llmchain object.\n",
    "\n",
    "In the class definition, there is a static method from_llm() that takes a BaseLLM \n",
    "object and a boolean verbose parameter as input and returns an LLMChain object. \n",
    "The method uses an execution_template string to generate a prompt (prompt) that \n",
    "guides AI in executing tasks. Next, a PromptTemplate class is defined, which \n",
    "binds templates (templates) with input variables (input variables). In this example, \n",
    "the template includes three input variables: \"objective\", \"context\", and \"task\". \n",
    "Finally, it creates an LLMChain object using the cls() method and saves it in the \n",
    "execution_chain variable.\n",
    "\"\"\"\n",
    "class ExecutionChain(LLMChain):\n",
    "    @classmethod\n",
    "    def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
    "        \"\"\"Get the response parser.\"\"\"\n",
    "        execution_template = (\n",
    "            \"You are an AI who performs one task based on the following objective: {objective}.\"\n",
    "            \" Take into account these previously completed tasks: {context}.\"\n",
    "            \" Your task: {task}.\"\n",
    "            \" Response:\"\n",
    "        )\n",
    "        prompt = PromptTemplate(\n",
    "            template=execution_template,\n",
    "            input_variables=[\"objective\", \"context\", \"task\"],\n",
    "        )\n",
    "        return cls(prompt=prompt, llm=llm, verbose=verbose)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the BabyAGI Controller\n",
    "\n",
    "The BabyAGI controller combines the chains defined above into a possibly infinite loop of closed-loop operational process.\n",
    "\n",
    "The process is as follows:\n",
    "\n",
    "- Extract the first task from the task list.\n",
    "- Send the task to the execution agent, which uses the OpenAI API to complete the task based on context.\n",
    "- Polish the results and store them.\n",
    "- Create new tasks based on the target and the result of the previous task, and sort the task list based on priority."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This method returns a list of dictionaries. The input parameters include LLMchain, \n",
    "result, the task list in task_list, and an objective string representing the target.\n",
    "\n",
    "Here, the first chain in the three chains, TaskCreationChain, is used to build the \n",
    "task list. Then, in the chain.run method, the variables are passed into the template \n",
    "as input_variables to complete the completion. The response results are split by '\n",
    "' and stored in the new_tasks variable. Finally, the get_next_task method returns a \n",
    "list that wraps up the dictionary.\n",
    "\"\"\"\n",
    "def get_next_task(\n",
    "    task_creation_chain: LLMChain,\n",
    "    result: Dict,\n",
    "    task_description: str,\n",
    "    task_list: List[str],\n",
    "    objective: str,\n",
    ") -> List[Dict]:\n",
    "    \"\"\"Get the next task.\"\"\"\n",
    "    incomplete_tasks = \", \".join(task_list)\n",
    "\n",
    "    response = task_creation_chain.run(\n",
    "        result=result,\n",
    "        task_description=task_description,\n",
    "        incomplete_tasks=incomplete_tasks,\n",
    "        objective=objective,\n",
    "    )\n",
    "    new_tasks = response.split(\"\\n\")\n",
    "    return [{\"task_name\": task_name} for task_name in new_tasks if task_name.strip()]"
   ]
  },
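  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal, LLM-free sketch of the parsing step above (the response string here is hypothetical):\n",
    "\n",
    "```python\n",
    "response = \"1. Write an outline\\n2. Draft the introduction\\n\\n\"\n",
    "tasks = [{\"task_name\": t} for t in response.split(\"\\n\") if t.strip()]\n",
    "# tasks == [{'task_name': '1. Write an outline'}, {'task_name': '2. Draft the introduction'}]\n",
    "```"
   ]
  },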
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "\"\"\"\n",
    "This code defines a function named prioritize_tasks that takes in four parameters: \n",
    "task_prioritization_chain, this_task_id, task_list, and objective. The purpose of \n",
    "this function is to prioritize the tasks in the task list based on the current task \n",
    "and objective, and return a prioritized list of tasks (List[Dict]).\n",
    "\n",
    "In the function, first, the task names (task_name) from each task dictionary (t) in \n",
    "the task list are extracted and generated into a list called task_names. Then, the \n",
    "current task ID (this_task_id) is converted to an integer and incremented by 1 to \n",
    "generate the next task ID (next_task_id).\n",
    "\n",
    "Next, using one of the three chains for task prioritization, input_variables are \n",
    "passed into it along with task_names, next_task_id, and objective to complete them. \n",
    "The response content is split by '\n",
    "' and stored in a new string list called new_tasks.\n",
    "\n",
    "Then, an empty list called prioritized_task_list is created to save the prioritized \n",
    "task list. A for loop is used to iterate over each task string in new_tasks. If the \n",
    "task string is an empty string or contains only spaces, it is skipped.\n",
    "\n",
    "Afterwards, the strip() method is used to remove leading and trailing spaces from \n",
    "the task string, splitting it at the \".\" separator forcibly. If the resulting list \n",
    "has a length of 2 after splitting, it means that the task ID and task name have \n",
    "been successfully split, and they are added to the prioritized_task_list.\n",
    "\n",
    "Finally, the function returns the prioritized_task_list.\n",
    "\"\"\"\n",
    "def prioritize_tasks(\n",
    "    task_prioritization_chain: LLMChain,\n",
    "    this_task_id: int,\n",
    "    task_list: List[Dict],\n",
    "    objective: str,\n",
    ") -> List[Dict]:\n",
    "    \"\"\"Prioritize tasks.\"\"\"\n",
    "    task_names = [t[\"task_name\"] for t in task_list]\n",
    "    next_task_id = int(this_task_id) + 1\n",
    "    response = task_prioritization_chain.run(\n",
    "        task_names=task_names, next_task_id=next_task_id, objective=objective\n",
    "    )\n",
    "    new_tasks = response.split(\"\\n\")\n",
    "    prioritized_task_list = []\n",
    "    for task_string in new_tasks:\n",
    "        if not task_string.strip():\n",
    "            continue\n",
    "        task_parts = task_string.strip().split(\".\", 1)\n",
    "        if len(task_parts) == 2:\n",
    "            task_id = task_parts[0].strip()\n",
    "            task_name = task_parts[1].strip()\n",
    "            prioritized_task_list.append({\"task_id\": task_id, \"task_name\": task_name})\n",
    "    return prioritized_task_list"
   ]
  },
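  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal, LLM-free sketch of the parsing logic above (the response string is hypothetical):\n",
    "\n",
    "```python\n",
    "response = \"2. Research the topic\\n3. Summarize findings\\nnot a numbered line\"\n",
    "prioritized = []\n",
    "for task_string in response.split(\"\\n\"):\n",
    "    task_parts = task_string.strip().split(\".\", 1)\n",
    "    if len(task_parts) == 2:  # lines without a \"#.\" prefix are skipped\n",
    "        prioritized.append({\"task_id\": task_parts[0].strip(), \"task_name\": task_parts[1].strip()})\n",
    "# prioritized == [{'task_id': '2', 'task_name': 'Research the topic'},\n",
    "#                 {'task_id': '3', 'task_name': 'Summarize findings'}]\n",
    "```"
   ]
  },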
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a function named _get_top_tasks that takes in three parameters: \n",
    "vectorstore (a vector repository), query (a query statement), and k (the number of \n",
    "results to return).\n",
    "\n",
    "The purpose of this function is to return the top k vectors and their corresponding \n",
    "task names that are most relevant to the query based on the query statement. \n",
    "Firstly, the function calls the similarity_search_with_score method of the vectorstore \n",
    "object to perform a similarity search and stores the results in the variable results. \n",
    "If no matches are found, the function returns an empty list. Otherwise, the function \n",
    "uses Python's built-in sorted function to sort the elements in results by similarity \n",
    "score in descending order. The function specifies this sorting using a lambda \n",
    "expression that sorts by the second element (i.e., the score) and uses the \n",
    "reverse=True parameter for reverse sorting. Then, the function splits the sorted \n",
    "results into two tuples, converts each tuple of task names to a string, and finally \n",
    "returns these strings as a list.\n",
    "\"\"\"\n",
    "def _get_top_tasks(vectorstore, query: str, k: int) -> List[str]:\n",
    "    \"\"\"Get the top k tasks based on the query.\"\"\"\n",
    "    results = vectorstore.similarity_search_with_score(query, k=k)\n",
    "    if not results:\n",
    "        return []\n",
    "    sorted_results, _ = zip(*sorted(results, key=lambda x: x[1], reverse=True))\n",
    "    return [str(item.metadata[\"task\"]) for item in sorted_results]\n",
    "\n",
    "\"\"\"\n",
    "This code defines a function named execute_task that takes in four parameters: \n",
    "vectorstore (a vector repository), execution_chain (an execution chain object), \n",
    "objective (a string representing the objective), and task (a string representing \n",
    "the task to be executed).\n",
    "\n",
    "The purpose of this function is to perform the specified vector calculation \n",
    "based on the given objective and task using the given vectors. Firstly, the \n",
    "function calls the _get_top_tasks function to obtain the top k vectors and \n",
    "their corresponding task names that are most relevant to the objective, and \n",
    "stores the result in the variable context. Then, the function calls the run() \n",
    "method of the execution_chain object, passing in the objective, context, and \n",
    "task as parameters. Finally, the function returns the execution result.\n",
    "\"\"\"\n",
    "def execute_task(\n",
    "    vectorstore, execution_chain: LLMChain, objective: str, task: str, k: int = 5\n",
    ") -> str:\n",
    "    \"\"\"Execute a task.\"\"\"\n",
    "    context = _get_top_tasks(vectorstore, query=objective, k=k)\n",
    "    return execution_chain.run(objective=objective, context=context, task=task)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This code defines a class named BabyAGI that inherits from both Chain and BaseModel \n",
    "classes. The BabyAGI class is used to control the execution flow of the model, and \n",
    "it includes the following definitions:\n",
    "\n",
    "A deque variable named task_list, which is used to store a list of tasks.\n",
    "Three chain instances named task_creation_chain, task_prioritization_chain, and \n",
    "execution_chain, which are used to create tasks, sort tasks, and execute tasks respectively.\n",
    "An integer variable named task_id_counter, which is used to record the task ID.\n",
    "An instance of VectorStore named vectorstore, which is used to store vector data.\n",
    "An integer variable named max_iterations, which represents the maximum number of iterations.\n",
    "Methods add_task(), print_task_list(), print_next_task(), print_task_result(), get_next_task(), and prioritize_tasks() are defined for adding tasks to the task list, printing the current and next tasks, printing the task execution result, getting the next task, and re-ordering the task list respectively.\n",
    "A Config class is also defined within the class definition for configuring pydantic \n",
    "objects, with the arbitrary_types_allowed attribute set to True to allow any type.\n",
    "\"\"\"\n",
    "class BabyAGI(Chain, BaseModel):\n",
    "    \"\"\"Controller model for the BabyAGI agent.\"\"\"\n",
    "    task_list: deque = Field(default_factory=deque)\n",
    "    task_creation_chain: TaskCreationChain = Field(...)\n",
    "    task_prioritization_chain: TaskPrioritizationChain = Field(...)\n",
    "    execution_chain: ExecutionChain = Field(...)\n",
    "    task_id_counter: int = Field(1)\n",
    "    vectorstore: VectorStore = Field(init=False)\n",
    "    max_iterations: Optional[int] = None\n",
    "\n",
    "    class Config:\n",
    "        arbitrary_types_allowed = True\n",
    "\n",
    "    def add_task(self, task: Dict):\n",
    "        self.task_list.append(task)\n",
    "\n",
    "    def print_task_list(self):\n",
    "        print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        for t in self.task_list:\n",
    "            print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n",
    "\n",
    "    def print_next_task(self, task: Dict):\n",
    "        print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n",
    "\n",
    "    def print_task_result(self, result: str):\n",
    "        print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        print(result)\n",
    "\n",
    "    @property\n",
    "    def input_keys(self) -> List[str]:\n",
    "        return [\"objective\"]\n",
    "\n",
    "    @property\n",
    "    def output_keys(self) -> List[str]:\n",
    "        return []\n",
    "    \"\"\"\n",
    "    This method is used to execute a task. It takes a dictionary parameter inputs \n",
    "    as input and returns the execution result. Inside the method. Firstly, the \n",
    "    task ID and the first task are obtained from inputs. Then, based on the name \n",
    "    of the first task, a new task is created, added to the task list, and executed. \n",
    "    Next, based on the execution result, the text content and metadata in the vector \n",
    "    store are updated. A new task ID is generated, the new task is added to the \n",
    "    task list, and the task list is reordered again. If the maximum number of \n",
    "    iterations is reached, an end message is printed and the loop is exited. Some \n",
    "    helper methods and attributes are also defined within the class, such as \n",
    "    input_keys() which is used to get a list of input keys, output_keys() which \n",
    "    is used to get a list of output keys, and TaskCreationChain.from_list() which \n",
    "    is used to create a task creation chain from a task list.\n",
    "    \"\"\"\n",
    "    def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n",
    "        \"\"\"Run the agent.\"\"\"\n",
    "        objective = inputs[\"objective\"]\n",
    "        first_task = inputs.get(\"first_task\", \"Make a todo list\")\n",
    "        self.add_task({\"task_id\": 1, \"task_name\": first_task})\n",
    "        num_iters = 0\n",
    "        while True:\n",
    "            if self.task_list:\n",
    "                self.print_task_list()\n",
    "\n",
    "                # Step 1: Pull the first task\n",
    "                task = self.task_list.popleft()\n",
    "                self.print_next_task(task)\n",
    "\n",
    "                # Step 2: Execute the task\n",
    "                result = execute_task(\n",
    "                    self.vectorstore, self.execution_chain, objective, task[\"task_name\"]\n",
    "                )\n",
    "                this_task_id = int(task[\"task_id\"])\n",
    "                self.print_task_result(result)\n",
    "\n",
    "                # Step 3: Store the result in Pinecone\n",
    "                result_id = f\"result_{task['task_id']}\"\n",
    "                self.vectorstore.add_texts(\n",
    "                    texts=[result],\n",
    "                    metadatas=[{\"task\": task[\"task_name\"]}],\n",
    "                    ids=[result_id],\n",
    "                )\n",
    "\n",
    "                # Step 4: Create new tasks and reprioritize task list\n",
    "                new_tasks = get_next_task(\n",
    "                    self.task_creation_chain,\n",
    "                    result,\n",
    "                    task[\"task_name\"],\n",
    "                    [t[\"task_name\"] for t in self.task_list],\n",
    "                    objective,\n",
    "                )\n",
    "                for new_task in new_tasks:\n",
    "                    self.task_id_counter += 1\n",
    "                    new_task.update({\"task_id\": self.task_id_counter})\n",
    "                    self.add_task(new_task)\n",
    "                self.task_list = deque(\n",
    "                    prioritize_tasks(\n",
    "                        self.task_prioritization_chain,\n",
    "                        this_task_id,\n",
    "                        list(self.task_list),\n",
    "                        objective,\n",
    "                    )\n",
    "                )\n",
    "            num_iters += 1\n",
    "            if self.max_iterations is not None and num_iters == self.max_iterations:\n",
    "                print(\n",
    "                    \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n",
    "                )\n",
    "                break\n",
    "        return {}\n",
    "\n",
    "    \"\"\"\n",
    "    This class method initializes a BabyAGI controller from an LLM instance. It \n",
    "    takes four parameters: llm, the LLM instance; vectorstore, the vector store \n",
    "    instance; verbose, which indicates whether verbose output mode is enabled; \n",
    "    and kwargs, any other optional keyword arguments. Inside the method, \n",
    "    TaskCreationChain.from_llm() and TaskPrioritizationChain.from_llm() are \n",
    "    called to create the task creation chain and the task prioritization chain, \n",
    "    and ExecutionChain.from_llm() is called to create the execution chain. \n",
    "    Finally, a BabyAGI instance is created from these chains, the vectorstore, \n",
    "    and the remaining keyword arguments, and returned.\n",
    "    \"\"\"\n",
    "    @classmethod\n",
    "    def from_llm(\n",
    "        cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs\n",
    "    ) -> \"BabyAGI\":\n",
    "        \"\"\"Initialize the BabyAGI Controller.\"\"\"\n",
    "        task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n",
    "        task_prioritization_chain = TaskPrioritizationChain.from_llm(\n",
    "            llm, verbose=verbose\n",
    "        )\n",
    "        execution_chain = ExecutionChain.from_llm(llm, verbose=verbose)\n",
    "        return cls(\n",
    "            task_creation_chain=task_creation_chain,\n",
    "            task_prioritization_chain=task_prioritization_chain,\n",
    "            execution_chain=execution_chain,\n",
    "            vectorstore=vectorstore,\n",
    "            **kwargs,\n",
    "        )"
   ]
  },
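  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running it end to end, the control loop above can be sketched with plain Python data structures: pending tasks live in a `deque`, the next task is popped from the left, and newly created tasks get fresh IDs and are appended before the queue is reordered. The hard-coded `new_tasks` below is a hypothetical stand-in for the output of `get_next_task`:\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "\n",
    "task_list = deque([{'task_id': 1, 'task_name': 'Make a todo list'}])\n",
    "task_id_counter = 1\n",
    "\n",
    "task = task_list.popleft()  # Step 1: pull the first task (FIFO order)\n",
    "new_tasks = [{'task_name': 'Book flights'}, {'task_name': 'Reserve hotels'}]\n",
    "for new_task in new_tasks:  # Step 4: assign IDs and enqueue the new tasks\n",
    "    task_id_counter += 1\n",
    "    new_task['task_id'] = task_id_counter\n",
    "    task_list.append(new_task)\n",
    "```\n",
    "\n",
    "In the real loop, `prioritize_tasks` would then rebuild `task_list` in priority order."
   ]
  },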
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's time to run BabyAGI: create a BabyAGI controller and watch it attempt to achieve your objective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "OBJECTIVE = \"Make a plan to travel around the world for a month\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(temperature=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This cell defines three variables: verbose, max_iterations, and baby_agi.\n",
    "verbose: a boolean, initially False, indicating whether verbose output mode is \n",
    "enabled. If set to True, more information is printed while tasks execute.\n",
    "max_iterations: an optional integer, set to 3 here, giving the maximum number \n",
    "of iterations the BabyAGI controller will run. If None, the loop runs until \n",
    "interrupted.\n",
    "baby_agi: the BabyAGI controller initialized from the LLM instance. It holds \n",
    "the task list, the chains for creating, prioritizing, and executing tasks, and \n",
    "the vector store.\n",
    "\"\"\"\n",
    "verbose = False\n",
    "max_iterations: Optional[int] = 3\n",
    "baby_agi = BabyAGI.from_llm(\n",
    "    llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "1: Make a todo list\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "1: Make a todo list\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "\n",
      "\n",
      "1. Research potential destinations\n",
      "2. Create a budget\n",
      "3. Book flights\n",
      "4. Book accommodations\n",
      "5. Research local attractions\n",
      "6. Pack necessary items\n",
      "7. Make copies of important documents\n",
      "8. Notify bank and credit card companies of travel plans\n",
      "9. Purchase travel insurance\n",
      "10. Make a list of emergency contacts\n",
      "11. Research visa requirements\n",
      "12. Apply for necessary visas\n",
      "13. Make a list of must-see attractions\n",
      "14. Make a list of must-do activities\n",
      "15. Make a list of must-try foods\n",
      "16. Make a list of must-visit restaurants\n",
      "17. Make a list of must-visit shops\n",
      "18. Make a list of must-visit markets\n",
      "19. Make a list of must-visit museums\n",
      "20. Make a list of must-visit galleries\n",
      "21. Make a list of must-visit parks\n",
      "22. Make a list of must-visit historical sites\n",
      "23. Make a list of must-visit religious sites\n",
      "24. Make a list of must-visit natural sites\n",
      "25. Make a list of must-visit cultural sites\n",
      "26. Make a list of must-visit nightlife spots\n",
      "\n",
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "2: Research local currency exchange rates\n",
      "3: Research local safety and security information\n",
      "4: Research local customs and etiquette\n",
      "5: Research local language basics\n",
      "6: Research local weather conditions\n",
      "7: Research local medical facilities\n",
      "8: Research local emergency services\n",
      "9: Research local shopping options\n",
      "10: Research local entertainment options\n",
      "11: Research local cultural events\n",
      "12: Research local outdoor activities\n",
      "13: Research local festivals\n",
      "14: Research local sports teams\n",
      "15: Research local volunteer opportunities\n",
      "16: Research local job opportunities\n",
      "17: Research local educational opportunities\n",
      "18: Research local religious services\n",
      "19: Research local political activities\n",
      "20: Research local environmental initiatives\n",
      "21: Research local community organizations\n",
      "22: Research local charities\n",
      "23: Research local art galleries\n",
      "24: Research local music venues\n",
      "25: Research local theater venues\n",
      "26: Research local libraries\n",
      "1: Research local transportation options\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "2: Research local currency exchange rates\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "\n",
      "\n",
      "I will research local currency exchange rates for the countries I plan to visit during my month-long trip around the world. I will use online resources such as currency conversion websites and travel blogs to find the most up-to-date exchange rates. I will also research the best ways to exchange money in each country, such as using ATMs or exchanging cash at banks.\n",
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "3: Research local transportation costs\n",
      "4: Research local accommodation options\n",
      "5: Research local food options\n",
      "6: Research local tourist attractions\n",
      "7: Research local tourist discounts\n",
      "8: Research local tourist visas\n",
      "9: Research local tourist activities\n",
      "10: Research local tourist guides\n",
      "11: Research local tourist maps\n",
      "12: Research local tourist safety tips\n",
      "13: Research local safety and security information\n",
      "14: Research local customs and etiquette\n",
      "15: Research local language basics\n",
      "16: Research local weather conditions\n",
      "17: Research local medical facilities\n",
      "18: Research local emergency services\n",
      "19: Research local shopping options\n",
      "20: Research local entertainment options\n",
      "21: Research local cultural events\n",
      "22: Research local outdoor activities\n",
      "23: Research local festivals\n",
      "24: Research local sports teams\n",
      "25: Research local volunteer opportunities\n",
      "26: Research local job opportunities\n",
      "27: Research local educational opportunities\n",
      "28: Research local religious services\n",
      "29: Research local political activities\n",
      "30: Research local environmental initiatives\n",
      "31: Research local community organizations\n",
      "32: Research local charities\n",
      "33: Research local art galleries\n",
      "34: Research local\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "3: Research local transportation costs\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "\n",
      "\n",
      "I will research local transportation costs by looking up the cost of flights, trains, buses, and other forms of transportation in each of the countries I plan to visit. I will also look into the cost of renting a car or taking a taxi in each country. Additionally, I will research any discounts or promotions that may be available for transportation costs.\n",
      "\u001b[91m\u001b[1m\n",
      "*****TASK ENDING*****\n",
      "\u001b[0m\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'objective': 'Make a plan to travel around the world for a month'}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "baby_agi({\"objective\": OBJECTIVE})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we are familiar with the basic operations of BabyAGI, let's take a look at some code examples related to it. In the following cases, we will be using Google's search API to enable the agent to perform web searches and complete task planning.\n",
    "\n",
    "First, we need to import the relevant libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from collections import deque\n",
    "from typing import Dict, List, Optional, Any\n",
    "\n",
    "from langchain import LLMChain, OpenAI, PromptTemplate\n",
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.llms import BaseLLM\n",
    "from langchain.vectorstores.base import VectorStore\n",
    "from pydantic import BaseModel, Field\n",
    "from langchain.chains.base import Chain"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Connect to a vector store.\n",
    "\n",
    "Depending on the vector store you are using, this step may look different."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: you may need to restart the kernel to use updated packages.\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install faiss-cpu > /dev/null\n",
    "%pip install google-search-results > /dev/null\n",
    "from langchain.vectorstores import FAISS\n",
    "from langchain.docstore import InMemoryDocstore"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "import faiss\n",
    "\n",
    "embeddings_model = OpenAIEmbeddings()\n",
    "embedding_size = 1536  # dimensionality of OpenAI ada-002 embeddings\n",
    "index = faiss.IndexFlatL2(embedding_size)\n",
    "vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The process of defining the chains below is much the same as before, so we won't go through it again in detail:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "class TaskCreationChain(LLMChain):\n",
    "    @classmethod\n",
    "    def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
    "        \"\"\"Create a TaskCreationChain from an LLM.\"\"\"\n",
    "        task_creation_template = (\n",
    "            \"You are a task creation AI that uses the result of an execution agent\"\n",
    "            \" to create new tasks with the following objective: {objective},\"\n",
    "            \" The last completed task has the result: {result}.\"\n",
    "            \" This result was based on this task description: {task_description}.\"\n",
    "            \" These are incomplete tasks: {incomplete_tasks}.\"\n",
    "            \" Based on the result, create new tasks to be completed\"\n",
    "            \" by the AI system that do not overlap with incomplete tasks.\"\n",
    "            \" Return the tasks as an array.\"\n",
    "        )\n",
    "        prompt = PromptTemplate(\n",
    "            template=task_creation_template,\n",
    "            input_variables=[\n",
    "                \"result\",\n",
    "                \"task_description\",\n",
    "                \"incomplete_tasks\",\n",
    "                \"objective\",\n",
    "            ],\n",
    "        )\n",
    "        return cls(prompt=prompt, llm=llm, verbose=verbose)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "class TaskPrioritizationChain(LLMChain):\n",
    "    @classmethod\n",
    "    def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> LLMChain:\n",
    "        \"\"\"Create a TaskPrioritizationChain from an LLM.\"\"\"\n",
    "        task_prioritization_template = (\n",
    "            \"You are a task prioritization AI tasked with cleaning the formatting of and reprioritizing\"\n",
    "            \" the following tasks: {task_names}.\"\n",
    "            \" Consider the ultimate objective of your team: {objective}.\"\n",
    "            \" Do not remove any tasks. Return the result as a numbered list, like:\"\n",
    "            \" #. First task\"\n",
    "            \" #. Second task\"\n",
    "            \" Start the task list with number {next_task_id}.\"\n",
    "        )\n",
    "        prompt = PromptTemplate(\n",
    "            template=task_prioritization_template,\n",
    "            input_variables=[\"task_names\", \"next_task_id\", \"objective\"],\n",
    "        )\n",
    "        return cls(prompt=prompt, llm=llm, verbose=verbose)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will be using Google's search API (through SerpAPIWrapper), so we set it up here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
    "from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
    "\n",
    "todo_prompt = PromptTemplate.from_template(\n",
    "    \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",
    ")\n",
    "todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt)\n",
    "search = SerpAPIWrapper()\n",
    "\n",
    "tools = [\n",
    "    Tool(\n",
    "        name=\"Search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events\",\n",
    "    ),\n",
    "    Tool(\n",
    "        name=\"TODO\",\n",
    "        func=todo_chain.run,\n",
    "        description=\"useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!\",\n",
    "    ),\n",
    "]\n",
    "\n",
    "prefix = \"\"\"You are an AI who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\"\"\"\n",
    "suffix = \"\"\"Question: {task}\n",
    "{agent_scratchpad}\"\"\"\n",
    "prompt = ZeroShotAgent.create_prompt(\n",
    "    tools,\n",
    "    prefix=prefix,\n",
    "    suffix=suffix,\n",
    "    input_variables=[\"objective\", \"task\", \"context\", \"agent_scratchpad\"],\n",
    ")"
   ]
  },
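  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`ZeroShotAgent.create_prompt` stitches the prefix, a description of each tool, and the suffix into a single prompt template. Roughly, and only as an illustration (the exact separators and tool-line format LangChain uses may differ):\n",
    "\n",
    "```python\n",
    "prefix = 'You are an AI who performs one task based on the following objective: {objective}.'\n",
    "suffix = 'Question: {task}\\n{agent_scratchpad}'\n",
    "tool_descriptions = [\n",
    "    ('Search', 'useful for when you need to answer questions about current events'),\n",
    "    ('TODO', 'useful for when you need to come up with todo lists'),\n",
    "]\n",
    "# one 'name: description' line per tool, joined between prefix and suffix\n",
    "tool_lines = '\\n'.join(f'{name}: {desc}' for name, desc in tool_descriptions)\n",
    "template = '\\n\\n'.join([prefix, tool_lines, suffix])\n",
    "```"
   ]
  },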
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the BabyAGI controller. The steps below connect the three chains defined above into a loop: tasks are created, prioritized, and executed in turn."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This function retrieves the next batch of tasks. It first joins the list of \n",
    "incomplete tasks into a single string and passes it to the run method of the \n",
    "task creation chain. It then splits the chain's response into new task names \n",
    "and returns them as a list of dictionaries, each with a \"task_name\" key.\n",
    "\n",
    "Parameters:\n",
    "\n",
    "task_creation_chain (LLMChain): the task creation chain.\n",
    "result (Dict): the result of the previous task execution.\n",
    "task_description (str): the description of the task that produced the result.\n",
    "task_list (List[str]): the names of the tasks that are still incomplete.\n",
    "objective (str): the overall objective.\n",
    "\"\"\"\n",
    "def get_next_task(\n",
    "    task_creation_chain: LLMChain,\n",
    "    result: Dict,\n",
    "    task_description: str,\n",
    "    task_list: List[str],\n",
    "    objective: str,\n",
    ") -> List[Dict]:\n",
    "    \"\"\"Get the next task.\"\"\"\n",
    "    incomplete_tasks = \", \".join(task_list)\n",
    "    response = task_creation_chain.run(\n",
    "        result=result,\n",
    "        task_description=task_description,\n",
    "        incomplete_tasks=incomplete_tasks,\n",
    "        objective=objective,\n",
    "    )\n",
    "    new_tasks = response.split(\"\\n\")\n",
    "    return [{\"task_name\": task_name} for task_name in new_tasks if task_name.strip()]"
   ]
  },
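  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parsing at the end of `get_next_task` can be exercised in isolation: the raw response text is split on newlines, blank lines are dropped, and each remaining line becomes a task dictionary. The `response` string below is a hypothetical stand-in for real chain output:\n",
    "\n",
    "```python\n",
    "response = '1. Book flights\\n2. Reserve hotels\\n\\n3. Pack luggage'  # hypothetical chain output\n",
    "new_tasks = response.split('\\n')\n",
    "# blank lines are filtered out by the strip() check\n",
    "tasks = [{'task_name': name} for name in new_tasks if name.strip()]\n",
    "```"
   ]
  },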
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This function reprioritizes the task list. It first extracts all the task \n",
    "names from the task list and passes them, together with the ID of the next \n",
    "task, to the run method of the task prioritization chain.\n",
    "\n",
    "It then splits the chain's response into lines and splits each line on the \n",
    "first \".\" into two parts. If both parts are present, the line is treated as a \n",
    "task with an ID and a name and is appended to the prioritized task list, which \n",
    "is returned.\n",
    "\n",
    "Parameters:\n",
    "\n",
    "task_prioritization_chain (LLMChain): the task prioritization chain.\n",
    "this_task_id (int): the ID of the current task.\n",
    "task_list (List[Dict]): the list of all remaining tasks.\n",
    "objective (str): the overall objective.\n",
    "\"\"\"\n",
    "def prioritize_tasks(\n",
    "    task_prioritization_chain: LLMChain,\n",
    "    this_task_id: int,\n",
    "    task_list: List[Dict],\n",
    "    objective: str,\n",
    ") -> List[Dict]:\n",
    "    \"\"\"Prioritize tasks.\"\"\"\n",
    "    task_names = [t[\"task_name\"] for t in task_list]\n",
    "    next_task_id = int(this_task_id) + 1\n",
    "    response = task_prioritization_chain.run(\n",
    "        task_names=task_names, next_task_id=next_task_id, objective=objective\n",
    "    )\n",
    "    new_tasks = response.split(\"\\n\")\n",
    "    prioritized_task_list = []\n",
    "    for task_string in new_tasks:\n",
    "        if not task_string.strip():\n",
    "            continue\n",
    "        task_parts = task_string.strip().split(\".\", 1)\n",
    "        if len(task_parts) == 2:\n",
    "            task_id = task_parts[0].strip()\n",
    "            task_name = task_parts[1].strip()\n",
    "            prioritized_task_list.append({\"task_id\": task_id, \"task_name\": task_name})\n",
    "    return prioritized_task_list"
   ]
  },
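  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `prioritize_tasks` splits each response line on the first period only, so periods later in a task name are preserved, and lines with no period at all are skipped. A quick sketch with a hypothetical response:\n",
    "\n",
    "```python\n",
    "response = '2. Research visas\\n3. Budget, incl. daily costs\\nnot a numbered line'\n",
    "prioritized = []\n",
    "for task_string in response.split('\\n'):\n",
    "    task_parts = task_string.strip().split('.', 1)  # split on the first period only\n",
    "    if len(task_parts) == 2:\n",
    "        task_id, task_name = task_parts[0].strip(), task_parts[1].strip()\n",
    "        prioritized.append({'task_id': task_id, 'task_name': task_name})\n",
    "```"
   ]
  },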
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "This function returns the names of the k tasks most relevant to a query. It \n",
    "calls the vector store's similarity_search_with_score method and stores the \n",
    "results in the results variable. If there are no results, it returns an empty \n",
    "list. Otherwise it sorts the results by score in descending order using zip \n",
    "and sorted, extracts the task name from each item's metadata, and returns the \n",
    "names as a list.\n",
    "\n",
    "Parameters:\n",
    "\n",
    "vectorstore (VectorStore): the vector store to search.\n",
    "query (str): the query string.\n",
    "k (int): the number of most relevant tasks to return.\n",
    "\"\"\"\n",
    "def _get_top_tasks(vectorstore, query: str, k: int) -> List[str]:\n",
    "    \"\"\"Get the top k tasks based on the query.\"\"\"\n",
    "    results = vectorstore.similarity_search_with_score(query, k=k)\n",
    "    if not results:\n",
    "        return []\n",
    "    sorted_results, _ = zip(*sorted(results, key=lambda x: x[1], reverse=True))\n",
    "    return [str(item.metadata[\"task\"]) for item in sorted_results]\n",
    "\n",
    "def execute_task(\n",
    "    vectorstore, execution_chain: LLMChain, objective: str, task: str, k: int = 5\n",
    ") -> str:\n",
    "    \"\"\"Execute a task.\"\"\"\n",
    "    context = _get_top_tasks(vectorstore, query=objective, k=k)\n",
    "    return execution_chain.run(objective=objective, context=context, task=task)"
   ]
  },
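  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `zip(*sorted(...))` idiom in `_get_top_tasks` sorts a list of (item, score) pairs by score, highest first, and unpacks them into two parallel tuples. With hypothetical (name, score) pairs standing in for the (Document, score) results returned by the vector store:\n",
    "\n",
    "```python\n",
    "results = [('plan route', 0.2), ('book flights', 0.9), ('pack bags', 0.5)]\n",
    "# sort by score descending, then unzip into items and scores\n",
    "sorted_items, scores = zip(*sorted(results, key=lambda x: x[1], reverse=True))\n",
    "```"
   ]
  },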
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "The following code defines the class that controls the BabyAGI agent: it adds \n",
    "tasks, executes them, and prints the results. The class also provides some \n",
    "configuration options to customize its behavior at runtime.\n",
    "\n",
    "Fields:\n",
    "\n",
    "task_list (deque): a deque holding the pending tasks.\n",
    "task_creation_chain (LLMChain): the chain for creating tasks.\n",
    "task_prioritization_chain (LLMChain): the chain for prioritizing tasks.\n",
    "execution_chain (AgentExecutor): the agent executor that runs tasks.\n",
    "task_id_counter (int): the counter used to assign task IDs.\n",
    "vectorstore (VectorStore): the vector store holding task results.\n",
    "max_iterations (Optional[int]): the maximum number of iterations, or None to \n",
    "run until interrupted.\n",
    "\n",
    "Methods and properties defined in the class include:\n",
    "\n",
    "add_task(task): adds a task to the task list.\n",
    "print_task_list(): prints the task list.\n",
    "print_next_task(task): prints the next task.\n",
    "print_task_result(result): prints the task result.\n",
    "input_keys (List[str]): the list of input keys.\n",
    "output_keys (List[str]): the list of output keys.\n",
    "\"\"\"\n",
    "class BabyAGI(Chain, BaseModel):\n",
    "    \"\"\"Controller model for the BabyAGI agent.\"\"\"\n",
    "\n",
    "    task_list: deque = Field(default_factory=deque)\n",
    "    task_creation_chain: TaskCreationChain = Field(...)\n",
    "    task_prioritization_chain: TaskPrioritizationChain = Field(...)\n",
    "    execution_chain: AgentExecutor = Field(...)\n",
    "    task_id_counter: int = Field(1)\n",
    "    vectorstore: VectorStore = Field(init=False)\n",
    "    max_iterations: Optional[int] = None\n",
    "\n",
    "    class Config:\n",
    "        \"\"\"Configuration for this pydantic object.\"\"\"\n",
    "\n",
    "        arbitrary_types_allowed = True\n",
    "\n",
    "    def add_task(self, task: Dict):\n",
    "        self.task_list.append(task)\n",
    "\n",
    "    def print_task_list(self):\n",
    "        print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        for t in self.task_list:\n",
    "            print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n",
    "\n",
    "    def print_next_task(self, task: Dict):\n",
    "        print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n",
    "\n",
    "    def print_task_result(self, result: str):\n",
    "        print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n",
    "        print(result)\n",
    "\n",
    "    @property\n",
    "    def input_keys(self) -> List[str]:\n",
    "        return [\"objective\"]\n",
    "\n",
    "    @property\n",
    "    def output_keys(self) -> List[str]:\n",
    "        return []\n",
    "\n",
    "    def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n",
    "        \"\"\"Run the agent.\"\"\"\n",
    "        objective = inputs[\"objective\"]\n",
    "        first_task = inputs.get(\"first_task\", \"Make a todo list\")\n",
    "        self.add_task({\"task_id\": 1, \"task_name\": first_task})\n",
    "        num_iters = 0\n",
    "        while True:\n",
    "            if self.task_list:\n",
    "                self.print_task_list()\n",
    "\n",
    "                # Step 1: Pull the first task\n",
    "                task = self.task_list.popleft()\n",
    "                self.print_next_task(task)\n",
    "\n",
    "                # Step 2: Execute the task\n",
    "                result = execute_task(\n",
    "                    self.vectorstore, self.execution_chain, objective, task[\"task_name\"]\n",
    "                )\n",
    "                this_task_id = int(task[\"task_id\"])\n",
    "                self.print_task_result(result)\n",
    "\n",
    "                # Step 3: Store the result in Pinecone\n",
    "                result_id = f\"result_{task['task_id']}\"\n",
    "                self.vectorstore.add_texts(\n",
    "                    texts=[result],\n",
    "                    metadatas=[{\"task\": task[\"task_name\"]}],\n",
    "                    ids=[result_id],\n",
    "                )\n",
    "\n",
    "                # Step 4: Create new tasks and reprioritize task list\n",
    "                new_tasks = get_next_task(\n",
    "                    self.task_creation_chain,\n",
    "                    result,\n",
    "                    task[\"task_name\"],\n",
    "                    [t[\"task_name\"] for t in self.task_list],\n",
    "                    objective,\n",
    "                )\n",
    "                for new_task in new_tasks:\n",
    "                    self.task_id_counter += 1\n",
    "                    new_task.update({\"task_id\": self.task_id_counter})\n",
    "                    self.add_task(new_task)\n",
    "                self.task_list = deque(\n",
    "                    prioritize_tasks(\n",
    "                        self.task_prioritization_chain,\n",
    "                        this_task_id,\n",
    "                        list(self.task_list),\n",
    "                        objective,\n",
    "                    )\n",
    "                )\n",
    "            num_iters += 1\n",
    "            if self.max_iterations is not None and num_iters == self.max_iterations:\n",
    "                print(\n",
    "                    \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n",
    "                )\n",
    "                break\n",
    "        return {}\n",
    "    @classmethod\n",
    "    def from_llm(\n",
    "        cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs\n",
    "    ) -> \"BabyAGI\":\n",
    "        \"\"\"Initialize the BabyAGI Controller.\"\"\"\n",
    "        task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n",
    "        task_prioritization_chain = TaskPrioritizationChain.from_llm(\n",
    "            llm, verbose=verbose\n",
    "        )\n",
    "        llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
    "        tool_names = [tool.name for tool in tools]\n",
    "        agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)\n",
    "        agent_executor = AgentExecutor.from_agent_and_tools(\n",
    "            agent=agent, tools=tools, verbose=True\n",
    "        )\n",
    "        return cls(\n",
    "            task_creation_chain=task_creation_chain,\n",
    "            task_prioritization_chain=task_prioritization_chain,\n",
    "            execution_chain=agent_executor,\n",
    "            vectorstore=vectorstore,\n",
    "            **kwargs,\n",
    "        )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [],
   "source": [
    "OBJECTIVE = \"Write a weather report for Beijing today\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(temperature=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Logging of LLMChains\n",
    "verbose = False\n",
    "# If None, will keep on going forever\n",
    "max_iterations: Optional[int] = 3\n",
    "baby_agi = BabyAGI.from_llm(\n",
    "    llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations\n",
    ")"
   ]
  },
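  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `max_iterations` cap above is what keeps BabyAGI from looping forever: the controller counts completed iterations and breaks out once the budget is reached, even if tasks remain in the queue. A minimal, self-contained sketch of that stopping rule (a toy `run_tasks` helper, not the actual BabyAGI API):\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "from typing import Optional\n",
    "\n",
    "def run_tasks(tasks: deque, max_iterations: Optional[int]) -> int:\n",
    "    # Loop until the task queue is empty or the iteration budget runs out.\n",
    "    num_iters = 0\n",
    "    while tasks:\n",
    "        tasks.popleft()  # stand-in for executing the highest-priority task\n",
    "        num_iters += 1\n",
    "        if max_iterations is not None and num_iters == max_iterations:\n",
    "            break  # budget exhausted, stop even though tasks may remain\n",
    "    return num_iters\n",
    "\n",
    "print(run_tasks(deque(['a', 'b', 'c', 'd']), max_iterations=3))  # 3\n",
    "```"
   ]
  },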
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "1: Make a todo list\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "1: Make a todo list\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to come up with a todo list\n",
      "Action: TODO\n",
      "Action Input: Write a weather report for Beijing today\u001b[0m\n",
      "Observation: \u001b[33;1m\u001b[1;3m\n",
      "\n",
      "1. Research current weather conditions in Beijing\n",
      "2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions\n",
      "3. Research historical weather patterns in Beijing\n",
      "4. Analyze current and historical data to determine any trends\n",
      "5. Write a brief introduction to the weather report\n",
      "6. Describe current weather conditions in Beijing\n",
      "7. Discuss any trends in the weather\n",
      "8. Make predictions about the weather in Beijing for the next 24 hours\n",
      "9. Conclude the report with a summary of the weather conditions\n",
      "10. Proofread and edit the report for accuracy and clarity\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: A todo list for writing a weather report for Beijing today: \n",
      "1. Research current weather conditions in Beijing\n",
      "2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions\n",
      "3. Research historical weather patterns in Beijing\n",
      "4. Analyze current and historical data to determine any trends\n",
      "5. Write a brief introduction to the weather report\n",
      "6. Describe current weather conditions in Beijing\n",
      "7. Discuss any trends in the weather\n",
      "8. Make predictions about the weather in Beijing for the next 24 hours\n",
      "9. Conclude the report with a summary of the weather conditions\n",
      "10. Proofread and edit the report for accuracy and clarity\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "A todo list for writing a weather report for Beijing today: \n",
      "1. Research current weather conditions in Beijing\n",
      "2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions\n",
      "3. Research historical weather patterns in Beijing\n",
      "4. Analyze current and historical data to determine any trends\n",
      "5. Write a brief introduction to the weather report\n",
      "6. Describe current weather conditions in Beijing\n",
      "7. Discuss any trends in the weather\n",
      "8. Make predictions about the weather in Beijing for the next 24 hours\n",
      "9. Conclude the report with a summary of the weather conditions\n",
      "10. Proofread and edit the report for accuracy and clarity\n",
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "2: Gather data on air quality, air pollution, and other relevant environmental conditions\n",
      "3: Research historical air quality patterns in Beijing\n",
      "4: Analyze current and historical data to determine any trends\n",
      "5: Compare current air quality to air quality standards\n",
      "6: Make predictions about the air quality in Beijing for the next 24 hours\n",
      "7: Discuss any trends in the air quality\n",
      "8: Write a brief summary of the air quality in Beijing\n",
      "9: Create a chart or graph to illustrate the air quality data\n",
      "10: Proofread and edit the report for accuracy and clarity\n",
      "1: Research current air quality in Beijing\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "2: Gather data on air quality, air pollution, and other relevant environmental conditions\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to search for data on air quality, air pollution, and other relevant environmental conditions\n",
      "Action: Search\n",
      "Action Input: air quality, air pollution, and other relevant environmental conditions\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mDespite the dramatic progress to date, air pollution continues to threaten Americans' health and welfare. The main obstacles are climate change, ...\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: Despite the dramatic progress to date, air pollution continues to threaten Americans' health and welfare. The main obstacles are climate change, ...\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "Despite the dramatic progress to date, air pollution continues to threaten Americans' health and welfare. The main obstacles are climate change, ...\n",
      "\u001b[95m\u001b[1m\n",
      "*****TASK LIST*****\n",
      "\u001b[0m\u001b[0m\n",
      "3: Compare current air quality to air quality standards\n",
      "4: Make predictions about the air quality in Beijing for the next 24 hours\n",
      "5: Research current air quality in Beijing\n",
      "6: Identify any sources of air pollution in Beijing and their potential impacts on air quality\n",
      "7: Investigate the effectiveness of current air quality regulations in Beijing\n",
      "8: Analyze the impact of climate change on air quality in Beijing\n",
      "9: Compare air quality in Beijing to other cities in China\n",
      "10: Examine the health effects of air pollution in Beijing\n",
      "11: Investigate the economic costs of air pollution in Beijing\n",
      "12: Research the potential solutions to improve air quality in Beijing\n",
      "13: Create a timeline of air quality in Beijing over the past decade\n",
      "14: Discuss any trends in the air quality\n",
      "15: Research historical air quality patterns in Beijing\n",
      "16: Analyze current and historical data to determine any trends\n",
      "17: Write a brief summary of the air quality in Beijing\n",
      "18: Create a chart or graph to illustrate the air quality data\n",
      "19: Proofread and edit the report for accuracy and clarity\n",
      "20: Summarize the findings of the report in a concise\n",
      "\u001b[92m\u001b[1m\n",
      "*****NEXT TASK*****\n",
      "\u001b[0m\u001b[0m\n",
      "3: Compare current air quality to air quality standards\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: I need to search for air quality standards\n",
      "Action: Search\n",
      "Action Input: air quality standards\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mAQI values at or below 100 are generally thought of as satisfactory. When AQI values are above 100, air quality is unhealthy: at first for certain sensitive groups of people, then for everyone as AQI values get higher.\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I need to search for current air quality\n",
      "Action: Search\n",
      "Action Input: current air quality\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mGet air quality data where you live. ; Air Quality Index Scale ; 0 - 50. Good ; 51 - 100. Moderate ; 101 - 150. Unhealthy for Sensitive Groups (USG) ; 151 - 200.\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
      "Final Answer: The current air quality in Beijing is Unhealthy for Sensitive Groups (USG).\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\u001b[93m\u001b[1m\n",
      "*****TASK RESULT*****\n",
      "\u001b[0m\u001b[0m\n",
      "The current air quality in Beijing is Unhealthy for Sensitive Groups (USG).\n",
      "\u001b[91m\u001b[1m\n",
      "*****TASK ENDING*****\n",
      "\u001b[0m\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'objective': 'Write a weather report for Beijing today'}"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "baby_agi({\"objective\": OBJECTIVE})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Auto-GPT Assistant "
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"./autogpt.png\"  height=\"20%\"/>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this section we implement AutoGPT on top of LangChain to understand its principles and steps."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we set up the tools: a search tool, a write-file tool, and a read-file tool. We also install some libraries that will be used in this section."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install duckduckgo_search\n",
    "%pip install playwright\n",
    "%pip install bs4\n",
    "%pip install nest_asyncio\n",
    "%pip install tiktoken"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['HTTP_PROXY'] = 'http://127.0.0.1:xxxx'\n",
    "os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:xxxx'\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-xxxxxxxxxxxx\"\n",
    "os.environ['SERPAPI_API_KEY']='--------------Your API KEY----------------------'"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Detailed comments are omitted from the following code because it has already been explained in the Coding Examples. If anything is still unclear, please refer to [here](#coding-exampls)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.utilities import SerpAPIWrapper\n",
    "from langchain.agents import Tool\n",
    "from langchain.tools.file_management.write import WriteFileTool\n",
    "from langchain.tools.file_management.read import ReadFileTool\n",
    "\n",
    "search = SerpAPIWrapper()\n",
    "tools = [\n",
    "    Tool(\n",
    "        name = \"search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events. You should ask targeted questions\"\n",
    "    ),\n",
    "    WriteFileTool(),\n",
    "    ReadFileTool(),\n",
    "]"
   ]
  },
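  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A LangChain `Tool` like the ones defined above is essentially three pieces: a name (shown to the model in the prompt), a callable, and a natural-language description the model uses to decide when to invoke it. A simplified, hypothetical stand-in illustrating that pattern (not the real langchain class):\n",
    "\n",
    "```python\n",
    "from dataclasses import dataclass\n",
    "from typing import Callable\n",
    "\n",
    "@dataclass\n",
    "class SimpleTool:\n",
    "    # Hypothetical, simplified stand-in for langchain.agents.Tool:\n",
    "    # the agent only needs these three pieces to use a tool.\n",
    "    name: str\n",
    "    func: Callable[[str], str]\n",
    "    description: str\n",
    "\n",
    "echo = SimpleTool(\n",
    "    name='echo_search',\n",
    "    func=lambda query: 'you searched: ' + query,\n",
    "    description='toy search tool that echoes the query',\n",
    ")\n",
    "print(echo.func('Beijing weather'))  # you searched: Beijing weather\n",
    "```"
   ]
  },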
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.vectorstores import FAISS\n",
    "from langchain.docstore import InMemoryDocstore\n",
    "from langchain.embeddings import OpenAIEmbeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "embeddings_model = OpenAIEmbeddings()\n",
    "import faiss\n",
    "embedding_size = 1536\n",
    "index = faiss.IndexFlatL2(embedding_size)\n",
    "vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
   ]
  },
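  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here `embedding_size = 1536` matches the dimensionality of OpenAI's embedding vectors, and `IndexFlatL2` performs an exact L2 (Euclidean) nearest-neighbour search over the stored vectors. A toy illustration of that search in plain NumPy (2-dimensional vectors instead of 1536):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Two stored 'document' vectors and one query vector (toy 2-d example;\n",
    "# the notebook uses 1536-d OpenAI embeddings).\n",
    "docs = np.array([[0.0, 1.0], [1.0, 0.0]])\n",
    "query = np.array([0.9, 0.1])\n",
    "\n",
    "# IndexFlatL2 returns the stored vector with the smallest squared L2 distance.\n",
    "sq_dists = ((docs - query) ** 2).sum(axis=1)\n",
    "nearest = int(np.argmin(sq_dists))\n",
    "print(nearest)  # 1: the query is closest to [1.0, 0.0]\n",
    "```"
   ]
  },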
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.experimental.autonomous_agents.autogpt.agent import AutoGPT\n",
    "from langchain.chat_models import ChatOpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "The following code is used to create an intelligent agent named \"agent\" using AutoGPT. \n",
    "Specifically, the intelligent agent will play the role of an assistant and interact \n",
    "with the user using the specified tools and language model (ChatOpenAI) in this case. \n",
    "The parameters ai_name specifies that the name of the intelligent agent is \"Tom\" \n",
    "and ai_role specifies its role as an assistant; the parameter tools specifies a \n",
    "list of tools that the intelligent agent can use; the parameter llm specifies that \n",
    "the language model of the intelligent agent is ChatOpenAI and sets its temperature \n",
    "to 0, which means it will provide accurate answers as much as possible; the parameter \n",
    "memory specifies that the intelligent agent uses vectorstore.as_retriever() as its \n",
    "retriever. Through this part of the code, we can create an intelligent agent with \n",
    "custom roles, tools, and language models, and use the specified retriever to obtain \n",
    "information.\n",
    "\"\"\"\n",
    "agent = AutoGPT.from_llm_and_tools(\n",
    "    ai_name=\"Tom\",\n",
    "    ai_role=\"Assistant\",\n",
    "    tools=tools,\n",
    "    llm=ChatOpenAI(temperature=0),\n",
    "    memory=vectorstore.as_retriever()\n",
    ")\n",
    "agent.chain.verbose = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.run([\"make a plan to travel around the China for a month, and give me five advices\"])\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After running the code, several txt files will be created in the current directory, including but not limited to china_trip_plan.txt, china_info.txt, budget.txt, and flight_info.txt. You can try it yourself by running this notebook."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the run has to be interrupted manually with the \"finish\" command, part of the output from the run above is reproduced here:\n",
    "\n",
    "OUT:\n",
    "\n",
    "\n",
    "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
    "Prompt after formatting:\n",
    "\u001b[32;1m\u001b[1;3mSystem: You are Tom, Assistant\n",
    "Your decisions must always be made independently without seeking user assistance.\n",
    "Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n",
    "If you have completed all your tasks, make sure to use the \"finish\" command.\n",
    "\n",
    "GOALS:\n",
    "\n",
    "1. make a plan to travel around the China for a month, and give me five advices\n",
    "\n",
    "\n",
    "Constraints:\n",
    "1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n",
    "2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n",
    "3. No user assistance\n",
    "4. Exclusively use the commands listed in double quotes e.g. \"command name\"\n",
    "\n",
    "Commands:\n",
    "1. search: useful for when you need to answer questions about current events. You should ask targeted questions, args json schema: {\"query\": {\"title\": \"Query\", \"type\": \"string\"}}\n",
    "2. write_file: Write file to disk, args json schema: {\"file_path\": {\"title\": \"File Path\", \"description\": \"name of file\", \"type\": \"string\"}, \"text\": {\"title\": \"Text\", \"description\": \"text to write to file\", \"type\": \"string\"}}\n",
    "3. read_file: Read file from disk, args json schema: {\"file_path\": {\"title\": \"File Path\", \"description\": \"name of file\", \"type\": \"string\"}}\n",
    "4. finish: use this to signal that you have finished all your objectives, args: \"response\": \"final response to let people know you have finished your objectives\"\n",
    "\n",
    "Resources:\n",
    "1. Internet access for searches and information gathering.\n",
    "2. Long Term memory management.\n",
    "3. GPT-3.5 powered Agents for delegation of simple tasks.\n",
    "4. File output.\n",
    "\n",
    "Performance Evaluation:\n",
    "1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n",
    "2. Constructively self-criticize your big-picture behavior constantly.\n",
    "3. Reflect on past decisions and strategies to refine your approach.\n",
    "4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n",
    "\n",
    "You should only respond in JSON format as described below \n",
    "Response Format: \n",
    "{\n",
    "    \"thoughts\": {\n",
    "        \"text\": \"thought\",\n",
    "        \"reasoning\": \"reasoning\",\n",
    "        \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n",
    "        \"criticism\": \"constructive self-criticism\",\n",
    "        \"speak\": \"thoughts summary to say to user\"\n",
    "    },\n",
    "    \"command\": {\n",
    "        \"name\": \"command name\",\n",
    "        \"args\": {\n",
    "            \"arg name\": \"value\"\n",
    "        }\n",
    "    }\n",
    "} \n",
    "Ensure the response can be parsed by Python json.loads\n",
    "System: The current time and date is Wed Apr 26 16:17:15 2023\n",
    "System: This reminds you of these events from your past:\n",
    "['Assistant Reply: {\"thoughts\": {\"text\": \"I will use the information I retrieved from the affordable accommodations and transportation file to create a budget for my trip. I will use the write_file command to save the budget to a file. Then, I will use the search command to find popular tourist attractions in China and save the information to a file. Finally, I will use the write_file command to save a packing list to a file.\", \"reasoning\": \"I have retrieved the information I need to create a budget for my trip. I also need to find popular tourist attractions in China and create a packing list. I will use the write_file command to save both the budget and packing list to files.\", \"plan\": \"- Use the information from the affordable accommodations and transportation file to create a budget for my trip.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n- Use the write_file command to save the budget to a file.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n- Use the search command to find popular tourist attractions in China and save the information to a file.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n- Use the write_file command to save a packing list to a file.\", \"criticism\": \"I need to make sure I am not spending too much time on planning and that I am making progress towards actually traveling to China.\", \"speak\": \"I am going to use the information I retrieved from the affordable accommodations and transportation file to create a budget for my trip. I will use the write_file command to save the budget to a file. Then, I will use the search command to find popular tourist attractions in China and save the information to a file. 
Finally, I will use the write_file command to save a packing list to a file.\"}, \"command\": {\"name\": \"write_file\", \"args\": {\"file_path\": \"budget.txt\", \"text\": \"Budget for China Trip:\\\\\\\\nAccommodation: $80 per day\\\\\\\\nTransportation: $3 per day\\\\\\\\nFood: $30 per day\\\\\\\\nTotal: $113 per day\\\\\\\\nTotal for 30 days: $3,390\"}}} \\nResult: Command write_file returned: File written successfully to budget.txt. ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"Now that I have all the necessary information saved to files, I can start planning my trip. I will use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China. Then, I will use this information to create a budget for my trip. After that, I will use the search command to find popular tourist attractions in China and save the information to a file. Finally, I will use the write_file command to save a packing list to a file.\",\\n        \"reasoning\": \"I have all the necessary information saved to files. I can use the information I saved earlier about affordable accommodations and transportation in China to create a budget for my trip. I also need to find popular tourist attractions in China and create a packing list. 
I will use the write_file command to save both the budget and packing list to files.\",\\n        \"plan\": \"- Use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China.\\\\\\\\\\\\\\\\n- Use this information to create a budget for my trip.\\\\\\\\\\\\\\\\n- Use the search command to find popular tourist attractions in China and save the information to a file.\\\\\\\\\\\\\\\\n- Use the write_file command to save a packing list to a file.\",\\n        \"criticism\": \"I need to make sure I am not spending too much time on planning and that I am making progress towards actually traveling to China.\",\\n        \"speak\": \"Now that I have all the necessary information saved to files, I am going to use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China. Then, I will use this information to create a budget for my trip. After that, I will use the search command to find popular tourist attractions in China and save the information to a file. Finally, I will use the write_file command to save a packing list to a file.\"\\n    },\\n    \"command\": {\\n        \"name\": \"read_file\",\\n        \"args\": {\\n            \"file_path\": \"china_accommodations_transportation.txt\"\\n        }\\n    }\\n} \\nResult: Command read_file returned: Accommodation: $80 per day; Transportation: $3 per day; Food: $30 per day ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"I am going to use the write_file command to save the information I found earlier about affordable accommodations and transportation in China to a file. This will make it easier for me to access the information later when I need it.\", \\n        \"reasoning\": \"I have found affordable accommodations and transportation options for my trip to China. 
I want to make sure I have easy access to this information later when I need it.\", \\n        \"plan\": \"- Use the write_file command to save the information about affordable accommodations and transportation in China to a file\", \\n        \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually traveling to China.\", \\n        \"speak\": \"I am going to use the write_file command to save the information I found earlier about affordable accommodations and transportation in China to a file.\"\\n    }, \\n    \"command\": {\\n        \"name\": \"write_file\", \\n        \"args\": {\\n            \"file_path\": \"china_info.txt\", \\n            \"text\": \"Accommodation: $80 per day; Transportation: $3 per day; Food: $30 per ...\"\\n        }\\n    }\\n} \\nResult: Command write_file returned: File written successfully to china_info.txt. ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"I will use the write_file command to save the information about affordable accommodations and transportation in China to a file. Then, I will use the search command to find affordable flights to China and use the write_file command to save the flight information to a file.\",\\n        \"reasoning\": \"I have found information about affordable accommodations and transportation in China using the search command. I need to save this information to a file so that I can refer to it later. 
I also need to find affordable flights to China and save this information to a file.\",\\n        \"plan\": \"- Use the write_file command to save the information about affordable accommodations and transportation in China to a file.\\\\\\\\\\\\\\\\n- Use the search command to find affordable flights to China and use the write_file command to save the flight information to a file.\",\\n        \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually traveling to China.\",\\n        \"speak\": \"I am going to use the write_file command to save the information about affordable accommodations and transportation in China to a file. Then, I will use the search command to find affordable flights to China and use the write_file command to save the flight information to a file.\"\\n    },\\n    \"command\": {\\n        \"name\": \"write_file\",\\n        \"args\": {\\n            \"file_path\": \"china_accommodations_transportation.txt\",\\n            \"text\": \"Accommodation: $80 per day; Transportation: $3 per day; Food: $30 per day\"\\n        }\\n    }\\n} \\nResult: Command write_file returned: File written successfully to china_accommodations_transportation.txt. ']\n",
    "\n",
    "\n",
    "Human: Determine which next command to use, and respond using the format specified above:\u001b[0m\n",
    "\n",
    "\u001b[1m> Finished chain.\u001b[0m\n",
    "{\n",
    "    \"thoughts\": {\n",
    "        \"text\": \"Now that I have all the necessary information saved to files, I can start planning my trip. I will use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China. Then, I will use this information to create a budget for my trip. After that, I will use the search command to find popular tourist attractions in China and save the information to a file. Finally, I will use the write_file command to save a packing list to a file.\",\n",
    "        \"reasoning\": \"I have all the necessary information saved to files. I can use the information I saved earlier about affordable accommodations and transportation in China to create a budget for my trip. I also need to find popular tourist attractions in China and create a packing list. I will use the write_file command to save both the budget and packing list to files.\",\n",
    "        \"plan\": \"- Use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China.\\\\n- Use this information to create a budget for my trip.\\\\n- Use the search command to find popular tourist attractions in China and save the information to a file.\\\\n- Use the write_file command to save a packing list to a file.\",\n",
    "        \"criticism\": \"I need to make sure I am not spending too much time on planning and that I am making progress towards actually traveling to China.\",\n",
    "        \"speak\": \"Now that I have all the necessary information saved to files, I am going to use the read_file command to access the information I saved earlier about affordable accommodations and transportation in China. Then, I will use this information to create a budget for my trip. After that, I will use the search command to find popular tourist attractions in China and save the information to a file. Finally, I will use the write_file command to save a packing list to a file.\"\n",
    "    },\n",
    "    \"command\": {\n",
    "        \"name\": \"read_file\",\n",
    "        \"args\": {\n",
    "            \"file_path\": \"china_accommodations_transportation.txt\"\n",
    "        }\n",
    "    }\n",
    "}\n",
    "\n",
    "\n",
    "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
    "Prompt after formatting:\n",
    "\u001b[32;1m\u001b[1;3mSystem: You are Tom, Assistant\n",
    "Your decisions must always be made independently without seeking user assistance.\n",
    "Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n",
    "If you have completed all your tasks, make sure to use the \"finish\" command.\n",
    "\n",
    "GOALS:\n",
    "\n",
    "1. make a plan to travel around the China for a month, and give me five advices\n",
    "\n",
    "\n",
    "Constraints:\n",
    "1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n",
    "2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n",
    "3. No user assistance\n",
    "4. Exclusively use the commands listed in double quotes e.g. \"command name\"\n",
    "\n",
    "Commands:\n",
    "1. search: useful for when you need to answer questions about current events. You should ask targeted questions, args json schema: {\"query\": {\"title\": \"Query\", \"type\": \"string\"}}\n",
    "2. write_file: Write file to disk, args json schema: {\"file_path\": {\"title\": \"File Path\", \"description\": \"name of file\", \"type\": \"string\"}, \"text\": {\"title\": \"Text\", \"description\": \"text to write to file\", \"type\": \"string\"}}\n",
    "3. read_file: Read file from disk, args json schema: {\"file_path\": {\"title\": \"File Path\", \"description\": \"name of file\", \"type\": \"string\"}}\n",
    "4. finish: use this to signal that you have finished all your objectives, args: \"response\": \"final response to let people know you have finished your objectives\"\n",
    "\n",
    "Resources:\n",
    "1. Internet access for searches and information gathering.\n",
    "2. Long Term memory management.\n",
    "3. GPT-3.5 powered Agents for delegation of simple tasks.\n",
    "4. File output.\n",
    "\n",
    "Performance Evaluation:\n",
    "1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n",
    "2. Constructively self-criticize your big-picture behavior constantly.\n",
    "3. Reflect on past decisions and strategies to refine your approach.\n",
    "4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n",
    "\n",
    "You should only respond in JSON format as described below \n",
    "Response Format: \n",
    "{\n",
    "    \"thoughts\": {\n",
    "        \"text\": \"thought\",\n",
    "        \"reasoning\": \"reasoning\",\n",
    "        \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n",
    "        \"criticism\": \"constructive self-criticism\",\n",
    "        \"speak\": \"thoughts summary to say to user\"\n",
    "    },\n",
    "    \"command\": {\n",
    "        \"name\": \"command name\",\n",
    "        \"args\": {\n",
    "            \"arg name\": \"value\"\n",
    "        }\n",
    "    }\n",
    "} \n",
    "Ensure the response can be parsed by Python json.loads\n",
    "System: The current time and date is Wed Apr 26 16:18:07 2023\n",
    "System: This reminds you of these events from your past:\n",
    "['Assistant Reply: {\"thoughts\": {\"text\": \"I will use the search command to find popular tourist attractions in China and save the information to a file. I should make sure to include the location, hours of operation, and admission fees for each attraction. Then, I will use the write_file command to save the information to a file.\", \"reasoning\": \"I need to find popular tourist attractions in China and save the information to a file. I will use the search command to find this information and the write_file command to save it to a file.\", \"plan\": \"- Use the search command to find popular tourist attractions in China and save the information to a file.\\\\\\\\\\\\\\\\n- Use the write_file command to save the information to a file.\", \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually traveling to China.\", \"speak\": \"I am going to use the search command to find popular tourist attractions in China and save the information to a file. Then, I will use the write_file command to save the information to a file.\"}, \"command\": {\"name\": \"search\", \"args\": {\"query\": \"popular tourist attractions in China\"}}} \\nResult: Command search returned: China, officially the People\\'s Republic of China, is a country in East Asia. It has been the world\\'s most populous country since at least 1950, with a population exceeding 1.4 billion, but is expected to be overtaken by India in mid-2023. ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"I will use the search command to find information about the best places to visit in China. This will help me plan my itinerary for the month.\", \\n        \"reasoning\": \"Now that I have booked my accommodations, transportation, and flights to China, I need to plan my itinerary. I will use the search command to find information about the best places to visit in China.\", \\n        \"plan\": \"- Use the search command to find information about the best places to visit in China\", \\n        \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually traveling to China.\", \\n        \"speak\": \"I am going to use the search command to find information about the best places to visit in China.\"\\n    }, \\n    \"command\": {\\n        \"name\": \"search\", \\n        \"args\": {\\n            \"query\": \"best places to visit in China\"\\n        }\\n    }\\n} \\nResult: Command search returned: China, officially the People\\'s Republic of China, is a country in East Asia. It has been the world\\'s most populous country since at least 1950, with a population exceeding 1.4 billion, but is expected to be overtaken by India in mid-2023. ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"I need to find the best places to visit in China. I should search for articles or travel guides that list the top tourist destinations in China.\",\\n        \"reasoning\": \"I want to make sure I don\\'t miss any important tourist destinations in China.\",\\n        \"plan\": \"- Search for articles or travel guides that list the top tourist destinations in China\",\\n        \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually planning my trip.\",\\n        \"speak\": \"I am going to search for articles or travel guides that list the top tourist destinations in China.\"\\n    },\\n    \"command\": {\\n        \"name\": \"search\",\\n        \"args\": {\\n            \"query\": \"best tourist destinations in China\"\\n        }\\n    }\\n} \\nResult: Command search returned: China, officially the People\\'s Republic of China, is a country in East Asia. It has been the world\\'s most populous country since at least 1950, with a population exceeding 1.4 billion, but is expected to be overtaken by India in mid-2023. ', 'Assistant Reply: {\\n    \"thoughts\": {\\n        \"text\": \"I need to start planning my trip to China. I should start by researching the best places to visit and the most efficient way to travel between them.\",\\n        \"reasoning\": \"I need to gather information to make informed decisions about my trip.\",\\n        \"plan\": \"- Research the best places to visit in China\\\\n- Research the most efficient way to travel between destinations\\\\n- Create a rough itinerary for my trip\",\\n        \"criticism\": \"I need to make sure I am not spending too much time on research and that I am making progress towards actually planning my trip.\",\\n        \"speak\": \"I am going to start by researching the best places to visit in China and the most efficient way to travel between them.\"\\n    },\\n    \"command\": {\\n        \"name\": \"search\",\\n        \"args\": {\\n            \"query\": \"best places to visit in China\"\\n        }\\n    }\\n} \\nResult: Command search returned: China, officially the People\\'s Republic of China, is a country in East Asia. It has been the world\\'s most populous country since at least 1950, with a population exceeding 1.4 billion, but is expected to be overtaken by India in mid-2023. ']\n",
    "\n",
    "\n",
    "Human: Determine which next command to use, and respond using the format specified above:\u001b[0m\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following example uses AutoGPT to search for winning marathon times. It calls the GPT-4 model, so make sure the environment required for GPT-4 API access is set up in advance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import pandas as pd\n",
    "from langchain.experimental.autonomous_agents.autogpt.agent import AutoGPT\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
    "from langchain.docstore.document import Document\n",
    "\"\"\"\n",
    "asyncio is a Python standard library for writing asynchronous code. \n",
    "nest_asyncio is a third-party library that patches asyncio to allow nested \n",
    "event loops, which is needed to run asynchronous code in environments such \n",
    "as Jupyter that already run their own event loop. Calling nest_asyncio.apply() \n",
    "applies this patch.\n",
    "\"\"\"\n",
    "import asyncio\n",
    "import nest_asyncio\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = ChatOpenAI(model_name=\"gpt-4\", temperature=1.0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from contextlib import contextmanager\n",
    "from typing import Optional\n",
    "from langchain.agents import tool\n",
    "from langchain.tools.file_management.read import ReadFileTool\n",
    "from langchain.tools.file_management.write import WriteFileTool\n",
    "\n",
    "ROOT_DIR = \"./data/\"\n",
    "\"\"\"\n",
    "pushd defines a context manager for temporarily changing the current working \n",
    "directory. It takes one parameter, new_dir, the path of the new working \n",
    "directory. On entering the context manager, the current working directory is \n",
    "saved as prev_dir and then changed to new_dir with os.chdir(). On leaving the \n",
    "context manager, prev_dir is restored as the current working directory, even \n",
    "if an exception was raised inside the block. This keeps paths and file names \n",
    "in the code correct: for example, a script can switch into the directory \n",
    "containing a dataset, run preprocessing or training tasks, and then \n",
    "automatically switch back to the original directory afterwards.\n",
    "\"\"\"\n",
    "@contextmanager\n",
    "def pushd(new_dir):\n",
    "    \"\"\"Context manager for changing the current working directory.\"\"\"\n",
    "    prev_dir = os.getcwd()\n",
    "    os.chdir(new_dir)\n",
    "    try:\n",
    "        yield\n",
    "    finally:\n",
    "        os.chdir(prev_dir)\n",
    "\n",
    "\"\"\"\n",
    "process_csv processes a CSV file according to natural-language instructions. \n",
    "It accepts three parameters: csv_file_path (the path of the CSV file), \n",
    "instructions (a string of instructions), and output_path (an optional output \n",
    "path). The function reads the CSV file with pandas, creates a pandas \n",
    "DataFrame agent, and runs the instructions against the DataFrame; if an \n",
    "output path is given, an instruction to save the result there is appended. \n",
    "If an exception occurs during processing, an error message is returned. \n",
    "Note that @tool is a decorator, not a comment: it registers the function as \n",
    "a LangChain tool, and the function's docstring becomes the tool description \n",
    "shown to the model.\n",
    "\"\"\"\n",
    "@tool\n",
    "def process_csv(\n",
    "    csv_file_path: str, instructions: str, output_path: Optional[str] = None\n",
    ") -> str:\n",
    "    \"\"\"Process a CSV with pandas in a limited REPL.\\\n",
    " Only use this after writing data to disk as a csv file.\\\n",
    " Any figures must be saved to disk to be viewed by the human.\\\n",
    " Instructions should be written in natural language, not code. Assume the dataframe is already loaded.\"\"\"\n",
    "    with pushd(ROOT_DIR):\n",
    "        try:\n",
    "            df = pd.read_csv(csv_file_path)\n",
    "        except Exception as e:\n",
    "            return f\"Error: {e}\"\n",
    "        agent = create_pandas_dataframe_agent(llm, df, max_iterations=30, verbose=True)\n",
    "        if output_path is not None:\n",
    "            instructions += f\" Save output to disk at {output_path}\"\n",
    "        try:\n",
    "            result = agent.run(instructions)\n",
    "            return result\n",
    "        except Exception as e:\n",
    "            return f\"Error: {e}\""
   ]
  },
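  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick standalone check (added for illustration, not part of the original tutorial), the `pushd` semantics can be verified directly: the working directory changes inside the `with` block and is restored afterwards. `tempfile.TemporaryDirectory` is used here only to provide a safe directory to switch into."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tempfile\n",
    "\n",
    "start_dir = os.getcwd()\n",
    "with tempfile.TemporaryDirectory() as tmp:\n",
    "    with pushd(tmp):\n",
    "        # inside the block, the temporary directory is current\n",
    "        assert os.path.samefile(os.getcwd(), tmp)\n",
    "# after the block, the original directory is restored\n",
    "assert os.getcwd() == start_dir"
   ]
  },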
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def async_load_playwright(url: str) -> str:\n",
    "    \"\"\"Load the given URL with Playwright and extract the visible text with BeautifulSoup.\"\"\"\n",
    "    from bs4 import BeautifulSoup\n",
    "    from playwright.async_api import async_playwright\n",
    "\n",
    "    results = \"\" \n",
    "    async with async_playwright() as p:\n",
    "        browser = await p.chromium.launch(headless=True)\n",
    "        try:\n",
    "            page = await browser.new_page()\n",
    "            await page.goto(url)\n",
    "\n",
    "            page_source = await page.content()\n",
    "            soup = BeautifulSoup(page_source, \"html.parser\")\n",
    "\n",
    "            for script in soup([\"script\", \"style\"]):\n",
    "                script.extract()\n",
    "\n",
    "            text = soup.get_text()\n",
    "            lines = (line.strip() for line in text.splitlines())\n",
    "            chunks = (phrase.strip() for line in lines for phrase in line.split(\"  \"))\n",
    "            results = \"\\n\".join(chunk for chunk in chunks if chunk)\n",
    "        except Exception as e:\n",
    "            results = f\"Error: {e}\"\n",
    "        await browser.close()\n",
    "    return results\n",
    "\n",
    "def run_async(coro):\n",
    "    \"\"\"Run a coroutine to completion on the current event loop.\"\"\"\n",
    "    event_loop = asyncio.get_event_loop()\n",
    "    return event_loop.run_until_complete(coro)\n",
    "\n",
    "@tool\n",
    "def browse_web_page(url: str) -> str:\n",
    "    \"\"\"Verbose way to scrape a whole webpage. Likely to cause issues parsing.\"\"\"\n",
    "    return run_async(async_load_playwright(url))"
   ]
  },
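  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The generator pipeline in `async_load_playwright` normalizes the scraped text: it strips each line, splits lines into phrases on double spaces, and drops empty chunks. A small standalone example (added for illustration) of the same pipeline on a plain string:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "text = \"Title  \\n\\n  First paragraph.  Second phrase. \\n\"\n",
    "lines = (line.strip() for line in text.splitlines())\n",
    "chunks = (phrase.strip() for line in lines for phrase in line.split(\"  \"))\n",
    "# prints the three cleaned lines: Title / First paragraph. / Second phrase.\n",
    "print(\"\\n\".join(chunk for chunk in chunks if chunk))"
   ]
  },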
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.tools import BaseTool, DuckDuckGoSearchTool\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "from pydantic import Field\n",
    "from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain, BaseCombineDocumentsChain\n",
    "\n",
    "\"\"\"\n",
    "_get_text_splitter returns a RecursiveCharacterTextSplitter object, a tool \n",
    "for splitting text into smaller chunks. Its parameters control the behavior \n",
    "of the split: the chunk size, the overlap between consecutive chunks, and \n",
    "the function used to measure length. They can be adjusted to fit specific \n",
    "requirements.\n",
    "\"\"\"\n",
    "def _get_text_splitter():\n",
    "    return RecursiveCharacterTextSplitter(\n",
    "        chunk_size = 500,\n",
    "        chunk_overlap  = 20,\n",
    "        length_function = len,\n",
    "    )\n",
    "\n",
    "\"\"\"\n",
    "WebpageQATool inherits from BaseTool and implements browsing a webpage and \n",
    "extracting the information relevant to a question. It defines two properties: \n",
    "text_splitter, which stores a text splitter object, and qa_chain, which \n",
    "stores a question-answering chain. In the _run method, browse_web_page.run \n",
    "fetches the page text, the text is wrapped in a Document object and split \n",
    "with text_splitter, the qa_chain is applied to the chunks in windows of \n",
    "four, and the per-window answers are combined by one final qa_chain call. \n",
    "The asynchronous _arun method is not yet implemented: it simply raises \n",
    "NotImplementedError, so anyone who needs the asynchronous version must \n",
    "complete it first.\n",
    "\"\"\"\n",
    "class WebpageQATool(BaseTool):\n",
    "    name = \"query_webpage\"\n",
    "    description = \"Browse a webpage and retrieve the information relevant to the question.\"\n",
    "    text_splitter: RecursiveCharacterTextSplitter = Field(default_factory=_get_text_splitter)\n",
    "    qa_chain: BaseCombineDocumentsChain\n",
    "\n",
    "    def _run(self, url: str, question: str) -> str:\n",
    "        result = browse_web_page.run(url)\n",
    "        docs = [Document(page_content=result, metadata={\"source\": url})]\n",
    "        web_docs = self.text_splitter.split_documents(docs)\n",
    "        results = []\n",
    "        for i in range(0, len(web_docs), 4):\n",
    "            input_docs = web_docs[i:i+4]\n",
    "            window_result = self.qa_chain({\"input_documents\": input_docs, \"question\": question}, return_only_outputs=True)\n",
    "            results.append(f\"Response from window {i} - {window_result}\")\n",
    "        results_docs = [Document(page_content=\"\\n\".join(results), metadata={\"source\": url})]\n",
    "        return self.qa_chain({\"input_documents\": results_docs, \"question\": question}, return_only_outputs=True)\n",
    "\n",
    "    async def _arun(self, url: str, question: str) -> str:\n",
    "        raise NotImplementedError"
   ]
  },
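  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small illustration (added, not from the original tutorial) of the windowing logic in `_run` above: the split documents are grouped into batches of 4, and `qa_chain` is called once per batch. With plain integers standing in for the document chunks, the grouping looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "chunks = list(range(10))  # stand-ins for the split Document objects\n",
    "windows = [chunks[i:i + 4] for i in range(0, len(chunks), 4)]\n",
    "windows  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]"
   ]
  },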
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_website_tool = WebpageQATool(qa_chain=load_qa_with_sources_chain(llm))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import faiss\n",
    "from langchain.vectorstores import FAISS\n",
    "from langchain.docstore import InMemoryDocstore\n",
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.tools.human.tool import HumanInputRun\n",
    "\n",
    "embeddings_model = OpenAIEmbeddings()\n",
    "embedding_size = 1536  # dimensionality of the OpenAI embedding vectors\n",
    "index = faiss.IndexFlatL2(embedding_size)\n",
    "vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install duckduckgo_search\n",
    "web_search = DuckDuckGoSearchTool()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "tools = [\n",
    "    web_search,\n",
    "    WriteFileTool(root_dir=\"./data\"),\n",
    "    ReadFileTool(root_dir=\"./data\"),\n",
    "    process_csv,\n",
    "    query_website_tool,\n",
    "    # human_in_the_loop=True\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = AutoGPT.from_llm_and_tools(\n",
    "    ai_name=\"Tom\",\n",
    "    ai_role=\"Assistant\",\n",
    "    tools=tools,\n",
    "    llm=llm,\n",
    "    memory=vectorstore.as_retriever(search_kwargs={\"k\": 8}),\n",
    "    # human_in_the_loop=True\n",
    ")\n",
    "# agent.chain.verbose = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.run([\"What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times.\"])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🌹The above content is adapted from the [LangChain official tutorial](https://python.langchain.com/en/latest/index.html). Any commercial use will be specially noted. Please do not repost this document without permission. If you have any suggestions or corrections, please [contact us✉️](mailto:helloegoalpha@gmail.com). Thank you!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.15"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
