{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bd2b2eba-b7fd-4856-960f-f2cbadcc12af",
   "metadata": {},
   "source": [
     "# Building an Exa Search-Powered Data Agent\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/llama-index-integrations/tools/llama-index-tools-exa/examples/exa.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n",
    "This tutorial walks through using the LLM tools provided by the [Exa API](https://exa.ai) to allow LLMs to use semantic queries to search for and retrieve rich web content from the internet.\n",
    "\n",
     "To get started, you will need an [OpenAI API key](https://platform.openai.com/account/api-keys) and an [Exa API key](https://dashboard.exa.ai/api-keys).\n",
    "\n",
    "We will import the relevant agents and tools and pass them our keys here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aa61e0d0",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install the relevant LlamaIndex packages, including core and the Exa tool\n",
    "!pip install llama-index llama-index-core llama-index-tools-exa"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df2a0ecd-22e9-4cef-b069-89e4286e4d75",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "search\n",
      "retrieve_documents\n",
      "search_and_retrieve_documents\n",
      "search_and_retrieve_highlights\n",
      "find_similar\n",
      "current_date\n"
     ]
    }
   ],
   "source": [
     "# os is used to read the API keys from environment variables\n",
     "import os\n",
     "\n",
     "# Agent and LLM classes\n",
     "from llama_index.core.agent.workflow import FunctionAgent\n",
     "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "# NOTE:\n",
    "# You must have an OpenAI API key in the environment variable OPENAI_API_KEY\n",
    "# You must have an Exa API key in the environment variable EXA_API_KEY\n",
    "\n",
     "# Set up the Exa search tool\n",
     "from llama_index.tools.exa import ExaToolSpec\n",
     "\n",
     "# Instantiate the tool spec with our Exa API key\n",
     "exa_tool = ExaToolSpec(\n",
    "    api_key=os.environ[\"EXA_API_KEY\"],\n",
    "    # max_characters=2000   # this is the default\n",
    ")\n",
    "\n",
    "# Get the list of tools to see what Exa offers\n",
    "exa_tool_list = exa_tool.to_tool_list()\n",
    "for tool in exa_tool_list:\n",
    "    print(tool.metadata.name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe8e3012-bab0-4e55-858a-e3721282552c",
   "metadata": {},
   "source": [
    "## Testing the Exa tools\n",
    "\n",
     "We've imported our agent and LLM classes, set up the API keys, and initialized our tool, listing the methods it makes available. Let's test out the tools before setting up our agent.\n",
    "\n",
     "All of the Exa search tools use Exa's autoprompt option, which passes the query through an LLM to refine it in line with Exa query best practices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e64da618-b4ab-42d7-903d-f4eeb624f43c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a comprehensive guide on machine learning transformers:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='af546f08-f706-40d3-a4fd-6c8613b0bab6', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='The famous paper “Attention is all you need” in 2017 changed the way we were thinking about attention. With enough data, matrix multiplications, linear layers, and layer normalization we can perform state-of-the-art-machine-translation. Nonetheless, 2020 was definitely the year of transformers! From natural language now they are into computer vision tasks. How did we go from attention to self-attention? Why does the transformer work so damn well? What are the critical components for its success? Read on and find out! In my opinion, transformers are not so hard to grasp. It\\'s the combination of all the surrounding concepts that may be confusing, including attention. That’s why we will slowly build around all the fundamental concepts. With Recurrent Neural Networks (RNN’s) we used to treat sequences sequentially to keep the order of the sentence in place. To satisfy that design, each RNN component (layer) needs the previous (hidden) output. As such, stacked LSTM computations were performed sequentially. Until transformers came out! The fundamental building block of a transformer is self-attention. To begin with, we need to get over sequential processing, recurrency, and LSTM’s! How? By simply changing the input representation! For a complete book to guide your learning on NLP, take a look at the \"Deep Learning for Natural Language Processing\" book. Use the code aisummer35 to get an exclusive 35% discount from your favorite AI blog :) Representing the input sentence Sets and tokenization The transformer revolution started with a simple question: Why don’t we feed the entire input sequence? No dependencies between hidden states! That might be cool! 
As an example the sentence “hello, I love you”:\\n\\nThis processing step is usually called tokenization and it\\'s the first out of three steps before we feed the input in the model. So instead of a sequence of elements, we now have a set. Sets are a collection of distinct elements, where the arrangement of the elements in the set', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='cff34bc3-e330-4b8d-adf9-052bb5bfe4cd', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Deep Learning\\nNatural Language Processing\\nAwesome Guides\\nExplaining BERT Simply Using Sketches By Rahul Agarwal\\n24 July 2021 In my last series of posts on Transformers, I talked about how a transformer works and how to implement one yourself for a translation task. read more', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='483a62bf-d7bb-471f-ba4e-0880bb43a1a9', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='“The preeminent book for the preeminent transformers library—a model of clarity!”\\n—Jeremy Howard, cofounder of fast.ai and professor at University of Queensland\\n\\n“A wonderfully clear and incisive guide to modern NLP’s most essential library. Recommended!”\\n—Christopher Manning, Thomas M. Siebel Professor in Machine Learning, Stanford University\\n\\nBuy the book on Amazon\\nRead the book online at O’Reilly\\nDownload the book’s code \\nSince their introduction in 2017, transformers have quickly become the dominant architecture for achieving state-of-the-art results on a variety of natural language processing tasks. If you’re a data scientist or coder, this practical book shows you how to train and scale these large models using Hugging Face Transformers, a Python-based deep learning library.\\nTransformers have been used to write realistic news stories, improve Google Search queries, and even create chatbots that tell corny jokes. In this guide, authors Lewis Tunstall, Leandro von Werra, and Thomas Wolf use a hands-on approach to teach you how transformers work and how to integrate them in your applications. 
You’ll quickly learn a variety of tasks they can help you solve.\\n\\nBuild, debug, and optimize transformer models for core NLP tasks, such as text classification, named entity recognition, and question answering\\n Learn how transformers can be used for cross-lingual transfer learning\\n Apply transformers in real-world scenarios where labeled data is scarce\\n Make transformer models efficient for deployment using techniques such as distillation, pruning, and quantization\\n Train transformers from scratch and learn how to scale to multiple GPUs and distributed environments\\n\\nNews 🗞️\\nJanuary 31, 2022\\nLewis will be joining Abhishek Thakur to talk about the book and various techniques you can use to optimize Transformer models for production environments. We’ll also be giving away 5 copies of the book – join the event here!\\nJune 17, 2022\\nDue to the popularity of the book, O’Reilly has ', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "exa_tool.search_and_retrieve_documents(\"machine learning transformers\", num_results=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c55a0c7b-4c58-4725-8543-29bb1b7278ed",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'title': 'Transformers: a Primer',\n",
       "  'url': 'http://www.columbia.edu/~jsl2239/transformers.html',\n",
       "  'id': 'http://www.columbia.edu/~jsl2239/transformers.html'},\n",
       " {'title': 'Illustrated Guide to Transformers- Step by Step Explanation',\n",
       "  'url': 'https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0?gi=8fe76db5c4d9',\n",
       "  'id': 'https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0?gi=8fe76db5c4d9'},\n",
       " {'title': 'The Transformer Attention Mechanism - MachineLearningMastery.com',\n",
       "  'url': 'https://machinelearningmastery.com/the-transformer-attention-mechanism/',\n",
       "  'id': 'https://machinelearningmastery.com/the-transformer-attention-mechanism/'}]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "exa_tool.find_similar(\n",
    "    \"https://www.mihaileric.com/posts/transformers-attention-in-disguise/\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1fc8665d-ddb8-411f-b187-93a132d19e7a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a summary of recent research around diffusion models:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='855f04cf-ec0c-462e-8fa7-c8cc23e54fc4', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Getting-started-with-Diffusion-Literature\\nSummary of the most important papers and blogs about diffusion models for students to learn about diffusion models. Also contains an overview of all published robotics diffusion papers\\nLearning about Diffusion models\\nWhile there exist many tutorials for Diffusion models, below you can find an overview of some introduction blog posts and video, which I found the most intuitive and useful:\\n\\nWhat are Diffusion Models?: an introduction video, which introduces the general idea of diffusion models and some high-level math about how the model works\\n Generative Modeling by Estimating Gradients of the Data Distribution: blog post from the one of the most influential authors in this area, which introduces diffusion models from the score-based perspective\\n What are Diffusion Models: a in-depth blog post about the theory of diffusion models with a general summary on how diffusion model improved over time\\n Understanding Diffusion Models: an in-depth explanation paper, which explains the diffusion models from both perspectives with detailed derivations\\n\\nIf you don\\'t like reading blog posts and prefer the original papers, below you can find a list with the most important diffusion theory papers:\\n\\npaper link, Sohl-Dickstein, Jascha, et al. \"Deep unsupervised learning using nonequilibrium thermodynamics.\" International Conference on Machine Learning. PMLR, 2015.\\n paper link, Ho, Jonathan, Ajay Jain, and Pieter Abbeel. \"Denoising diffusion probabilistic models.\" Advances in Neural Information Processing Systems 33 (2020): 6840-6851.\\n paper link,Song, Yang, et al. \"Score-Based Generative Modeling through Stochastic Differential Equations.\" International Conference on Learning Representations. 
2020.\\n paper link, Ho, Jonathan, and Tim Salimans. \"Classifier-Free Diffusion Guidance.\" NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications. 2021.\\n\\nOur current model implementation is based on this paper:\\n\\npaper link, Karras, Tero', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "exa_tool.search_and_retrieve_documents(\n",
    "    \"This is a summary of recent research around diffusion models:\", num_results=1\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9325841-9f9a-4b9e-a602-fe542be8f364",
   "metadata": {},
   "source": [
    "While `search_and_retrieve_documents` returns raw text from the source document, `search_and_retrieve_highlights` returns relevant curated snippets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a64b37f6-ec12-45e8-9291-fa2fe51de311",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a summary of recent research around diffusion models:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='38203997-c7f6-49d7-b977-ae3d57e22d26', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Diffusion models are a type of generative model and in this field, the main focus are vision based applications, thus all theory papers mentioned in the text below are mostly focused on image synthesis or similar tasks related to it. Diffusion models can be viewed from two perspectives: one is based on the initial idea of of Sohl-Dickstein et al., (2015) and the other is based on a different direction of research: score-based generative models. Song & Ermon, (2019) introduced the score-based generative models category. They presented the noise-conditioned score network (NCSN), which is a predecessor to diffusion model. The main idea of the paper is to learn the score function of the unknown data distribution with a neural network.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "exa_tool.search_and_retrieve_highlights(\n",
    "    \"This is a summary of recent research around diffusion models:\", num_results=1\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f2caa3a",
   "metadata": {},
   "source": [
    "### Exploring other Exa functionalities\n",
    "There are additional parameters that you can pass to Exa methods."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff324312",
   "metadata": {},
   "source": [
     "You can filter results based on the date a document was published:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "157d4aef",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a recent advancement in quantum computing:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='9f6e1924-ba7f-46a4-b0ea-269625fde0e1', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='e08b7309-7f59-43c3-a11b-43a581c74f15', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='References Goldin, G. A., Menikoff, R. & Sharp, D. H. Comments on ‘general theory for quantum statistics in two dimensions’. Phys. Rev. Lett. 54, 603–603 (1985). Article \\n ADS \\n MathSciNet \\n CAS \\n PubMed\\n\\nGoogle Scholar \\n Moore, G. & Seiberg, N. Classical and quantum conformal field theory. Commun. Math. Phys. 123, 177–254 (1989). Article \\n ADS \\n MathSciNet\\n\\nGoogle Scholar \\n Moore, G. & Read, N. Nonabelions in the fractional quantum Hall effect. Nucl. Phys. B 360, 362–396 (1991). Article \\n ADS \\n MathSciNet\\n\\nGoogle Scholar \\n Wen, X. G. Non-Abelian statistics in the fractional quantum Hall states. Phys. Rev. Lett. 66, 802–805 (1991). Article \\n ADS \\n MathSciNet \\n CAS \\n PubMed\\n\\nGoogle Scholar \\n Kitaev, A. Y. Fault-tolerant quantum computation by anyons. Ann. Phys. 303, 2–30 (2003). Article \\n ADS \\n MathSciNet \\n CAS\\n\\nGoogle Scholar \\n Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Das Sarma, S. Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083–1159 (2008). Article \\n ADS \\n MathSciNet \\n CAS\\n\\nGoogle Scholar \\n Wen, X.-G. Quantum Field Theory of Many-body Systems Oxford Graduate Texts (Oxford Univ. Press, 2010). Leinaas, J. M. & Myrheim, J. On the theory of identical particles. Nuovo Cim. B 37, 1–23 (1977). Article \\n ADS\\n\\nGoogle Scholar \\n Goldin, G. A., Menikoff, R. & Sharp, D. H. Representations of a local current algebra in nonsimply connected space and the Aharonov–Bohm effect. J. Math. Phys. 22, 1664–1668 (1981). Article \\n ADS \\n MathSciNet\\n\\nGoogle Scholar \\n Wilczek, F. Quantum mechanics of fractional-spin particles. Phys. Rev. Lett. 49, 957–959 (1982). Article \\n ADS \\n MathSciNet \\n CAS\\n\\nGoogle Scholar \\n Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. 
N. Surface codes: towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012). Article \\n ADS\\n\\nGoogle Scholar \\n Nakamura, J., Liang, S., Gardner, G. C. & Manfra, M. J. Direct observation of anyonic braiding statistics. Nat. Phys. 16, 931–936 (2020). Article \\n C', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='eb06dfef-c68f-4e24-8f37-04f1c46e865b', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Insider Brief\\n\\nQuantinuum and University of Colorado researchers successfully entangled four logical qubits with better fidelity than their physical counterparts.\\nThe advance demonstrates improved error protection and operational reliability, essential steps toward developing practical and scalable quantum computers.\\nThe achievement also shows Quantinuum’s commitment to making quantum computing more accessible and reliable, combining the advanced H2 quantum processor with innovative error-correcting codes.\\n\\nQuantinuum and the University of Colorado say they have — for the first time — entangled four error-protected logical qubits that have better fidelity than their physical counterparts. It’s not just an academic exercise, though. They report in a paper they posted on the pre-print server ArXiv that the system will improve the accuracy of quantum operations leading to an enhancement in error protection and operational reliability, both crucial steps towards making quantum computers a practical reality.\\nThe researchers added that they implemented the error correcting codes — called high-rate non-local quantum Low-Density Parity-Check (qLDPC) code — on Quantinuum’s H2 quantum processor.\\n The Challenge of Quantum Error Correction \\nQuantum error correction is a critical component in the quest for reliable and practical quantum computers, according to the team. Quantum systems are inherently fragile, susceptible to errors from environmental interference and imperfect operations. \\nThe researchers write in a blog post: “For a quantum computer to be useful, it must be universal, have lots of qubits, and be able to detect and correct errors. 
The error correction step must be done so well that in the final calculations, you only see an error in less than one in a billion (or maybe even one in a trillion) tries. Correcting errors on a quantum computer is quite tricky, and most current error correcting schemes are quite expensive for quantum computers to run.”\\nTo mitigate thes', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Example 1: Calling search_and_retrieve_documents with date filters\n",
    "exa_tool.search_and_retrieve_documents(\n",
    "    \"Advancements in quantum computing\",\n",
    "    num_results=3,\n",
    "    start_published_date=\"2024-01-01\",\n",
    "    end_published_date=\"2024-07-10\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "afa8b4f4",
   "metadata": {},
   "source": [
     "You can constrain results to come only from specified domains (or to exclude certain domains):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d6c30c0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a comprehensive climate change mitigation strategy:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='de6db381-d546-4eef-9fc4-af19585b61be', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text=\"Published: 17 June 2012\\n\\nNature Climate Change \\n volume 2, pages 471–474 (2012)Cite this article\\n\\n3119 Accesses\\n\\n51 Citations\\n\\n141 Altmetric\\n\\nMetrics details\\n\\nSubjects\\n\\nTwenty-one coherent major initiatives could together stimulate sufficient reductions by 2020 to bridge the global greenhouse-gas emissions gap.\\n\\nWe propose a new approach — which we call 'wedging the gap' — consisting of 21 coherent major initiatives that together would trigger greenhouse-gas emission reductions of around 10 gigatonnes of carbon dioxide equivalent (Gt CO2e) by 2020, plus the benefits of enhanced reductions in air-pollutant emissions. This supports and goes substantially beyond the emission reductions proposed by national governments under the United Nations Framework Convention on Climate Change (UNFCCC). The approach would play a significant part in bridging the gap between current emission trends and what is necessary to put the world on a path that would limit global temperature increase to 2 °C above pre-industrial levels. The proposed initiatives build on actions that promise numerous benefits to the organizations and individuals undertaking them, and front-runners are already demonstrating that such benefits are real. These initiatives aim to take these benefits to the mainstream, drastically amplifying their impacts and showing all organizations involved that together they can play a leading role in solving the climate challenge. Many of the initiatives also generate significant 'green growth' benefits, stimulating economic development based on environmentally sound solutions and providing additional motivation to engage. 
We expect that working together on a grand coalition would serve as a catalyst for action, greatly enhancing the willingness of a range of sub-sovereign and non-state actors to contribute to greenhouse-gas emission reductions. This in turn would support the implementation and strengthening of the pledges for which national governments remain responsible, and \", mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='091ce5ac-4a1b-4ea2-98d0-945b73e82a7d', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Main Humanity is well into the Anthropocene 6 , the proposed new geological epoch where human pressures have put the Earth system on a trajectory moving rapidly away from the stable Holocene state of the past 12,000 years, which is the only state of the Earth system we have evidence of being able to support the world as we know it 7,8 . These rapid changes to the Earth system undermine critical life-support systems 1,9,10 , with significant societal impacts already felt 1,3 , and they could lead to triggering tipping points that irreversibly destabilize the Earth system 7,11,12 . These changes are mostly driven by social and economic systems run on unsustainable resource extraction and consumption. Contributions to Earth system change and the consequences of its impacts vary greatly among social groups and countries. Given these interdependencies between inclusive human development and a stable and resilient Earth system 1,2,3,13 , an assessment of safe and just boundaries is required that accounts for Earth system resilience and human well-being in an integrated framework 4,5 . We propose a set of safe and just Earth system boundaries (ESBs) for climate, the biosphere, fresh water, nutrients and air pollution at global and subglobal scales. These domains were chosen for the following reasons. They span the major components of the Earth system (atmosphere, hydrosphere, geosphere, biosphere and cryosphere) and their interlinked processes (carbon, water and nutrient cycles), the ‘global commons’ 14 that underpin the planet’s life-support systems and, thereby, human well-being on Earth; they have impacts on policy-relevant timescales; they are threatened by human activities; and they could affect Earth system stability and future development globally. 
Our proposed ESBs are based on existing scholarship, expert judgement and widely shared norms, such as Agenda 2030. They are meant as a transparent proposal for further debate and refinement by scholars and wider society.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='b4a3e66d-4816-488e-a02d-cd49c2c74b89', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Supporting Information Appendix Natural climate solutions\\n\\nBronson W Griscom \\nJustin Adams \\nPeter W Ellis \\nRichard A Houghton \\nGuy Lomax \\nDaniela A Miteva \\nWilliam H Schlesinger \\nDavid Shoch \\nJuha V Siikamäki \\nPete Smith \\nPeter Woodbury \\nChris Zganjar \\nAllen Blackman \\nJoão Campari \\nRichard T Conant \\nChristopher Delgado \\nPatricia Elias \\nTrisha Gopalakrishna \\nMarisa R Hamsik \\nMario Herrero \\nJoseph Kiesecker \\nEmily Landis \\nLars Laestadius \\nSara M Leavitt \\nSusan Minnemeyer \\nStephen Polasky \\nPeter Potapov \\nFrancis E Putz \\nJonathan Sanderman \\nMarcel Silvius \\nEva Wollenberg \\nJoe Fargione \\nSupporting Information Appendix Natural climate solutions\\nPage 1\\n\\nContents\\n Tables Table S1\\n. Maximum mitigation potential of natural pathways by 2030. …………………………………………….......p. 7-11 Table S2. Summary of pathway definition, extent, and methods for estimating maximum mitigation potential. ..…….. p. 12-17 Table S3. Country level maximum mitigation potential with safeguards for 8 NCS pathways (TgCO2e yr-1). ……….. p. 18-22 Table S4. Cost effective NCS mitigation levels contributing to holding global warming below 2°C. …………………. p. 23-27 Table S5. Co-benefits associated with natural pathways. ……………………………………………….……………..... p. 28-30 Table S6. Alignment of multilateral announcements…with <2°C mitigation levels for 20 natural pathways. …..……. p. 31-33 Table S3 for all country results.\\n\\nGrazing -Legumes in Pastures\\n\\nGrazing -Optimal Intensity Improved Rice Cultivation A B C Numbers assigned to countries indicate total TgCO 2 e yr -1 Fig. S2. Distribution of mitigation opportunity for rice and grazing pathways. 
Hues indicate mean density of additional mitigation potential (maximum mitigation per country or region divided by ice-free land area). Orange hues indicate density of avoided emissions potential for Improved Rice (A). Green hues indicate density of sequestration potential for Grazing -Optimal Intensity (B) and Legumes in Pastures (C). Numbers in bold indicate total TgCO 2 e yr -1 for the cou', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Example 2: Calling search_and_retrieve_documents with an include_domains filter\n",
    "exa_tool.search_and_retrieve_documents(\n",
    "    \"Climate change mitigation strategies\",\n",
    "    num_results=3,\n",
    "    include_domains=[\"www.nature.com\", \"www.sciencemag.org\", \"www.pnas.org\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be90823f",
   "metadata": {},
   "source": [
     "You can turn off autoprompt for more direct, fine-grained control over the Exa query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "74b03906",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: None\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='dae22188-b0a2-4941-bf06-282f7a99f435', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='14 June 2023\\n\\n‘Benchmark’ experiment suggests quantum computers could have useful real-world applications within two years.\\n\\nFour years ago, physicists at Google claimed their quantum computer could outperform classical machines — although only at a niche calculation with no practical applications. Now their counterparts at IBM say they have evidence that quantum computers will soon beat ordinary ones at useful tasks, such as calculating properties of materials or the interactions of elementary particles.\\n\\nAccess options\\n Access Nature and 54 other Nature Portfolio journals Get Nature+, our best-value online-access subscription cancel any time Subscribe to this journal Receive 51 print issues and online access $199.00 per year only $3.90 per issue Rent or buy this article Prices vary by article type $1.95 $39.95 Prices may be subject to local taxes which are calculated during checkout\\n\\nAdditional access options:\\n\\nLog in\\n\\nLearn about institutional subscriptions\\n\\nRead our FAQs\\n\\nContact customer support\\n\\nNature 618, 656-657 (2023)\\n doi: https://doi.org/10.1038/d41586-023-01965-3 \\n References \\nRelated Articles\\n\\nUnderdog technologies gain ground in quantum-computing race\\n\\nQuantum computers: what are they good for?\\n\\nGoogle’s quantum computer hits key milestone by reducing errors\\n\\nPhysicists propose football-pitch-sized quantum computer\\n\\nQuantum sensors will start a revolution — if we deploy them right\\n\\nHello quantum world! 
Google publishes landmark quantum supremacy claim\\n\\nBeyond quantum supremacy: the hunt for useful quantum computers\\n\\nThese ‘quantum-proof’ algorithms could safeguard against future cyberattacks\\n\\nSubjects\\n\\nLatest on:', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='10c67800-8608-40dd-afd6-64247440769a', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Quantum computing stands at the frontier of technological advancement, promising to revolutionize how we solve the world’s most complex problems—from material science to machine learning. This guide is designed to demystify quantum computing for engineers and enthusiasts alike, starting with the very basics of quantum mechanics and extending to the cutting-edge applications and future predictions. Whether you’re a seasoned engineer or a curious newcomer, this guide will provide you with a clear understanding of the fundamental concepts and the potential impact of quantum computing on various industries. Introduction to Quantum Computing Definition and Basic Concept Brief History and Evolution Importance in Modern Technology Fundamental Concepts of Quantum Computing Key Principles Explanation of Qubits Versus Classical Bits Parallel Computations How Quantum Computers Work Architecture of a Quantum Computer Types of Quantum Computing Qubits: Creation, Manipulation, and Measurement The Quantum Advantage Comparison with Classical Computers: Capabilities and Limitations Examples of Problems Suited for Quantum Computing Current Technologies and Major Players Practical Applications of Quantum Computing Industry Impact Challenges and Limitations of Quantum Computing Preparing for a Future with Quantum Computing The Road Ahead: Predictions and Future Research in Quantum Computing Conclusion Appendix Understanding Classical Computation Glossary of Key Quantum Computing Terms Frequently Asked Questions about Quantum Computing This article is sponsored by RPMC Lasers - Leading US supplier of various laser technologies. From early computers to quantum leaps: a glimpse into the evolution at a computer museum. 
Image courtesy of Farai Gandiya Introduction to Quantum Computing Quantum computing harnesses principles of quantum mechanics to perform computations at unprecedented speeds and with superior efficiency compared to classical computers. In this section, we outline the foundat', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='95695886-286e-4d95-acab-14c288ce4279', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='One of the secrets to building the world’s most powerful computer is probably perched by your bathroom sink. At IBM’s Thomas J. Watson Research Center in New York State’s Westchester County, scientists always keep a box of dental floss—Reach is the preferred brand—close by in case they need to tinker with their oil-drum-size quantum computers, the latest of which can complete certain tasks millions of times as fast as your laptop. Inside the shimmering aluminum canister of IBM’s System One, which sits shielded by the same kind of protective glass as the Mona Lisa, are three cylinders of diminishing circumference, rather like a set of Russian dolls. Together, these encase a chandelier of looping silver wires that cascade through chunky gold plates to a quantum chip in the base. To work properly, this chip requires super-cooling to 0.015 kelvins—a smidgen above absolute zero and colder than outer space. Most materials contract or grow brittle and snap under such intense chill. But ordinary dental floss, it turns out, maintains its integrity remarkably well if you need to secure wayward wires. “But only the unwaxed, unflavored kind,” says Jay Gambetta, IBM’s vice president of quantum. “Otherwise, released vapors mess everything up.”\\n\\nPhotograph by Thomas Prior for TIME\\n\\nBuy a print of the Quantum cover here It’s a curiously homespun facet of a technology that is set to transform pretty much everything. Quantum’s unique ability to crunch stacks of data is already optimizing the routes of thousands of fuel tankers traversing the globe, helping decide which ICU patients require the most urgent care, and mimicking chemical processes at the atomic level to better design new materials. 
It also promises to supercharge artificial intelligence, with the power to better train algorithms that can finally turn driverless cars and drone taxis into a reality. Quantum AI simulations exhibit a “degree of effectiveness and efficiency that is mind-boggling,” U.S. National Cyber Director', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Example 3: Calling search_and_contents with autoprompt off\n",
    "exa_tool.search_and_retrieve_documents(\n",
    "    \"Here is an article on the advancements of quantum computing\",\n",
    "    num_results=3,\n",
    "    use_autoprompt=False,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d9ec11e",
   "metadata": {},
   "source": [
    "Exa also has an option to do standard keyword based seach by specifying `type=\"keyword\"`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2fd06873",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: None\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='f360e2cc-ddae-42f6-81f8-3ce268826ba3', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Data Acquisition, Preparation, and the Need for Application Knowledge Data is acquired in a\\xa0classical form; the quantum properties are then applied to the data in the quantum computer. Data that will be used in subsequent quantum computations has a\\xa0limited lifetime; the information degrades with time. The current state of quantum computing is referred to as the noisy intermediate-scale quantum (NISQ) era. These processors are sensitive to their environment, prone to quantum decoherence, and not yet capable of continuous quantum error correction. This is improving significantly with advancements in materials, and there are techniques that can be applied to refresh the information, such as, “Dynamical Decoupling”.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='113d0972-7d36-4e78-bce0-547a3d55356a', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='The first is a 10-year, $100 million plan with IBM, the University of Chicago and the University of Tokyo to develop the blueprints for building a quantum-centric supercomputer powered by 100,000 qubits. The second is a strategic partnership between the University of Chicago, the University of Tokyo and Google, with Google investing up to $50 million over 10 years, to accelerate the development of a fault-tolerant quantum computer and to help train the quantum workforce of the future. Read more about the historic partnership    Illinois governor proposes $500M for quantum technologies in new budget  Illinois Governor JB Pritzker is asking state legislators for half a billion dollars for quantum technologies in a proposed budget—the latest show of support for a regional quantum ecosystem that has attracted millions of dollars in corporate and government investment in recent years and is emerging as a central driver of US leadership in the field. Learn how the proposed budget could impact quantum technology', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='4e434349-f875-48b5-bc50-afb712af203a', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='\\tNov. 23, 2022 — Physicists have experimentally demonstrated for the first time that there is a negative correlation between the two spins of an entangled pair of electrons from a superconductor. For their study, the ...  \\t\\t Quantum Algorithms Save Time in the Calculation of Electron Dynamics  \\tNov. 22, 2022 — Quantum computers promise significantly shorter computing times for complex problems. But there are still only a few quantum computers worldwide with a limited number of so-called qubits.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Example 4: Calling search_and_retrieve_highlights with keyword search type\n",
    "exa_tool.search_and_retrieve_highlights(\n",
    "    \"Advancements in quantum computing\", num_results=3, type=\"keyword\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4627a1b",
   "metadata": {},
   "source": [
    "Last, Magic Search is a new feature available in Exa, where queries will route to the best suited search type intelligently: either their proprietary neural search or industry-standard keyword search mentioned above"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4159f545",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is a recent advancement in quantum computing:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(id_='4d458de1-18c8-4e51-9dd9-d5f0be08c219', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Four years ago, physicists at Google claimed their quantum computer could outperform classical machines — although only at a niche calculation with no practical applications. Now their counterparts at IBM say they have evidence that quantum computers will soon beat ordinary ones at useful tasks, such as calculating properties of materials or the interactions of elementary particles. Access Nature and 54 other Nature Portfolio journals Get Nature+, our best-value online-access subscription cancel any time Subscribe to this journal Receive 51 print issues and online access $199.00 per year only $3.90 per issue Rent or buy this article Prices vary by article type $1.95 $39.95 Prices may be subject to local taxes which are calculated during checkout', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='b5a79211-9dfc-4ced-9198-33ec451803dc', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Main The ultimate requirements for useful quantum hardware are set by fault tolerance (FT), whereby information is encoded in a way that contains and negates errors with a combination of redundancy, symmetry and careful scheduling of operations. FT requires, in part, that qubits be well-isolated from microscopic sources of noise and controlled with precision and high speed, all in a platform capable of scaling to sizes of computational relevance. Achieving the necessary scale favours lithographically defined qubits such as superconducting transmons 39 or single electron spins in Si quantum dots 22 . These approaches have enjoyed significant recent progress in scaling 3,40 , control fidelity 5,6,7,41 and advanced fabrication 42,43 . Crucially, however, FT also depends sensitively on the structure and correlation of the errors it is responsible for mitigating.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'),\n",
       " Document(id_='4d453ed0-b5ea-49f1-940e-22960b17bcb9', embedding=None, metadata={}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='Now scientists have fabricated more than 150,000 silicon-based qubits on a chip that they may be able to link together with light, to help form powerful quantum computers connected by a quantum Internet. Classical computers switch transistors either on or off to represent data as ones or zeroes. In contrast, quantum computers use quantum bits, also known as qubits. Because of the surreal nature of quantum physics, qubits can exist in a state called superposition, in which they are essentially both 1 and 0 at the same time. This phenomenon lets each qubit perform two calculations at once.', mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Example 5: Calling search_and_retrieve_highlights with magic search (explicitly)\n",
    "exa_tool.search_and_retrieve_highlights(\n",
    "    \"Advancements in quantum computing\", num_results=3, type=\"magic\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1210906d-87a7-466a-9712-1d17dba2c2ec",
   "metadata": {},
   "source": [
    "We can see we have different tools to search for results, retrieve the results, find similar results to a web page, and finally a tool that combines search and document retrieval into a single tool. We will test them out in LLM Agents below:\n",
    "\n",
    "### Using the Search and Retrieve documents tools in an Agent\n",
    "\n",
    "We can create an agent with access to the above tools and start testing it out:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d88c2ee-184a-4371-995b-a086b34db24f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We don't give the Agent our unwrapped retrieve document tools, instead pass the wrapped tools\n",
    "agent = FunctionAgent(\n",
    "    tools=exa_tool_list,\n",
    "    llm=OpenAI(model=\"gpt-4.1\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f69a53fd-55c4-4e18-8fbe-6a29d5f3cef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(await agent.run(\"What are the best resturants in toronto?\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "939c7b98-0d75-4ef0-ac47-fd3bd24d3e50",
   "metadata": {},
   "source": [
    "## Avoiding Context Window Issues\n",
    "\n",
    "The above example shows the core uses of the Exa tool. We can easily retrieve a clean list of links related to a query, and then we can fetch the content of the article as a cleaned up html extract. Alternatively, the search_and_retrieve_documents tool directly returns the documents from our search result.\n",
    "\n",
    "We can see that the content of the articles is somewhat long and may overflow current LLM context windows.  \n",
    "\n",
    "1. Use `search_and_retrieve_highlights`: This is an endpoint offered by Exa that directly retrieves relevant highlight snippets from the web, instead of full web articles. As a result you don't need to worry about indexing/chunking offline yourself!\n",
    "\n",
    "2. Wrap `search_and_retrieve_documents` with `LoadAndSearchToolSpec`: We set up and use a \"wrapper\" tool from LlamaIndex that allows us to load text from any tool into a VectorStore, and query it for retrieval. This is where the `search_and_retrieve_documents` tool become particularly useful. The Agent can make a single query to retrieve a large number of documents, using a very small number of tokens, and then make queries to retrieve specific information from the documents."
   ]
  },
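  {
   "cell_type": "markdown",
   "id": "b3f1c9a0-ctxw-0000-0000-000000000000",
   "metadata": {},
   "source": [
    "To see concretely why full documents overflow, here is a minimal, hypothetical sketch (plain Python, no Exa calls; the ~4-characters-per-token heuristic is an assumption, not a real tokenizer):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4e2d8b1-ctxw-0000-0000-000000000001",
   "metadata": {},
   "outputs": [],
   "source": [
```python
# Minimal sketch: rough token budgeting for retrieved documents.
# Assumes ~4 characters per token as a crude heuristic; a real tokenizer
# (e.g. tiktoken) should be used for accurate counts.

def approx_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(texts: list[str], context_window: int = 8192) -> bool:
    """Check whether the combined texts fit a model's context window."""
    return sum(approx_tokens(t) for t in texts) <= context_window

def truncate_to_budget(text: str, budget_tokens: int) -> str:
    """Truncate a document to roughly budget_tokens tokens."""
    return text[: budget_tokens * 4]

docs = ["quantum " * 5000, "computing " * 5000]   # two long articles
print(fits_context(docs))                          # full docs overflow
trimmed = [truncate_to_budget(d, 1000) for d in docs]
print(fits_context(trimmed))                       # trimmed docs fit
```
   ]
  },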
  {
   "cell_type": "markdown",
   "id": "5b8c51fc-8a22-408e-94c9-14248bad61c1",
   "metadata": {},
   "source": [
    "### 1. Using `search_and_retrieve_highlights`\n",
    "\n",
    "The easiest is to just use `search_and_retrieve_highlights` from Exa. This is essentially a \"web RAG\" endpoint - they handle chunking/embedding under the hood."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "197d241b-cd53-4038-a824-d493c69166b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "tools = exa_tool.to_tool_list(\n",
    "    spec_functions=[\"search_and_retrieve_highlights\", \"current_date\"]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1c412501-4fa3-4bb5-a324-075e809737d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = FunctionAgent(\n",
    "    tools=tools,\n",
    "    llm=OpenAI(model=\"gpt-4.1\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "242c779e-9dbc-4aec-8838-c152bf8f304b",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = await agent.run(\"Tell me more about the recent news on semiconductors\")\n",
    "print(f\"Response: {str(response)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96c801b9-7f61-470b-9d05-c00622d5fbd7",
   "metadata": {},
   "source": [
    "### 2. Using `LoadAndSearchToolSpec`\n",
    "\n",
    "Here we wrap the `search_and_retrieve_documents` functionality with the `load_and_search_tool_spec`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a017cc61-1696-4a03-8d09-a628f9049cfd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.tools.tool_spec.load_and_search import (\n",
    "    LoadAndSearchToolSpec,\n",
    ")\n",
    "\n",
    "# The search_and_retrieve_documents tool is the third in the tool list, as seen above\n",
    "search_and_retrieve_docs_tool = exa_tool.to_tool_list(\n",
    "    spec_functions=[\"search_and_retrieve_documents\"]\n",
    ")[0]\n",
    "date_tool = exa_tool.to_tool_list(spec_functions=[\"current_date\"])[0]\n",
    "wrapped_retrieve = LoadAndSearchToolSpec.from_defaults(search_and_retrieve_docs_tool)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80b47437-8f6d-4e94-97ca-4e35f78336f2",
   "metadata": {},
   "source": [
    "Our wrapped retrieval tools separate loading and reading into separate interfaces. We use `load` to load the documents into the vector store, and `read` to query the vector store. Let's try it out again"
   ]
  },
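  {
   "cell_type": "markdown",
   "id": "d5a3e7c2-lrsk-0000-0000-000000000002",
   "metadata": {},
   "source": [
    "The load/read split can be illustrated with a toy, hypothetical sketch (this is NOT LlamaIndex's implementation; naive keyword-overlap scoring stands in for vector similarity):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e6b4f8d3-lrsk-0000-0000-000000000003",
   "metadata": {},
   "outputs": [],
   "source": [
```python
# Toy sketch of the load/read pattern (NOT LlamaIndex's implementation):
# `load` stores document chunks and returns only a tiny summary string,
# so the agent never sees the full text; `read` retrieves the chunk with
# the most query-word overlap, standing in for vector similarity search.

class ToyLoadAndSearch:
    def __init__(self):
        self.chunks: list[str] = []

    def load(self, documents: list[str], chunk_size: int = 200) -> str:
        """Split documents into chunks, store them, return a short summary."""
        for doc in documents:
            self.chunks.extend(
                doc[i : i + chunk_size] for i in range(0, len(doc), chunk_size)
            )
        return f"Loaded {len(self.chunks)} chunks"

    def read(self, query: str) -> str:
        """Return the stored chunk with the most query-word overlap."""
        words = set(query.lower().split())
        return max(self.chunks, key=lambda c: len(words & set(c.lower().split())))

store = ToyLoadAndSearch()
print(store.load(["Transformers are a neural network architecture for sequences."]))
print(store.read("what is a transformer architecture"))
```
   ]
  },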
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e4f81bd3-a5b9-452c-93f4-91d16c4c0df1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Exa Tool] Autoprompt: Here is the best explanation for machine learning transformers:\n",
      "A transformer is a type of neural network architecture that is well-suited for tasks involving processing sequences as inputs. It is designed to create a numerical representation for each element within a sequence, capturing essential information about the element and its neighboring context. Transformers have been instrumental in revolutionizing natural language processing tasks, such as translation and autocomplete services, by leveraging their capabilities in understanding and generating natural language text.\n",
      "The first paper on transformers was written in 2017.\n"
     ]
    }
   ],
   "source": [
    "wrapped_retrieve.load(\"This is the best explanation for machine learning transformers:\")\n",
    "print(wrapped_retrieve.read(\"what is a transformer\"))\n",
    "print(wrapped_retrieve.read(\"who wrote the first paper on transformers\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "85be6977-c4e8-43a4-99be-3322d4b72b07",
   "metadata": {},
   "source": [
    "## Creating the Agent\n",
    "\n",
    "We now are ready to create an Agent that can use Exa's services to their full potential. We will use our wrapped read and load tools, as well as the `get_date` utility for the following agent and test it out below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3a893f26-dbb6-4b72-9795-702eaf749564",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Just pass the wrapped tools and the get_date utility\n",
    "agent = FunctionAgent(\n",
    "    tools=[*wrapped_retrieve.to_tool_list(), date_tool],\n",
    "    llm=OpenAI(model=\"gpt-4.1\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5835d058-da9c-4d42-9d2a-941c73b88a51",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\n",
    "    await agent.run(\n",
    "        \"Can you summarize everything published in the last month regarding news on superconductors\"\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7ee91ca-6730-4fdd-8189-ac21022f34f1",
   "metadata": {},
   "source": [
    "We asked the agent to retrieve documents related to superconductors from this month. It used the `get_date` tool to determine the current month, and then applied the filters in Exa based on publication date when calling `search`. It then loaded the documents using `retrieve_documents` and read them using `read_retrieve_documents`.\n",
    "\n",
    "We can make another query to the vector store to read from it again, now that the articles are loaded:"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
