{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "L6CYH07ysGHi"
   },
   "source": [
    "## AI Trends Searcher using CrewAI RAG"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sSLA0KQ8qrcX"
   },
   "source": [
     "Use CrewAI to build AI news search and writer agents, and combine them with RAG to run a news agency for AI trends that delivers summarized reports.\n",
     "\n",
     "This approach combines retrieving and generating information: the agents search for AI news articles, analyse them, and deliver in-depth reports."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Q_wvRRAXxPSR"
   },
   "source": [
     "Get a free News API key at [newsapi.org](https://newsapi.org/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HV1DyBd2RHrN"
   },
   "source": [
    "### Install required packages\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xOj-BuKARJrH",
    "outputId": "a0a33fc4-65f6-4a83-e2f1-cf24825c3739"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
      "tensorflow 2.17.1 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 5.29.0 which is incompatible.\n",
      "tensorflow-metadata 1.13.1 requires protobuf<5,>=3.20.3, but you have protobuf 5.29.0 which is incompatible.\u001b[0m\u001b[31m\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "!pip install crewai langchain-community langchain-openai langchain-google-genai requests duckduckgo-search lancedb -q"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HNnorUJjbVPH"
   },
   "source": [
    "### Importing modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "sk-GEhxCbUDU",
    "outputId": "3ca7f240-718d-4c62-9b88-db69e0f148a3"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:langchain_community.utils.user_agent:USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
     ]
    }
   ],
   "source": [
    "from crewai import Agent, Task, Crew, Process\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_google_genai import ChatGoogleGenerativeAI\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "from langchain_community.embeddings import LlamafileEmbeddings\n",
    "from langchain_community.document_loaders import WebBaseLoader\n",
    "import requests, os\n",
     "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
     "from langchain_community.vectorstores import LanceDB"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nRv8-Ja1R8S6"
   },
   "source": [
    "### Set API keys as environment variable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "J9U8qs3aRLfh"
   },
   "outputs": [],
   "source": [
    "os.environ[\"NEWSAPI_KEY\"] = \"....\""
   ]
  },
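  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, you can call the NewsAPI endpoint directly to confirm the key is accepted before wiring it into a tool (a minimal sketch; the `sources` filter is just an example):\n",
    "\n",
    "```python\n",
    "import os, requests\n",
    "\n",
    "resp = requests.get(\n",
    "    'https://newsapi.org/v2/top-headlines',\n",
    "    params={'sources': 'techcrunch', 'apiKey': os.environ['NEWSAPI_KEY']},\n",
    ")\n",
    "print(resp.status_code)  # 200 means the key works\n",
    "```"
   ]
  },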
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "s6cLIDz9fzB5"
   },
   "source": [
    "### Using OpenAI embedding function and LLM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6tNMJXRSVy7e"
   },
   "outputs": [],
   "source": [
    "# @title LLM Config\n",
    "\n",
    "LLM = \"OPENAI_GPT4\"  # @param [\"OPENAI_GPT4\", \"GEMINI_PRO\"]\n",
    "API_KEY = \"sk-proj-...\"  # @param {type:\"string\"}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "id": "ljQ1FLt2fx_F"
   },
   "outputs": [],
   "source": [
    "if LLM == \"OPENAI_GPT4\":\n",
    "    os.environ[\"OPENAI_API_KEY\"] = API_KEY\n",
    "    embedding_function = OpenAIEmbeddings()\n",
    "    llm = ChatOpenAI(model=\"gpt-4o\")\n",
    "elif LLM == \"GEMINI_PRO\":\n",
    "    os.environ[\"GOOGLE_API_KEY\"] = API_KEY\n",
     "    embedding_function = LlamafileEmbeddings()\n",
    "    llm = ChatGoogleGenerativeAI(model=\"gemini-pro\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "251A9PjVT6s3"
   },
   "source": [
    "### Set up the LanceDB vectorDB"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "id": "-jAOVSSISABN"
   },
   "outputs": [],
   "source": [
    "import lancedb\n",
    "\n",
    "\n",
    "# creating lancedb table with dummy data\n",
    "def lanceDBConnection(dataset):\n",
    "    db = lancedb.connect(\"/tmp/lancedb\")\n",
    "    table = db.create_table(\"tb\", data=dataset, mode=\"overwrite\")\n",
    "    return table\n",
    "\n",
    "\n",
     "# reuse the embedding function chosen in the LLM config above\n",
     "emb = embedding_function.embed_query(\"hello_world\")\n",
    "dataset = [{\"vector\": emb, \"text\": \"dummy_text\"}]\n",
    "\n",
    "# LanceDB as vector store\n",
    "table = lanceDBConnection(dataset)"
   ]
  },
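  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, the table can be queried with a raw vector to confirm the connection works. This is a minimal sketch; `search().limit().to_pandas()` assumes a reasonably recent `lancedb` version:\n",
    "\n",
    "```python\n",
    "# nearest-neighbour lookup against the dummy row inserted above\n",
    "hits = table.search(emb).limit(1).to_pandas()\n",
    "print(hits['text'].tolist())  # expected to contain 'dummy_text'\n",
    "```"
   ]
  },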
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0AJygbxmb_gd"
   },
   "source": [
    "### Save latest AI News in vectorDB"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "id": "ZffJSsX-uKt3"
   },
   "outputs": [],
   "source": [
    "from crewai_tools import BaseTool\n",
    "\n",
    "\n",
     "class SearchNewsDB(BaseTool):\n",
     "    # declaring name/description as pydantic fields is the idiomatic crewai_tools pattern\n",
     "    name: str = \"News DB Tool\"\n",
     "    description: str = \"Fetch and process news articles from various sources\"\n",
    "\n",
    "    def _run(self, query: str):\n",
    "        \"\"\"Fetch news articles and process their contents.\"\"\"\n",
    "        API_KEY = os.getenv(\"NEWSAPI_KEY\")  # Fetch API key from environment variable\n",
    "        base_url = \"https://newsapi.org/v2/top-headlines?sources=techcrunch\"\n",
    "        params = {\n",
    "            \"sortBy\": \"publishedAt\",\n",
    "            \"apiKey\": API_KEY,\n",
    "            \"language\": \"en\",\n",
    "            \"pageSize\": 15,\n",
    "        }\n",
    "\n",
    "        response = requests.get(base_url, params=params)\n",
    "        if response.status_code != 200:\n",
    "            return \"Failed to retrieve news.\"\n",
    "\n",
    "        articles = response.json().get(\"articles\", [])\n",
    "        all_splits = []\n",
    "\n",
    "        for article in articles:\n",
    "            try:\n",
     "                # Load the article page so it can be split and indexed\n",
    "                loader = WebBaseLoader(article[\"url\"])\n",
    "                docs = loader.load()\n",
    "                text_splitter = RecursiveCharacterTextSplitter(\n",
    "                    chunk_size=1000, chunk_overlap=200\n",
    "                )\n",
    "                splits = text_splitter.split_documents(docs)\n",
    "                all_splits.extend(splits)  # Accumulate splits from all articles\n",
    "            except Exception as e:\n",
    "                print(f\"Error processing article {article['url']}: {e}\")\n",
    "\n",
    "        # Index the accumulated content splits if there are any\n",
    "        if all_splits:\n",
    "            try:\n",
    "                vectorstore = LanceDB.from_documents(\n",
    "                    all_splits, embedding=embedding_function\n",
    "                )\n",
     "                results = vectorstore.similarity_search(query)\n",
     "                return results\n",
    "            except Exception as e:\n",
    "                return f\"Error indexing documents: {e}\"\n",
    "        else:\n",
    "            return \"No content available for processing.\""
   ]
  },
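  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The tool can be exercised on its own before it is attached to an agent (a sketch; it needs `NEWSAPI_KEY` set and network access, and assumes `BaseTool.run` dispatches to `_run`):\n",
    "\n",
    "```python\n",
    "docs = SearchNewsDB().run('AI agents')\n",
    "print(docs if isinstance(docs, str) else len(docs))\n",
    "```"
   ]
  },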
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tKrbDvCdcRoK"
   },
   "source": [
    "### Building RAG to get news from vectorDB"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "id": "N05IrsK0vE5g"
   },
   "outputs": [],
   "source": [
     "class GetNews(BaseTool):\n",
     "    # declare name/description as pydantic fields (crewai_tools BaseTool is a pydantic model)\n",
     "    name: str = \"Get News Tool\"\n",
     "    description: str = \"Search LanceDB for relevant news information based on a query\"\n",
    "\n",
    "    def _run(self, query: str) -> str:\n",
    "        \"\"\"Search LanceDB for relevant news information based on a query.\"\"\"\n",
    "        try:\n",
    "            vectorstore = LanceDB(embedding=embedding_function)\n",
     "            results = vectorstore.similarity_search(query)\n",
     "            return results\n",
    "        except Exception as e:\n",
    "            return f\"Error retrieving news: {e}\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BSumWCcRUUDl"
   },
   "source": [
    "### Setup search tool for News articles on the web"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "DiaqNKjsT4jc",
    "outputId": "e467b781-d176-4894-ae2c-68d3399d1e41"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: duckduckgo_search in /usr/local/lib/python3.10/dist-packages (6.3.7)\n",
      "Requirement already satisfied: click>=8.1.7 in /usr/local/lib/python3.10/dist-packages (from duckduckgo_search) (8.1.7)\n",
      "Requirement already satisfied: primp>=0.8.1 in /usr/local/lib/python3.10/dist-packages (from duckduckgo_search) (0.8.1)\n"
     ]
    }
   ],
   "source": [
     "# Ensure duckduckgo-search is installed and up to date for this example\n",
    "!pip install -U duckduckgo_search\n",
    "\n",
    "from duckduckgo_search import DDGS\n",
    "\n",
    "search_tool = DDGS()"
   ]
  },
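  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can try the search tool directly before handing it to an agent. This sketch assumes the `duckduckgo_search` 6.x API, where `DDGS.text()` returns dicts with `title`, `href`, and `body` keys:\n",
    "\n",
    "```python\n",
    "for r in search_tool.text('latest AI trends 2024', max_results=3):\n",
    "    print(r['title'])\n",
    "```"
   ]
  },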
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-a9pqtdqUgtJ"
   },
   "source": [
    "### Setting up Agents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {
    "id": "SLDSHOuCUXna"
   },
   "outputs": [],
   "source": [
    "# Defining Search and Writer agents with roles and goals\n",
    "news_search_agent = Agent(\n",
    "    role=\"AI News Searcher\",\n",
    "    goal=\"Generate key points for each news article from the latest news\",\n",
    "    backstory=\"\"\"You work at a leading tech think tank.\n",
    "    Your expertise lies in identifying emerging trends in the field of AI.\n",
    "    You have a knack for dissecting complex data and presenting\n",
    "    actionable insights.\"\"\",\n",
    "    tools=[SearchNewsDB()],  # Note the instantiation here\n",
    "    allow_delegation=True,\n",
    "    verbose=True,\n",
    ")\n",
    "\n",
    "# Agent configuration\n",
    "writer_agent = Agent(\n",
    "    role=\"Writer\",\n",
    "    goal=\"Identify all the topics received. Use the GetNews tool to verify each topic. Use the search tool for detailed exploration of each topic. Summarise the retrieved information in depth for every topic.\",\n",
    "    backstory=\"\"\"You are a renowned Content Strategist, known for\n",
    "    your insightful and engaging articles.\n",
    "    You transform complex concepts into compelling narratives.\"\"\",\n",
    "    tools=[GetNews()],  # Instantiate GetNews directly\n",
    "    allow_delegation=True,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dmdxvELxV6J1"
   },
   "source": [
    "### Tasks to perform"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "id": "9ZG9GxqWV0md"
   },
   "outputs": [],
   "source": [
    "from crewai import Task\n",
    "\n",
    "# Creating search and writer tasks for agents\n",
    "news_search_task = Task(\n",
    "    description=\"\"\"Conduct a comprehensive analysis of the latest advancements in AI in 2024.\n",
    "    Identify key trends, breakthrough technologies, and potential industry impacts.\n",
    "    Your final answer MUST be a full analysis report\"\"\",\n",
     "    expected_output=\"A list of key points for each news article\",\n",
    "    agent=news_search_agent,\n",
    "    tools=[SearchNewsDB()],  # Pass the tool instance, not the _run method\n",
    ")\n",
    "\n",
    "writer_task = Task(\n",
    "    description=\"\"\"Using the insights provided, summarize each post and\n",
    "    highlight the most significant AI advancements.\n",
    "    Your post should be informative yet accessible, catering to a tech-savvy audience.\n",
    "    Make it sound cool, avoid complex words so it doesn't sound like AI.\n",
    "    Your final answer MUST not be more than 50 words.\"\"\",\n",
     "    expected_output=\"Write a short summary under 50 words for each news headline separately\",\n",
    "    agent=writer_agent,\n",
    "    context=[news_search_task],\n",
    "    tools=[GetNews()],  # Pass the tool instance, not the _run method\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yvjya2FzWR5m"
   },
   "source": [
    "### Create a Crew"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "yYykbGq2WRSP",
    "outputId": "9a024678-75ce-4e6a-a420-a89693e34574"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:opentelemetry.trace:Overriding of current TracerProvider is not allowed\n"
     ]
    }
   ],
   "source": [
    "# Instantiate Crew with Agents and their tasks\n",
    "news_crew = Crew(\n",
    "    agents=[news_search_agent, writer_agent],\n",
    "    tasks=[news_search_task, writer_task],\n",
    "    process=Process.sequential,\n",
    "    manager_llm=llm,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xSBr8D6tWloT",
    "outputId": "274e6f3d-32e1-46fb-aa95-9a6b8c38a642"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Crew(id=d9e467a2-39d5-4ab8-aa97-e2c0929c1a85, process=sequential, number_of_agents=2, number_of_tasks=2)"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "news_crew"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-tjmzH9cXOaB"
   },
   "source": [
     "### Kick off the crew and let the magic happen"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "VU3LdZmIWqQH",
    "outputId": "a1364e3b-df46-485d-ca2e-ba9bc0754ad7"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mAI News Searcher\u001b[00m\n",
      "\u001b[95m## Task:\u001b[00m \u001b[92mConduct a comprehensive analysis of the latest advancements in AI in 2024.\n",
      "    Identify key trends, breakthrough technologies, and potential industry impacts.\n",
      "    Your final answer MUST be a full analysis report\u001b[00m\n",
      "\u001b[91m \n",
      "\n",
      "I encountered an error while trying to use the tool. This was the error: 1 validation error for SearchNewsDBSchema\n",
      "query\n",
      "  Input should be a valid string [type=string_type, input_value={'description': 'latest a...AI 2024', 'type': 'str'}, input_type=dict]\n",
      "    For further information visit https://errors.pydantic.dev/2.9/v/string_type.\n",
      " Tool News DB Tool accepts these inputs: Tool Name: News DB Tool\n",
      "Tool Arguments: {'query': {'description': None, 'type': 'str'}}\n",
      "Tool Description: Fetch and process news articles from various sources\n",
      "\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mAI News Searcher\u001b[00m\n",
      "\u001b[95m## Thought:\u001b[00m \u001b[92mI need to gather data on the latest advancements in AI for 2024. This will involve searching for relevant news articles to identify key trends, breakthrough technologies, and potential industry impacts.\u001b[00m\n",
      "\u001b[95m## Using tool:\u001b[00m \u001b[92mNews DB Tool\u001b[00m\n",
      "\u001b[95m## Tool Input:\u001b[00m \u001b[92m\n",
      "\"{\\\"query\\\": {\\\"description\\\": \\\"latest advancements in AI 2024\\\", \\\"type\\\": \\\"str\\\"}}\"\u001b[00m\n",
      "\u001b[95m## Tool Output:\u001b[00m \u001b[92m\n",
      "\n",
      "I encountered an error while trying to use the tool. This was the error: 1 validation error for SearchNewsDBSchema\n",
      "query\n",
      "  Input should be a valid string [type=string_type, input_value={'description': 'latest a...AI 2024', 'type': 'str'}, input_type=dict]\n",
      "    For further information visit https://errors.pydantic.dev/2.9/v/string_type.\n",
      " Tool News DB Tool accepts these inputs: Tool Name: News DB Tool\n",
      "Tool Arguments: {'query': {'description': None, 'type': 'str'}}\n",
      "Tool Description: Fetch and process news articles from various sources.\n",
      "Moving on then. I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n",
      "\n",
      "Thought: you should always think about what to do\n",
      "Action: the action to take, should be one of [News DB Tool]\n",
      "Action Input: the input to the action, dictionary enclosed in curly braces\n",
      "Observation: the result of the action\n",
      "... (this Thought/Action/Action Input/Result can repeat N times)\n",
      "Thought: I now can give a great answer\n",
      "Final Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n",
      "\n",
      " \u001b[00m\n",
      "\u001b[91m \n",
      "\n",
      "I encountered an error while trying to use the tool. This was the error: 1 validation error for SearchNewsDBSchema\n",
      "query\n",
      "  Input should be a valid string [type=string_type, input_value={'description': 'latest a...or 2024', 'type': 'str'}, input_type=dict]\n",
      "    For further information visit https://errors.pydantic.dev/2.9/v/string_type.\n",
      " Tool News DB Tool accepts these inputs: Tool Name: News DB Tool\n",
      "Tool Arguments: {'query': {'description': None, 'type': 'str'}}\n",
      "Tool Description: Fetch and process news articles from various sources\n",
      "\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mAI News Searcher\u001b[00m\n",
      "\u001b[95m## Thought:\u001b[00m \u001b[92mThought: I need to conduct another search for the latest advancements in AI for 2024 to gather comprehensive insights. To do this, I will format my query correctly to comply with the tool's requirements.\u001b[00m\n",
      "\u001b[95m## Using tool:\u001b[00m \u001b[92mNews DB Tool\u001b[00m\n",
      "\u001b[95m## Tool Input:\u001b[00m \u001b[92m\n",
      "\"{\\\"query\\\": {\\\"description\\\": \\\"latest advancements in AI for 2024\\\", \\\"type\\\": \\\"str\\\"}}\"\u001b[00m\n",
      "\u001b[95m## Tool Output:\u001b[00m \u001b[92m\n",
      "\n",
      "I encountered an error while trying to use the tool. This was the error: 1 validation error for SearchNewsDBSchema\n",
      "query\n",
      "  Input should be a valid string [type=string_type, input_value={'description': 'latest a...or 2024', 'type': 'str'}, input_type=dict]\n",
      "    For further information visit https://errors.pydantic.dev/2.9/v/string_type.\n",
      " Tool News DB Tool accepts these inputs: Tool Name: News DB Tool\n",
      "Tool Arguments: {'query': {'description': None, 'type': 'str'}}\n",
      "Tool Description: Fetch and process news articles from various sources.\n",
      "Moving on then. I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n",
      "\n",
      "Thought: you should always think about what to do\n",
      "Action: the action to take, should be one of [News DB Tool]\n",
      "Action Input: the input to the action, dictionary enclosed in curly braces\n",
      "Observation: the result of the action\n",
      "... (this Thought/Action/Action Input/Result can repeat N times)\n",
      "Thought: I now can give a great answer\n",
      "Final Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n",
      "\n",
      " \u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mAI News Searcher\u001b[00m\n",
      "\u001b[95m## Thought:\u001b[00m \u001b[92mThought: I need to reformulate my approach to query the news database correctly, ensuring that I utilize a valid string for the search. This will help me identify the latest advancements in AI for 2024.\u001b[00m\n",
      "\u001b[95m## Using tool:\u001b[00m \u001b[92mNews DB Tool\u001b[00m\n",
      "\u001b[95m## Tool Input:\u001b[00m \u001b[92m\n",
      "\"{\\\"query\\\": \\\"latest advancements in AI 2024\\\"}\"\u001b[00m\n",
      "\u001b[95m## Tool Output:\u001b[00m \u001b[92m\n",
      "[Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='At the helm of Altera is Robert Yang, a neuroscientist and former assistant professor at MIT. In December 2023, Yang and Altera’s other co-founders — Andrew Ahn, Nico Christie and Shuying Luo — stepped away from their applied research lab at MIT to focus on a new goal: developing AI agents (or “AI buddies,” as Yang calls them) with “social-emotional intelligence” that can interact with players and make their own decisions in-game.\\n“It has been my life goal as a neuroscientist to go all the way and build a digital human being — redefining what we thought AI was capable of,” Yang told TechCrunch. That is not to say that Yang is coming from a misanthropic point of view. 
“Our solidly pro-human framework means that we are building agents that will enhance humanity, not replace it,” he insists.'), Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt \\n\\n\\n\\n\\n\\nLauren Forristal\\n\\n\\n\\n\\n\\n8:14 AM PDT · May 8, 2024 \\n\\n\\n\\n\\n\\n\\n\\n \\n\\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n\\n\\n\\n\\n\\nAutonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI agents.\\nThe company announced Wednesday that it raised $9 million in an oversubscribed seed round, co-led by First Spark Ventures (Eric Schmidt’s deep-tech fund) and Patron (the seed-stage fund co-founded by Riot Games alums).'), Document(metadata={'description': 'Google DeepMind has taken the wraps off a new version of AlphaFold, their transformative machine learning model that predicts the shape and behavior of', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/google-deepmind-debuts-huge-alphafold-update-and-free-proteomics-as-a-service-web-app/', 'title': 'Google DeepMind debuts huge AlphaFold update and free proteomics-as-a-service web app | TechCrunch'}, page_content=\"Topics\\n\\nAI, AlphaFold, Biotech & Health, DeepMind, Google, google deepmind \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nMost Popular\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n AWS brings prompt routing and caching to its Bedrock LLM service\\n\\n\\n\\nFrederic 
Lardinois\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Hugging Face CEO has concerns about Chinese open source AI models\\n\\n\\n\\nCharles Rollet\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n SpaceX mulls tender offer at $350B valuation\\n\\n\\n\\nAria Alamalhodaei\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Why does the name ‘David Mayer’ crash ChatGPT? OpenAI says privacy tool went rogue\\n\\n\\n\\nDevin Coldewey\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Amazon announces Nova, a new family of multimodal AI models\\n\\n\\n\\nKyle Wiggers\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n AWS announces Aurora DSQL, a new distributed SQL database that promises virtually unlimited scalability\\n\\n\\n\\nFrederic Lardinois\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Judge confirms decision to sink Elon Musk’s $56B pay package despite Tesla shareholder vote\\n\\n\\n\\nSean O'Kane\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\t\\t\\tDevin Coldewey\\t\\n\\n\\n\\nWriter & Photographer\"), Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='The funding follows Altera’s previous raising a pre-seed $2 million from a16z SPEEDRUN and others in January of this year. 
Now, Altera wants to use the new capital to hire more scientists, engineers and team members to help with product development and growth.\\nIf the first wave of AI for end users was about AI bots; and more recently, AI “copilots” use generative AI to help understand and respond to increasingly sophisticated queries, then AI agents are emerging as the next stage of development. The focus is on how AI can be used to create increasingly more human-like, nuanced entities that can respond to and interact with actual humans.\\nOne early use case for these agents has been gaming — specifically to use in games that support modifications (mods) like Minecraft.\\xa0Voyager\\xa0is one recent project, built on the Minedojo framework, that creates and develops Minecraft AI agents, and this, too, is where Altera is getting its start.')]\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mAI News Searcher\u001b[00m\n",
      "\u001b[95m## Final Answer:\u001b[00m \u001b[92m\n",
      "- **Key Points from the Articles:**\n",
      "  1. Altera's pursuit of emotionally intelligent AI agents to enhance gaming experiences, backed by $9 million funding.\n",
      "  2. Google's AlphaFold update is making significant advancements in protein structure prediction technology available to researchers.\n",
      "  3. The trend of creating AI with social-emotional intelligence in gaming environments suggests a new frontier in user interaction.\n",
      "  4. The democratization of powerful AI tools, notably in biotechnology, indicates a shift toward greater accessibility for research applications.\n",
      "  5. Both trends point towards deeper integration of AI in everyday industries, promising advancements in engagement and productivity.\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mWriter\u001b[00m\n",
      "\u001b[95m## Task:\u001b[00m \u001b[92mUsing the insights provided, summarize each post and\n",
      "    highlight the most significant AI advancements.\n",
      "    Your post should be informative yet accessible, catering to a tech-savvy audience.\n",
      "    Make it sound cool, avoid complex words so it doesn't sound like AI.\n",
      "    Your final answer MUST not be more than 50 words.\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mWriter\u001b[00m\n",
      "\u001b[95m## Thought:\u001b[00m \u001b[92mI need to gather detailed information on the key points from the articles related to AI advancements. I'll use the Get News Tool to explore these topics further.\u001b[00m\n",
      "\u001b[95m## Using tool:\u001b[00m \u001b[92mGet News Tool\u001b[00m\n",
      "\u001b[95m## Tool Input:\u001b[00m \u001b[92m\n",
      "\"{\\\"query\\\": \\\"Altera emotionally intelligent AI agents gaming experiences funding\\\"}\"\u001b[00m\n",
      "\u001b[95m## Tool Output:\u001b[00m \u001b[92m\n",
      "[Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='The funding follows Altera’s previous raising a pre-seed $2 million from a16z SPEEDRUN and others in January of this year. Now, Altera wants to use the new capital to hire more scientists, engineers and team members to help with product development and growth.\\nIf the first wave of AI for end users was about AI bots; and more recently, AI “copilots” use generative AI to help understand and respond to increasingly sophisticated queries, then AI agents are emerging as the next stage of development. The focus is on how AI can be used to create increasingly more human-like, nuanced entities that can respond to and interact with actual humans.\\nOne early use case for these agents has been gaming — specifically to use in games that support modifications (mods) like Minecraft.\\xa0Voyager\\xa0is one recent project, built on the Minedojo framework, that creates and develops Minecraft AI agents, and this, too, is where Altera is getting its start.'), Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt \\n\\n\\n\\n\\n\\nLauren Forristal\\n\\n\\n\\n\\n\\n8:14 
AM PDT · May 8, 2024 \\n\\n\\n\\n\\n\\n\\n\\n \\n\\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n \\n\\n\\n\\n\\n\\n\\nAutonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI agents.\\nThe company announced Wednesday that it raised $9 million in an oversubscribed seed round, co-led by First Spark Ventures (Eric Schmidt’s deep-tech fund) and Patron (the seed-stage fund co-founded by Riot Games alums).'), Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='“We see more potential in building agents within the gaming industry,” he said. “This approach allows us to iterate faster, collect data more effectively, and deliver a product where there are eager users and where emergent behavior is a feature, not a bug.”\\n(And yes, in keeping with its consumer focus, you should not be surprised that, for now, the company is not talking about monetization at all.)\\nAltera founders.Image Credits:Altera\\nSimilar to the Voyager GPT-4-powered Minecraft bot, Altera’s autonomous agents are capable of playing Minecraft as if they were humans, performing tasks like building, crafting, farming, trading, mining, attacking, equipping items, chatting and moving around.\\nAltera’s agents are designed to be companions for gamers, not assistants who do what you tell them to. 
Unlike NPCs (non-player characters), they have the freedom to make their own decisions, which could either make the game more entertaining or frustrating, depending on your playing style.'), Document(metadata={'description': 'Autonomous, AI-based players are coming to a gaming experience near you, and a new startup,\\xa0Altera, is joining the fray to build this new guard of AI', 'language': 'en-US', 'source': 'https://techcrunch.com/2024/05/08/bye-bye-bots-alteras-game-playing-ai-agents-get-backing-from-eric-schmidt/', 'title': 'Bye-bye bots: Altera’s game-playing AI agents get backing from Eric Schmidt | TechCrunch'}, page_content='At the helm of Altera is Robert Yang, a neuroscientist and former assistant professor at MIT. In December 2023, Yang and Altera’s other co-founders — Andrew Ahn, Nico Christie and Shuying Luo — stepped away from their applied research lab at MIT to focus on a new goal: developing AI agents (or “AI buddies,” as Yang calls them) with “social-emotional intelligence” that can interact with players and make their own decisions in-game.\\n“It has been my life goal as a neuroscientist to go all the way and build a digital human being — redefining what we thought AI was capable of,” Yang told TechCrunch. That is not to say that Yang is coming from a misanthropic point of view. “Our solidly pro-human framework means that we are building agents that will enhance humanity, not replace it,” he insists.')]\u001b[00m\n",
      "\n",
      "\n",
      "\u001b[1m\u001b[95m# Agent:\u001b[00m \u001b[1m\u001b[92mWriter\u001b[00m\n",
      "\u001b[95m## Final Answer:\u001b[00m \u001b[92m\n",
      "1. Altera raised $9 million to develop emotionally intelligent AI agents for gaming, aiming to create more human-like interactions in games, particularly Minecraft, enhancing player experiences.  \n",
      "2. Google’s AlphaFold is advancing protein structure predictions, enabling researchers to better understand biological processes for breakthroughs in biotechnology and medicine.  \n",
      "3. The emergence of socially aware AI in gaming opens up new ways for players to interact, enhancing emotional connections and experiences.  \n",
      "4. The democratization of AI in biotech signals a shift towards making advanced tools widely accessible for researchers, driving innovation across the field.  \n",
      "5. These trends indicate that AI is deeply integrating into various industries, enhancing engagement and productivity in everyday life.\u001b[00m\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Execute the crew: the searcher agent retrieves news via RAG and the writer summarises it\n",
    "result = news_crew.kickoff()"
   ]
  },
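  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `SearchNewsDBSchema` validation errors in the log above occur because the LLM first wrapped the query in a schema-style dict (`{'description': ..., 'type': 'str'}`) instead of passing a plain string, before correcting itself. A minimal sketch reproducing the error with Pydantic v2 (assuming `pydantic` is installed; the schema name mirrors the one shown in the log):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, ValidationError\n",
    "\n",
    "class SearchNewsDBSchema(BaseModel):\n",
    "    query: str  # the News DB Tool expects a plain string, not a dict\n",
    "\n",
    "# A plain string validates fine\n",
    "SearchNewsDBSchema(query=\"latest advancements in AI 2024\")\n",
    "\n",
    "# The dict the agent first emitted raises a string_type error\n",
    "try:\n",
    "    SearchNewsDBSchema(query={\"description\": \"latest advancements in AI for 2024\", \"type\": \"str\"})\n",
    "except ValidationError as exc:\n",
    "    print(exc.errors()[0][\"type\"])  # string_type"
   ]
  },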
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "HB91fa7BXULb",
    "outputId": "b88007db-b003-43d2-b71b-159ce0257e32"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Altera raised $9 million to develop emotionally intelligent AI agents for gaming, aiming to create more human-like interactions in games, particularly Minecraft, enhancing player experiences.  \n",
      "2. Google’s AlphaFold is advancing protein structure predictions, enabling researchers to better understand biological processes for breakthroughs in biotechnology and medicine.  \n",
      "3. The emergence of socially aware AI in gaming opens up new ways for players to interact, enhancing emotional connections and experiences.  \n",
      "4. The democratization of AI in biotech signals a shift towards making advanced tools widely accessible for researchers, driving innovation across the field.  \n",
      "5. These trends indicate that AI is deeply integrating into various industries, enhancing engagement and productivity in everyday life.\n"
     ]
    }
   ],
   "source": [
    "print(result)"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
