{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "collapsed_sections": [
        "_w8unPlx7Hsm"
      ]
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "##### Copyright 2023 Google LLC. SPDX-License-Identifier: Apache-2.0"
      ],
      "metadata": {
        "id": "_w8unPlx7Hsm"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Copyright 2023 Google LLC. SPDX-License-Identifier: Apache-2.0\n",
        "\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\n",
        "\n",
        "https://www.apache.org/licenses/LICENSE-2.0\n",
        "\n",
        "Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
      ],
      "metadata": {
        "id": "L7hAlq047Rnd"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **LLMs as General Pattern Machines:** CartPole Environment\n",
        "\n",
        "Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. We observe that they are capable of autoregressively completing complex token sequences -- from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstract Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics.\n",
        "\n",
        "This colab explores least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies to in-context learn a stabilizing controller for CartPole. While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.\n",
        "\n",
        "### **Quick Start:**\n",
        "\n",
        "**Step 1.** Register for an [OpenAI API key](https://openai.com/blog/openai-api/) to use GPT-3 (there's a free trial) and enter it below\n",
        "\n",
        "**Step 2.** Menu > Runtime > Run all"
      ],
      "metadata": {
        "id": "4OfVmDKk7XvG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "openai_api_key = \"your-api-key-here\""
      ],
      "metadata": {
        "id": "y3CVqIe0_Ni8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **Setup**\n",
        "\n",
        "This only needs a CPU (public) runtime."
      ],
      "metadata": {
        "id": "O2ru32hSh_oQ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install gymnasium[classic-control]\n",
        "!pip install openai\n",
        "!pip install transformers\n",
        "!pip install tiktoken"
      ],
      "metadata": {
        "id": "tc3XkvxdBPVb"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import time\n",
        "\n",
        "import gymnasium as gym\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import openai\n",
        "# from transformers import GPT2Tokenizer\n",
        "import tiktoken\n",
        "\n",
        "openai.api_key = openai_api_key"
      ],
      "metadata": {
        "id": "n1lPb7F7hvhd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **LLM** Functions\n",
        "**Note**: this can get expensive. 200 episodes is a few dollars with text-ada-001, but up to few hundred dollars for text-davinci-003+"
      ],
      "metadata": {
        "id": "StXin0YWh6X-"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "P5WwP1b2hjgc"
      },
      "outputs": [],
      "source": [
        "# This does GPT-3 inference.\n",
        "model = \"text-ada-001\"  # Small GPT-3?\n",
        "def LLM(prompt, max_tokens=256, stop=None, temperature=0.0):\n",
        "  while True:\n",
        "    try:\n",
        "      response = openai.Completion.create(engine=model, prompt=prompt, max_tokens=max_tokens, temperature=temperature, stop=stop)\n",
        "      break\n",
        "    except:\n",
        "      print(\"LLM failed. Retrying in 10s.\")\n",
        "      time.sleep(10)\n",
        "  text = [choice['text'] for choice in response['choices']]\n",
        "  return text if len(text) > 1 else text[0]\n",
        "\n",
        "# GPT-3 tokenizer is the same as GPT-2's (warning: slow).\n",
        "# tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")  # (slower)\n",
        "tokenizer = tiktoken.encoding_for_model(model)\n",
        "\n",
        "# Handshake to make sure we can talk to the LLM.\n",
        "print(LLM(\"hello world!\"))"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **CartPole** Environment\n",
        "\n",
        "This wraps the CartPole environment and makes it LLM friendly:\n",
        "* Reduce observation to just pole angle position and velocity.\n",
        "* Normalize pole angle position from [-0.25, 0.25] to [0, 100] ints.\n",
        "* Normalize pole angle velocity from [-3.00, 3.00] to [0, 100] ints."
      ],
      "metadata": {
        "id": "HarcpynWichi"
      }
    },
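    {
      "cell_type": "markdown",
      "source": [
        "A standalone sketch of the normalization (hypothetical helper names, mirroring `norm_state` below):\n",
        "\n",
        "```python\n",
        "def norm_angle(p):\n",
        "  # Clip pole angle to [-0.25, 0.25] rad, rescale to an int in [0, 100].\n",
        "  p = max(-0.25, min(0.25, p))\n",
        "  return round((p + 0.25) * (100 / 0.5))\n",
        "\n",
        "def norm_velocity(v):\n",
        "  # Clip angular velocity to [-3.0, 3.0], rescale to an int in [0, 100].\n",
        "  v = max(-3.0, min(3.0, v))\n",
        "  return round((v + 3.0) * (100 / 6))\n",
        "\n",
        "print(norm_angle(0.0), norm_velocity(0.0))  # Upright and stationary -> 50 50\n",
        "```\n",
        "\n",
        "Coarse integer bins keep each state down to a few tokens, which matters for fitting many episodes into the context window."
      ],
      "metadata": {}
    },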
    {
      "cell_type": "code",
      "source": [
        "class CartPoleEnv:\n",
        "\n",
        "  def __init__(self):\n",
        "    self.env = gym.make(\"CartPole-v1\")  #, render_mode=\"human\"\n",
        "    self.reset()\n",
        "\n",
        "  def reset(self):\n",
        "    obs, info = self.env.reset()\n",
        "    self.terminated = False\n",
        "    self.reward = 0\n",
        "    self.state = self.norm_state([obs[2], obs[3]])  # Pole angle and velocity.\n",
        "    return self.state\n",
        "\n",
        "  def step(self, act):\n",
        "    obs, reward, terminated, truncated, info = self.env.step(act)\n",
        "    self.state = self.norm_state([obs[2], obs[3]])\n",
        "    self.terminated = terminated or truncated\n",
        "    self.reward += np.int32(reward)\n",
        "    return self.state\n",
        "\n",
        "  def random_act(self):\n",
        "    return self.env.action_space.sample()\n",
        "\n",
        "  def norm_state(self, state):\n",
        "    p = (np.clip(state[0], -0.25, 0.25) + 0.25) * (100 / 0.5)\n",
        "    v = (np.clip(state[1], -3, 3) + 3) * (100 / 6)\n",
        "    return int(np.round(p)), int(np.round(v))\n",
        "\n",
        "  def state_to_str(self, state):\n",
        "    return f\" {state[0]} {state[1]}\"\n",
        "\n",
        "  def act_to_str(self, act):\n",
        "    # LLMs bias on 0 so lets make the actions 1 and 2 instead.\n",
        "    return f\" {act + 1}\"\n",
        "\n",
        "  def str_to_act(self, str):\n",
        "    return int(str) - 1"
      ],
      "metadata": {
        "id": "Bd7yzQ_wigYT"
      },
      "execution_count": null,
      "outputs": []
    },
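    {
      "cell_type": "markdown",
      "source": [
        "A quick round-trip check of the action encoding (plain-Python mirrors of `act_to_str` / `str_to_act` above, for illustration):\n",
        "\n",
        "```python\n",
        "def act_to_str(act):\n",
        "  return f\" {act + 1}\"  # Gym actions 0/1 become tokens \" 1\"/\" 2\".\n",
        "\n",
        "def str_to_act(s):\n",
        "  return int(s) - 1  # Parse the LLM's completion back to a Gym action.\n",
        "\n",
        "print(str_to_act(act_to_str(1).strip()))  # Round trip -> 1\n",
        "```"
      ],
      "metadata": {}
    },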
    {
      "cell_type": "markdown",
      "source": [
        "## Sequence **Improvement**\n",
        "Online in-context policy optimization with online rollouts (max reward is 200)."
      ],
      "metadata": {
        "id": "kbu-tdWYjaKn"
      }
    },
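    {
      "cell_type": "markdown",
      "source": [
        "Each episode in the context is serialized as `reward: s1, a1, s2, a2, ...`. A minimal sketch of the format (hypothetical `episode_to_text` helper, mirroring the prompt-building loop below):\n",
        "\n",
        "```python\n",
        "def episode_to_text(reward, episode):\n",
        "  # episode: list of ((angle, velocity), action) tuples, actions shifted to 1/2.\n",
        "  pairs = [f\" {s[0]} {s[1]}, {a + 1}\" for s, a in episode]\n",
        "  return f\"{reward}:\" + \",\".join(pairs)\n",
        "\n",
        "print(episode_to_text(12, [((50, 50), 0), ((48, 52), 1)]))  # -> 12: 50 50, 1, 48 52, 2\n",
        "```\n",
        "\n",
        "At rollout time the prompt starts with a desired reward higher than anything in memory, followed by the current state, and the LLM completes the next action token."
      ],
      "metadata": {}
    },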
    {
      "cell_type": "code",
      "source": [
        "init_episodes = 100\n",
        "max_episodes = 200\n",
        "temperature = 0.0\n",
        "max_context = 1020  # In tokens.\n",
        "\n",
        "# Memory bank with reward-labeled episodes: each is a list of state-action tuples.\n",
        "episodes = []\n",
        "rewards = []\n",
        "\n",
        "env = CartPoleEnv()\n",
        "\n",
        "# Generate some random policy rollouts and add them to memory.\n",
        "while len(episodes) < init_episodes:\n",
        "  episode = []\n",
        "  s = env.reset()\n",
        "  while not env.terminated:\n",
        "    a = env.random_act()\n",
        "    episode.append((s, a))\n",
        "    s = env.step(a)\n",
        "  episodes.append(episode)\n",
        "  rewards.append(env.reward)\n",
        "\n",
        "# Incremental rollouts with the LLM in the loop.\n",
        "while len(episodes) < max_episodes:\n",
        "\n",
        "  # Set a desired reward for the current rollout.\n",
        "  desired_reward = np.max(rewards) + 20 + np.int32(np.random.uniform() * 10)\n",
        "  prompt = f\"{desired_reward}:\"\n",
        "\n",
        "  # Environment reset.\n",
        "  state = env.reset()\n",
        "  buffer = []\n",
        "\n",
        "  while not env.terminated and env.reward < 200:\n",
        "    prompt += f\"{env.state_to_str(state)},\"\n",
        "    num_tokens = len(tokenizer.encode(prompt))\n",
        "\n",
        "    # Build context of episodes sorted by ascending rewards.\n",
        "    context = \"\"\n",
        "    for i in np.argsort(rewards)[::-1]:\n",
        "      if num_tokens + 10 > max_context:  # Each episode should have at least 10 tokens.\n",
        "        break\n",
        "      episode, reward = episodes[i], rewards[i]\n",
        "      size = min(len(episode), (max_context - num_tokens) // 5)\n",
        "      text = f\"{reward}:\" + \",\".join([f\"{env.state_to_str(s)},{env.act_to_str(a)}\" for s, a in episode[:size]])\n",
        "      num_tokens += 2 + size * 5   # Manual math here to count tokens. Calling the tokenizer too much can get slow.\n",
        "      context = f\"{text}\\n{context}\"\n",
        "\n",
        "    # LLM inference.\n",
        "    pred = LLM(context + prompt, max_tokens=4, stop=[\",\", \"\\n\"], temperature=temperature)\n",
        "\n",
        "    # If predicted action is invalid, sample random action.\n",
        "    try:\n",
        "      act = env.str_to_act(pred.strip())\n",
        "    except:\n",
        "      act = -1\n",
        "    if act not in [0, 1]:\n",
        "      print(f\"Invalid action '{pred}'. Sampling random one.\")\n",
        "      act = env.random_act()\n",
        "\n",
        "    prompt += f\"{env.act_to_str(act)},\"\n",
        "    buffer.append((state, act))\n",
        "\n",
        "    # Show LLM input.\n",
        "    print(context + prompt)\n",
        "    print(\"---------------------------------------------------------\")\n",
        "    print(\"Num episodes:\", len(episodes), \"Curr highest return:\", np.max(rewards))\n",
        "    print(\"---------------------------------------------------------\")\n",
        "\n",
        "    # Step environment.\n",
        "    state = env.step(act)\n",
        "\n",
        "  episodes.append(buffer)\n",
        "  rewards.append(env.reward)\n",
        "\n",
        "  # Make a plot of performance over time.\n",
        "  plt.scatter(np.arange(init_episodes), rewards[:init_episodes], c=\"gray\", alpha=0.3)\n",
        "  plt.scatter(np.arange(init_episodes, len(rewards)), rewards[init_episodes:], alpha=0.3)\n",
        "  max_over_time = [rewards[init_episodes]]\n",
        "  for reward in rewards[init_episodes+1:]:\n",
        "    max_over_time.append(max(reward, max_over_time[-1]))\n",
        "  plt.plot(np.arange(init_episodes, len(rewards)), max_over_time)\n",
        "  plt.axhline(y=200, color='gray', linestyle='--', alpha=0.3)\n",
        "  plt.show()"
      ],
      "metadata": {
        "id": "Ph63JHgFi89i"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}