{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8a521496-685a-4073-bbc5-bdbc13642aa1",
   "metadata": {},
   "source": [
    "# Interacting with Articulated Agents"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50a60d63-feeb-4b05-aec2-b9196117c9d1",
   "metadata": {},
   "source": [
    "In this tutorial we will show how to interact in Habitat via articulated agents. These are agents composed of different parts which can be articulated. Examples of these agents include different commercial robots (such as Spot, Fetch, Franka) or humanoids.\n",
    "In this tutorial we will explore how to interact in Habitat with such agents. We will cover the following topics:\n",
    "\n",
    "- How to initialize an agent\n",
    "- Moving an agent around the scene\n",
    "- Dynamic vs Kinematic Simulation\n",
    "- Interacting with objects\n",
    "- Interacting with Actions\n",
    "- Multi-Agent simulation\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c7bf54d-c588-4401-a7e6-0525f480fdc4",
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "import habitat_sim\n",
    "import magnum as mn\n",
    "import warnings\n",
    "from habitat.tasks.rearrange.rearrange_sim import RearrangeSim\n",
    "warnings.filterwarnings('ignore')\n",
    "from habitat_sim.utils.settings import make_cfg\n",
    "from matplotlib import pyplot as plt\n",
    "from habitat_sim.utils import viz_utils as vut\n",
    "from omegaconf import DictConfig\n",
    "import numpy as np\n",
    "from habitat.articulated_agents.robots import FetchRobot\n",
    "from habitat.config.default import get_agent_config\n",
    "from habitat.config.default_structured_configs import ThirdRGBSensorConfig, HeadRGBSensorConfig, HeadPanopticSensorConfig\n",
    "from habitat.config.default_structured_configs import SimulatorConfig, HabitatSimV0Config, AgentConfig\n",
    "from habitat.config.default import get_agent_config\n",
    "import habitat\n",
    "from habitat_sim.physics import JointMotorSettings, MotionType\n",
    "from omegaconf import OmegaConf\n",
    "\n",
    "import git, os\n",
    "repo = git.Repo(\".\", search_parent_directories=True)\n",
    "dir_path = repo.working_tree_dir\n",
    "data_path = os.path.join(dir_path, \"data\")\n",
    "os.chdir(dir_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c18b80b4-7b14-4b61-9921-9736641d977d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download necessary data. This step may take a while but will only be executed once.\n",
    "! ln -s ../../data .\n",
    "# We will download spot to show interaction between the spot robot and fetch\n",
    "! python -m habitat_sim.utils.datasets_download --no-replace --uids hab_spot_arm hab3_bench_assets ycb\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9430d6c2-16ab-4754-944e-7c1637d1859c",
   "metadata": {},
   "source": [
    "# Initializing a scene with agents\n",
    "The first thing we want to do is to initialize the simulator to include different agents. \n",
    "\n",
    "In the first part of this tutorial we will use `RearrangeSim` as our simulator, which is an abstraction over [HabitatSimulator](https://aihabitat.org/docs/habitat-lab/habitat.core.simulator.Simulator.html) and includes functionalities to update agent cameras and position or interact with objects. In the second part of the tutorial, we will be defining agent actions and will be using a `RearrangeEnvironment`, which contains a reference to the simulator, as well as functions to define and execute agent actions, obtain rewards or termination conditions. The RearrangeEnvironment will also be used to train agents via RL.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d334efd7-514a-4ce6-a958-d1b31a537a71",
   "metadata": {},
   "source": [
    "## Defining agent configurations\n",
    "We start by defining a configuration for each agent we want to add. Articulated agents are represented as any other articulated object, and are therefore defined via an URDF file. While this file is enough to represent the agent as an object, it doesn't include a way to easily set its base position, reset its joints, move a specific part or query other attributes.\n",
    "\n",
    "To simplify this, we provide an abstraction, `ArticulatedAgent`, which will wrap habitat-sim's ManagedArticulatedObject class initialized from the URDF and provide functionalities that are commonly useful for agent control. You can view the different ArticulatedAgents (robots and humanoids) [here](https://github.com/facebookresearch/habitat-lab/tree/main/habitat-lab/habitat/articulated_agents)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "01ec3be8-02aa-4238-8875-da65d6308f5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the agent configuration\n",
    "main_agent_config = AgentConfig()\n",
    "urdf_path = os.path.join(data_path, \"robots/hab_fetch/robots/hab_fetch.urdf\")\n",
    "main_agent_config.articulated_agent_urdf = urdf_path\n",
    "main_agent_config.articulated_agent_type = \"FetchRobot\"\n",
    "\n",
    "# Define sensors that will be attached to this agent, here a third_rgb sensor and a head_rgb.\n",
    "# We will later talk about why we are giving the sensors these names\n",
    "main_agent_config.sim_sensors = {\n",
    "    \"third_rgb\": ThirdRGBSensorConfig(),\n",
    "    \"head_rgb\": HeadRGBSensorConfig(),\n",
    "}\n",
    "\n",
    "# We create a dictionary with names of agents and their corresponding agent configuration\n",
    "agent_dict = {\"main_agent\": main_agent_config}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e8f679d-2d42-4b7a-9431-f491d006a80b",
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_sim_cfg(agent_dict):\n",
    "    # Start the scene config\n",
    "    sim_cfg = SimulatorConfig(type=\"RearrangeSim-v0\")\n",
    "    \n",
    "    # This is for better graphics\n",
    "    sim_cfg.habitat_sim_v0.enable_hbao = True\n",
    "    sim_cfg.habitat_sim_v0.enable_physics = True\n",
    "\n",
    "    \n",
    "    # Set up an example scene\n",
    "    sim_cfg.scene = os.path.join(data_path, \"hab3_bench_assets/hab3-hssd/scenes/103997919_171031233.scene_instance.json\")\n",
    "    sim_cfg.scene_dataset = os.path.join(data_path, \"hab3_bench_assets/hab3-hssd/hab3-hssd.scene_dataset_config.json\")\n",
    "    sim_cfg.additional_object_paths = [os.path.join(data_path, 'objects/ycb/configs/')]\n",
    "\n",
    "    \n",
    "    cfg = OmegaConf.create(sim_cfg)\n",
    "\n",
    "    # Set the scene agents\n",
    "    cfg.agents = agent_dict\n",
    "    cfg.agents_order = list(cfg.agents.keys())\n",
    "    return cfg\n",
    "\n",
    "\n",
    "def init_rearrange_sim(agent_dict):\n",
    "    # Start the scene config\n",
    "    sim_cfg = make_sim_cfg(agent_dict)    \n",
    "    cfg = OmegaConf.create(sim_cfg)\n",
    "    \n",
    "    # Create the scene\n",
    "    sim = RearrangeSim(cfg)\n",
    "\n",
    "    # This is needed to initialize the agents\n",
    "    sim.agents_mgr.on_new_scene()\n",
    "\n",
    "    # For this tutorial, we will also add an extra camera that will be used for third person recording.\n",
    "    camera_sensor_spec = habitat_sim.CameraSensorSpec()\n",
    "    camera_sensor_spec.sensor_type = habitat_sim.SensorType.COLOR\n",
    "    camera_sensor_spec.uuid = \"scene_camera_rgb\"\n",
    "\n",
    "    # TODO: this is a bit dirty but I think its nice as it shows how to modify a camera sensor...\n",
    "    sim.add_sensor(camera_sensor_spec, 0)\n",
    "\n",
    "    return sim\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "035a1166-4943-419f-9a4d-2c560ffe6bb6",
   "metadata": {},
   "source": [
    "## Initializing the scene\n",
    "We can now initialize the scene. As mentioned before, we will be using here `RearrangeSim` to easily be able to interact with objects.\n",
    "\n",
    "We create a scene init function that will take as input a dictionary of agent configurations, as the one we defined before."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b443a4c4-b4c4-4b52-9212-74564e9d2ce0",
   "metadata": {},
   "outputs": [],
   "source": [
    "sim = init_rearrange_sim(agent_dict)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c2d2edf9-9e1c-48ba-b285-e83962d55d70",
   "metadata": {},
   "source": [
    "We just initialized our scene! We can now query and set our agent position"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a9a769a1-a944-4bb8-8860-5ea8de5f7072",
   "metadata": {},
   "outputs": [],
   "source": [
    "init_pos = mn.Vector3(-5.5,0,-1.5)\n",
    "art_agent = sim.articulated_agent\n",
    "# We will see later about this\n",
    "art_agent.sim_obj.motion_type = MotionType.KINEMATIC\n",
    "print(\"Current agent position:\", art_agent.base_pos)\n",
    "art_agent.base_pos = init_pos \n",
    "print(\"New agent position:\", art_agent.base_pos)\n",
    "# We take a step to update agent position\n",
    "_ = sim.step({})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f5fc09d-4a79-4697-9feb-c7210879601d",
   "metadata": {},
   "source": [
    "We can also take observations in the environment. Here we get three sensors, two of which we defined in the config and one which we added afterwards."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4949822e-410b-41bb-89ea-b898fc9a6411",
   "metadata": {},
   "outputs": [],
   "source": [
    "observations = sim.get_sensor_observations()\n",
    "print(observations.keys())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "45a3565f-1c3a-4880-8877-8a3af9893d7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "_, ax = plt.subplots(1,len(observations.keys()))\n",
    "\n",
    "for ind, name in enumerate(observations.keys()):\n",
    "    ax[ind].imshow(observations[name])\n",
    "    ax[ind].set_axis_off()\n",
    "    ax[ind].set_title(name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba8f7681-d21a-417b-8370-7c684c2a8019",
   "metadata": {},
   "source": [
    "The first two sensors shown here are special. They are attached to a particular part of the agent, and will be updated as we update the agent. The third one is not attached to the agent.\n",
    "The reason for this is that the first two sensors start with `third` and `head`, which are special camera parameters which will have a particular behavior for this robot. You can see the camera parameters here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca67f8e9-c774-4ba3-ac96-d5b9da8d6f86",
   "metadata": {},
   "outputs": [],
   "source": [
    "art_agent.params.cameras.keys()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4440576b-31ae-4c8a-8caf-d95ddc2585a6",
   "metadata": {},
   "source": [
    "Whenever a sensor name starts with any of these names, it will be set to have the behavior specified in the agent_params. You can look at the ArticulatedAgent definition to see the different specified cameras."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a66bacdf-315a-42e5-adef-5e1625b8dd52",
   "metadata": {},
   "source": [
    "# Moving an agent around the scene\n",
    "The next step is to move the agent and its parts around the scene. Let's start by translating and rotating the agent base. We will be recording each frame and generating a video."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a691890-98fc-463e-80d9-93dc408241a4",
   "metadata": {},
   "source": [
    "## Moving the agent base"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "73432bd2-e92a-4b59-9f4b-d630208fdf2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "observations = []\n",
    "num_iter = 100\n",
    "pos_delta = mn.Vector3(0.02,0,0)\n",
    "rot_delta = np.pi / (8 * num_iter)\n",
    "art_agent.base_pos = init_pos\n",
    "\n",
    "sim.reset()\n",
    "# set_fixed_camera(sim)\n",
    "for _ in range(num_iter):\n",
    "    # TODO: this actually seems to give issues...\n",
    "    art_agent.base_pos = art_agent.base_pos + pos_delta\n",
    "    art_agent.base_rot = art_agent.base_rot + rot_delta\n",
    "    sim.step({})\n",
    "    observations.append(sim.get_sensor_observations())\n",
    "\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"scene_camera_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de4de624-de2d-4cbe-a3d7-32975bdaad1a",
   "metadata": {},
   "source": [
    "As we can see, the third_rgb camera was set to track the agent, whereas the scene_camera_rgb remained fixed."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1ce5270-0cea-4272-8806-bffb246c8c8b",
   "metadata": {},
   "source": [
    "## Updating agent articulations\n",
    "Articulated Agent also includes reference attributes, that helps us easily access and modify relevant parameters of the robot, such as the arm joints, or get the end effector. We can look at the position of the end effector, or the arm joints:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8bb75672-0448-4be4-9091-72ea9c039838",
   "metadata": {},
   "outputs": [],
   "source": [
    "sim.reset()\n",
    "\n",
    "observations = []\n",
    "# We start by setting the arm to the minimum value\n",
    "lower_limit = art_agent.arm_joint_limits[0].copy()\n",
    "lower_limit[lower_limit == -np.inf] = 0\n",
    "upper_limit = art_agent.arm_joint_limits[1].copy()\n",
    "upper_limit[upper_limit == np.inf] = 0\n",
    "for i in range(num_iter):\n",
    "    alpha = i/num_iter\n",
    "    current_joints = upper_limit * alpha + lower_limit * (1 - alpha)\n",
    "    art_agent.arm_joint_pos = current_joints\n",
    "    sim.step({})\n",
    "    observations.append(sim.get_sensor_observations())\n",
    "    if i in [0, num_iter-1]:\n",
    "        print(f\"Step {i}:\")\n",
    "        print(\"Arm joint positions:\", art_agent.arm_joint_pos)\n",
    "        print(\"Arm end effector translation:\", art_agent.ee_transform().translation)\n",
    "        print(art_agent.sim_obj.joint_positions)\n",
    "\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5219fe33-d873-473a-8933-1a8bf0548039",
   "metadata": {},
   "source": [
    "# Dynamic vs Kinematic Simulation\n",
    "So far, we've been updating the agent kinematically. We can also set the agent to be dynamic, such that physical forces modify the state of the agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c77a974c-c122-4b4b-a57e-2b164855c448",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We will initialize the agent 0.3 meters away from the floor and let it fall\n",
    "sim = init_rearrange_sim(agent_dict)\n",
    "art_agent = sim.articulated_agent\n",
    "art_agent._fixed_base = False\n",
    "sim.agents_mgr.on_new_scene()\n",
    "\n",
    "# The base is not fixed anymore\n",
    "art_agent.sim_obj.motion_type = MotionType.DYNAMIC\n",
    "\n",
    "\n",
    "art_agent.base_pos = init_pos + mn.Vector3(0,1.5,0)\n",
    "\n",
    "_ = sim.step({})\n",
    "observations = []\n",
    "fps = 60 # Default value for make video\n",
    "dt = 1./fps\n",
    "for _ in range(120):    \n",
    "    sim.step_physics(dt)\n",
    "    observations.append(sim.get_sensor_observations())\n",
    "    \n",
    " \n",
    "\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95ea91ac-68c8-4af0-b21f-1264878092a6",
   "metadata": {},
   "source": [
    "# Interacting with objects\n",
    "We will now look at how to interact with objects in the scene. For this, we will start by loading an episode from a pregenerated dataset, which contains a scene with pre-initialized objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c22e98d8-c8eb-42ff-aad8-18c83341269f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from habitat.datasets.rearrange.rearrange_dataset import RearrangeEpisode\n",
    "import gzip\n",
    "import json\n",
    "\n",
    "# Define the agent configuration\n",
    "episode_file = os.path.join(data_path, \"hab3_bench_assets/episode_datasets/small_large.json.gz\")\n",
    "sim = init_rearrange_sim(agent_dict)\n",
    "# Load the dataset\n",
    "with gzip.open(episode_file, \"rt\") as f: \n",
    "    episode_files = json.loads(f.read())\n",
    "\n",
    "# Get the first episode\n",
    "episode = episode_files[\"episodes\"][0]\n",
    "rearrange_episode = RearrangeEpisode(**episode)\n",
    "\n",
    "art_agent = sim.articulated_agent\n",
    "art_agent._fixed_base = True\n",
    "sim.agents_mgr.on_new_scene()\n",
    "\n",
    "\n",
    "sim.reconfigure(sim.habitat_config, ep_info=rearrange_episode)\n",
    "sim.reset()\n",
    "\n",
    "art_agent.sim_obj.motion_type = MotionType.KINEMATIC\n",
    "sim.articulated_agent.base_pos =  init_pos \n",
    "_ = sim.step({})\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6b02b26-41f0-4d5c-bbf1-afca9a4de374",
   "metadata": {},
   "source": [
    "The first thing we will do is to look at the objects currently instanced in the scene. The simulator provides a RigidObjectManager and an ArticulatedObjectManger for accessing, adding, and removing objects in the scene:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "75328c48-7295-4315-ad5a-727b010ebfc0",
   "metadata": {},
   "outputs": [],
   "source": [
    "aom = sim.get_articulated_object_manager()\n",
    "rom = sim.get_rigid_object_manager()\n",
    "\n",
    "# We can query the articulated and rigid objects\n",
    "\n",
    "print(\"List of articulated objects:\")\n",
    "for handle, ao in aom.get_objects_by_handle_substring().items():\n",
    "    print(handle, \"id\", aom.get_object_id_by_handle(handle))\n",
    "\n",
    "print(\"\\nList of rigid objects:\")\n",
    "obj_ids = []\n",
    "for handle, ro in rom.get_objects_by_handle_substring().items():\n",
    "    if ro.awake:\n",
    "        print(handle, \"id\", ro.object_id)\n",
    "        obj_ids.append(ro.object_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7eee3675-fd82-45ec-bea3-e8f97e160958",
   "metadata": {},
   "source": [
    "Above we listed all object instances in the scene. We can also retrieve the episode's RigidObjects using RearrangeSim's `scene_obj_ids` cache.\n",
    "\n",
    "Let's set the agent to interact with the object. We will first teleport the agent somewhere close to the object, and then grab the object. To teleport the agent\n",
    "we will first look at the object coordinate, and sample a navigable area next to the object coordinate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "497d6d58-e240-40dc-84fd-615eb1a1b900",
   "metadata": {},
   "outputs": [],
   "source": [
    "sim.reset()\n",
    "art_agent.sim_obj.motion_type = MotionType.KINEMATIC\n",
    "obj_id = sim.scene_obj_ids[0]\n",
    "first_object = rom.get_object_by_id(obj_id)\n",
    "\n",
    "object_trans = first_object.translation\n",
    "print(first_object.handle, \"is in\", object_trans)\n",
    "\n",
    "sample = sim.pathfinder.get_random_navigable_point_near(\n",
    "    circle_center=object_trans, radius=1.0, island_index=-1\n",
    ")\n",
    "vec_sample_obj = object_trans - sample\n",
    "\n",
    "angle_sample_obj = np.arctan2(-vec_sample_obj[2], vec_sample_obj[0])\n",
    "\n",
    "sim.articulated_agent.base_pos = sample\n",
    "sim.articulated_agent.base_rot = angle_sample_obj\n",
    "obs = sim.step({})\n",
    "\n",
    "plt.imshow(obs[\"head_rgb\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f152136f-7a43-410d-8a41-738e40d46116",
   "metadata": {},
   "source": [
    "We will now pick the object. In this example, we will directly attach the object to the robot arm, without animating the arm in any way. We also provide a way to train policies so that the arm approaches the object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c04a6374-8bf7-40b9-ae42-4797c50d38bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We use a grasp manager to interact with the object:\n",
    "agent_id = 0\n",
    "grasp_manager = sim.agents_mgr[agent_id].grasp_mgrs[0]\n",
    "grasp_manager.snap_to_obj(obj_id)\n",
    "obs = sim.step({})\n",
    "plt.imshow(obs[\"head_rgb\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f86f791a-4469-4471-9363-c5817f7d4b00",
   "metadata": {},
   "source": [
    "We can move around and drop the object"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "66872f2d-7af3-4c1d-96e7-5caf0f7ce5e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "num_iter = 100\n",
    "observations = []\n",
    "\n",
    "sim.articulated_agent.base_pos = sample\n",
    "for _ in range(num_iter):    \n",
    "    forward_vec = art_agent.base_transformation.transform_vector(mn.Vector3(1,0,0))\n",
    "    art_agent.base_pos = art_agent.base_pos + forward_vec * 0.02\n",
    "    observations.append(sim.step({}))\n",
    "    \n",
    "# Remove the object\n",
    "grasp_manager.desnap()\n",
    "for _ in range(20):\n",
    "    observations.append(sim.step({}))\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"head_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29dd3c0b-5254-4a7b-af60-9c69a1dc7624",
   "metadata": {},
   "source": [
    "# Defining agent actions\n",
    "So far, we have been controlling agents by directly updating the robot parameters. In  many cases, you may want to abstract interaction into actions that update the robot. These actions can then be called by a planner or a learned policy. In this section we will show how to define and control agents with these actions. The Habitat Quickstart provides more instructions into how to add actions https://aihabitat.org/docs/habitat-lab/quickstart.html.\n",
    "TODO: point to skills tutorial\n",
    "\n",
    "To execute actions, we will be using the `Env`, which is an object that contains a simulator instance as well as a set of action definitions and specifiable rewards. We will not be going through the specifiable rewards. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60d2d066-871e-4c11-ac7d-751d3e10ca2d",
   "metadata": {},
   "source": [
    "## Defining an environment\n",
    "We will start by defining the environment class. A key difference is that now we also define actions that the environment will have"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b1b75f5-f252-4655-997a-ed5791d4265c",
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "from habitat.config.default_structured_configs import TaskConfig, EnvironmentConfig, DatasetConfig, HabitatConfig\n",
    "from habitat.config.default_structured_configs import ArmActionConfig, BaseVelocityActionConfig, OracleNavActionConfig, ActionConfig\n",
    "from habitat.core.env import Env\n",
    "def make_sim_cfg(agent_dict):\n",
    "    # Start the scene config\n",
    "    sim_cfg = SimulatorConfig(type=\"RearrangeSim-v0\")\n",
    "    \n",
    "    # Enable Horizon Based Ambient Occlusion (HBAO) to approximate shadows.\n",
    "    sim_cfg.habitat_sim_v0.enable_hbao = True\n",
    "    \n",
    "    sim_cfg.habitat_sim_v0.enable_physics = True\n",
    "\n",
    "    \n",
    "    # Set up an example scene\n",
    "    sim_cfg.scene = os.path.join(data_path, \"hab3_bench_assets/hab3-hssd/scenes/103997919_171031233.scene_instance.json\")\n",
    "    sim_cfg.scene_dataset = os.path.join(data_path, \"hab3_bench_assets/hab3-hssd/hab3-hssd.scene_dataset_config.json\")\n",
    "    sim_cfg.additional_object_paths = [os.path.join(data_path, 'objects/ycb/configs/')]\n",
    "\n",
    "    \n",
    "    cfg = OmegaConf.create(sim_cfg)\n",
    "\n",
    "    # Set the scene agents\n",
    "    cfg.agents = agent_dict\n",
    "    cfg.agents_order = list(cfg.agents.keys())\n",
    "    return cfg\n",
    "\n",
    "def make_hab_cfg(agent_dict, action_dict):\n",
    "    sim_cfg = make_sim_cfg(agent_dict)\n",
    "    task_cfg = TaskConfig(type=\"RearrangeEmptyTask-v0\")\n",
    "    task_cfg.actions = action_dict\n",
    "    env_cfg = EnvironmentConfig()\n",
    "    dataset_cfg = DatasetConfig(type=\"RearrangeDataset-v0\", data_path=\"data/hab3_bench_assets/episode_datasets/small_large.json.gz\")\n",
    "    \n",
    "    \n",
    "    hab_cfg = HabitatConfig()\n",
    "    hab_cfg.environment = env_cfg\n",
    "    hab_cfg.task = task_cfg\n",
    "    hab_cfg.dataset = dataset_cfg\n",
    "    hab_cfg.simulator = sim_cfg\n",
    "    hab_cfg.simulator.seed = hab_cfg.seed\n",
    "\n",
    "    return hab_cfg\n",
    "\n",
    "def init_rearrange_env(agent_dict, action_dict):\n",
    "    hab_cfg = make_hab_cfg(agent_dict, action_dict)\n",
    "    res_cfg = OmegaConf.create(hab_cfg)\n",
    "    return Env(res_cfg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7242ac3b-5b27-420d-afa1-a164f7d65f9e",
   "metadata": {},
   "outputs": [],
   "source": [
    "action_dict = {\n",
    "    \"oracle_magic_grasp_action\": ArmActionConfig(type=\"MagicGraspAction\"),\n",
    "    \"base_velocity_action\": BaseVelocityActionConfig(),\n",
    "    \"oracle_coord_action\": OracleNavActionConfig(type=\"OracleNavCoordinateAction\", spawn_max_dist_to_obj=1.0)\n",
    "}\n",
    "env = init_rearrange_env(agent_dict, action_dict)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e605fd4f-1587-4f08-aa42-e550fe001d54",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The environment contains a pointer to an habitat simulator, which allows us to reproduce the steps we did before\n",
    "print(env._sim)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "653b2c16-1f8d-4486-86e9-fa282edbef3c",
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "# We can query the actions available, and their action space:\n",
    "for action_name, action_space in env.action_space.items():\n",
    "    print(action_name, action_space)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38414cdb-bcd6-4c4c-b430-67caf8480b3f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's get an observation as before:\n",
    "env.reset()\n",
    "obs = env.step({\"action\": (), \"action_args\": {}})\n",
    "plt.imshow(obs[\"third_rgb\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a332746-448b-449a-9d1c-1b279b897e5f",
   "metadata": {},
   "source": [
    "We can now call actions in the environment to update the agent. For this, we call the step function with the name of the action we want to execute and the parameters. We can also execute multiple actions at the same time. You can also implement novel actions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc4c1cac-e079-4bfe-a887-bdc30dcec726",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We can now call the defined actions\n",
    "observations = []\n",
    "num_iter = 40\n",
    "for _ in range(num_iter):\n",
    "    params = env.action_space[\"base_velocity_action\"].sample()\n",
    "    action_dict = {\n",
    "        \"action\": \"base_velocity_action\",\n",
    "        \"action_args\": params\n",
    "    }\n",
    "    observations.append(env.step(action_dict))\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a4a94e5-2f5d-499a-b0cf-5a39befeea4b",
   "metadata": {},
   "source": [
    "One of the actions we defined was the OracleNavCoordAction, which uses a path planner to navigate to a given coordinate. We can use it to navigate to a specific object instance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fac74b95-2d1d-405e-a99e-bae08fad44ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "env.reset()\n",
    "rom = env.sim.get_rigid_object_manager()\n",
    "# env.sim.articulated_agent.base_pos = init_pos\n",
    "# As before, we get a navigation point next to an object id\n",
    "\n",
    "obj_id = env.sim.scene_obj_ids[0]\n",
    "first_object = rom.get_object_by_id(obj_id)\n",
    "\n",
    "object_trans = first_object.translation\n",
    "print(first_object.handle, \"is in\", object_trans)\n",
    "\n",
    "# print(sample)\n",
    "observations = []\n",
    "delta = 2.0\n",
    "\n",
    "object_agent_vec = env.sim.articulated_agent.base_pos - object_trans\n",
    "object_agent_vec.y = 0\n",
    "dist_agent_object = object_agent_vec.length()\n",
    "# Walk towards the object\n",
    "\n",
    "agent_displ = np.inf\n",
    "agent_rot = np.inf\n",
    "prev_rot = env.sim.articulated_agent.base_rot\n",
    "prev_pos = env.sim.articulated_agent.base_pos\n",
    "while agent_displ > 1e-9 or agent_rot > 1e-9:\n",
    "    prev_rot = env.sim.articulated_agent.base_rot\n",
    "    prev_pos = env.sim.articulated_agent.base_pos\n",
    "    action_dict = {\n",
    "        \"action\": (\"oracle_coord_action\"), \n",
    "        \"action_args\": {\n",
    "              \"oracle_nav_lookat_action\": object_trans,\n",
    "              \"mode\": 1\n",
    "          }\n",
    "    }\n",
    "    observations.append(env.step(action_dict))\n",
    "    \n",
    "    cur_rot = env.sim.articulated_agent.base_rot\n",
    "    cur_pos = env.sim.articulated_agent.base_pos\n",
    "    agent_displ = (cur_pos - prev_pos).length()\n",
    "    agent_rot = np.abs(cur_rot - prev_rot)\n",
    "\n",
    "# Wait\n",
    "for _ in range(20):\n",
    "    action_dict = {\"action\": (), \"action_args\": {}}\n",
    "    observations.append(env.step(action_dict))    \n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fea2e8a0-b7b4-488e-948a-3721e352637b",
   "metadata": {},
   "source": [
    "## Defining new actions\n",
    "In the previous example we used actions to do navigation. We would like to also be able to pick up objects given an id. However, Habitat doesn't have a pre-defined action for that. We will look at a picking action here. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a14e56b0-1e2c-4a0a-9846-58ca37373d27",
   "metadata": {},
   "outputs": [],
   "source": [
    "from habitat.tasks.rearrange.actions.articulated_agent_action import ArticulatedAgentAction\n",
    "from habitat.core.registry import registry\n",
    "from gym import spaces\n",
    "\n",
    "\n",
    "@registry.register_task_action\n",
    "class PickObjIdAction(ArticulatedAgentAction):\n",
    "    \n",
    "    @property\n",
    "    def action_space(self):\n",
    "        MAX_OBJ_ID = 1000\n",
    "        return spaces.Dict({\n",
    "            f\"{self._action_arg_prefix}pick_obj_id\": spaces.Discrete(MAX_OBJ_ID)\n",
    "        })\n",
    "\n",
    "    def step(self, *args, **kwargs):\n",
    "        obj_id = kwargs[f\"{self._action_arg_prefix}pick_obj_id\"]\n",
    "        # Snap the object with the given id to the agent's gripper.\n",
    "        self.cur_grasp_mgr.snap_to_obj(obj_id)\n",
    "\n",
    "action_dict = {\n",
    "    \"pick_obj_id_action\": ActionConfig(type=\"PickObjIdAction\"),\n",
    "    \"base_velocity_action\": BaseVelocityActionConfig(),\n",
    "    \"oracle_coord_action\": OracleNavActionConfig(type=\"OracleNavCoordinateAction\", spawn_max_dist_to_obj=1.0)\n",
    "}\n",
    "env = init_rearrange_env(agent_dict, action_dict)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f4e7b3b-2d8c-4fb4-a88d-f82b3603d963",
   "metadata": {},
   "outputs": [],
   "source": [
    "env.reset()\n",
    "rom = env.sim.get_rigid_object_manager()\n",
    "# env.sim.articulated_agent.base_pos = init_pos\n",
    "# As before, we get a navigation point next to an object id\n",
    "\n",
    "obj_id = env.sim.scene_obj_ids[0]\n",
    "first_object = rom.get_object_by_id(obj_id)\n",
    "\n",
    "object_trans = first_object.translation\n",
    "print(first_object.handle, \"is in\", object_trans)\n",
    "\n",
    "observations = []\n",
    "delta = 2.0\n",
    "\n",
    "object_agent_vec = env.sim.articulated_agent.base_pos - object_trans\n",
    "object_agent_vec.y = 0\n",
    "dist_agent_object = object_agent_vec.length()\n",
    "# Walk towards the object\n",
    "\n",
    "agent_displ = np.inf\n",
    "agent_rot = np.inf\n",
    "prev_rot = env.sim.articulated_agent.base_rot\n",
    "prev_pos = env.sim.articulated_agent.base_pos\n",
    "while agent_displ > 1e-9 or agent_rot > 1e-9:\n",
    "    prev_rot = env.sim.articulated_agent.base_rot\n",
    "    prev_pos = env.sim.articulated_agent.base_pos\n",
    "    action_dict = {\n",
    "        \"action\": \"oracle_coord_action\",\n",
    "        \"action_args\": {\n",
    "              \"oracle_nav_lookat_action\": object_trans,\n",
    "              \"mode\": 1\n",
    "          }\n",
    "    }\n",
    "    observations.append(env.step(action_dict))\n",
    "    \n",
    "    cur_rot = env.sim.articulated_agent.base_rot\n",
    "    cur_pos = env.sim.articulated_agent.base_pos\n",
    "    agent_displ = (cur_pos - prev_pos).length()\n",
    "    agent_rot = np.abs(cur_rot - prev_rot)\n",
    "    # print(agent_rot, agent_displ)\n",
    "\n",
    "for _ in range(20):\n",
    "    action_dict = {\"action\": (), \"action_args\": {}}\n",
    "    observations.append(env.step(action_dict))    \n",
    "\n",
    "action_dict = {\"action\": \"pick_obj_id_action\", \"action_args\": {\"pick_obj_id\": obj_id}}\n",
    "observations.append(env.step(action_dict))\n",
    "for _ in range(100):\n",
    "    action_dict = {\"action\": (), \"action_args\": {}}\n",
    "    observations.append(env.step(action_dict))    \n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")"
   ]
  },
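  {
   "cell_type": "markdown",
   "id": "3f6a1c2e-9b7d-4e21-a5c3-1d2e3f4a5b6c",
   "metadata": {},
   "source": [
    "The `@registry.register_task_action` decorator above makes `PickObjIdAction` discoverable by the string name used in `ActionConfig(type=\"PickObjIdAction\")`. The cell below is a minimal, self-contained sketch of this registry pattern; `SimpleRegistry` and `GoForwardAction` are illustrative names, not part of the Habitat API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f6a1c2e-9b7d-4e21-a5c3-1d2e3f4a5b6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of a class registry: the decorator stores the class under its\n",
    "# name, so it can later be looked up from a config string such as type=\"...\".\n",
    "class SimpleRegistry:\n",
    "    def __init__(self):\n",
    "        self._actions = {}\n",
    "\n",
    "    def register_task_action(self, cls):\n",
    "        self._actions[cls.__name__] = cls\n",
    "        return cls\n",
    "\n",
    "    def get_task_action(self, name):\n",
    "        return self._actions[name]\n",
    "\n",
    "sketch_registry = SimpleRegistry()\n",
    "\n",
    "@sketch_registry.register_task_action\n",
    "class GoForwardAction:\n",
    "    def step(self, **kwargs):\n",
    "        return \"forward\"\n",
    "\n",
    "# A config entry with type=\"GoForwardAction\" can now be resolved to the class:\n",
    "action_cls = sketch_registry.get_task_action(\"GoForwardAction\")\n",
    "print(action_cls().step())"
   ]
  },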
  {
   "cell_type": "markdown",
   "id": "b30f76ba-51bc-4311-9bb1-dce37a6fa3a9",
   "metadata": {},
   "source": [
    "# Multi-Agent Interaction\n",
    "So far, we have been executing actions with a single agent. Habitat also supports multi-agent execution; here we will look at how to set it up."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e180c738-eae3-41d6-8dc6-04860710bcdb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We will download the Spot robot assets to show interaction between the Spot and Fetch robots\n",
    "! python -m habitat_sim.utils.datasets_download --uids hab_spot_arm --no-replace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "af80f978-aae5-442f-b216-afff1b848939",
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls data/robots/hab_spot_arm/urdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "97d1c1f2-4e4a-4a57-8242-d2978d6982fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The main difference is in how we define the agent_dict.\n",
    "# Important: when using more than one agent, we should name them agent_{idx}, with idx ranging\n",
    "# from 0 to the number of agents minus one. This naming is required so that actions can be parsed per agent.\n",
    "import copy\n",
    "second_agent_config = copy.deepcopy(main_agent_config)\n",
    "second_agent_config.articulated_agent_urdf = os.path.join(data_path, \"robots/hab_spot_arm/urdf/hab_spot_arm.urdf\")\n",
    "second_agent_config.articulated_agent_type = \"SpotRobot\"\n",
    "\n",
    "\n",
    "agent_dict = {\"agent_0\": main_agent_config, \"agent_1\": second_agent_config}\n",
    "action_dict = {\n",
    "    \"oracle_magic_grasp_action\": ArmActionConfig(type=\"MagicGraspAction\"),\n",
    "    \"base_velocity_action\": BaseVelocityActionConfig(),\n",
    "    \"oracle_coord_action\": OracleNavActionConfig(type=\"OracleNavCoordinateAction\", spawn_max_dist_to_obj=1.0)\n",
    "}\n",
    "\n",
    "multi_agent_action_dict = {}\n",
    "for action_name, action_config in action_dict.items():\n",
    "    for agent_id in range(2):\n",
    "        multi_agent_action_dict[f\"agent_{agent_id}_{action_name}\"] = action_config \n",
    "env = init_rearrange_env(agent_dict, multi_agent_action_dict)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9387461f-cc92-4edc-83ec-89860e2db173",
   "metadata": {},
   "source": [
    "#### The environment takes care of adding agent prefixes to observations, so that you can query each agent's observations"
   ]
  },
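  {
   "cell_type": "markdown",
   "id": "7c8d9e0f-1a2b-4c3d-8e4f-5a6b7c8d9e0f",
   "metadata": {},
   "source": [
    "For example, with two agents that each have a head RGB sensor, the observation keys are `agent_0_head_rgb` and `agent_1_head_rgb`. The cell below is a self-contained sketch of how such prefixed keys can be split back into per-agent dictionaries; the placeholder string values stand in for real sensor arrays:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c8d9e0f-1a2b-4c3d-8e4f-5a6b7c8d9e10",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: split observation keys of the form agent_{i}_{sensor} into one\n",
    "# dictionary per agent. Placeholder values stand in for sensor arrays.\n",
    "sketch_obs = {\n",
    "    \"agent_0_head_rgb\": \"img_a\",\n",
    "    \"agent_0_third_rgb\": \"img_b\",\n",
    "    \"agent_1_head_rgb\": \"img_c\",\n",
    "}\n",
    "\n",
    "def split_by_agent(obs, num_agents=2):\n",
    "    per_agent = {i: {} for i in range(num_agents)}\n",
    "    for key, value in obs.items():\n",
    "        for i in range(num_agents):\n",
    "            prefix = f\"agent_{i}_\"\n",
    "            if key.startswith(prefix):\n",
    "                per_agent[i][key[len(prefix):]] = value\n",
    "                break\n",
    "    return per_agent\n",
    "\n",
    "print(split_by_agent(sketch_obs))"
   ]
  },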
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "570fd9aa-84dc-4d21-9099-17c570a025c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "observations = env.reset()\n",
    "_, ax = plt.subplots(1,len(observations.keys()))\n",
    "\n",
    "for ind, name in enumerate(observations.keys()):\n",
    "    ax[ind].imshow(observations[name])\n",
    "    ax[ind].set_axis_off()\n",
    "    ax[ind].set_title(name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "587c1583-8cd3-42a0-99c8-ef5f3664121e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# To query a given agent, index into the agents manager: env.sim.agents_mgr[agent_index].articulated_agent\n",
    "env.sim.agents_mgr[1].articulated_agent"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b574a80f-ea35-4ace-8af7-d79ab7aa2c4a",
   "metadata": {},
   "source": [
    "As before, we can call actions on the agents, prefixing each action name with the agent name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e57d73d-4117-4918-aa8d-f2599ae5a842",
   "metadata": {},
   "outputs": [],
   "source": [
    "env.reset()\n",
    "rom = env.sim.get_rigid_object_manager()\n",
    "# env.sim.articulated_agent.base_pos = init_pos\n",
    "# As before, we get a navigation point next to an object id\n",
    "\n",
    "obj_id = env.sim.scene_obj_ids[0]\n",
    "first_object = rom.get_object_by_id(obj_id)\n",
    "\n",
    "object_trans = first_object.translation\n",
    "observations = []\n",
    "\n",
    "# Walk towards the object\n",
    "\n",
    "agent_displ = np.inf\n",
    "agent_rot = np.inf\n",
    "prev_rot = env.sim.agents_mgr[0].articulated_agent.base_rot\n",
    "prev_pos = env.sim.agents_mgr[0].articulated_agent.base_pos\n",
    "while agent_displ > 1e-9 or agent_rot > 1e-9:\n",
    "    prev_rot = env.sim.agents_mgr[0].articulated_agent.base_rot\n",
    "    prev_pos = env.sim.agents_mgr[0].articulated_agent.base_pos\n",
    "    action_dict = {\n",
    "        \"action\": (\"agent_0_oracle_coord_action\", \"agent_1_oracle_coord_action\"), \n",
    "        \"action_args\": {\n",
    "              \"agent_0_oracle_nav_lookat_action\": object_trans,\n",
    "              \"agent_0_mode\": 1,\n",
    "              \"agent_1_oracle_nav_lookat_action\": object_trans,\n",
    "              \"agent_1_mode\": 1\n",
    "          }\n",
    "    }\n",
    "    observations.append(env.step(action_dict))\n",
    "    \n",
    "    cur_rot = env.sim.agents_mgr[0].articulated_agent.base_rot\n",
    "    cur_pos = env.sim.agents_mgr[0].articulated_agent.base_pos\n",
    "    agent_displ = (cur_pos - prev_pos).length()\n",
    "    agent_rot = np.abs(cur_rot - prev_rot)\n",
    "    # print(agent_rot, agent_displ)\n",
    "vut.make_video(\n",
    "    observations,\n",
    "    \"agent_1_third_rgb\",\n",
    "    \"color\",\n",
    "    \"robot_tutorial_video\",\n",
    "    open_vid=True,\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
