Dataset schema (each of the nine records below lists its fields in this order):
filename: string, 9 values
AgentFunctions: dict
ObservationSpaces: dict
ActionSpaces: dict
Agents: string, 9 values
Description: string, 9 values
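A minimal sketch of iterating over these records with the Hugging Face datasets library; the repository id and split name below are placeholders for illustration, not values taken from this card:

    from datasets import load_dataset

    # Hypothetical repository id and split: substitute the real ones for this dataset.
    ds = load_dataset("username/mpe-scenario-cards", split="train")
    for record in ds:
        # Each record carries the six fields listed in the schema above.
        print(record["filename"], record["Agents"])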
simple.json
{ "adversary_reward": null, "agent_reward": "def reward(self, agent, world): dist2 = np.sum(np.square(agent.state.p_pos - world.landmarks[0].state.p_pos)); return -dist2", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; return np.concatenate([agent.state.p_vel] + entity_pos)" }
{ "Adversary": null, "Agent": "[self_vel, landmark_rel_position]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": null, "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[agent_0]
In this environment, a single agent sees a landmark position and is rewarded with the negative squared Euclidean distance to that landmark. This is not a multiagent environment and is primarily intended for debugging purposes.
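As a standalone sketch of the reward string above (plain numpy arrays stand in for the scenario's state objects):

    import numpy as np

    def simple_reward_sketch(agent_pos, landmark_pos):
        # Negative squared Euclidean distance to the landmark, mirroring reward() above.
        return -float(np.sum(np.square(np.asarray(agent_pos) - np.asarray(landmark_pos))))

    print(simple_reward_sketch([0.3, 0.4], [0.0, 0.0]))  # -0.25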
simple_adversary.json
{ "adversary_reward": "def adversary_reward(self, agent, world): shaped_reward = True; return -np.sqrt(np.sum(np.square(agent.state.p_pos - agent.goal_a.state.p_pos))) if shaped_reward else (5 if np.sqrt(np.sum(np.square(agent.state.p_pos - agent.goal_a.state.p_pos))) < 2 * agent.goal_a.size else 0)", "agent_reward": "def agent_reward(self, agent, world): shaped_reward, shaped_adv_reward = True, True; adversary_agents, good_agents = self.adversaries(world), self.good_agents(world); adv_rew = sum(np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in adversary_agents) if shaped_adv_reward else sum(-5 if np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) < 2 * a.goal_a.size else 0 for a in adversary_agents); pos_rew = -min(np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in good_agents) if shaped_reward else (5 if min(np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in good_agents) < 2 * agent.goal_a.size else 0) - min(np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in good_agents); return pos_rew + adv_rew", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; entity_color = [entity.color for entity in world.landmarks]; other_pos = [other.state.p_pos - agent.state.p_pos for other in world.agents if other is not agent]; return np.concatenate([agent.goal_a.state.p_pos - agent.state.p_pos] + entity_pos + other_pos) if not agent.adversary else np.concatenate(entity_pos + other_pos)" }
{ "Adversary": "[landmark_rel_position, other_agents_rel_positions]", "Agent": "[self_pos, self_vel, goal_rel_position, landmark_rel_position, other_agent_rel_positions]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[adversary_0, agent_0, agent_1]
In this environment there are 1 adversary (red), N good agents (green), and N landmarks (N=2 by default). All agents observe the positions of the landmarks and of the other agents. One landmark is the ‘target landmark’ (colored green). Good agents are rewarded based on how close the closest one of them is to the target landmark, but are negatively rewarded based on how close the adversary is to the target landmark. The adversary is rewarded based on its distance to the target, but it does not know which landmark is the target landmark. All rewards are unscaled Euclidean distances (see the main MPE documentation for the average distance). This means the good agents have to learn to ‘split up’ and cover all landmarks to deceive the adversary.
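A simplified sketch of the shaped rewards quoted above, with numpy arrays in place of the scenario's agent objects:

    import numpy as np

    def good_agent_reward_sketch(good_positions, adversary_positions, target_pos):
        # Reward the closest good agent's proximity to the target and the
        # adversaries' distance from it (shaped variant of agent_reward above).
        pos_rew = -min(np.linalg.norm(p - target_pos) for p in good_positions)
        adv_rew = sum(np.linalg.norm(p - target_pos) for p in adversary_positions)
        return pos_rew + adv_rew

    def adversary_reward_sketch(adversary_pos, target_pos):
        # The adversary is rewarded for closing the distance to the target landmark.
        return -np.linalg.norm(adversary_pos - target_pos)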
simple_crypto.json
{ "adversary_reward": "def adversary_reward(self, agent, world): rew = 0; rew -= np.sum(np.square(agent.state.c - agent.goal_a.color)) if not (agent.state.c == np.zeros(world.dim_c)).all() else 0; return rew # Adversary (Eve) is rewarded if it can reconstruct original goal", "agent_reward": "def agent_reward(self, agent, world): good_listeners, adversaries = self.good_listeners(world), self.adversaries(world); good_rew, adv_rew = 0, 0; good_rew = sum(0 if (a.state.c == np.zeros(world.dim_c)).all() else -np.sum(np.square(a.state.c - agent.goal_a.color)) for a in good_listeners); adv_rew = sum(0 if (a.state.c == np.zeros(world.dim_c)).all() else np.sum(np.square(a.state.c - agent.goal_a.color)) for a in adversaries); return adv_rew + good_rew # Agents rewarded if Bob can reconstruct message, but adversary (Eve) cannot", "observation": "def observation(self, agent, world): goal_color = agent.goal_a.color if agent.goal_a is not None else np.zeros(world.dim_color); entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; comm = [other.state.c for other in world.agents if other is not agent and other.state.c is not None and other.speaker]; key = world.agents[2].key; return np.concatenate([goal_color] + [key]) if agent.speaker else np.concatenate([key] + comm) if not agent.adversary else np.concatenate(comm)" }
{ "Adversary": null, "Agent": null, "Alice": "[message, private_key]", "Bob": "[private_key, alices_comm]", "Eve": "[alices_comm]" }
{ "Adversary": null, "Agent": "[say_0, say_1, say_2, say_3]" }
[eve_0, bob_0, alice_0]
In this environment there are 2 good agents (Alice and Bob) and 1 adversary (Eve). Alice must send a private 1-bit message to Bob over a public channel. Alice and Bob are rewarded +2 if Bob reconstructs the message, but are penalized -2 if Eve reconstructs it (netting 0 if both teams recover the bit). Eve is rewarded -2 if it cannot reconstruct the signal and 0 if it can. Alice and Bob share a private key (randomly generated at the beginning of each episode), which they must learn to use to encrypt the message.
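A toy sketch of the ±2 scheme described in the prose; note that the function strings above actually use a continuous squared reconstruction error rather than a hard correct/incorrect flag:

    def alice_bob_reward_sketch(bob_correct: bool, eve_correct: bool) -> int:
        # +2 if Bob recovers the bit, -2 if Eve does; nets to 0 when both succeed.
        return (2 if bob_correct else 0) - (2 if eve_correct else 0)

    def eve_reward_sketch(eve_correct: bool) -> int:
        # Eve gets 0 when it reconstructs the signal, -2 otherwise.
        return 0 if eve_correct else -2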
simple_push.json
{ "adversary_reward": "def adversary_reward(self, agent, world): agent_dist = [np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in world.agents if not a.adversary]; pos_rew = min(agent_dist); neg_rew = np.sqrt(np.sum(np.square(agent.goal_a.state.p_pos - agent.state.p_pos))); return pos_rew - neg_rew # keep the nearest good agents away from the goal", "agent_reward": "def agent_reward(self, agent, world): return -np.sqrt(np.sum(np.square(agent.state.p_pos - agent.goal_a.state.p_pos))) # the distance to the goal", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; entity_color = [entity.color for entity in world.landmarks]; comm, other_pos = [], [other.state.p_pos - agent.state.p_pos for other in world.agents if other is not agent]; comm = [other.state.c for other in world.agents if other is not agent]; return np.concatenate([agent.state.p_vel] + [agent.goal_a.state.p_pos - agent.state.p_pos] + [agent.color] + entity_pos + entity_color + other_pos) if not agent.adversary else np.concatenate([agent.state.p_vel] + entity_pos + other_pos)" }
{ "Adversary": "[self_vel, all_landmark_rel_positions, other_agent_rel_positions]", "Agent": "[self_vel, goal_rel_position, goal_landmark_id, all_landmark_rel_positions, landmark_ids, other_agent_rel_positions]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[adversary_0, agent_0]
This environment has 1 good agent, 1 adversary, and 1 landmark. The good agent is rewarded with the negative distance to the landmark. The adversary is rewarded if it is close to the landmark and the good agent is far from it (the difference of the two distances), so the adversary must learn to push the good agent away from the landmark.
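A sketch of the difference-of-distances adversary reward quoted above, with numpy arrays standing in for the scenario's state objects:

    import numpy as np

    def push_adversary_reward_sketch(adversary_pos, good_positions, landmark_pos):
        # Keep the nearest good agent far from the landmark while staying close to it.
        nearest_good = min(np.linalg.norm(p - landmark_pos) for p in good_positions)
        own_dist = np.linalg.norm(adversary_pos - landmark_pos)
        return nearest_good - own_dist

    def push_agent_reward_sketch(agent_pos, landmark_pos):
        # The good agent simply gets the negative distance to the landmark.
        return -np.linalg.norm(agent_pos - landmark_pos)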
simple_reference.json
{ "adversary_reward": "", "agent_reward": "def reward(self, agent, world): agent_reward = 0.0 if agent.goal_a is None or agent.goal_b is None else np.sqrt(np.sum(np.square(agent.goal_a.state.p_pos - agent.goal_b.state.p_pos))); return -agent_reward \n def global_reward(self, world): all_rewards = sum(self.reward(agent, world) for agent in world.agents); return all_rewards / len(world.agents)", "observation": "def observation(self, agent, world): goal_color = [np.zeros(world.dim_color), np.zeros(world.dim_color)]; goal_color[1] = agent.goal_b.color if agent.goal_b is not None else goal_color[1]; entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; entity_color = [entity.color for entity in world.landmarks]; comm = [other.state.c for other in world.agents if other is not agent]; return np.concatenate([agent.state.p_vel] + entity_pos + [goal_color[1]] + comm)" }
{ "Adversary": "[]", "Agent": "[self_vel, all_landmark_rel_positions, landmark_ids, goal_id, communication]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[]", "Agent": "[say_0, say_1, say_2, say_3, say_4, say_5, say_6, say_7, say_8, say_9] X [no_action, move_left, move_right, move_down, move_up]" }
[agent_0, agent_1]
This environment has 2 agents and 3 landmarks of different colors. Each agent wants to get close to its target landmark, which is known only by the other agent. Both agents are simultaneous speakers and listeners. Locally, the agents are rewarded based on their distance to their target landmark. Globally, all agents are rewarded based on the average distance of all agents to their respective landmarks. The relative weight of these two rewards is controlled by the local_ratio parameter.
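A sketch of how the local and global terms are typically blended; the convex combination below is an assumption about how local_ratio is applied and is not spelled out in the record above:

    def blended_reward_sketch(local_reward, global_reward, local_ratio=0.5):
        # Assumed weighting: local_ratio scales the per-agent distance reward,
        # (1 - local_ratio) scales the team-average reward from global_reward().
        return local_ratio * local_reward + (1.0 - local_ratio) * global_reward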
simple_speaker_listener.json
{ "adversary_reward": "def reward(self, agent, world): a = world.agents[0]; dist2 = np.sum(np.square(a.goal_a.state.p_pos - a.goal_b.state.p_pos)); return -dist2 # squared distance from listener to landmark", "agent_reward": "", "observation": "def observation(self, agent, world): goal_color = agent.goal_b.color if agent.goal_b is not None else np.zeros(world.dim_color); entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; comm = [other.state.c for other in world.agents if other is not agent and other.state.c is not None]; return np.concatenate([goal_color]) if not agent.movable else np.concatenate([agent.state.p_vel] + entity_pos + comm) if agent.silent else []" }
{ "Adversary": "[self_vel, all_landmark_rel_positions, communication]", "Agent": "[goal_id]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[say_0, say_1, say_2, say_3, say_4, say_5, say_6, say_7, say_8, say_9]" }
[speaker_0, listener_0]
This environment is similar to simple_reference, except that one agent is the ‘speaker’ (gray), which can speak but cannot move, while the other agent is the ‘listener’, which cannot speak but must navigate to the correct landmark.
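A sketch of the shared reward quoted above: both agents receive the negative squared distance from the listener to the target landmark (plain numpy arrays used for illustration):

    import numpy as np

    def speaker_listener_reward_sketch(listener_pos, target_landmark_pos):
        # Shared by speaker and listener alike, mirroring reward() above.
        diff = np.asarray(listener_pos) - np.asarray(target_landmark_pos)
        return -float(np.sum(np.square(diff)))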
simple_spread.json
{ "adversary_reward": "", "agent_reward": "def is_collision(self, agent1, agent2): delta_pos = agent1.state.p_pos - agent2.state.p_pos; dist = np.sqrt(np.sum(np.square(delta_pos))); dist_min = agent1.size + agent2.size; return True if dist < dist_min else False \n def reward(self, agent, world): rew = 0; rew -= sum(1.0 * (self.is_collision(a, agent) and a != agent) for a in world.agents) if agent.collide else 0; return rew \n def global_reward(self, world): rew = sum(-min(np.sqrt(np.sum(np.square(a.state.p_pos - lm.state.p_pos))) for a in world.agents) for lm in world.landmarks); return rew", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; comm = [other.state.c for other in world.agents if other is not agent]; other_pos = [other.state.p_pos - agent.state.p_pos for other in world.agents if other is not agent]; return np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + comm)" }
{ "Adversary": "[]", "Agent": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, communication]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[]", "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[agent_0, agent_1, agent_2]
This environment has N agents and N landmarks (N=3 by default). At a high level, the agents must learn to cover all the landmarks while avoiding collisions. More specifically, all agents are globally rewarded based on how far the closest agent is to each landmark (the negative sum of those minimum distances). Locally, the agents are penalized if they collide with other agents (-1 for each collision). The relative weights of these rewards can be controlled with the local_ratio parameter.
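A sketch of the two reward terms described above, with positions given as numpy arrays:

    import numpy as np

    def spread_global_reward_sketch(agent_positions, landmark_positions):
        # For each landmark, take the closest agent's distance; sum and negate,
        # mirroring global_reward() above.
        return -sum(min(np.linalg.norm(a - lm) for a in agent_positions)
                    for lm in landmark_positions)

    def spread_local_reward_sketch(num_collisions):
        # -1 for each collision with another agent, mirroring reward() above.
        return -1.0 * num_collisions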
simple_tag.json
{ "adversary_reward": "def adversary_reward(self, agent, world): rew = 0; shape = False; agents = self.good_agents(world); adversaries = self.adversaries(world); rew -= sum(0.1 * min(np.sqrt(np.sum(np.square(a.state.p_pos - adv.state.p_pos))) for a in agents) for adv in adversaries) if shape else 0; rew += sum(10 for ag in agents for adv in adversaries if agent.collide and self.is_collision(ag, adv)); return rew # Adversaries are rewarded for collisions with agents", "agent_reward": "def agent_reward(self, agent, world): rew = 0; shape = False; adversaries = self.adversaries(world); rew += sum(0.1 * np.sqrt(np.sum(np.square(agent.state.p_pos - adv.state.p_pos))) for adv in adversaries) if shape else 0; rew -= sum(10 for a in adversaries if agent.collide and self.is_collision(a, agent)); rew -= sum(bound(abs(agent.state.p_pos[p])) for p in range(world.dim_p)); return rew # Agents are negatively rewarded if caught by adversaries", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks if not entity.boundary]; comm = [other.state.c for other in world.agents if other is not agent]; other_pos = [other.state.p_pos - agent.state.p_pos for other in world.agents if other is not agent]; other_vel = [other.state.p_vel for other in world.agents if other is not agent and not other.adversary]; return np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + other_vel)" }
{ "Adversary": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities]", "Agent": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[adversary_0, adversary_1, adversary_2, agent_0]
This is a predator-prey environment. Good agents (green) are faster and receive a negative reward for being hit by adversaries (red) (-10 for each collision). Adversaries are slower and are rewarded for hitting good agents (+10 for each collision). Obstacles (large black circles) block the way. By default the environment has 1 good agent, 3 adversaries, and 2 obstacles. So that good agents don’t run off to infinity, they are also penalized for leaving the area, using the following function: def bound(x): return 0 if x < 0.9 else (x - 0.9) * 10 if x < 1.0 else min(np.exp(2 * x - 2), 10)
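The bound function and the per-coordinate boundary penalty from agent_reward() above, written out as a small sketch:

    import numpy as np

    def bound(x):
        # Free inside |x| < 0.9, linear ramp up to 1.0, then capped exponential growth.
        if x < 0.9:
            return 0
        if x < 1.0:
            return (x - 0.9) * 10
        return min(np.exp(2 * x - 2), 10)

    def boundary_penalty_sketch(agent_pos):
        # Applied to the absolute value of each coordinate, as in agent_reward() above.
        return -sum(bound(abs(c)) for c in agent_pos)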
simple_world_comm.json
{ "adversary_reward": "def adversary_reward(self, agent, world): rew = 0; shape = True; agents = self.good_agents(world); adversaries = self.adversaries(world); rew -= 0.1 * min(np.sqrt(np.sum(np.square(a.state.p_pos - agent.state.p_pos))) for a in agents) if shape else 0; rew += sum(5 for ag in agents for adv in adversaries if agent.collide and self.is_collision(ag, adv)); return rew # Agents are rewarded based on minimum agent distance to each landmark", "agent_reward": "def agent_reward(self, agent, world): rew = 0; shape = False; adversaries = self.adversaries(world); rew += sum(0.1 * np.sqrt(np.sum(np.square(agent.state.p_pos - adv.state.p_pos))) for adv in adversaries) if shape else 0; rew -= sum(5 for a in adversaries if agent.collide and self.is_collision(a, agent)); bound = lambda x: 0 if x < 0.9 else (x - 0.9) * 10 if x < 1.0 else min(np.exp(2 * x - 2), 10); rew -= sum(2 * bound(abs(agent.state.p_pos[p])) for p in range(world.dim_p)); rew += sum(2 for food in world.food if self.is_collision(agent, food)); rew -= 0.05 * min(np.sqrt(np.sum(np.square(food.state.p_pos - agent.state.p_pos))) for food in world.food); return rew # Agents are rewarded based on minimum agent distance to each landmark", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks if not entity.boundary]; in_forest = [np.array([-1]) for _ in range(len(world.forests))]; inf = [False for _ in range(len(world.forests))]; in_forest, inf = [np.array([1]) if self.is_collision(agent, world.forests[i]) else np.array([-1]) for i in range(len(world.forests))], [self.is_collision(agent, world.forests[i]) for i in range(len(world.forests))]; food_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.food if not entity.boundary]; comm, other_pos, other_vel = [world.agents[0].state.c], [], []; for other in world.agents: if other is not agent: oth_f = [self.is_collision(other, world.forests[i]) for i in range(len(world.forests))]; for i in range(len(world.forests)): if inf[i] and oth_f[i]: other_pos.append(other.state.p_pos - agent.state.p_pos); other_vel.append(other.state.p_vel) if not other.adversary else None; break; else: other_pos.append(other.state.p_pos - agent.state.p_pos if (not any(inf) and not any(oth_f)) or agent.leader else [0, 0]); other_vel.append(other.state.p_vel if (not any(inf) and not any(oth_f)) or agent.leader or not other.adversary else [0, 0]) if not other.adversary else None; prey_forest = [np.array([1]) if any([self.is_collision(a, f) for f in world.forests]) else np.array([-1]) for a in self.good_agents(world)]; prey_forest_lead = [np.array([1]) if any([self.is_collision(a, f) for a in self.good_agents(world)]) else np.array([-1]) for f in world.forests]; return np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + other_vel + in_forest + comm) if agent.adversary and not agent.leader else np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + other_vel + in_forest + comm) if agent.leader else np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + in_forest + other_vel) def observation2(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks if not entity.boundary]; food_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.food if not entity.boundary]; comm, other_pos, other_vel = [other.state.c for other in world.agents if other is not agent], [other.state.p_pos 
- agent.state.p_pos for other in world.agents if other is not agent], [other.state.p_vel for other in world.agents if other is not agent and not other.adversary]; return np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + other_vel)" }
{ "Adversary": "Normal adversary observations:[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities, self_in_forest, leader_comm];Adversary leader observations: [self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities, leader_comm]", "Agent": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities, self_in_forest]", "Alice": null, "Bob": null, "Eve": null }
{ "Adversary": "Normal adversary action space: [no_action, move_left, move_right, move_down, move_up]; Adversary leader discrete action space: [say_0, say_1, say_2, say_3] X [no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]" }
[leadadversary_0, adversary_0, adversary_1, adversary_3, agent_0, agent_1]
This environment is similar to simple_tag, except that there is food (small blue balls) that the good agents are rewarded for being near, there are ‘forests’ that hide the agents inside them from being seen, and there is a ‘leader adversary’ that can see the agents at all times and can communicate with the other adversaries to help coordinate the chase. By default there are 2 good agents, 3 adversaries, 1 obstacle, 2 food landmarks, and 2 forests. In particular, the good agents’ reward is -5 for every collision with an adversary, -2 x the boundary penalty described in simple_tag, +2 for every collision with a food, and -0.05 x the minimum distance to any food. The adversarial agents are rewarded +5 for collisions with good agents and -0.1 x the minimum distance to a good agent.
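A sketch that bundles the constants listed above into two reward helpers; the collision counts, distances, and boundary term are assumed to be computed elsewhere (e.g. with the bound() penalty from simple_tag):

    def world_comm_good_agent_reward_sketch(adversary_collisions, food_collisions,
                                            min_food_dist, boundary_penalty):
        # -5 per adversary collision, +2 per food collision,
        # -0.05 x distance to the nearest food, -2 x the boundary penalty.
        return (-5 * adversary_collisions
                + 2 * food_collisions
                - 0.05 * min_food_dist
                - 2 * boundary_penalty)

    def world_comm_adversary_reward_sketch(good_agent_collisions, min_good_agent_dist):
        # +5 per collision with a good agent, -0.1 x distance to the nearest good agent.
        return 5 * good_agent_collisions - 0.1 * min_good_agent_dist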