---
title: A2A endpoint in LangGraph Server
sidebarTitle: A2A endpoint in LangGraph Server
---

[Agent2Agent (A2A)](https://a2a-protocol.org/latest/) is Google's protocol for enabling communication between AI agents. [LangGraph Platform implements A2A support](https://langchain-ai.github.io/langgraph/cloud/reference/api/api_ref.html#tag/a2a/post/a2a/{assistant_id}), allowing your agents to communicate with any other A2A-compatible agent through a standardized interface.

The A2A endpoint is available in [LangGraph Server](/langgraph-platform/langgraph-server) at `/a2a/{assistant_id}`.
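For example, an assistant with ID `my-assistant` served locally by `langgraph dev` (which listens on port 2024 by default) would be reachable at `http://127.0.0.1:2024/a2a/my-assistant`.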

## Agent Card discovery

Each assistant automatically exposes an A2A Agent Card that describes its capabilities and provides the information needed for other agents to connect. You can retrieve the agent card for any assistant using:

```
GET /.well-known/agent-card.json?assistant_id={assistant_id}
```

The agent card includes the assistant's name, description, available skills, supported input/output modes, and the A2A endpoint URL for communication.
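
As a quick way to inspect a card, the sketch below fetches it with `requests` and prints a few of the fields described above. The base URL and assistant ID are placeholders for your own deployment:

```python
import requests

# Placeholders: point these at your own LangGraph Server and assistant.
BASE_URL = "http://127.0.0.1:2024"
ASSISTANT_ID = "my-assistant"

# Fetch the A2A Agent Card for this assistant.
card = requests.get(
    f"{BASE_URL}/.well-known/agent-card.json",
    params={"assistant_id": ASSISTANT_ID},
    timeout=10,
).json()

print(card["name"])         # assistant name
print(card["description"])  # description of what the agent does
print(card["url"])          # A2A endpoint URL used for communication
```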

## Requirements

To use A2A, ensure you have the following dependency installed:

* `langgraph-api >= 0.4.9`

Install with:

```bash
pip install "langgraph-api>=0.4.9"
```

## Usage overview

To enable A2A:

* Upgrade to `langgraph-api >= 0.4.9`.
* Deploy your agent with a message-based state structure (a `messages` key in state, as shown below).
* Connect with other A2A-compatible agents through the `/a2a/{assistant_id}` endpoint.

## Creating an A2A-compatible agent

This example creates an A2A-compatible agent that processes incoming messages using OpenAI's API and maintains conversational state. The agent defines a message-based state structure and handles the A2A protocol's message format.

To be compatible with the [A2A "text" parts](https://a2a-protocol.org/dev/specification/#651-textpart-object), the agent must have a `messages` key in state. Here's an example:

```python
"""LangGraph A2A conversational agent.

Supports the A2A protocol by accepting message-based input for conversational interactions.
"""

from __future__ import annotations

import os
from dataclasses import dataclass
from typing import Any, Dict, List, TypedDict

from langgraph.graph import StateGraph
from langgraph.runtime import Runtime
from openai import AsyncOpenAI


class Context(TypedDict):
    """Context parameters for the agent."""
    my_configurable_param: str


@dataclass
class State:
    """Input state for the agent.

    Defines the initial structure for A2A conversational messages.
    """
    messages: List[Dict[str, Any]]


async def call_model(state: State, runtime: Runtime[Context]) -> Dict[str, Any]:
    """Process conversational messages and returns output using OpenAI."""
    # Initialize OpenAI client
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    # Process the incoming messages
    latest_message = state.messages[-1] if state.messages else {}
    user_content = latest_message.get("content", "No message content")

    # Create messages for OpenAI API
    openai_messages = [
        {
            "role": "system",
            "content": "You are a helpful conversational agent. Keep responses brief and engaging."
        },
        {
            "role": "user",
            "content": user_content
        }
    ]

    try:
        # Make OpenAI API call
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=openai_messages,
            max_tokens=100,
            temperature=0.7
        )

        ai_response = response.choices[0].message.content

    except Exception as e:
        ai_response = f"I received your message but had trouble processing it. Error: {str(e)[:50]}..."

    # Create a response message
    response_message = {
        "role": "assistant",
        "content": ai_response
    }

    return {
        "messages": state.messages + [response_message]
    }


# Define the graph
graph = (
    StateGraph(State, context_schema=Context)
    .add_node(call_model)
    .add_edge("__start__", "call_model")
    .compile()
)
```
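
When a client sends an A2A `message/send` request containing a text part, LangGraph Server maps it into the graph's `messages` channel. The exact request-to-state translation is handled by the server, so treat the following as an illustrative sketch rather than an exact wire format; it matches what `call_model` above expects to read:

```python
# Illustrative only: an incoming A2A text part such as
#   {"kind": "text", "text": "Hello!"}
# shows up in the agent's input state roughly as:
input_state = {
    "messages": [
        {"role": "user", "content": "Hello!"}
    ]
}
```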

## Agent-to-agent communication

Once your agents are running locally via `langgraph dev` or [deployed to production](/langgraph-platform/deployment-options), they can communicate with each other over the A2A protocol.

This example demonstrates how two agents can communicate by sending JSON-RPC messages to each other's A2A endpoints. The script simulates a multi-turn conversation where each agent processes the other's response and continues the dialogue.

```python
#!/usr/bin/env python3
"""Agent-to-Agent conversation simulation using LangGraph A2A protocol."""

import asyncio
import os

import aiohttp

async def send_message(session, port, assistant_id, text):
    """Send a message to an agent and return the response text."""
    url = f"http://127.0.0.1:{port}/a2a/{assistant_id}"
    payload = {
        "jsonrpc": "2.0",
        "id": "",
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}]
            },
            "messageId": "",
            "thread": {"threadId": ""}
        }
    }

    headers = {"Accept": "application/json"}
    async with session.post(url, json=payload, headers=headers) as response:
        try:
            result = await response.json()
            return result["result"]["artifacts"][0]["parts"][0]["text"]
        except Exception:
            body = await response.text()
            print(f"Response error from port {port}: {response.status} - {body}")
            return f"Error from port {port}: {response.status}"

async def simulate_conversation():
    """Simulate a conversation between two agents."""
    agent_a_id = os.getenv("AGENT_A_ID")
    agent_b_id = os.getenv("AGENT_B_ID")

    if not agent_a_id or not agent_b_id:
        print("Set AGENT_A_ID and AGENT_B_ID environment variables")
        return

    message = "Hello! Let's have a conversation."

    async with aiohttp.ClientSession() as session:
        for i in range(3):
            print(f"--- Round {i + 1} ---")

            # Agent A responds
            message = await send_message(session, 2024, agent_a_id, message)
            print(f"🔵 Agent A: {message}")

            # Agent B responds
            message = await send_message(session, 2025, agent_b_id, message)
            print(f"🔴 Agent B: {message}")
            print()

if __name__ == "__main__":
    asyncio.run(simulate_conversation())
```
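
To try this locally, run each agent in its own server instance, for example `langgraph dev --port 2024` for agent A and `langgraph dev --port 2025` for agent B, then export `AGENT_A_ID` and `AGENT_B_ID` with the corresponding assistant IDs before running the script.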
