---
title: Tracing quickstart
sidebarTitle: Trace an application
---

[_Observability_](/langsmith/observability-concepts) is a critical requirement for applications built with large language models (LLMs). LLMs are non-deterministic, which means that the same prompt can produce different responses. This behavior makes debugging and monitoring more challenging than with traditional software.

LangSmith addresses this by providing end-to-end visibility into how your application handles a request. Each request generates a [_trace_](/langsmith/observability-concepts#traces), which captures the full record of what happened. Within a trace are individual [_runs_](/langsmith/observability-concepts#runs), the specific operations your application performed, such as an LLM call or a retrieval step. Tracing runs allows you to inspect, debug, and validate your application’s behavior.

In this quickstart, you will set up a minimal [_Retrieval Augmented Generation (RAG)_](https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-retrieval-augmented-generation-rag) application and add tracing with LangSmith. You will:

1. Configure your environment.
1. Create an application that retrieves context and calls an LLM.
1. Enable tracing to capture both the retrieval step and the LLM call.
1. View the resulting traces in the LangSmith UI.

<Tip>
If you prefer to watch a video on getting started with tracing, refer to the quickstart [Video guide](#video-guide).
</Tip>

## Prerequisites

Before you begin, make sure you have:

- **A LangSmith account**: Sign up or log in at [smith.langchain.com](https://smith.langchain.com).
- **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide.
- **An OpenAI API key**: Generate this from the [OpenAI dashboard](https://platform.openai.com/account/api-keys).

The example app in this quickstart will use OpenAI as the LLM provider. You can adapt the example for your app's LLM provider.

<Tip>
If you're building an application with [LangChain](https://python.langchain.com/docs/introduction/) or [LangGraph](https://langchain-ai.github.io/langgraph/), you can enable LangSmith tracing with a single environment variable. Get started by reading the guides for tracing with [LangChain](/langsmith/trace-with-langchain) or tracing with [LangGraph](/langsmith/trace-with-langgraph).
</Tip>

## 1. Create a directory and install dependencies

In your terminal, create a directory for your project and install the dependencies in your environment:

<CodeGroup>

```bash Python
mkdir ls-observability-quickstart && cd ls-observability-quickstart
python -m venv .venv && source .venv/bin/activate
python -m pip install --upgrade pip
pip install -U langsmith openai
```

```bash TypeScript
mkdir ls-observability-quickstart-ts && cd ls-observability-quickstart-ts
npm init -y
npm install langsmith openai typescript ts-node dotenv
npx tsc --init
```

</CodeGroup>

## 2. Set up environment variables

Set the following environment variables:

- `LANGSMITH_TRACING`
- `LANGSMITH_API_KEY`
- `OPENAI_API_KEY` (or your LLM provider's API key)
- (optional) `LANGSMITH_WORKSPACE_ID`: If your LangSmith API key is linked to multiple workspaces, set this variable to specify which workspace to use.

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-langsmith-api-key>"
export OPENAI_API_KEY="<your-openai-api-key>"
export LANGSMITH_WORKSPACE_ID="<your-workspace-id>"
```
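
If you're working in a notebook or otherwise can't export shell variables, you can set the same variables in Python with `os.environ` before creating any clients. This is a minimal sketch; substitute your real keys for the placeholder values:

```python
import os

# Set these before instantiating any OpenAI or LangSmith clients,
# so the SDKs pick them up at import/construction time.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```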

If you're using Anthropic, use the [Anthropic wrapper](/langsmith/annotate-code#wrap-the-anthropic-client-python-only) to trace your calls. For other providers, use [the traceable wrapper](/langsmith/annotate-code#use-%40traceable-%2F-traceable).

## 3. Define your application

You can use the example app code in this step to instrument a RAG application, or you can use your own application code that includes an LLM call.

This is a minimal RAG app that uses the OpenAI SDK directly without any LangSmith tracing added yet. It has three main parts:

- **Retriever function**: Simulates document retrieval that always returns the same string.
- **OpenAI client**: Instantiates a plain OpenAI client to send a chat completion request.
- **RAG function**: Combines the retrieved documents with the user’s question to form a system prompt, calls the `chat.completions.create()` endpoint with `gpt-4o-mini`, and returns the assistant’s response.

Add the following code into your app file (e.g., `app.py` or `app.ts`):

<CodeGroup>

```python Python
from openai import OpenAI

def retriever(query: str):
    # Minimal example retriever
    return ["Harrison worked at Kensho"]

# OpenAI client call (no wrapping yet)
client = OpenAI()

def rag(question: str) -> str:
    docs = retriever(question)
    system_message = (
        "Answer the user's question using only the provided information below:\n"
        + "\n".join(docs)
    )

    # This call is not traced yet
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(rag("Where did Harrison work?"))
```

```typescript TypeScript
import "dotenv/config";
import OpenAI from "openai";

// Minimal example retriever
function retriever(query: string): string[] {
  return ["Harrison worked at Kensho"];
}

// OpenAI client call (no wrapping yet)
const client = new OpenAI();

async function rag(question: string) {
  const docs = retriever(question);
  const systemMessage =
    "Answer the user's question using only the provided information below:\n" +
    docs.join("\n");

  // This call is not traced yet
  const resp = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: systemMessage },
      { role: "user", content: question },
    ],
  });

  return resp.choices[0].message?.content;
}

(async () => {
  console.log(await rag("Where did Harrison work?"));
})();
```

</CodeGroup>

## 4. Trace LLM calls

To start, you’ll trace all of your OpenAI calls. LangSmith provides wrappers for the OpenAI client:

- Python: [`wrap_openai`](https://docs.smith.langchain.com/reference/python/wrappers/langsmith.wrappers._openai.wrap_openai)
- TypeScript: [`wrapOpenAI`](https://docs.smith.langchain.com/reference/js/functions/wrappers_openai.wrapOpenAI)

This snippet wraps the OpenAI client so that every subsequent model call is automatically logged as a run in LangSmith.

1. Include the highlighted lines in your app file:

    <CodeGroup>

    ```python Python highlight={2,7}
    from openai import OpenAI
    from langsmith.wrappers import wrap_openai  # traces openai calls

    def retriever(query: str):
        return ["Harrison worked at Kensho"]

    client = wrap_openai(OpenAI())  # log traces by wrapping the model calls

    def rag(question: str) -> str:
        docs = retriever(question)
        system_message = (
            "Answer the user's question using only the provided information below:\n"
            + "\n".join(docs)
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(rag("Where did Harrison work?"))
    ```

    ```typescript TypeScript highlight={3,9}
    import "dotenv/config";
    import OpenAI from "openai";
    import { wrapOpenAI } from "langsmith/wrappers"; // traces openai calls

    function retriever(query: string): string[] {
      return ["Harrison worked at Kensho"];
    }

    const client = wrapOpenAI(new OpenAI()); // log traces by wrapping the model calls

    async function rag(question: string) {
      const docs = retriever(question);
      const systemMessage =
        "Answer the user's question using only the provided information below:\n" +
        docs.join("\n");

      const resp = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [
          { role: "system", content: systemMessage },
          { role: "user", content: question },
        ],
      });

      return resp.choices[0].message?.content;
    }

    (async () => {
      console.log(await rag("Where did Harrison work?"));
    })();
    ```

    </CodeGroup>

1. Call your application:

    <CodeGroup>

    ```bash Python
    python app.py
    ```

    ```bash TypeScript
    npx ts-node app.ts
    ```

    </CodeGroup>


    You should see output similar to the following:

    ```
    Harrison worked at Kensho.
    ```

1. In the [LangSmith UI](https://smith.langchain.com), navigate to the **default** Tracing Project for your workspace (or the workspace you specified in [Step 2](#2-set-up-environment-variables)). You'll see the OpenAI call you just instrumented.

<div style={{ textAlign: 'center' }}>
<img
    className="block dark:hidden"
    src="/langsmith/images/trace-quickstart-llm-call.png"
    alt="LangSmith UI showing an LLM call trace called ChatOpenAI with a system and human input followed by an AI Output."
/>

<img
    className="hidden dark:block"
    src="/langsmith/images/trace-quickstart-llm-call-dark.png"
    alt="LangSmith UI showing an LLM call trace called ChatOpenAI with a system and human input followed by an AI Output."
/>
</div>

## 5. Trace an entire application

You can also use the `traceable` decorator for [Python](https://docs.smith.langchain.com/reference/python/run_helpers/langsmith.run_helpers.traceable) or [TypeScript](https://docs.smith.langchain.com/reference/js/functions/traceable.traceable) to trace your entire application instead of just the LLM calls.

1. Include the highlighted code in your app file:

    <CodeGroup>

    ```python Python highlight={3,10}
    from openai import OpenAI
    from langsmith.wrappers import wrap_openai
    from langsmith import traceable

    def retriever(query: str):
        return ["Harrison worked at Kensho"]

    client = wrap_openai(OpenAI())  # keep this to capture the prompt and response from the LLM

    @traceable
    def rag(question: str) -> str:
        docs = retriever(question)
        system_message = (
            "Answer the user's question using only the provided information below:\n"
            + "\n".join(docs)
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(rag("Where did Harrison work?"))
    ```

    ```typescript TypeScript highlight={3,12}
    import "dotenv/config";
    import OpenAI from "openai";
    import { traceable } from "langsmith/traceable"; // traces the whole function
    import { wrapOpenAI } from "langsmith/wrappers";

    function retriever(query: string): string[] {
      return ["Harrison worked at Kensho"];
    }

    const client = wrapOpenAI(new OpenAI()); // keep this to capture the prompt and response from the LLM

    const rag = traceable(async (question: string) => {
      const docs = retriever(question);
      const systemMessage =
        "Answer the user's question using only the provided information below:\n" +
        docs.join("\n");

      const resp = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [
          { role: "system", content: systemMessage },
          { role: "user", content: question },
        ],
      });

      return resp.choices[0].message?.content;
    });

    (async () => {
      console.log(await rag("Where did Harrison work?"));
    })();
    ```

    </CodeGroup>

1. Call the application again to create a run:

    <CodeGroup>

    ```bash Python
    python app.py
    ```

    ```bash TypeScript
    npx ts-node app.ts
    ```

    </CodeGroup>

1. Return to the [LangSmith UI](https://smith.langchain.com) and navigate to the **default** Tracing Project for your workspace (or the workspace you specified in [Step 2](#2-set-up-environment-variables)). You'll find a trace of the entire app pipeline with the **rag** step and the **ChatOpenAI** LLM call.

<div style={{ textAlign: 'center' }}>
<img
    className="block dark:hidden"
    src="/langsmith/images/trace-quickstart-app.png"
    alt="LangSmith UI showing a trace of the entire application called rag with an input followed by an output."
/>

<img
    className="hidden dark:block"
    src="/langsmith/images/trace-quickstart-app-dark.png"
    alt="LangSmith UI showing a trace of the entire application called rag with an input followed by an output."
/>
</div>

## Next steps

Here are some topics you might want to explore next:

- [Tracing integrations](/langsmith/trace-with-langchain) provide support for various LLM providers and agent frameworks.
- [Filtering traces](/langsmith/filter-traces-in-application) can help you navigate and analyze tracing projects that contain large amounts of data.
- [Trace a RAG application](/langsmith/observability-llm-tutorial) is a full tutorial, which adds observability to an application from development through to production.
- [Sending traces to a specific project](/langsmith/log-traces-to-project) changes the destination project of your traces.

## Video guide
<iframe
  className="w-full aspect-video rounded-xl"
  src="https://www.youtube.com/embed/fA9b4D8IsPQ?si=0eBb1vzw5AxUtplS"
  title="YouTube video player"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
></iframe>

