---
subtitle: Step-by-step guide to logging evaluation results using the Python SDK and REST API
---

Evaluating your LLM application gives you confidence in its performance. In this guide, we walk
through manually creating experiments using evaluation data you have already computed.

<Tip>
  This guide focuses on logging pre-computed evaluation results. If you're looking to run evaluations with Opik
  computing the metrics, refer to the [Evaluate your agent](/evaluation/evaluate_your_llm) and [Evaluate single
  prompts](/evaluation/evaluate_prompt) guides.
</Tip>

The process involves these key steps:

1. Create a dataset with your test cases
2. Prepare your evaluation results
3. Log experiment items in bulk

## 1. Create a Dataset

First, you'll need to create a dataset containing your test cases. This dataset will be linked to
your experiments.

<CodeBlocks>
  ```typescript title="TypeScript" language="typescript" maxLines=1000
  import { Opik } from "opik";

  const client = new Opik({
    apiKey: "your-api-key",
    apiUrl: "https://www.comet.com/opik/api",
    projectName: "your-project-name",
    workspaceName: "your-workspace-name",
  });
  const dataset = await client.getOrCreateDataset("My dataset");

  await dataset.insert([
    {
      user_question: "What is the capital of France?",
      expected_output: "Paris"
    },
    {
      user_question: "What is the capital of Japan?",
      expected_output: "Tokyo"
    },
    {
      user_question: "What is the capital of Brazil?",
      expected_output: "Brasília"
    }
  ]);
  ```

  ```python title="Python" language="python" maxLines=1000
  from opik import Opik
  import opik

  # Configure Opik
  opik.configure()

  # Create dataset items
  dataset_items = [
      {
          "user_question": "What is the capital of France?",
          "expected_output": "Paris"
      },
      {
          "user_question": "What is the capital of Japan?",
          "expected_output": "Tokyo"
      },
      {
          "user_question": "What is the capital of Brazil?",
          "expected_output": "Brasília"
      }
  ]

  # Get or create a dataset
  client = Opik()
  dataset = client.get_or_create_dataset(name="geography-questions")

  # Add dataset items
  dataset.insert(dataset_items)
  ```

  ```bash title="REST API" maxLines=1000
  # First, create the dataset
  curl -X POST 'https://www.comet.com/opik/api/v1/private/datasets' \
    -H 'Content-Type: application/json' \
    -H 'Comet-Workspace: <your-workspace-name>' \
    -H 'authorization: <your-api-key>' \
    -d '{
      "name": "geography-questions",
      "description": "Geography quiz dataset"
    }'

  # Then add dataset items
  curl -X POST 'https://www.comet.com/opik/api/v1/private/datasets/items' \
    -H 'Content-Type: application/json' \
    -H 'Comet-Workspace: <your-workspace-name>' \
    -H 'authorization: <your-api-key>' \
    -d '{
      "dataset_name": "geography-questions",
      "items": [
        {
          "user_question": "What is the capital of France?",
          "expected_output": "Paris"
        },
        {
          "user_question": "What is the capital of Japan?",
          "expected_output": "Tokyo"
        },
        {
          "user_question": "What is the capital of Brazil?",
          "expected_output": "Brasília"
        }
      ]
    }'
  ```
</CodeBlocks>

<Tip>
  Dataset item IDs are generated automatically if not provided. If you do provide your own IDs, make sure they are in
  UUIDv7 format.
</Tip>
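If you need to generate UUIDv7 values yourself, the batching example later in this guide uses the third-party `uuid6` package (`uuid6.uuid7()`). As an illustration of the format only (not production code), here is a minimal standard-library sketch of a UUIDv7 generator:

```python
import os
import time
import uuid


def uuid7() -> uuid.UUID:
    """Minimal UUIDv7 sketch: 48-bit Unix ms timestamp, then version/variant bits, rest random."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    value = (ts_ms << 80) | rand
    value &= ~(0xF << 76)
    value |= 0x7 << 76   # version 7
    value &= ~(0x3 << 62)
    value |= 0x2 << 62   # RFC 4122 variant
    return uuid.UUID(int=value)
```

Because the timestamp occupies the most significant bits, IDs generated this way sort roughly by creation time.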

## 2. Prepare Evaluation Results

Structure your evaluation results with the necessary fields. Each experiment item should include:

- `dataset_item_id`: The ID of the dataset item being evaluated
- `evaluate_task_result`: The output from your LLM application
- `feedback_scores`: Array of evaluation metrics (optional)

<CodeBlocks>
  ```typescript title="TypeScript" language="typescript" maxLines=1000
  const datasetItems = await dataset.getItems();

  const mockResponses = {
    "What is the capital of France?": "The capital of France is Paris.",
    "What is the capital of Japan?": "Japan's capital is Tokyo.",
    "What is the capital of Brazil?": "The capital of Brazil is Rio de Janeiro."
  }

  // This would be replaced by your specific logic, the goal is simply to have an array of
  // evaluation items with a dataset_item_id, evaluate_task_result and feedback_scores
  const evaluationItems = datasetItems.map(item => {
    const response = mockResponses[item.user_question] || "I don't know";
    return {
      dataset_item_id: item.id,
      evaluate_task_result: { prediction: response },
      feedback_scores: [{ name: "accuracy", value: response.includes(item.expected_output) ? 1.0 : 0.0, source: "sdk" }]
    };
  });
  ```

```python title="Python" language="python" maxLines=1000
# Get dataset items from the dataset object
dataset_items = list(dataset.get_items())

# Mock LLM responses for this example
# In a real scenario, you would call your actual LLM here
mock_responses = {
    "France": "The capital of France is Paris.",
    "Japan": "Japan's capital is Tokyo.",
    "Brazil": "The capital of Brazil is Rio de Janeiro."  # Incorrect
}

# Prepare evaluation results
evaluation_items = []

for item in dataset_items[:3]:  # Process first 3 items for this example
    # Determine which mock response to use
    question = item['user_question']
    response = "I don't know"

    for country, mock_response in mock_responses.items():
        if country.lower() in question.lower():
            response = mock_response
            break

    # Calculate accuracy (1.0 if expected answer is in response)
    accuracy = 1.0 if item['expected_output'].lower() in response.lower() else 0.0

    evaluation_items.append({
        "dataset_item_id": item['id'],
        "evaluate_task_result": {
            "prediction": response
        },
        "feedback_scores": [
            {
                "name": "accuracy",
                "value": accuracy,
                "source": "sdk"
            }
        ]
    })

print(f"Prepared {len(evaluation_items)} evaluation items")
```

```bash title="REST API"
  # Example payload for the bulk endpoint (see step 3 for the full curl command):
  {
    "experiment_name": "geography-bot-v1",
    "dataset_name": "geography-questions",
    "items": [
      {
        "dataset_item_id": "dataset-item-id-1",
        "evaluate_task_result": {
          "prediction": "The capital of France is Paris."
        },
        "feedback_scores": [
          {
            "name": "accuracy",
            "value": 1.0,
            "source": "sdk"
          }
        ]
      },
      {
        "dataset_item_id": "dataset-item-id-2",
        "evaluate_task_result": {
          "prediction": "Japan's capital is Tokyo."
        },
        "feedback_scores": [
          {
            "name": "accuracy",
            "value": 1.0,
            "source": "sdk"
          }
        ]
      },
      {
        "dataset_item_id": "dataset-item-id-3",
        "evaluate_task_result": {
          "prediction": "The capital of Brazil is Rio de Janeiro."
        },
        "feedback_scores": [
          {
            "name": "accuracy",
            "value": 0.0,
            "source": "sdk"
          }
        ]
      }
    ]
  }
```

</CodeBlocks>

## 3. Log Experiment Items in Bulk

Use the bulk endpoint to efficiently log multiple evaluation results at once.

<CodeBlocks>
```typescript title="TypeScript" language="typescript" maxLines=1000
import { Opik } from "opik";

const client = new Opik({
  apiKey: "your-api-key",
  apiUrl: "https://www.comet.com/opik/api",
  projectName: "your-project-name",
  workspaceName: "your-workspace-name",
});

const experimentName = "Bulk experiment upload";
const datasetName = "geography-questions";
const items = [
  {
    dataset_item_id: "dataset-item-id-1",
    evaluate_task_result: { prediction: "The capital of France is Paris." },
    feedback_scores: [{ name: "accuracy", value: 1.0, source: "sdk" }]
  }
];

await client.api.experiments.experimentItemsBulk({ experimentName, datasetName, items });
```

```python title="Python" language="python" maxLines=1000
experiment_name = "Bulk experiment upload"
# Log experiment results using the bulk method
client.rest_client.experiments.experiment_items_bulk(
    experiment_name=experiment_name,
    dataset_name="geography-questions",
    items=[
        {
            "dataset_item_id": item["dataset_item_id"],
            "evaluate_task_result": item["evaluate_task_result"],
            "feedback_scores": [
                {**score, "source": "sdk"}
                for score in item["feedback_scores"]
            ]
        }
        for item in evaluation_items
    ]
)
```

```bash title="REST API" maxLines=1000
curl -X PUT 'https://www.comet.com/opik/api/v1/private/experiments/items/bulk' \
    -H 'Content-Type: application/json' \
    -H 'Comet-Workspace: <your-workspace-name>' \
    -H 'authorization: <your-api-key>' \
    -d '{
      "experiment_name": "geography-bot-v1",
      "dataset_name": "geography-questions",
      "items": [
        {
          "dataset_item_id": "dataset-item-id-1",
          "evaluate_task_result": {
            "prediction": "The capital of France is Paris."
          },
          "feedback_scores": [
            {
              "name": "accuracy",
              "value": 1.0,
              "source": "sdk"
            }
          ]
        },
        {
          "dataset_item_id": "dataset-item-id-2",
          "evaluate_task_result": {
            "prediction": "Japans capital is Tokyo."
          },
          "feedback_scores": [
            {
              "name": "accuracy",
              "value": 1.0,
              "source": "sdk"
            }
          ]
        }
      ]
    }'
```

</CodeBlocks>

<Warning>
  **Request Size Limit**: The maximum allowed payload size is **4MB**. For larger submissions, divide the data into
  smaller batches.
</Warning>

If you split the data into smaller batches, include the same `experiment_id` in every payload so
that all experiment items are added to a single experiment.

Below is an example of splitting the `evaluation_items` into two batches which will both be added
to the same experiment:

<CodeBlocks>
  ```typescript title="TypeScript" language="typescript" maxLines=1000
  import { generateId } from "opik";

  const experimentId = generateId();
  const experimentName = "Bulk experiment upload";
  // Split evaluation_items into two batches
  const mid = Math.floor(evaluationItems.length / 2);

  const halves = [
    evaluationItems.slice(0, mid),
    evaluationItems.slice(mid)
  ];

  for (const half of halves) {
    await client.api.experiments.experimentItemsBulk({
      experimentId,
      experimentName,
      datasetName: "geography-questions",
      items: half.map(item => ({
        datasetItemId: item.dataset_item_id,
        evaluateTaskResult: item.evaluate_task_result,
        feedbackScores: item.feedback_scores.map(score => ({
          ...score,
          source: "sdk"
        }))
      }))
    });
  }
  ```

  ```python title="Python" language="python" maxLines=1000
  import uuid6  # pip install uuid6

  experiment_id = str(uuid6.uuid7())
  experiment_name = "Bulk experiment upload"
  # Split evaluation_items into two batches
  mid = len(evaluation_items) // 2

  halves = [
      evaluation_items[:mid],
      evaluation_items[mid:]
  ]

  for half in halves:
      client.rest_client.experiments.experiment_items_bulk(
          experiment_id=experiment_id,
          experiment_name=experiment_name,
          dataset_name="geography-questions",
          items=[
              {
                  "dataset_item_id": item["dataset_item_id"],
                  "evaluate_task_result": item["evaluate_task_result"],
                  "feedback_scores": [
                      {**score, "source": "sdk"}
                      for score in item["feedback_scores"]
                  ]
              }
              for item in half
          ]
      )
  ```
</CodeBlocks>
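Splitting into halves works for small datasets; for larger ones you may prefer to batch by serialized size so that each request stays under the 4MB limit. A minimal Python sketch (the `chunk_items` helper is illustrative, not part of the SDK):

```python
import json

MAX_BYTES = 4 * 1024 * 1024  # bulk endpoint payload limit


def chunk_items(items, base_payload, max_bytes=MAX_BYTES):
    """Yield lists of items whose serialized payload stays under max_bytes."""
    batch = []
    for item in items:
        candidate = {**base_payload, "items": batch + [item]}
        if batch and len(json.dumps(candidate).encode("utf-8")) > max_bytes:
            yield batch          # current batch is full, start a new one
            batch = [item]
        else:
            batch.append(item)
    if batch:
        yield batch
```

Each yielded batch can then be sent with `experiment_items_bulk`, passing the same `experiment_id` every time.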

## 4. Analyze the Results

Once you have logged your experiment items, you can analyze the results in the Opik UI and even
compare different experiments to one another.

## Complete Example

Here's a complete example that puts all the steps together:

<CodeBlocks>
  ```typescript title="TypeScript" language="typescript"
  import { Opik } from "opik";

// Configure Opik
const client = new Opik({
  apiKey: "your-api-key",
  apiUrl: "https://www.comet.com/opik/api",
  projectName: "your-project-name",
  workspaceName: "your-workspace-name",
});

// Step 1: Create dataset
const dataset = await client.getOrCreateDataset("geography-questions");

const localDatasetItems = [
  {
    user_question: "What is the capital of France?",
    expected_output: "Paris"
  },
  {
    user_question: "What is the capital of Japan?",
    expected_output: "Tokyo"
  }
];

await dataset.insert(localDatasetItems);

// Step 2: Get dataset items and prepare evaluation results
const datasetItems = await dataset.getItems();

// Helper function to get dataset item ID
const getDatasetItem = (country: string) => {
  return datasetItems.find(item =>
    item.user_question.toLowerCase().includes(country.toLowerCase())
  );
};

// Prepare evaluation results
const evaluationItems = [
  {
    dataset_item_id: getDatasetItem("France")?.id,
    evaluate_task_result: { prediction: "The capital of France is Paris." },
    feedback_scores: [{ name: "accuracy", value: 1.0 }]
  },
  {
    dataset_item_id: getDatasetItem("Japan")?.id,
    evaluate_task_result: { prediction: "Japan's capital is Tokyo." },
    feedback_scores: [{ name: "accuracy", value: 1.0 }]
  }
];

// Step 3: Log experiment results
const experimentName = `geography-bot-${Math.random().toString(36).slice(2, 6)}`;
await client.api.experiments.experimentItemsBulk({
  experimentName,
  datasetName: "geography-questions",
  items: evaluationItems.map(item => ({
    datasetItemId: item.dataset_item_id,
    evaluateTaskResult: item.evaluate_task_result,
    feedbackScores: item.feedback_scores.map(score => ({
      ...score,
      source: "sdk"
    }))
  }))
});

console.log(`Experiment '${experimentName}' created successfully!`);
```

```python title="Python" language="python"
from opik import Opik
import opik
import uuid

# Configure Opik
opik.configure()

# Step 1: Create dataset
client = Opik()
dataset = client.get_or_create_dataset(name="geography-questions")

dataset_items = [
    {
        "user_question": "What is the capital of France?",
        "expected_output": "Paris"
    },
    {
        "user_question": "What is the capital of Japan?",
        "expected_output": "Tokyo"
    }
]

dataset.insert(dataset_items)

# Step 2: Run your LLM application and collect results
# (In a real scenario, you would call your LLM here)

# Helper function to get dataset item ID
def get_dataset_item(country):
    items = dataset.get_items()
    for item in items:
        if country.lower() in item['user_question'].lower():
            return item
    return None

# Prepare evaluation results
evaluation_items = [
    {
        "dataset_item_id": get_dataset_item("France")['id'],
        "evaluate_task_result": {"prediction": "The capital of France is Paris."},
        "feedback_scores": [{"name": "accuracy", "value": 1.0}]
    },
    {
        "dataset_item_id": get_dataset_item("Japan")['id'],
        "evaluate_task_result": {"prediction": "Japan's capital is Tokyo."},
        "feedback_scores": [{"name": "accuracy", "value": 1.0}]
    }
]

# Step 3: Log experiment results
rest_client = client.rest_client
experiment_name = f"geography-bot-{str(uuid.uuid4())[0:4]}"
rest_client.experiments.experiment_items_bulk(
    experiment_name=experiment_name,
    dataset_name="geography-questions",
    items=[
        {
            "dataset_item_id": item["dataset_item_id"],
            "evaluate_task_result": item["evaluate_task_result"],
            "feedback_scores": [
                {**score, "source": "sdk"}
                for score in item["feedback_scores"]
            ]
        }
        for item in evaluation_items
    ]
)

print(f"Experiment '{experiment_name}' created successfully!")
```

```bash title="REST API"
# Set environment variables
export OPIK_API_KEY="your_api_key"
export OPIK_WORKSPACE="your_workspace_name"
# Use http://localhost:5173/api/v1/private/... for local deployments

# Step 1: Create dataset
curl -X POST "https://www.comet.com/opik/api/v1/private/datasets" \
  -H "Content-Type: application/json" \
  -H "Comet-Workspace: ${OPIK_WORKSPACE}" \
  -H "authorization: ${OPIK_API_KEY}" \
  -d '{
    "name": "geography-questions",
    "description": "Geography quiz dataset"
  }'

# Step 2: Add dataset items
curl -X POST "https://www.comet.com/opik/api/v1/private/datasets/items" \
  -H "Content-Type: application/json" \
  -H "Comet-Workspace: ${OPIK_WORKSPACE}" \
  -H "authorization: ${OPIK_API_KEY}" \
  -d '{
    "dataset_name": "geography-questions",
    "items": [
      {
        "user_question": "What is the capital of France?",
        "expected_output": "Paris"
      },
      {
        "user_question": "What is the capital of Japan?",
        "expected_output": "Tokyo"
      }
    ]
  }'

# Step 3: Log experiment results
curl -X PUT "https://www.comet.com/opik/api/v1/private/experiments/items/bulk" \
  -H "Content-Type: application/json" \
  -H "Comet-Workspace: ${OPIK_WORKSPACE}" \
  -H "authorization: ${OPIK_API_KEY}" \
  -d '{
    "experiment_name": "geography-bot-v1",
    "dataset_name": "geography-questions",
    "items": [
      {
        "dataset_item_id": "dataset-item-id-1",
        "evaluate_task_result": {
          "prediction": "The capital of France is Paris."
        },
        "feedback_scores": [
          {
            "name": "accuracy",
            "value": 1.0,
            "source": "sdk"
          }
        ]
      },
      {
        "dataset_item_id": "dataset-item-id-2",
        "evaluate_task_result": {
          "prediction": "Japan'\''s capital is Tokyo."
        },
        "feedback_scores": [
          {
            "name": "accuracy",
            "value": 1.0,
            "source": "sdk"
          }
        ]
      }
    ]
  }'
```

</CodeBlocks>

## Advanced Usage

### Including Traces and Spans

You can include full execution traces with your experiment items for complete observability. To do
this, add `trace` and `spans` fields to your experiment items:

```json
[
  {
    "dataset_item_id": "your-dataset-item-id",
    "trace": {
      "name": "geography_query",
      "input": { "question": "What is the capital of France?" },
      "output": { "answer": "Paris" },
      "metadata": { "model": "gpt-3.5-turbo" },
      "start_time": "2024-01-01T00:00:00Z",
      "end_time": "2024-01-01T00:00:01Z"
    },
    "spans": [
      {
        "name": "llm_call",
        "type": "llm",
        "start_time": "2024-01-01T00:00:00Z",
        "end_time": "2024-01-01T00:00:01Z",
        "input": { "prompt": "What is the capital of France?" },
        "output": { "response": "Paris" }
      }
    ],
    "feedback_scores": [{ "name": "accuracy", "value": 1.0, "source": "sdk" }]
  }
]
```

<Warning>Important: You may supply either `evaluate_task_result` or `trace` — not both.</Warning>
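To catch this mistake before sending a request, you can run a quick client-side check (the `validate_items` helper is illustrative, not part of the SDK):

```python
def validate_items(items: list[dict]) -> None:
    """Reject experiment items that supply both evaluate_task_result and trace."""
    for item in items:
        if "evaluate_task_result" in item and "trace" in item:
            raise ValueError(
                "Experiment items may include either evaluate_task_result or trace, not both"
            )
```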

### Java Example

For Java developers, here's how to integrate with Opik using Jackson and HttpClient:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.ArrayNode;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class OpikExperimentLogger {

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        String baseURI = System.getenv("OPIK_URL_OVERRIDE");
        String workspaceName = System.getenv("OPIK_WORKSPACE");
        String apiKey = System.getenv("OPIK_API_KEY");

        String datasetName = "geography-questions";
        String experimentName = "geography-bot-v1";

        try (var client = HttpClient.newHttpClient()) {
            // Stream dataset items
            var streamRequest = HttpRequest.newBuilder()
                    .uri(URI.create(baseURI).resolve("/v1/private/datasets/items/stream"))
                    .header("Content-Type", "application/json")
                    .header("Accept", "application/octet-stream")
                    .header("Authorization", apiKey)
                    .header("Comet-Workspace", workspaceName)
                    .POST(HttpRequest.BodyPublishers.ofString(
                        mapper.writeValueAsString(Map.of("dataset_name", datasetName))
                    ))
                    .build();

            HttpResponse<InputStream> streamResponse = client.send(
                streamRequest,
                HttpResponse.BodyHandlers.ofInputStream()
            );

            List<JsonNode> experimentItems = new ArrayList<>();

            try (var reader = new BufferedReader(new InputStreamReader(streamResponse.body()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    JsonNode datasetItem = mapper.readTree(line);
                    String question = datasetItem.get("data").get("user_question").asText();
                    UUID datasetItemId = UUID.fromString(datasetItem.get("id").asText());

                    // Call your LLM application
                    JsonNode llmOutput = callYourLLM(question);

                    // Calculate metrics
                    List<JsonNode> scores = calculateMetrics(llmOutput);

                    // Build experiment item
                    ArrayNode scoresArray = JsonNodeFactory.instance.arrayNode().addAll(scores);
                    JsonNode experimentItem = JsonNodeFactory.instance.objectNode()
                            .put("dataset_item_id", datasetItemId.toString())
                            .setAll(Map.of(
                                "evaluate_task_result", llmOutput,
                                "feedback_scores", scoresArray
                            ));

                    experimentItems.add(experimentItem);
                }
            }

            // Send experiment results in bulk
            var bulkBody = JsonNodeFactory.instance.objectNode()
                    .put("dataset_name", datasetName)
                    .put("experiment_name", experimentName)
                    .setAll(Map.of("items",
                        JsonNodeFactory.instance.arrayNode().addAll(experimentItems)
                    ));

            var bulkRequest = HttpRequest.newBuilder()
                    .uri(URI.create(baseURI).resolve("/v1/private/experiments/items/bulk"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", apiKey)
                    .header("Comet-Workspace", workspaceName)
                    .PUT(HttpRequest.BodyPublishers.ofString(bulkBody.toString()))
                    .build();

            HttpResponse<String> bulkResponse = client.send(
                bulkRequest,
                HttpResponse.BodyHandlers.ofString()
            );

            if (bulkResponse.statusCode() == 204) {
                System.out.println("Experiment items successfully created.");
            } else {
                System.err.printf("Failed to create experiment items: %s %s",
                    bulkResponse.statusCode(), bulkResponse.body());
            }

        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

### Using the REST API with local deployments

If you are using the REST API with a local deployment, you can call the endpoints without any
authentication headers:

```bash
# No authentication headers required for local deployments
curl -X PUT 'http://localhost:5173/api/v1/private/experiments/items/bulk' \
  -H 'Content-Type: application/json' \
  -d '{ ... }'
```

## Reference

- **Endpoint**: `PUT /api/v1/private/experiments/items/bulk`
- **Max Payload Size**: 4MB
- **Required Fields**: `experiment_name`, `dataset_name`, `items` (with `dataset_item_id`)
- **SDK Reference**: [ExperimentsClient.experiment_items_bulk](https://www.comet.com/docs/opik/python-sdk-reference/rest_api/clients/experiments.html#opik.rest_api.experiments.client.ExperimentsClient.experiment_items_bulk)
- **REST API Reference**: [Experiments API](/reference/rest-api/experiments/experiment-items-bulk)
