---
description: Fine-tune Opik metrics with async scoring, evaluator temperatures, and logprob handling
---

# Advanced configuration

Opik’s metrics expose several power-user controls so you can tailor evaluations to your workflows. This guide covers the most common tweaks: asynchronous scoring, evaluator randomness, and log-probability handling.

## Asynchronous scoring with `ascore`

Every built-in metric inherits from `BaseMetric`, which defines an async counterpart to `score` named `ascore`. Use it when you need to run evaluations inside an async pipeline or when the underlying provider (e.g., LangChain, Ragas) requires an event loop.

```python title="Awaiting an async metric"
import asyncio

from opik.evaluation.metrics import Hallucination

metric = Hallucination()

async def evaluate_async():
    result = await metric.ascore(
        input="What is the capital of France?",
        output="The capital is Berlin.",
    )
    return result

score = asyncio.run(evaluate_async())
print(score.value, score.reason)
```

Within synchronous code you can still call `score`—Opik will run the async implementation under the hood when needed. When integrating with async frameworks (FastAPI endpoints, streaming agents, or notebooks using `nest_asyncio`), prefer the explicit `await metric.ascore(...)` form.
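The sync-over-async bridge can be sketched roughly like this (a simplified illustration only, not Opik's actual `BaseMetric` implementation):

```python
import asyncio


class SketchMetric:
    """Simplified illustration of a metric whose sync `score`
    delegates to the async `ascore` implementation."""

    async def ascore(self, input: str, output: str) -> float:
        # A real metric would call an LLM judge here.
        return 1.0 if output else 0.0

    def score(self, input: str, output: str) -> float:
        # Run the async implementation when no event loop is active.
        return asyncio.run(self.ascore(input=input, output=output))


metric = SketchMetric()
print(metric.score(input="question", output="answer"))  # 1.0
```

Calling `score` from code that already runs inside an event loop would make `asyncio.run` raise, which is why the explicit `await metric.ascore(...)` form is preferred in async frameworks.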

## Controlling evaluator temperature

GEval-based judges accept a `temperature` argument. Lower temperatures keep the evaluator's output close to deterministic, which improves reproducibility; higher values explore more rubric variations and can surface edge cases.

```python title="Custom temperature"
from opik.evaluation.metrics import ComplianceRiskJudge

deterministic = ComplianceRiskJudge(temperature=0.0)
exploratory = ComplianceRiskJudge(temperature=0.4)
```

Opik caches evaluator chain-of-thought prompts per `(task, criteria, model, completion_kwargs)` combination. Changing `temperature` or other LiteLLM keyword arguments (e.g., `top_p`) produces a fresh cache entry so experiments stay isolated.
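The isolation described above can be illustrated with a plain-Python sketch of how such a cache key might be derived (a hypothetical helper; Opik's real key construction may differ):

```python
def evaluator_cache_key(task: str, criteria: str, model: str, **completion_kwargs) -> tuple:
    """Build a hashable key from everything that affects the evaluator prompt.

    Hypothetical sketch: Opik's actual cache key construction may differ.
    """
    # Sort kwargs so the key is independent of argument order.
    return (task, criteria, model, tuple(sorted(completion_kwargs.items())))


key_a = evaluator_cache_key("compliance", "risk rubric", "gpt-4o-mini", temperature=0.0)
key_b = evaluator_cache_key("compliance", "risk rubric", "gpt-4o-mini", temperature=0.4)
print(key_a == key_b)  # False: different temperature, fresh cache entry
```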

## Log probabilities and evaluator models

When the LiteLLM backend supports `logprobs` and `top_logprobs`, Opik automatically requests them to stabilise GEval scores (mirroring the original paper). If you switch to a model that does not expose log probabilities, the metric still works—the score is computed from the raw judgement only.
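The stabilisation trick from the GEval paper amounts to a probability-weighted average over the candidate score tokens. A simplified sketch (Opik's internal handling may differ):

```python
import math


def weighted_geval_score(top_logprobs: dict[str, float]) -> float:
    """Average candidate scores weighted by their token probabilities.

    `top_logprobs` maps score tokens (e.g. "1".."5") to log probabilities,
    as returned by backends that support `logprobs`/`top_logprobs`.
    """
    probs = {tok: math.exp(lp) for tok, lp in top_logprobs.items()}
    total = sum(probs.values())  # renormalise over the returned top-k
    return sum(int(tok) * p / total for tok, p in probs.items())


# The judge emitted "4" but also placed mass on "5" and "3":
score = weighted_geval_score({"4": math.log(0.6), "5": math.log(0.3), "3": math.log(0.1)})
print(round(score, 2))  # 4.2
```

Without log probabilities, only the single emitted token contributes, which is why scores from such models are coarser but still valid.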

You can inspect the evaluator’s capabilities at runtime:

```python
from opik.evaluation.metrics import ComplianceRiskJudge

metric = ComplianceRiskJudge(model="gpt-4o-mini")
print("logprobs" in metric._model.supported_params)
```

If you need to propagate additional LiteLLM options (for example, `response_format` or `frequency_penalty`), instantiate `LiteLLMChatModel` manually and pass it to the metric:

```python title="Custom LiteLLM configuration"
from opik.evaluation.models.litellm import LiteLLMChatModel
from opik.evaluation.metrics import Hallucination

custom_provider = LiteLLMChatModel(
    model_name="gpt-4o-mini",
    temperature=0.2,
    frequency_penalty=0.3,
)

metric = Hallucination(model=custom_provider)
```

Because the model fingerprint is part of the cache key, changing these kwargs forces a new evaluator rubric to be generated.

## Tracking controls

Most metrics accept `track` and `project_name` keyword arguments so you can decide whether each run writes to Opik and which project it belongs to:

```python
from opik.evaluation.metrics import DialogueHelpfulnessJudge

metric = DialogueHelpfulnessJudge(track=False)
grouped = DialogueHelpfulnessJudge(project_name="llm-migration")
```

Disable tracking when running quick, ad-hoc experiments locally, or set `project_name="llm-migration"` to group evaluations by initiative.
