```ts title="Node script"
import { VoltOpsClient } from "@voltagent/core";
import { runExperiment } from "@voltagent/evals";
import experiment from "./experiments/support-nightly.experiment";

const voltOpsClient = new VoltOpsClient({
  publicKey: process.env.VOLTAGENT_PUBLIC_KEY,
  secretKey: process.env.VOLTAGENT_SECRET_KEY,
});

const result = await runExperiment(experiment, {
  voltOpsClient,
  concurrency: 4,
  onProgress: ({ completed, total }) => console.log(`Processed ${completed}/${total ?? "?"}`),
});
```
The CLI handles TypeScript bundling and VoltOps linking for you. The programmatic form is handy for CI jobs or custom telemetry pipelines.
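In a CI job, the returned run summary can gate the pipeline. The sketch below is illustrative only: `passed` and `meanScore` are assumed field names, not the library's documented return shape.

```typescript
// Hypothetical CI gate around a run summary.
// `passed` and `meanScore` are assumed field names, not the documented API.
interface RunSummaryLike {
  passed: boolean;
  meanScore: number;
}

// Map the summary to a process exit code so the CI job fails on regressions.
function ciExitCode(summary: RunSummaryLike): number {
  return summary.passed ? 0 : 1;
}

console.log(ciExitCode({ passed: true, meanScore: 0.95 })); // 0
```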
```ts title="experiments/offline-smoke.experiment.ts"
import { createExperiment } from "@voltagent/evals";
import { scorers } from "@voltagent/scorers";
import { supportAgent } from "../agents/support";

export default createExperiment({
  id: "offline-smoke",
  dataset: { name: "support-nightly" },
  experiment: { name: "support-nightly-regression" },
  runner: async ({ item }) => {
    const reply = await supportAgent.generateText(item.input);
    return { output: reply.text };
  },
  scorers: [scorers.exactMatch],
  passCriteria: { type: "meanScore", min: 0.9 },
});
```
The experiment can be executed the same way as shown above (CLI or Node script). The CLI resolves TypeScript, streams progress, and, when VoltOps credentials are present, links the run to the named experiment. The Node API variant mirrors the same flow and returns the run summary object for further assertions.
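Conceptually, a `meanScore` pass criterion averages the per-item scores and compares the mean against the threshold. This is a behavioral sketch of that check, not the library's actual implementation:

```typescript
// Sketch of how a { type: "meanScore", min } criterion could be evaluated.
// Mirrors the passCriteria config above; not VoltAgent's actual code.
function meetsMeanScore(scores: number[], min: number): boolean {
  if (scores.length === 0) return false; // no scored items, no pass
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  return mean >= min;
}

console.log(meetsMeanScore([1, 1, 0.8], 0.9)); // true (mean ≈ 0.933)
```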
## Live Evaluations
Live evaluations attach scorers to real-time agent interactions. They are suited to production monitoring, moderation, and sampling conversation quality under real traffic.
```ts title="Attach live scorers when defining an agent"
import VoltAgent, { Agent, VoltAgentObservability } from "@voltagent/core";
import { createModerationScorer } from "@voltagent/scorers";
import { openai } from "@ai-sdk/openai";
import honoServer from "@voltagent/server-hono";

const observability = new VoltAgentObservability();
const moderationModel = openai("gpt-4o-mini");

const supportAgent = new Agent({
  name: "live-scorer-demo",
  instructions: "Answer questions about VoltAgent.",
  model: openai("gpt-4o-mini"),
  eval: {
    triggerSource: "production",
    environment: "demo",
    sampling: { type: "ratio", rate: 1 },
    scorers: {
      moderation: {
        scorer: createModerationScorer({ model: moderationModel, threshold: 0.5 }),
      },
    },
  },
});

new VoltAgent({
  agents: { support: supportAgent },
  observability,
  server: honoServer(),
});
```
Use cases:
- Sample live traffic, enforce moderation, or feed LLM judges without waiting for batch runs.
- Combine with offline evals for deterministic regression checks before deploy.

> Live vs offline: Live scorer results are added to OTLP trace spans via `eval.scorer.*` and show up in VoltOps Live Scores / telemetry views. They are not persisted into Eval Runs and stay separate from dataset/experiment runs.
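The `sampling: { type: "ratio", rate: 1 }` setting in the agent config above scores every interaction; lower rates score a random fraction of traffic. A minimal sketch of ratio sampling, illustrative rather than the library's code:

```typescript
// Illustrative ratio sampler: score each interaction with probability `rate`.
// rate = 1 scores everything; rate = 0 scores nothing.
function shouldScore(rate: number): boolean {
  return Math.random() < rate;
}

console.log(shouldScore(1)); // true
```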
## What’s next?
- Quick-start walkthrough: `docs/evals/quick-start` (upcoming).
- Experiment definition reference: `docs/evals/concepts/experiment-definition` (upcoming).
- Scorer catalog and authoring guide: `docs/evals/concepts/scorers` (upcoming).
- CLI usage notes: `docs/evals/reference/cli` (upcoming).