---
pretty_name: Agent Runtime Telemetry Small
license: cc-by-4.0
language:
  - en
tags:
  - agent-runtime
  - agent-observability
  - llm-observability
  - mcp
  - tool-calling
  - runtime-telemetry
  - audit-trail
  - workflow-traces
  - parquet
size_categories:
  - 10K<n<100K
configs:
  - config_name: dataset_overview
    data_files:
      - split: train
        path: data/dataset_overview.parquet
  - config_name: operations
    data_files:
      - split: train
        path: data/operations.parquet
  - config_name: operation_events
    data_files:
      - split: train
        path: data/operation_events.parquet
  - config_name: artifact_records
    data_files:
      - split: train
        path: data/artifact_records.parquet
  - config_name: audit_records
    data_files:
      - split: train
        path: data/audit_records.parquet
  - config_name: tool_summary
    data_files:
      - split: train
        path: data/tool_summary.parquet
  - config_name: artifact_summary
    data_files:
      - split: train
        path: data/artifact_summary.parquet
  - config_name: daily_activity
    data_files:
      - split: train
        path: data/daily_activity.parquet
---

Agent Runtime Telemetry Small

Curated by Faruk Alpay.

Agent Runtime Telemetry Small is a compact tabular export of MCP-style agent execution telemetry. It supports quick inspection in the Hugging Face Dataset Viewer, lightweight agent-observability experiments, tool-call reliability analysis, workflow-trace summaries, and audit-trail research.

The dataset is intentionally small and row-oriented. Each table is stored as Parquet so the Hugging Face Dataset Viewer can display clean columns without requiring a SQLite client.

What It Contains

| Config | Rows | Columns | Purpose |
| --- | ---: | ---: | --- |
| `dataset_overview` | 7 | 6 | Table inventory and export policy |
| `operations` | 2,262 | 33 | Tool execution records: status, stages, durations, and summarized result metadata |
| `operation_events` | 9,903 | 13 | Lifecycle events for operations |
| `artifact_records` | 1,269 | 19 | Index records for forecast, state-decode, and training artifacts |
| `audit_records` | 14,053 | 17 | Tool request/result audit rows with compact metadata |
| `tool_summary` | 32 | 8 | Aggregated tool reliability and latency statistics |
| `artifact_summary` | 9 | 7 | Aggregated artifact status and payload-size statistics |
| `daily_activity` | 8 | 5 | UTC daily activity counts across runtime tables |

Privacy Boundary

This export does not include the original SQLite databases or the raw nested payload_json bodies. Large JSON fields are represented by inspectable columns such as key lists, byte lengths, selected scalar status fields, and SHA-256 digests. Absolute local paths are reduced to path-scope and file-name columns.
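Because raw payloads are replaced by SHA-256 digests, a consumer who holds a payload locally can still check that it matches a row's digest column. A minimal sketch, assuming a hex-encoded digest column (the column name is not specified by this card):

```python
import hashlib

def matches_digest(payload_bytes: bytes, expected_hex: str) -> bool:
    # Compare a payload's SHA-256 hex digest against the stored column value.
    return hashlib.sha256(payload_bytes).hexdigest() == expected_hex

payload = b'{"status": "ok"}'  # illustrative payload only, not from the dataset
digest = hashlib.sha256(payload).hexdigest()
print(matches_digest(payload, digest))  # True for a matching payload
```

The digest lets you detect payload drift without the dataset ever shipping the payload itself.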

Suggested Uses

  • compare agent tool success/error rates across runtime traces
  • inspect workflow latency and stage transitions
  • prototype LLM agent observability dashboards
  • analyze audit request/result volume without parsing full JSON logs
  • benchmark small-data telemetry pipelines that expect clean tabular inputs
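The first suggested use, comparing tool success/error rates, reduces to a per-tool aggregation. The column names `tool_name` and `status` below are assumptions about the `operations` table, and the synthetic frame merely stands in for real rows to show the pattern:

```python
import pandas as pd

# Synthetic stand-in for the operations table (column names assumed).
ops = pd.DataFrame({
    "tool_name": ["search", "search", "fetch", "fetch", "fetch"],
    "status": ["success", "error", "success", "success", "error"],
})

# Per-tool success rate: the fraction of rows whose status is "success".
rates = (
    ops.assign(ok=ops["status"].eq("success"))
       .groupby("tool_name")["ok"]
       .mean()
       .sort_values(ascending=False)
)
print(rates)
```

Swapping the synthetic frame for `load_dataset(...)["train"].to_pandas()` applies the same aggregation to the real export.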

Loading Example

```python
from datasets import load_dataset

ops = load_dataset("Lightcap/agent-runtime-telemetry-small", "operations")
print(ops["train"][0])

summary = load_dataset("Lightcap/agent-runtime-telemetry-small", "tool_summary")
print(summary["train"].to_pandas().sort_values("operation_count", ascending=False).head())
```

Source

The rows were exported from local runtime SQLite stores into sanitized Parquet tables:

  • operation_state.sqlite3
  • artifact_store.sqlite3
  • audit_store.sqlite3

The export focuses on the operational shape of agent runtimes rather than application-specific content.
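Part of that sanitization is the path reduction described above: an absolute local path becomes a coarse scope label plus a bare file name. A sketch of one plausible rule (the exact scope convention used by the export is an assumption):

```python
from pathlib import PurePosixPath

def sanitize_path(path: str) -> tuple[str, str]:
    # Reduce an absolute local path to a top-level scope label and the bare
    # file name, dropping the user- and machine-specific middle segments.
    p = PurePosixPath(path)
    scope = p.parts[1] if len(p.parts) > 1 else ""  # e.g. "home" or "tmp"
    return scope, p.name

print(sanitize_path("/home/alice/agent/runs/operation_state.sqlite3"))
# ('home', 'operation_state.sqlite3')
```

The point is that neither the username nor the directory layout survives into the exported columns.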