Lighteval documentation

Logging

EvaluationTracker

class lighteval.logging.evaluation_tracker.EvaluationTracker

( output_dir: str results_path_template: str | None = None save_details: bool = True push_to_hub: bool = False push_to_tensorboard: bool = False hub_results_org: str | None = '' tensorboard_metric_prefix: str = 'eval' public: bool = False nanotron_run_info: GeneralArgs = None use_wandb: bool = False )

Parameters

  • output_dir (str) — Local directory to save evaluation results and logs
  • results_path_template (str, optional) — Template for the results directory structure, e.g. "{output_dir}/results/{org}_{model}"
  • save_details (bool, defaults to True) — Whether to save detailed evaluation records
  • push_to_hub (bool, defaults to False) — Whether to push results to HF Hub
  • push_to_tensorboard (bool, defaults to False) — Whether to push metrics to TensorBoard
  • hub_results_org (str, optional) — HF Hub organization to push results to
  • tensorboard_metric_prefix (str, defaults to "eval") — Prefix for TensorBoard metrics
  • public (bool, defaults to False) — Whether to make Hub datasets public
  • nanotron_run_info (GeneralArgs, optional) — Nanotron model run information
  • use_wandb (bool, defaults to False) — Whether to log to Weights & Biases or Trackio if available

Tracks and manages evaluation results, metrics, and logging for model evaluations.

The EvaluationTracker coordinates multiple specialized loggers to track different aspects of model evaluation:

  • Details Logger (DetailsLogger): Records per-sample evaluation details and predictions
  • Metrics Logger (MetricsLogger): Tracks aggregate evaluation metrics and scores
  • Versions Logger (VersionsLogger): Records task and dataset versions
  • General Config Logger (GeneralConfigLogger): Stores overall evaluation configuration
  • Task Config Logger (TaskConfigLogger): Maintains per-task configuration details

The tracker can save results locally and optionally push them to:

  • Hugging Face Hub as datasets
  • TensorBoard for visualization
  • Trackio or Weights & Biases for experiment tracking

Example:

tracker = EvaluationTracker(
    output_dir="./eval_results",
    push_to_hub=True,
    hub_results_org="my-org",
    save_details=True
)

# Log evaluation results
tracker.metrics_logger.add_metric("accuracy", 0.85)
tracker.details_logger.add_detail(task_name="qa", prediction="Paris")

# Save all results
tracker.save()

generate_final_dict

( ) → dict

Returns

dict

Dictionary containing all experiment information including config, results, versions, and summaries

Aggregates and returns all the logger’s experiment information in a dictionary.

This function should be used to gather and display said information at the end of an evaluation run.
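
For example, at the end of a run (a minimal sketch, assuming tracker is the EvaluationTracker instance from the example above):

import json

# Gather config, results, versions, and summaries into a single dictionary
final_results = tracker.generate_final_dict()

# e.g. print it as a JSON summary of the run
print(json.dumps(final_results, indent=2, default=str))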

push_to_hub

( date_id: str details: dict results_dict: dict )

Pushes the experiment details (all the model predictions for every step) to the hub.
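
A hedged sketch of a call (the variable names and date format below are illustrative assumptions, not part of the documented API):

# date_id identifies this evaluation run; details and results_dict are assumed
# to be the per-task sample records and the final results dictionary built
# earlier in the run (placeholders here)
tracker.push_to_hub(
    date_id="2024_01_01T00_00_00",
    details=details_by_task,
    results_dict=final_results,
)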

recreate_metadata_card

( repo_id: str )

Parameters

  • repo_id (str) — Details dataset repository path on the hub (org/dataset)

Fully updates the details repository metadata card for the currently evaluated model.
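
For example (the repository id below is hypothetical):

# Rebuild the metadata card of an existing details dataset on the Hub
tracker.recreate_metadata_card(repo_id="my-org/details_my-model")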

save

( )

Saves the experiment information and results to files, and to the hub if requested.

GeneralConfigLogger

class lighteval.logging.info_loggers.GeneralConfigLogger

( )

Parameters

  • lighteval_sha (str) — Git commit SHA of lighteval used for evaluation, enabling exact version reproducibility. Set to "?" if not in a git repository.
  • num_fewshot_seeds (int) — Number of random seeds used for few-shot example sampling.

    • If <= 1: Single evaluation with seed=0
    • If > 1: Multiple evaluations with different few-shot samplings (HELM-style)
  • max_samples (int, optional) — Maximum number of samples to evaluate per task. Only used for debugging - truncates each task’s dataset.
  • job_id (int, optional) — Slurm job ID if running on a cluster. Used to cross-reference with scheduler logs.
  • start_time (float) — Unix timestamp when evaluation started. Automatically set during logger initialization.
  • end_time (float) — Unix timestamp when evaluation completed. Set by calling log_end_time().
  • total_evaluation_time_secondes (str) — Total runtime in seconds. Calculated as end_time - start_time.
  • model_config (ModelConfig) — Complete model configuration settings. Contains model architecture, tokenizer, and generation parameters.
  • model_name (str) — Name identifier for the evaluated model. Extracted from model_config.

Tracks general configuration and runtime information for model evaluations.

This logger captures key configuration parameters, model details, and timing information to ensure reproducibility and provide insights into the evaluation process.

log_args_info

( num_fewshot_seeds: int max_samples: int | None job_id: str )

Parameters

  • num_fewshot_seeds (int) — Number of few-shot seeds.
  • max_samples (int | None) — Maximum number of samples; if None, all available samples are used.
  • job_id (str) — Job ID, used to retrieve logs.

Logs the information about the arguments passed to the method.

log_model_info

( model_config: ModelConfig )

Parameters

  • model_config (ModelConfig) — The model config used to initialize the model.

Logs the model information.
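
A minimal sketch combining both methods (it assumes model_config is an already-built ModelConfig; the argument values are illustrative):

config_logger = GeneralConfigLogger()

# Record the evaluation arguments
config_logger.log_args_info(num_fewshot_seeds=1, max_samples=None, job_id="12345")

# Record the model configuration
config_logger.log_model_info(model_config=model_config)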

DetailsLogger

class lighteval.logging.info_loggers.DetailsLogger

( hashes: dict = <factory> compiled_hashes: dict = <factory> details: dict = <factory> compiled_details: dict = <factory> compiled_details_over_all_tasks: DetailsLogger.CompiledDetailOverAllTasks = <factory> )

Parameters

  • hashes (dict[str, list[Hash]]) — Maps each task name to the list of all its samples’ Hash.
  • compiled_hashes (dict[str, CompiledHash]) — Maps each task name to its CompiledHash, an aggregation of all the individual sample hashes.
  • details (dict[str, list[Detail]]) — Maps each task name to the list of its samples’ details. Example: winogrande: [sample1_details, sample2_details, …]
  • compiled_details (dict[str, CompiledDetail]) — Maps each task name to its compiled details, aggregated over all its samples.
  • compiled_details_over_all_tasks (CompiledDetailOverAllTasks) — Aggregated details over all the tasks.

Logger for the experiment details.

Stores and logs experiment information both at the task and at the sample level.

aggregate

( )

Hashes the details for each task and then for all tasks.

log

( task_name: str doc: Doc model_response: ModelResponse metrics: dict )

Parameters

  • task_name (str) — Name of the current task of interest.
  • doc (Doc) — Current sample that we want to store.
  • model_response (ModelResponse) — Model outputs for the current sample.
  • metrics (dict) — Model scores for said sample on the current task’s metrics.

Stores the relevant information for one sample of one task to the total list of samples stored in the DetailsLogger.
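
A minimal sketch (it assumes doc and model_response are the Doc and ModelResponse produced by the evaluation loop; the task name and score are illustrative):

details_logger = DetailsLogger()

# Store the details of one evaluated sample
details_logger.log(
    task_name="winogrande|winogrande_xl",
    doc=doc,
    model_response=model_response,
    metrics={"accuracy": 1.0},
)

# Once every sample has been logged, compile the per-task and cross-task hashes
details_logger.aggregate()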

MetricsLogger

class lighteval.logging.info_loggers.MetricsLogger

( metrics_values: dict = <factory> metric_aggregated: dict = <factory> )

Parameters

  • metrics_values (dict[str, dict[str, list[float]]]) — Maps each task to a dictionary of metrics to scores for all the examples of the task. Example: {"winogrande|winogrande_xl": {"accuracy": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]}}
  • metric_aggregated (dict[str, dict[str, float]]) — Maps each task to a dictionary of metrics to scores aggregated over all the examples of the task. Example: {"winogrande|winogrande_xl": {"accuracy": 0.5}}

Logs the actual scores for each metric of each task.

aggregate

( task_dict: dict bootstrap_iters: int = 1000 )

Parameters

  • task_dict (dict[str, LightevalTask]) — Used to determine which aggregation function to use for each metric.
  • bootstrap_iters (int, optional) — Number of iterations used for the statistical bootstrap. Defaults to 1000.

Aggregates the metrics for each task and then over all tasks.
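
A minimal sketch (it assumes task_dict maps task names to the LightevalTask objects built by the evaluation pipeline):

metrics_logger = MetricsLogger()

# ... per-sample scores are accumulated in metrics_logger.metrics_values ...

# Aggregate the per-sample scores, using 1000 bootstrap iterations
metrics_logger.aggregate(task_dict=task_dict, bootstrap_iters=1000)

# Aggregated scores are then available per task and per metric
print(metrics_logger.metric_aggregated)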

VersionsLogger

class lighteval.logging.info_loggers.VersionsLogger

( versions: dict = <factory> )

Parameters

  • versions (dict[str, int]) — Maps each task name to its task version.

Logger for the task versions.

Tasks can have a version number or date, which indicates the precise metric definition and dataset version used for an evaluation.
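
The stored mapping has the shape described above, for example (values are illustrative):

versions_logger = VersionsLogger()

# Each task name maps to the version of the task that was evaluated
versions_logger.versions["winogrande|winogrande_xl"] = 0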

TaskConfigLogger

class lighteval.logging.info_loggers.TaskConfigLogger

( tasks_configs: dict = <factory> )

Parameters

  • tasks_configs (dict[str, LightevalTaskConfig]) — Maps each task to its associated LightevalTaskConfig.

Logs the different parameters of the current LightevalTask of interest.
