Logging
EvaluationTracker
class lighteval.logging.evaluation_tracker.EvaluationTracker
< source >( output_dir: str results_path_template: str | None = None save_details: bool = True push_to_hub: bool = False push_to_tensorboard: bool = False hub_results_org: str | None = '' tensorboard_metric_prefix: str = 'eval' public: bool = False nanotron_run_info: GeneralArgs = None use_wandb: bool = False )
Parameters
- output_dir (str) — Local directory to save evaluation results and logs
- results_path_template (str, optional) — Template for the results directory structure. Example: "{output_dir}/results/{org}_{model}"
- save_details (bool, defaults to True) — Whether to save detailed evaluation records
- push_to_hub (bool, defaults to False) — Whether to push results to HF Hub
- push_to_tensorboard (bool, defaults to False) — Whether to push metrics to TensorBoard
- hub_results_org (str, optional) — HF Hub organization to push results to
- tensorboard_metric_prefix (str, defaults to “eval”) — Prefix for TensorBoard metrics
- public (bool, defaults to False) — Whether to make Hub datasets public
- nanotron_run_info (GeneralArgs, optional) — Nanotron model run information
- use_wandb (bool, defaults to False) — Whether to log to Weights & Biases or Trackio if available
Tracks and manages evaluation results, metrics, and logging for model evaluations.
The EvaluationTracker coordinates multiple specialized loggers to track different aspects of model evaluation:
- Details Logger (DetailsLogger): Records per-sample evaluation details and predictions
- Metrics Logger (MetricsLogger): Tracks aggregate evaluation metrics and scores
- Versions Logger (VersionsLogger): Records task and dataset versions
- General Config Logger (GeneralConfigLogger): Stores overall evaluation configuration
- Task Config Logger (TaskConfigLogger): Maintains per-task configuration details
The tracker can save results locally and optionally push them to:
- Hugging Face Hub as datasets
- TensorBoard for visualization
- Trackio or Weights & Biases for experiment tracking
Example:
tracker = EvaluationTracker(
    output_dir="./eval_results",
    push_to_hub=True,
    hub_results_org="my-org",
    save_details=True
)

# Log evaluation results
tracker.metrics_logger.add_metric("accuracy", 0.85)
tracker.details_logger.add_detail(task_name="qa", prediction="Paris")

# Save all results
tracker.save()
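For completeness, here is a sketch of a tracker that also sets results_path_template and the remaining Hub-related options from the signature above; the organization name and template are illustrative values, not defaults.

from lighteval.logging.evaluation_tracker import EvaluationTracker

# Sketch: keep per-sample details locally and push results to a private
# dataset under a (hypothetical) Hub organization.
tracker = EvaluationTracker(
    output_dir="./eval_results",
    results_path_template="{output_dir}/results/{org}_{model}",
    save_details=True,
    push_to_hub=True,
    hub_results_org="my-org",          # hypothetical organization
    tensorboard_metric_prefix="eval",  # default prefix, shown explicitly
    public=False,                      # keep pushed datasets private
)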
generate_final_dict
< source >( ) → dict
Returns
dict
Dictionary containing all experiment information including config, results, versions, and summaries
Aggregates and returns all the logger’s experiment information in a dictionary.
This function should be used to gather and display said information at the end of an evaluation run.
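As a sketch, assuming a tracker like the one in the example above, the returned dictionary can be serialized to inspect everything gathered during the run; default=str is only a guard against values that are not JSON-serializable.

import json

# Gather config, results, versions and summaries into one dictionary
# and pretty-print it at the end of the run.
final_dict = tracker.generate_final_dict()
print(json.dumps(final_dict, indent=2, default=str))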
push_to_hub
Pushes the experiment details (all the model predictions for every step) to the hub.
recreate_metadata_card
< source >( repo_id: str )
Fully updates the details repository metadata card for the currently evaluated model
save
< source >( )
Saves the experiment information and results to files, and to the hub if requested.
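A minimal end-of-run sketch combining save and recreate_metadata_card; the details repository id passed here is hypothetical and depends on where your details were pushed.

# Persist results and details locally (and to the Hub if push_to_hub=True),
# then rebuild the metadata card of the details repository.
tracker.save()
tracker.recreate_metadata_card(repo_id="my-org/details_my-model")  # hypothetical repo id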
GeneralConfigLogger
class lighteval.logging.info_loggers.GeneralConfigLogger
< source >( )
Parameters
- lighteval_sha (str) — Git commit SHA of lighteval used for evaluation, enabling exact version reproducibility. Set to "?" if not in a git repository.
- num_fewshot_seeds (int) — Number of random seeds used for few-shot example sampling.
- If <= 1: Single evaluation with seed=0
- If > 1: Multiple evaluations with different few-shot samplings (HELM-style)
- max_samples (int, optional) — Maximum number of samples to evaluate per task. Only used for debugging - truncates each task’s dataset.
- job_id (int, optional) — Slurm job ID if running on a cluster. Used to cross-reference with scheduler logs.
- start_time (float) — Unix timestamp when evaluation started. Automatically set during logger initialization.
- end_time (float) — Unix timestamp when evaluation completed. Set by calling log_end_time().
- total_evaluation_time_secondes (str) — Total runtime in seconds. Calculated as end_time - start_time.
- model_config (ModelConfig) — Complete model configuration settings. Contains model architecture, tokenizer, and generation parameters.
- model_name (str) — Name identifier for the evaluated model. Extracted from model_config.
Tracks general configuration and runtime information for model evaluations.
This logger captures key configuration parameters, model details, and timing information to ensure reproducibility and provide insights into the evaluation process.
log_args_info
< source >( num_fewshot_seeds: int max_samples: int | None job_id: str )
Logs the information about the arguments passed to the method.
log_model_info
< source >( model_config: ModelConfig )
Logs the model information.
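A short usage sketch of the two logging methods above, with illustrative argument values; model_config is assumed to be a lighteval ModelConfig built elsewhere in the pipeline.

from lighteval.logging.info_loggers import GeneralConfigLogger

config_logger = GeneralConfigLogger()  # start_time is set at initialization

# Record the evaluation arguments (values are illustrative).
config_logger.log_args_info(
    num_fewshot_seeds=1,   # single evaluation with seed=0
    max_samples=None,      # evaluate each task's full dataset
    job_id="12345",        # e.g. a Slurm job id
)

# Record the model configuration once it is available:
# config_logger.log_model_info(model_config)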
DetailsLogger
class lighteval.logging.info_loggers.DetailsLogger
< source >( hashes: dict = <factory> compiled_hashes: dict = <factory> details: dict = <factory> compiled_details: dict = <factory> compiled_details_over_all_tasks: DetailsLogger.CompiledDetailOverAllTasks = <factory> )
Parameters
- hashes (dict[str, list[Hash]]) — Maps each task name to the list of all its samples' Hash.
- compiled_hashes (dict[str, CompiledHash]) — Maps each task name to its CompiledHash, an aggregation of all the individual sample hashes.
- details (dict[str, list[Detail]]) — Maps each task name to the list of its samples' details. Example: winogrande: [sample1_details, sample2_details, …]
- compiled_details (dict[str, CompiledDetail]) — Maps each task name to its samples' compiled details.
- compiled_details_over_all_tasks (CompiledDetailOverAllTasks) — Aggregated details over all the tasks.
Logger for the experiment details.
Stores and logs experiment information both at the task and at the sample level.
Hashes the details for each task and then for all tasks.
log
< source >( task_name: str doc: Doc model_response: ModelResponse metrics: dict )
Stores the relevant information for one sample of one task to the total list of samples stored in the DetailsLogger.
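A sketch of logging one sample; the Doc and ModelResponse are assumed to come from an evaluation step in the pipeline, and the task name and metric are illustrative.

from lighteval.logging.info_loggers import DetailsLogger

details_logger = DetailsLogger()

def log_sample(doc, model_response, score: float) -> None:
    # doc is the lighteval Doc for this sample and model_response the
    # ModelResponse produced for it; metrics holds this sample's scores.
    details_logger.log(
        task_name="winogrande|winogrande_xl",
        doc=doc,
        model_response=model_response,
        metrics={"accuracy": score},
    )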
MetricsLogger
class lighteval.logging.info_loggers.MetricsLogger
< source >( metrics_values: dict = <factory> metric_aggregated: dict = <factory> )
Parameters
- metrics_values (dict[str, dict[str, list[float]]]) — Maps each task to its dictionary of metrics to scores for all the examples of the task. Example: {"winogrande|winogrande_xl": {"accuracy": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]}}
- metric_aggregated (dict[str, dict[str, float]]) — Maps each task to its dictionary of metrics to aggregated scores over all the examples of the task. Example: {"winogrande|winogrande_xl": {"accuracy": 0.5}}
Logs the actual scores for each metric of each task.
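A minimal sketch of how per-sample scores accumulate in metrics_values; in a real run the pipeline fills this structure and then calls aggregate (below) with its task dictionary, so the values here are illustrative.

from lighteval.logging.info_loggers import MetricsLogger

metrics_logger = MetricsLogger()

# task -> metric name -> list of per-example scores, as documented above.
task = "winogrande|winogrande_xl"
for score in (1.0, 0.0, 1.0):
    task_scores = metrics_logger.metrics_values.setdefault(task, {})
    task_scores.setdefault("accuracy", []).append(score)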
aggregate
< source >( task_dict: dict bootstrap_iters: int = 1000 )
Aggregate the metrics for each task and then for all tasks.
VersionsLogger
class lighteval.logging.info_loggers.VersionsLogger
< source >( versions: dict = <factory> )
Logger of the tasks versions.
Tasks can have a version number/date, which indicates the precise metric definition and dataset version used for an evaluation.
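A tiny sketch of recording a task version directly; the task name and version number are illustrative, as in practice the pipeline records these from the task definitions.

from lighteval.logging.info_loggers import VersionsLogger

versions_logger = VersionsLogger()
versions_logger.versions["winogrande|winogrande_xl"] = 0  # illustrative version number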
TaskConfigLogger
class lighteval.logging.info_loggers.TaskConfigLogger
< source >( tasks_configs: dict = <factory> )
Logs the different parameters of the current LightevalTask of interest.