Lighteval documentation
Tasks
LightevalTask
LightevalTaskConfig
class lighteval.tasks.lighteval_task.LightevalTaskConfig
< source >( name: str prompt_function: typing.Callable[[dict, str], lighteval.tasks.requests.Doc] hf_repo: str hf_subset: str metrics: list[lighteval.metrics.utils.metric_utils.Metric] | tuple[lighteval.metrics.utils.metric_utils.Metric, ...] hf_revision: str | None = None hf_filter: typing.Optional[typing.Callable[[dict], bool]] = None hf_avail_splits: list[str] | tuple[str, ...] = <factory> evaluation_splits: list[str] | tuple[str, ...] = <factory> few_shots_split: str | None = None few_shots_select: str | None = None generation_size: int | None = None generation_grammar: huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType | None = None stop_sequence: list[str] | tuple[str, ...] | None = None num_samples: list[int] | None = None suite: list[str] | tuple[str, ...] = <factory> original_num_docs: int = -1 effective_num_docs: int = -1 must_remove_duplicate_docs: bool = False num_fewshots: int = 0 version: int = 0 )
Parameters
- name (str) — Short name of the evaluation task.
- prompt_function (Callable[[dict, str], Doc]) — Function that converts a dataset row into a Doc object for evaluation. Takes the dataset row (dict) and the task name as input.
- hf_repo (str) — HuggingFace Hub repository path containing the evaluation dataset.
- hf_subset (str) — Dataset subset/configuration name to use for this task.
- metrics (ListLike[Metric]) — List of metrics to compute for this task.
Configuration dataclass for a LightevalTask.
This class stores all the configuration parameters needed to define and run an evaluation task, including dataset information, prompt formatting, evaluation metrics, and generation parameters.
Dataset Configuration:
- hf_revision (str | None, optional) — Specific dataset revision to use. Defaults to None (latest).
- hf_filter (Callable[[dict], bool] | None, optional) — Filter function to apply to dataset items. Defaults to None.
- hf_avail_splits (ListLike[str], optional) — Available dataset splits. Defaults to ["train", "validation", "test"].
Evaluation Splits:
- evaluation_splits (ListLike[str], optional) — Dataset splits to use for evaluation. Defaults to ["validation"].
- few_shots_split (str | None, optional) — Split to sample few-shot examples from. Defaults to None.
- few_shots_select (str | None, optional) — Method for selecting few-shot examples. Defaults to None.
Generation Parameters:
- generation_size (int | None, optional) — Maximum token length for generated text. Defaults to None.
- generation_grammar (TextGenerationInputGrammarType | None, optional) — Grammar for structured text generation. Only available for TGI and Inference Endpoint models. Defaults to None.
- stop_sequence (ListLike[str] | None, optional) — Sequences that stop text generation. Defaults to None.
- num_samples (list[int] | None, optional) — Number of samples to generate per input. Defaults to None.
Task Configuration:
- suite (ListLike[str], optional) — Evaluation suites this task belongs to. Defaults to ["custom"].
- version (int, optional) — Task version number. Increment when the dataset or prompt changes. Defaults to 0.
- num_fewshots (int, optional) — Number of few-shot examples to include. Defaults to 0.
- truncate_fewshots (bool, optional) — Whether to truncate few-shot examples. Defaults to False.
- must_remove_duplicate_docs (bool, optional) — Whether to remove duplicate documents. Defaults to False.
Document Tracking:
- original_num_docs (int, optional) — Total number of documents in the task. Defaults to -1.
- effective_num_docs (int, optional) — Number of documents actually used in evaluation. Defaults to -1.
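For illustration, here is a minimal sketch of defining a task configuration. The dataset repository, prompt function, and metric choice below are assumptions for the example, not values taken from the library:

from lighteval.metrics.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc

# Hypothetical prompt function: converts one dataset row into a Doc.
def prompt_fn(line: dict, task_name: str) -> Doc:
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=line["choices"],
        gold_index=line["answer"],  # assumes the row stores the gold index as an int
    )

# Hypothetical task definition; repo, subset, and metric are placeholders.
task = LightevalTaskConfig(
    name="my_mcq_task",
    prompt_function=prompt_fn,
    hf_repo="my-org/my-mcq-dataset",
    hf_subset="default",
    metrics=[Metrics.loglikelihood_acc],
    evaluation_splits=["test"],
    few_shots_split="validation",
    suite=["custom"],
)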
LightevalTask
Returns a dict mapping each metric name to its aggregation function, for all metrics of the task.
download_dataset_worker
< source >( task: LightevalTask ) → DatasetDict
Worker function to download a dataset from the HuggingFace Hub.
Downloads the dataset specified in the task configuration, optionally applies a filter if configured, and returns the dataset dictionary. This method is designed to be used for parallel dataset loading.
Returns the evaluation documents.
fewshot_docs
< source >( ) → list[Doc]
Returns
list[Doc]
Documents that will be used for few shot examples. One document = one few shot example.
Returns the few-shot documents. If they are not already available, they are retrieved from the few-shot split or, failing that, from the evaluation split.
get_docs
< source >( max_samples: int | None = None ) → list[Doc]
Parameters
- max_samples (int | None, optional) — Maximum number of documents to return. If None, returns all available documents. Defaults to None.
Returns
list[Doc]
List of documents ready for evaluation with few-shot examples and generation parameters configured.
Raises
ValueError
— If no documents are available for evaluation.
Get evaluation documents with few-shot examples and generation parameters configured.
Retrieves evaluation documents, optionally limits the number of samples, shuffles them deterministically for reproducibility, and configures each document with few-shot examples and generation parameters for evaluation.
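As a hedged usage sketch, assuming task is an already-instantiated LightevalTask:

docs = task.get_docs(max_samples=100)  # cap the number of evaluation documents
print(len(docs), docs[0].query)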
get_first_possible_fewshot_splits
< source >( available_splits: list[str] | tuple[str, ...] ) → str
Returns
str
The first available few-shot split, or None if none is available.
Checks candidate few-shot split names in priority order (train splits first, then validation splits) against the available splits and returns the first match.
load_datasets
< source >( tasks: dict dataset_loading_processes: int = 1 )
Load datasets from the HuggingFace Hub for the given tasks.
PromptManager
class lighteval.tasks.prompt_manager.PromptManager
< source >( use_chat_template: bool = False tokenizer = None system_prompt: str | None = None )
Prepares a prompt from a document, using either the chat template or a plain-text format.
prepare_prompt_api
< source >( doc: Doc ) → list[dict[str, str]]
Returns
list[dict[str, str]]
List of message dictionaries for API calls
Prepares a prompt for API calls, using a chat-like format. The messages are not tokenized, because APIs usually handle tokenization themselves.
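A minimal sketch of preparing an API-style prompt; the system prompt and the Doc fields below are illustrative assumptions:

from lighteval.tasks.prompt_manager import PromptManager
from lighteval.tasks.requests import Doc

pm = PromptManager(use_chat_template=False, system_prompt="Answer concisely.")
doc = Doc(query="What is the capital of France?", choices=["Paris"], gold_index=0)
messages = pm.prepare_prompt_api(doc)  # list of message dicts ready to send to an API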
Registry
class lighteval.tasks.registry.Registry
< source >( tasks: str | pathlib.Path | None = None custom_tasks: str | pathlib.Path | module | None = None load_community: bool = False load_extended: bool = False load_multilingual: bool = False )
The Registry class is used to manage the task registry and get task classes.
create_custom_tasks_module
< source >( custom_tasks: str | pathlib.Path | module ) → ModuleType
Creates a custom task module to load tasks defined by the user in their own file.
create_task_config_dict
< source >( meta_table: list[lighteval.tasks.lighteval_task.LightevalTaskConfig] | None = None ) → Dict[str, LightevalTaskConfig]
Creates a dictionary of task configurations based on the provided meta_table.
print_all_tasks
< source >( suites: str | None = None )
Print all the tasks in the task registry.
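For illustration, a small sketch of instantiating the registry and listing the available tasks; the load_community flag is an assumption:

from lighteval.tasks.registry import Registry

registry = Registry(load_community=True)  # all constructor arguments have defaults
registry.print_all_tasks()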
Doc
class lighteval.tasks.requests.Doc
< source >( query: str choices: list gold_index: typing.Union[int, list[int]] instruction: str | None = None images: list['Image'] | None = None specific: dict | None = None unconditioned_query: str | None = None original_query: str | None = None id: str = '' task_name: str = '' fewshot_samples: list = <factory> sampling_methods: list = <factory> fewshot_sorting_class: str | None = None generation_size: int | None = None stop_sequences: list[str] | None = None use_logits: bool = False num_samples: int = 1 generation_grammar: None = None )
Parameters
- query (str) — The main query, prompt, or question to be sent to the model.
- choices (list[str]) — List of possible answer choices for the query. For multiple choice tasks, this contains all options (A, B, C, D, etc.). For generative tasks, this may be empty or contain reference answers.
- gold_index (Union[int, list[int]]) — Index or indices of the correct answer(s) in the choices list. For a single correct answer, use an int (e.g., 0 for the first choice). For multiple correct answers, use a list (e.g., [0, 2] for the first and third).
- instruction (str | None) — System prompt or task-specific instructions to guide the model. This is typically prepended to the query to set context or behavior.
- images (list[“Image”] | None) — List of PIL Image objects for multimodal tasks.
- specific (dict | None) — Task-specific information or metadata. Can contain any additional data needed for evaluation.
- unconditioned_query (Optional[str]) — Query without task-specific context for PMI normalization. Used to calculate: log P(choice | Query) - log P(choice | Unconditioned Query).
- original_query (str | None) — The query before any preprocessing or modification.
Parameters set by the task:
- id (str) — Unique identifier for this evaluation instance. Set by the task and not the user.
- task_name (str) — Name of the task or benchmark this Doc belongs to.
Few-shot learning parameters:
- fewshot_samples (list) — List of Doc objects representing few-shot examples. These examples are prepended to the main query to provide context.
- sampling_methods (list[SamplingMethod]) — List of sampling methods to use for this instance. Options: GENERATIVE, LOGPROBS, PERPLEXITY.
- fewshot_sorting_class (Optional[str]) — Class label for balanced few-shot example selection. Used to ensure diverse representation in few-shot examples.
Generation control parameters:
- generation_size (int | None) — Maximum number of tokens to generate for this instance.
- stop_sequences (list[str] | None) — List of strings that should stop generation when encountered. Used for: Controlled generation, preventing unwanted continuations.
- use_logits (bool) — Whether to return logits (raw model outputs) in addition to text. Used for: Probability analysis, confidence scoring, detailed evaluation.
- num_samples (int) — Number of different samples to generate for this instance. Used for: Diversity analysis, uncertainty estimation, ensemble methods.
- generation_grammar (None) — Grammar constraints for generation (currently not implemented). Reserved for: Future structured generation features.
Dataclass representing a single evaluation sample for a benchmark.
This class encapsulates all the information needed to evaluate a model on a single task instance. It contains the input query, expected outputs, metadata, and configuration parameters for different types of evaluation tasks.
Required Fields:
- query: The input prompt or question.
- choices: Available answer choices (for multiple choice tasks).
- gold_index: Index(es) of the correct answer(s).
Optional Fields:
- instruction: System prompt, task specific. Will be appended to the model-specific system prompt.
- images: Visual inputs for multimodal tasks.
Methods: get_golds(): Returns the correct answer(s) as strings based on gold_index. Handles both single and multiple correct answers.
Usage Examples:
Multiple Choice Question:
doc = Doc(
query="What is the capital of France?",
choices=["London", "Paris", "Berlin", "Madrid"],
gold_index=1, # Paris is the correct answer
instruction="Answer the following geography question:",
)
Generative Task:
doc = Doc(
query="Write a short story about a robot.",
choices=[], # No predefined choices for generative tasks
gold_index=0, # Not used for generative tasks
generation_size=100,
stop_sequences=["\nEnd"],
)
Few-shot Learning:
doc = Doc(
query="Translate 'Hello world' to Spanish.",
choices=["Hola mundo", "Bonjour monde", "Ciao mondo"],
gold_index=0,
fewshot_samples=[
Doc(query="Translate 'Good morning' to Spanish.",
choices=["Buenos días", "Bonjour", "Buongiorno"],
gold_index=0),
Doc(query="Translate 'Thank you' to Spanish.",
choices=["Gracias", "Merci", "Grazie"],
gold_index=0)
],
)
Multimodal Task:
doc = Doc(
query="What is shown in this image?",
choices=["A cat"],
gold_index=0,
images=[pil_image], # PIL Image object
)
Returns the gold target(s) as strings, based on gold_index.
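A small illustrative sketch of get_golds(), assuming a simple multiple-choice Doc:

doc = Doc(query="2 + 2 = ?", choices=["3", "4"], gold_index=1)
doc.get_golds()  # returns ["4"], the choice at gold_index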
Datasets
get_original_order
< source >( new_arr: list ) → list
Get the original order of the data.
Iterator that yields the dataset splits based on the split limits.
init_split_limits
< source >( num_dataset_splits ) → type
Initialises the split limits based on generation parameters. The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.
For generative tasks, self._sorting_criteria outputs:
- a boolean (whether the generation task uses logits)
- a list (the stop sequences)
- the item length (the actual size sorting factor).
In the current function, we create evaluation groups by generation parameters (logits and eos), so that samples with similar properties get batched together afterwards. The samples will then be further organised by length in each split.
class lighteval.data.GenerativeTaskDatasetNanotron
< source >( requests: list num_dataset_splits: int )
class lighteval.data.GenDistributedSampler
< source >( dataset: Dataset num_replicas: typing.Optional[int] = None rank: typing.Optional[int] = None shuffle: bool = True seed: int = 0 drop_last: bool = False )
A distributed sampler that copies the last element only when drop_last is False, so that we keep only a small amount of padding in the batches, as our samples are sorted by length.