Loading methods¶

Methods are provided to list and load datasets and metrics.

Datasets¶

datasets.list_datasets(with_community_datasets=True, with_details=False)[source]¶

List all the dataset scripts available on the HuggingFace AWS bucket.

Parameters
  • with_community_datasets (bool, optional, default True) – Include the community provided datasets.

  • with_details (bool, optional, default False) – Return the full details on the datasets instead of only the short name.
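
For illustration, a minimal sketch of listing the available dataset scripts (the output shown is illustrative):

>>> from datasets import list_datasets
>>> all_datasets = list_datasets()
>>> 'squad' in all_datasets
True
>>> # with_details=True returns full metadata objects instead of short names
>>> detailed = list_datasets(with_details=True)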

datasets.load_dataset(path: str, name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Union[Dict, List] = None, split: Optional[Union[str, datasets.splits.Split]] = None, cache_dir: Optional[str] = None, features: Optional[datasets.features.Features] = None, download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, download_mode: Optional[datasets.utils.download_manager.GenerateMode] = None, ignore_verifications: bool = False, keep_in_memory: Optional[bool] = None, save_infos: bool = False, script_version: Optional[Union[str, datasets.utils.version.Version]] = None, use_auth_token: Optional[Union[bool, str]] = None, task: Optional[Union[str, datasets.tasks.base.TaskTemplate]] = None, streaming: bool = False, **config_kwargs) → Union[datasets.dataset_dict.DatasetDict, datasets.arrow_dataset.Dataset, datasets.dataset_dict.IterableDatasetDict, datasets.iterable_dataset.IterableDataset][source]¶

Load a dataset.

This method does the following under the hood:

  1. Download the dataset loading script from path and import it in the library, if it’s not already cached inside the library.

    Processing scripts are small Python scripts that define the citation, info and format of the dataset, contain the URL to the original data files, and include the code to load examples from them.

    You can find some of the scripts here: https://github.com/huggingface/datasets/tree/master/datasets and easily upload yours to share them using the CLI huggingface-cli. You can find the complete list of datasets in the Datasets Hub at https://huggingface.co/datasets

  2. Run the dataset loading script which will:

    • Download the dataset file from the original URL (see the script) if it’s not already downloaded and cached.

    • Process and cache the dataset in typed Arrow tables.

      Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python standard types. They can be accessed directly from disk, loaded in RAM or even streamed over the web.

  3. Return a dataset built from the requested splits in split (default: all).

Parameters
  • path (str) –

    Path to the dataset processing script with the dataset builder. Can be either:

    • a local path to the processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'.

    • a dataset identifier in the HuggingFace Datasets Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'.

  • name (str, optional) – Name of the dataset configuration.

  • data_dir (str, optional) – The data_dir of the dataset configuration.

  • data_files (Dict or List, optional) – Path(s) to the source data file(s) of the dataset configuration.

  • split (Split or str) – Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.

  • cache_dir (str, optional) – Directory to read/write data. Defaults to “~/.cache/huggingface/datasets”.

  • features (Features, optional) – Set the features type to use for this dataset.

  • download_config (DownloadConfig, optional) – Specific download configuration parameters.

  • download_mode (GenerateMode, default REUSE_DATASET_IF_EXISTS) – Download/generate mode.

  • ignore_verifications (bool, default False) – Ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/…).

  • keep_in_memory (bool, default None) – Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the Enhancing performance section.

  • save_infos (bool, default False) – Save the dataset information (checksums/size/splits/…).

  • script_version (Version or str, optional) –

    Version of the dataset script to load:

    • For canonical datasets in the huggingface/datasets library like “squad”, the default version of the module is the local version of the lib. You can specify a different version than your local version of the lib (e.g. “master” or “1.2.0”), but it might cause compatibility issues.

    • For community provided datasets like “lhoestq/squad” that have their own git repository on the Datasets Hub, the default version “main” corresponds to the “main” branch. You can specify a different version than the default “main” by using a commit sha or a git tag of the dataset repository.

  • use_auth_token (str or bool, optional) – Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, will get token from “~/.huggingface”.

  • task (str or TaskTemplate, optional) – The task to prepare the dataset for during training and evaluation. Casts the dataset’s Features to standardized column names and types as detailed in datasets.tasks.

  • streaming (bool, default False) –

    If set to True, don’t download the data files. Instead, it streams the data progressively while iterating on the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case.

    Note that streaming works for datasets whose data format supports being iterated over, such as txt, csv and jsonl files. JSON files may be downloaded completely. Streaming from remote zip or gzip files is supported, but other compressed formats like rar and xz are not yet supported. The tgz format doesn’t allow streaming.

  • **config_kwargs – Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.

Returns

Dataset or DatasetDict – if split is not None: the dataset requested; if split is None: a datasets.DatasetDict with each split.

IterableDataset or IterableDatasetDict (if streaming=True) – if split is not None: the dataset requested; if split is None: a datasets.IterableDatasetDict with each split.
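
A minimal usage sketch (the dataset names, configuration and split expressions are illustrative, and streaming support depends on the dataset’s data format as noted above):

>>> from datasets import load_dataset
>>> # All splits of a canonical dataset, returned as a DatasetDict
>>> squad = load_dataset('squad')
>>> # A single split of a specific configuration ('mrpc' is one of the GLUE configurations)
>>> mrpc_train = load_dataset('glue', 'mrpc', split='train')
>>> # Splits can be sliced and combined, like in tensorflow-datasets
>>> mixed = load_dataset('squad', split='train[:100]+validation[:100]')
>>> # Stream the dataset instead of downloading it: returns an IterableDataset
>>> squad_stream = load_dataset('squad', split='train', streaming=True)
>>> first_example = next(iter(squad_stream))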

datasets.load_from_disk(dataset_path: str, fs=None, keep_in_memory: Optional[bool] = None) → Union[datasets.arrow_dataset.Dataset, datasets.dataset_dict.DatasetDict][source]¶

Loads a dataset that was previously saved using Dataset.save_to_disk() or DatasetDict.save_to_disk() from a dataset directory, or from a filesystem using either datasets.filesystems.S3FileSystem or any implementation of fsspec.spec.AbstractFileSystem.

Parameters
  • dataset_path (str) – Path (e.g. “dataset/train”) or remote URI (e.g. “s3://my-bucket/dataset/train”) of the Dataset or DatasetDict directory where the dataset will be loaded from.

  • fs (S3FileSystem or fsspec.spec.AbstractFileSystem, optional, default None) – Instance of the remote filesystem used to download the files from.

  • keep_in_memory (bool, default None) – Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the Enhancing performance section.

Returns

Dataset or DatasetDict –

  • If dataset_path is a path of a dataset directory: the dataset requested.

  • If dataset_path is a path of a dataset dict directory: a datasets.DatasetDict with each split.
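
A minimal round-trip sketch, assuming a dataset was previously written with Dataset.save_to_disk() (the local path is an example):

>>> from datasets import load_dataset, load_from_disk
>>> squad_train = load_dataset('squad', split='train')
>>> squad_train.save_to_disk('./squad_train')  # writes the Arrow data and metadata
>>> reloaded = load_from_disk('./squad_train')  # a Dataset here; a DatasetDict if one was saved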

datasets.load_dataset_builder(path: str, name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Union[Dict, List] = None, cache_dir: Optional[str] = None, features: Optional[datasets.features.Features] = None, download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, download_mode: Optional[datasets.utils.download_manager.GenerateMode] = None, script_version: Optional[Union[str, datasets.utils.version.Version]] = None, use_auth_token: Optional[Union[bool, str]] = None, **config_kwargs) → datasets.builder.DatasetBuilder[source]¶

Load a builder for the dataset. A dataset builder can be used to inspect general information that is required to build a dataset (cache directory, config, dataset info, etc.) without downloading the dataset itself.

This method will download and import the dataset loading script from path if it’s not already cached inside the library.

Parameters
  • path (str) –

    Path to the dataset processing script with the dataset builder. Can be either:

    • a local path to the processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'.

    • a dataset identifier in the HuggingFace Datasets Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'.

  • name (str, optional) – Name of the dataset configuration.

  • data_dir (str, optional) – The data_dir of the dataset configuration.

  • data_files (Dict or List, optional) – Path(s) to the source data file(s) of the dataset configuration.

  • cache_dir (str, optional) – Directory to read/write data. Defaults to “~/.cache/huggingface/datasets”.

  • features (Features, optional) – Set the features type to use for this dataset.

  • download_config (DownloadConfig, optional) – Specific download configuration parameters.

  • download_mode (GenerateMode, default REUSE_DATASET_IF_EXISTS) – Download/generate mode.

  • script_version (Version or str, optional) –

    Version of the dataset script to load:

    • For canonical datasets in the huggingface/datasets library like “squad”, the default version of the module is the local version of the lib. You can specify a different version than your local version of the lib (e.g. “master” or “1.2.0”), but it might cause compatibility issues.

    • For community provided datasets like “lhoestq/squad” that have their own git repository on the Datasets Hub, the default version “main” corresponds to the “main” branch. You can specify a different version than the default “main” by using a commit sha or a git tag of the dataset repository.

  • use_auth_token (str or bool, optional) – Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, will get token from “~/.huggingface”.

Returns

DatasetBuilder
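
A minimal sketch of inspecting a dataset before downloading it (the attributes shown follow datasets.DatasetInfo):

>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder('squad')
>>> builder.info.description  # human-readable description; no data files are downloaded
>>> builder.info.features     # the typed schema of the dataset
>>> builder.info.splits       # split names and sizes, when known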

Metrics¶

datasets.list_metrics(with_community_metrics=True, with_details=False)[source]¶

List all the metric scripts available on the HuggingFace AWS bucket.

Parameters
  • with_community_metrics (bool, optional, default True) – Include the community provided metrics.

  • with_details (bool, optional, default False) – Return the full details on the metrics instead of only the short name.
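
For illustration, a minimal sketch (the output shown is illustrative):

>>> from datasets import list_metrics
>>> metrics_list = list_metrics()
>>> 'rouge' in metrics_list
True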

datasets.load_metric(path: str, config_name: Optional[str] = None, process_id: int = 0, num_process: int = 1, cache_dir: Optional[str] = None, experiment_id: Optional[str] = None, keep_in_memory: bool = False, download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, download_mode: Optional[datasets.utils.download_manager.GenerateMode] = None, script_version: Optional[Union[str, datasets.utils.version.Version]] = None, **metric_init_kwargs) → datasets.metric.Metric[source]¶

Load a datasets.Metric.

Parameters
  • path (str) –

    Path to the metric processing script with the metric builder. Can be either:

    • a local path to the processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py'.

    • a metric identifier on the HuggingFace Datasets Hub (list all available metrics with datasets.list_metrics()), e.g. 'rouge' or 'bleu'.

  • config_name (str, optional) – Selecting a configuration for the metric (e.g. the GLUE metric has a configuration for each subset).

  • process_id (int, optional) – For distributed evaluation: id of the process.

  • num_process (int, optional) – For distributed evaluation: total number of processes.

  • cache_dir (str, optional) – Path to store the temporary predictions and references (defaults to ~/.cache/metrics/).

  • experiment_id (str, optional) – A specific experiment id. This is used if several distributed evaluations share the same file system. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).

  • keep_in_memory (bool, default False) – Whether to store the temporary results in memory.

  • download_config (DownloadConfig, optional) – Specific download configuration parameters.

  • download_mode (GenerateMode, default REUSE_DATASET_IF_EXISTS) – Download/generate mode.

  • script_version (Version or str, optional) – If specified, the module will be loaded from the datasets repository at this version. By default it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.

Returns

datasets.Metric
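
A minimal sketch of loading and computing a metric (the metric names and inputs are illustrative):

>>> from datasets import load_metric
>>> metric = load_metric('accuracy')
>>> metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
{'accuracy': 0.75}
>>> # Metrics can have configurations, e.g. the GLUE metric for the MRPC subset
>>> glue_metric = load_metric('glue', 'mrpc')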