Loading methods

Methods are provided to list and load datasets and metrics.

Datasets

nlp.list_datasets(with_community_datasets=True)[source]

List all the dataset scripts available on the HuggingFace AWS bucket.

nlp.load_dataset(path: str, name: Optional[str] = None, version: Optional[str] = None, data_dir: Optional[str] = None, data_files: Union[Dict, List] = None, split: Optional[Union[str, nlp.splits.Split]] = None, cache_dir: Optional[str] = None, features: Optional[nlp.features.Features] = None, download_config: Optional[nlp.utils.file_utils.DownloadConfig] = None, download_mode: Optional[nlp.utils.download_manager.GenerateMode] = None, ignore_verifications: bool = False, save_infos: bool = False, **config_kwargs) → Union[nlp.dataset_dict.DatasetDict, nlp.arrow_dataset.Dataset][source]

Load a dataset

This method does the following under the hood:

  1. Download the dataset loading script from path and import it into the library, if it is not already cached inside the library.

    Processing scripts are small Python scripts that define the citation, info and format of the dataset, and contain the URLs of the original data files together with the code to load examples from them.

    You can find some of these scripts here: https://github.com/huggingface/nlp/datasets and easily upload your own to share them using the CLI nlp-cli.

  2. Run the dataset loading script which will:

    • Download the dataset file from the original URL (see the script) if it’s not already downloaded and cached.

    • Process the dataset and cache it in typed Arrow tables.

      Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/Python standard types. They can be accessed directly from disk, loaded into RAM or even streamed over the web.

  3. Return a dataset built from the requested splits in split (default: all).

Parameters
  • path (str) –

    path to the dataset processing script with the dataset builder. Can be either:
    • a local path to processing script or the directory containing the script (if the script has the same name as the directory),

      e.g. './dataset/squad' or './dataset/squad/squad.py'

    • a dataset identifier on HuggingFace AWS bucket (list all available datasets and ids with nlp.list_datasets())

      e.g. 'squad', 'glue' or 'openai/webtext'

  • name (Optional str) – defining the name of the dataset configuration

  • version (Optional str) – defining the version of the dataset configuration

  • data_files (Optional str) – defining the data_files of the dataset configuration

  • data_dir (Optional str) – defining the data_dir of the dataset configuration

  • split (nlp.Split or str) – which split of the data to load. If None, will return a dict with all splits (typically nlp.Split.TRAIN and nlp.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.

  • cache_dir (Optional str) – directory to read/write data. Defaults to “~/nlp”.

  • features (Optional nlp.Features) – Set the features type to use for this dataset.

  • download_config (Optional nlp.DownloadConfig) – specific download configuration parameters.

  • download_mode (Optional nlp.GenerateMode) – select the download/generate mode. Defaults to REUSE_DATASET_IF_EXISTS.

  • ignore_verifications (bool) – Ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/…)

  • save_infos (bool) – Save the dataset information (checksums/size/splits/…)

  • **config_kwargs (Optional dict) – keyword arguments passed to the dataset configuration.

Returns

nlp.Dataset or nlp.DatasetDict

If split is not None: the requested dataset; if split is None, a nlp.DatasetDict containing each split.

Metrics

nlp.list_metrics(with_community_metrics=True)[source]

List all the metric scripts available on the HuggingFace AWS bucket.

nlp.load_metric(path: str, name: Optional[str] = None, process_id: int = 0, num_process: int = 1, data_dir: Optional[str] = None, experiment_id: Optional[str] = None, in_memory: bool = False, download_config: Optional[nlp.utils.file_utils.DownloadConfig] = None, **metric_init_kwargs) → nlp.metric.Metric[source]

Load a nlp.Metric.

Parameters
  • path (str) –

    path to the metric processing script with the metric builder. Can be either:
    • a local path to the processing script or the directory containing the script (if the script has the same name as the directory),

      e.g. './metrics/rouge' or './metrics/rouge/rouge.py'

    • a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with nlp.list_metrics())

      e.g. 'rouge' or 'bleu'

  • name (Optional str) – defining the name of the metric configuration

  • process_id (Optional int) – for distributed evaluation: id of the process

  • num_process (Optional int) – for distributed evaluation: total number of processes

  • data_dir (Optional str) – path to store the temporary predictions and references (defaults to ~/.nlp/)

  • experiment_id (Optional str) – An optional unique id for the experiment.

  • in_memory (bool) – Whether to store the temporary results in memory (default: False)

  • download_config (Optional nlp.DownloadConfig) – specific download configuration parameters.

Returns: nlp.Metric.