Loading methods
Methods are provided to list and load datasets and metrics.
Datasets

datasets.list_datasets(with_community_datasets=True, with_details=False)

List all the dataset scripts available on the HuggingFace AWS bucket.

- Parameters
  - with_community_datasets (Optional bool): include the community-provided datasets (default: True).
  - with_details (Optional bool): return the full details on the datasets instead of only the short names (default: False).
datasets.load_dataset(path: str, name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Union[Dict, List] = None, split: Optional[Union[str, datasets.splits.Split]] = None, cache_dir: Optional[str] = None, features: Optional[datasets.features.Features] = None, download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, download_mode: Optional[datasets.utils.download_manager.GenerateMode] = None, ignore_verifications: bool = False, save_infos: bool = False, script_version: Optional[Union[str, datasets.utils.version.Version]] = None, **config_kwargs) → Union[datasets.dataset_dict.DatasetDict, datasets.arrow_dataset.Dataset]

Load a dataset.

This method does the following under the hood:

1. Download and import in the library the dataset loading script from path if it's not already cached inside the library. Processing scripts are small python scripts that define the citation, info and format of the dataset, contain the URL to the original data files and the code to load examples from the original data files. You can find some of the scripts here: https://github.com/huggingface/datasets/datasets and easily upload yours to share them using the CLI datasets-cli.
2. Run the dataset loading script, which will:
   - download the dataset file from the original URL (see the script) if it's not already downloaded and cached;
   - process and cache the dataset in typed Arrow tables. Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python standard types. They can be directly accessed from drive, loaded in RAM or even streamed over the web.
3. Return a dataset built from the requested splits in split (default: all).
- Parameters
  - path (str): path to the dataset processing script with the dataset builder. Can be either:
    - a local path to the processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py';
    - a dataset identifier on the HuggingFace AWS bucket (list all available datasets and ids with datasets.list_datasets()), e.g. 'squad', 'glue' or 'openai/webtext'.
  - name (Optional str): the name of the dataset configuration.
  - data_files (Optional str): the data_files of the dataset configuration.
  - data_dir (Optional str): the data_dir of the dataset configuration.
  - split (datasets.Split or str): which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.
  - cache_dir (Optional str): directory to read/write data. Defaults to "~/datasets".
  - features (Optional datasets.Features): set the features type to use for this dataset.
  - download_config (Optional datasets.DownloadConfig): specific download configuration parameters.
  - download_mode (Optional datasets.GenerateMode): select the download/generate mode. Defaults to REUSE_DATASET_IF_EXISTS.
  - ignore_verifications (bool): ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/...).
  - save_infos (bool): save the dataset information (checksums/size/splits/...).
  - script_version (Optional Union[str, datasets.Version]): if specified, the module will be loaded from the datasets repository at this version. By default it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
  - **config_kwargs (Optional dict): additional keyword arguments passed to the dataset configuration.
- Returns
  datasets.Dataset or datasets.DatasetDict: if split is not None, the dataset requested; if split is None, a datasets.DatasetDict with each split.
datasets.load_from_disk(dataset_path: str) → Union[datasets.arrow_dataset.Dataset, datasets.dataset_dict.DatasetDict]

Load a dataset that was previously saved using dataset.save_to_disk(dataset_path).

- Parameters
  - dataset_path (str): path of a Dataset directory or a DatasetDict directory.
- Returns
  datasets.Dataset or datasets.DatasetDict: if dataset_path is the path of a dataset directory, the dataset requested; if dataset_path is the path of a dataset dict directory, a datasets.DatasetDict with each split.
Metrics

datasets.list_metrics(with_community_metrics=True, id_only=False, with_details=False)

List all the metric scripts available on the HuggingFace AWS bucket.

- Parameters
  - with_community_metrics (Optional bool): include the community-provided metrics (default: True).
  - id_only (Optional bool): return only the ids of the metrics (default: False).
  - with_details (Optional bool): return the full details on the metrics instead of only the short names (default: False).
datasets.load_metric(path: str, config_name: Optional[str] = None, process_id: int = 0, num_process: int = 1, cache_dir: Optional[str] = None, experiment_id: Optional[str] = None, keep_in_memory: bool = False, download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, download_mode: Optional[datasets.utils.download_manager.GenerateMode] = None, script_version: Optional[Union[str, datasets.utils.version.Version]] = None, **metric_init_kwargs) → datasets.metric.Metric

Load a datasets.Metric.

- Parameters
  - path (str): path to the metric processing script with the metric builder. Can be either:
    - a local path to the processing script or the directory containing the script (if the script has the same name as the directory);
    - a metric identifier on the HuggingFace AWS bucket (list all available metrics and ids with datasets.list_metrics()), e.g. 'squad' or 'glue'.
  - config_name (Optional str): selecting a configuration for the metric (e.g. the GLUE metric has a configuration for each subset).
  - process_id (Optional int): for distributed evaluation: id of the process.
  - num_process (Optional int): for distributed evaluation: total number of processes.
  - cache_dir (Optional str): path to store the temporary predictions and references (defaults to ~/.datasets/).
  - experiment_id (str): a specific experiment id. This is used if several distributed evaluations share the same file system. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
  - keep_in_memory (bool): whether to store the temporary results in memory (defaults to False).
  - download_config (Optional datasets.DownloadConfig): specific download configuration parameters.
  - download_mode (Optional datasets.GenerateMode): select the download/generate mode. Defaults to REUSE_DATASET_IF_EXISTS.
  - script_version (Optional Union[str, datasets.Version]): if specified, the module will be loaded from the datasets repository at this version. By default it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
- Returns
  datasets.Metric