Main classes
DatasetInfo
class datasets.DatasetInfo
< source >( description: str = <factory> citation: str = <factory> homepage: str = <factory> license: str = <factory> features: typing.Optional[datasets.features.features.Features] = None post_processed: typing.Optional[datasets.info.PostProcessedInfo] = None supervised_keys: typing.Optional[datasets.info.SupervisedKeysData] = None task_templates: typing.Optional[typing.List[datasets.tasks.base.TaskTemplate]] = None builder_name: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None version: typing.Union[str, datasets.utils.version.Version, NoneType] = None splits: typing.Optional[dict] = None download_checksums: typing.Optional[dict] = None download_size: typing.Optional[int] = None post_processing_size: typing.Optional[int] = None dataset_size: typing.Optional[int] = None size_in_bytes: typing.Optional[int] = None )
Parameters
- description (str) — A description of the dataset.
- citation (str) — A BibTeX citation of the dataset.
- homepage (str) — A URL to the official homepage for the dataset.
- license (str) — The dataset’s license. It can be the name of the license or a paragraph containing the terms of the license.
- features (Features, optional) — The features used to specify the dataset’s column types.
- post_processed (PostProcessedInfo, optional) — Information regarding the resources of a possible post-processing of a dataset. For example, it can contain the information of an index.
- supervised_keys (SupervisedKeysData, optional) — Specifies the input feature and the label for supervised learning if applicable for the dataset (legacy from TFDS).
- builder_name (str, optional) — The name of the GeneratorBasedBuilder subclass used to create the dataset. Usually matched to the corresponding script name. It is also the snake_case version of the dataset builder class name.
- config_name (str, optional) — The name of the configuration derived from BuilderConfig.
- version (str or Version, optional) — The version of the dataset.
- splits (dict, optional) — The mapping between split name and metadata.
- download_checksums (dict, optional) — The mapping between the URL to download the dataset’s checksums and corresponding metadata.
- download_size (int, optional) — The size of the files to download to generate the dataset, in bytes.
- post_processing_size (int, optional) — Size of the dataset in bytes after post-processing, if any.
- dataset_size (int, optional) — The combined size in bytes of the Arrow tables for all splits.
- size_in_bytes (int, optional) — The combined size in bytes of all files associated with the dataset (downloaded files + Arrow files).
- task_templates (List[TaskTemplate], optional) — The task templates to prepare the dataset for during training and evaluation. Each template casts the dataset’s Features to standardized column names and types as detailed in datasets.tasks.
- **config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Information about a dataset.
DatasetInfo
documents datasets, including its name, version, and features.
See the constructor arguments and properties for a full list.
Not all fields are known on construction and may be updated later.
from_directory
< source >( dataset_info_dir: str fs = 'deprecated' storage_options: typing.Optional[dict] = None )
Parameters
- dataset_info_dir (str) — The directory containing the metadata file. This should be the root directory of a specific dataset version.
- fs (fsspec.spec.AbstractFileSystem, optional) — Instance of the remote filesystem used to download the files from. Deprecated in 2.9.0: fs was deprecated in version 2.9.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
- storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any. Added in 2.9.0
Create DatasetInfo from the JSON file in dataset_info_dir.
This function updates all the dynamically generated fields (num_examples, hash, time of creation,…) of the DatasetInfo.
This will overwrite all previous metadata.
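Example (a minimal sketch; the directory path is illustrative and should contain a previously written dataset_info.json):
>>> from datasets import DatasetInfo
>>> ds_info = DatasetInfo.from_directory("/path/to/dataset_info_dir")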
write_to_directory
< source >( dataset_info_dir pretty_print = False fs = 'deprecated' storage_options: typing.Optional[dict] = None )
Parameters
- dataset_info_dir (str) — Destination directory.
- pretty_print (bool, defaults to False) — If True, the JSON will be pretty-printed with the indent level of 4.
- fs (fsspec.spec.AbstractFileSystem, optional) — Instance of the remote filesystem used to download the files from. Deprecated in 2.9.0: fs was deprecated in version 2.9.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
- storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any. Added in 2.9.0
Write DatasetInfo and license (if present) as JSON files to dataset_info_dir.
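Example (a short sketch; the output directory is illustrative):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.info.write_to_directory("/path/to/directory")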
Dataset
The base class Dataset implements a Dataset backed by an Apache Arrow table.
class datasets.Dataset
< source >( arrow_table: Table info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_table: typing.Optional[datasets.table.Table] = None fingerprint: typing.Optional[str] = None )
A Dataset backed by an Arrow table.
add_column
< source >( name: str column: typing.Union[list, <built-in function array>] new_fingerprint: str )
Add column to Dataset.
Added in 1.7
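Example (a minimal sketch; the new column here is made up and must contain one value per row):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> more_text = ds["text"]  # any list with len(ds) elements
>>> ds = ds.add_column(name="text_2", column=more_text)
>>> ds
Dataset({
    features: ['text', 'label', 'text_2'],
    num_rows: 1066
})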
add_item
< source >( item: dict new_fingerprint: str )
Add item to Dataset.
Added in 1.7
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> new_review = {'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'}
>>> ds = ds.add_item(new_review)
>>> ds[-1]
{'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'}
from_file
< source >( filename: str info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_filename: typing.Optional[str] = None in_memory: bool = False )
Parameters
- filename (str) — File name of the dataset.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.
- indices_filename (str, optional) — File names of the indices.
- in_memory (bool, defaults to False) — Whether to copy the data in-memory.
Instantiate a Dataset backed by an Arrow table at filename.
from_buffer
< source >( buffer: Buffer info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_buffer: typing.Optional[pyarrow.lib.Buffer] = None )
Instantiate a Dataset backed by an Arrow buffer.
from_pandas
< source >( df: DataFrame features: typing.Optional[datasets.features.features.Features] = None info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None preserve_index: typing.Optional[bool] = None )
Parameters
- df (pandas.DataFrame) — Dataframe that contains the dataset.
- features (Features, optional) — Dataset features.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.
- preserve_index (bool, optional) — Whether to store the index as an additional column in the resulting Dataset. The default of None will store the index as a column, except for RangeIndex which is stored as metadata only. Use preserve_index=True to force it to be stored as a column.
Convert pandas.DataFrame to a pyarrow.Table to create a Dataset.

The column types in the resulting Arrow Table are inferred from the dtypes of the pandas.Series in the DataFrame. In the case of non-object Series, the NumPy dtype is translated to its Arrow equivalent. In the case of object, we need to guess the datatype by looking at the Python objects in this Series.

Be aware that Series of the object dtype don’t carry enough information to always lead to a meaningful Arrow type. In the case that we cannot infer a type, e.g. because the DataFrame is of length 0 or the Series only contains None/nan objects, the type is set to null. This behavior can be avoided by constructing explicit features and passing it to this function.
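Example (a minimal sketch with a made-up DataFrame):
>>> import pandas as pd
>>> from datasets import Dataset
>>> df = pd.DataFrame({"text": ["foo", "bar"], "label": [0, 1]})
>>> ds = Dataset.from_pandas(df)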
from_dict
< source >( mapping: dict features: typing.Optional[datasets.features.features.Features] = None info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None )
Parameters
- mapping (Mapping) — Mapping of strings to Arrays or Python lists.
- features (Features, optional) — Dataset features.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.
Convert dict to a pyarrow.Table to create a Dataset.
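Example (a minimal sketch; the column names and values are made up):
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})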
from_generator
< source >( generator: typing.Callable features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False gen_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None **kwargs )
Parameters
- generator (Callable) — A generator function that yields examples.
- features (Features, optional) — Dataset features.
- cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") — Directory to cache data.
- keep_in_memory (bool, defaults to False) — Whether to copy the data in-memory.
- gen_kwargs (dict, optional) — Keyword arguments to be passed to the generator callable. You can define a sharded dataset by passing the list of shards in gen_kwargs.
- num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. Added in 2.7.0
- **kwargs (additional keyword arguments) — Keyword arguments to be passed to GeneratorConfig.
Create a Dataset from a generator.
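Example (a minimal sketch with a toy generator; real use cases would typically yield examples read from files passed via gen_kwargs):
>>> from datasets import Dataset
>>> def gen():
...     yield {"text": "Good", "label": 0}
...     yield {"text": "Bad", "label": 1}
>>> ds = Dataset.from_generator(gen)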
data
The Apache Arrow table backing the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.data
MemoryMappedTable
text: string
label: int64
----
text: [["compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .","the soundtrack alone is worth the price of admission .","rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .","beneath the film's obvious determination to shock at any cost lies considerable skill and determination , backed by sheer nerve .","bielinsky is a filmmaker of impressive talent .","so beautifully acted and directed , it's clear that washington most certainly has a new career ahead of him if he so chooses .","a visual spectacle full of stunning images and effects .","a gentle and engrossing character study .","it's enough to watch huppert scheming , with her small , intelligent eyes as steady as any noir villain , and to enjoy the perfectly pitched web of tension that chabrol spins .","an engrossing portrait of uncompromising artists trying to create something original against the backdrop of a corporate music industry that only seems to care about the bottom line .",...,"ultimately , jane learns her place as a girl , softens up and loses some of the intensity that made her an interesting character to begin with .","ah-nuld's action hero days might be over .","it's clear why deuces wild , which was shot two years ago , has been gathering dust on mgm's shelf .","feels like nothing quite so much as a middle-aged moviemaker's attempt to surround himself with beautiful , half-naked women .","when the precise nature of matthew's predicament finally comes into sharp focus , the revelation fails to justify the build-up .","this picture is murder by numbers , and as easy to be bored by as your abc's , despite a few whopping shootouts .","hilarious musical comedy though stymied by accents thick as mud .","if you are into splatter movies , then you will probably have a reasonably good time with the salton sea .","a dull , simple-minded and stereotypical tale of drugs , death and mind-numbing indifference on the inner-city streets .","the feature-length stretch . . . strains the show's concept ."]]
label: [[1,1,1,1,1,1,1,1,1,1,...,0,0,0,0,0,0,0,0,0,0]]
cache_files
The cache files containing the Apache Arrow table backing the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.cache_files
[{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-validation.arrow'}]
num_columns
Number of columns in the dataset.
num_rows
Number of rows in the dataset (same as Dataset.__len__()).
column_names
Names of the columns in the dataset.
shape
Shape of the dataset (number of rows, number of columns).
unique
< source >( column: str ) → list
Parameters
- column (str) — Column name (list all the column names with column_names).
Returns
list
List of unique elements in the given column.
Return a list of the unique elements in a column.
This is implemented in the low-level backend and as such, very fast.
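Example (the exact values and their order depend on the data):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.unique("label")
[1, 0]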
flatten
< source >( new_fingerprint: typing.Optional[str] = None max_depth = 16 ) → Dataset
Flatten the table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("squad", split="train")
>>> ds.features
{'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None),
'context': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}
>>> ds.flatten()
Dataset({
features: ['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start'],
num_rows: 87599
})
cast
< source >( features: Features batch_size: typing.Optional[int] = 1000 keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 num_proc: typing.Optional[int] = None ) → Dataset
Parameters
- features (Features) — New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g. str <-> ClassLabel you should use map() to update the Dataset.
- batch_size (int, defaults to 1000) — Number of examples per batch provided to cast. If batch_size <= 0 or batch_size == None then provide the full dataset as a single batch to cast.
- keep_in_memory (bool, defaults to False) — Whether to copy the data in-memory.
- load_from_cache_file (bool, defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map().
- num_proc (int, optional, defaults to None) — Number of processes for multiprocessing. By default it doesn’t use multiprocessing.
Returns
A copy of the dataset with casted features.
Cast the dataset to a new set of features.
Example:
>>> from datasets import load_dataset, ClassLabel, Value
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> new_features = ds.features.copy()
>>> new_features['label'] = ClassLabel(names=['bad', 'good'])
>>> new_features['text'] = Value('large_string')
>>> ds = ds.cast(new_features)
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='large_string', id=None)}
cast_column
< source >( column: str feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image] new_fingerprint: typing.Optional[str] = None )
Cast column to feature for decoding.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> ds = ds.cast_column('label', ClassLabel(names=['bad', 'good']))
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='string', id=None)}
remove_columns
< source >( column_names: typing.Union[str, typing.List[str]] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
- column_names (Union[str, List[str]]) — Name of the column(s) to remove.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset object without the columns to remove.
Remove one or several column(s) in the dataset and the features associated to them.
You can also remove a column using map() with remove_columns
but the present method
is in-place (doesn’t copy the data to a new dataset) and is thus faster.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.remove_columns('label')
Dataset({
features: ['text'],
num_rows: 1066
})
>>> ds.remove_columns(column_names=ds.column_names) # Removing all the columns returns an empty dataset with the `num_rows` property set to 0
Dataset({
features: [],
num_rows: 0
})
rename_column
< source >( original_column_name: str new_column_name: str new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
- original_column_name (str) — Name of the column to rename.
- new_column_name (str) — New name for the column.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset with a renamed column.
Rename a column in the dataset, and move the features associated to the original column under the new column name.
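Example (a short sketch; the new column name is arbitrary):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.rename_column("text", "review")
>>> ds
Dataset({
    features: ['review', 'label'],
    num_rows: 1066
})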
rename_columns
< source >( column_mapping: typing.Dict[str, str] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
- column_mapping (Dict[str, str]) — A mapping of columns to rename to their new names.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset with renamed columns.
Rename several columns in the dataset, and move the features associated to the original columns under the new column names.
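Example (a short sketch; the new column names are arbitrary):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.rename_columns({"text": "review", "label": "rating"})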
select_columns
< source >( column_names: typing.Union[str, typing.List[str]] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
- column_names (Union[str, List[str]]) — Name of the column(s) to keep.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset object which only consists of selected columns.
Select one or several column(s) in the dataset and the features associated to them.
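Example (a short sketch keeping only the text column):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.select_columns(["text"])
Dataset({
    features: ['text'],
    num_rows: 1066
})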
class_encode_column
< source >( column: str include_nulls: bool = False )
Parameters
- column (str) — The name of the column to cast (list all the column names with column_names).
- include_nulls (bool, defaults to False) — Whether to include null values in the class labels. If True, the null values will be encoded as the "None" class label. Added in 1.14.2
Casts the given column as ClassLabel and updates the table.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("boolq", split="validation")
>>> ds.features
{'answer': Value(dtype='bool', id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
>>> ds = ds.class_encode_column('answer')
>>> ds.features
{'answer': ClassLabel(num_classes=2, names=['False', 'True'], id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
__len__
Number of rows in the dataset.
__iter__
Iterate through the examples.
If a formatting is set with Dataset.set_format() rows will be returned with the selected format.
iter
< source >( batch_size: int drop_last_batch: bool = False )
Iterate through the batches of size batch_size.
If a formatting is set with Dataset.set_format() rows will be returned with the selected format.
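Example (a small sketch of batched iteration; batch contents depend on the dataset and the current format):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> for batch in ds.iter(batch_size=32):
...     print(len(batch["text"]))  # 32 for every full batch
...     break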
formatted_as
< source >( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
- type (str, optional) — Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
To be used in a with statement. Set __getitem__ return format (type and columns).
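Example (a sketch of temporarily switching the output format inside a with block; the previous format is restored on exit):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> with ds.formatted_as(type="numpy", columns=["label"]):
...     labels = ds["label"]  # numpy array inside the block
>>> ds[0]["label"]  # plain python objects again outside the block
1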
set_format
< source >( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
- type (str, optional) — Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example "numpy") is used to format batches when using __getitem__.
It’s also possible to use custom transforms for formatting using set_transform().
It is possible to call map() after calling set_format. Since map may add new columns, then the list of formatted columns gets updated. In this case, if you apply map on a dataset to add a new column, then this column will be formatted as:
new formatted columns = (all columns - previously unformatted columns)
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.set_format(type='numpy', columns=['text', 'label'])
>>> ds.format
{'type': 'numpy',
'format_kwargs': {},
'columns': ['text', 'label'],
'output_all_columns': False}
set_transform
< source >( transform: typing.Optional[typing.Callable] columns: typing.Optional[typing.List] = None output_all_columns: bool = False )
Parameters
- transform (Callable, optional) — User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
- columns (List[str], optional) — Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.
As set_format(), this can be reset using reset_format().
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
>>> def encode(batch):
... return tokenizer(batch['text'], padding=True, truncation=True, return_tensors='pt')
>>> ds.set_transform(encode)
>>> ds[0]
{'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1]),
'input_ids': tensor([ 101, 29353, 2135, 15102, 1996, 9428, 20868, 2890, 8663, 6895,
20470, 2571, 3663, 2090, 4603, 3017, 3008, 1998, 2037, 24211,
5637, 1998, 11690, 2336, 1012, 102]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0])}
reset_format
Reset __getitem__ return format to python objects and all columns.
Same as self.set_format().
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.set_format(type='numpy', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> ds.format
{'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'numpy'}
>>> ds.reset_format()
>>> ds.format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
with_format
< source >( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
- type (str, optional) — Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example "numpy") is used to format batches when using __getitem__.
It’s also possible to use custom transforms for formatting using with_transform().
Contrary to set_format(), with_format returns a new Dataset object.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
>>> ds = ds.with_format(type='tensorflow', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> ds.format
{'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'tensorflow'}
with_transform
< source >( transform: typing.Optional[typing.Callable] columns: typing.Optional[typing.List] = None output_all_columns: bool = False )
Parameters
- transform (Callable, optional) — User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
- columns (List[str], optional) — Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.
As set_format(), this can be reset using reset_format().
Contrary to set_transform(), with_transform returns a new Dataset object.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> def encode(example):
... return tokenizer(example["text"], padding=True, truncation=True, return_tensors='pt')
>>> ds = ds.with_transform(encode)
>>> ds[0]
{'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1]),
'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617,
1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105,
1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0])}
__getitem__
Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).
cleanup_cache_files
Clean up all cache files in the dataset cache directory, except the currently used cache file if there is one.
Be careful when running this command that no other process is currently using other cache files.
map
< source >( function: typing.Optional[typing.Callable] = None with_indices: bool = False with_rank: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 drop_last_batch: bool = False remove_columns: typing.Union[str, typing.List[str], NoneType] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 features: typing.Optional[datasets.features.features.Features] = None disable_nullable: bool = False fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None suffix_template: str = '_{rank:05d}_of_{num_proc:05d}' new_fingerprint: typing.Optional[str] = None desc: typing.Optional[str] = None )
Parameters
- function (Callable) — Function with one of the following signatures:
  - function(example: Dict[str, Any]) -> Dict[str, Any] if batched=False and with_indices=False and with_rank=False
  - function(example: Dict[str, Any], *extra_args) -> Dict[str, Any] if batched=False and with_indices=True and/or with_rank=True (one extra arg for each)
  - function(batch: Dict[str, List]) -> Dict[str, List] if batched=True and with_indices=False and with_rank=False
  - function(batch: Dict[str, List], *extra_args) -> Dict[str, List] if batched=True and with_indices=True and/or with_rank=True (one extra arg for each)

  For advanced usage, the function can also return a pyarrow.Table. Moreover if your function returns nothing (None), then map will run your function and return the dataset unchanged. If no function is provided, default to identity function: lambda x: x.
- with_indices (bool, defaults to False) — Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
- with_rank (bool, defaults to False) — Provide process rank to function. Note that in this case the signature of function should be def function(example[, idx], rank): ....
- input_columns (Optional[Union[str, List[str]]], defaults to None) — The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
- batched (bool, defaults to False) — Provide batch of examples to function.
- batch_size (int, optional, defaults to 1000) — Number of examples per batch provided to function if batched=True. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
- drop_last_batch (bool, defaults to False) — Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.
- remove_columns (Optional[Union[str, List[str]]], defaults to None) — Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of function, i.e. if function is adding columns with names in remove_columns, these columns will be kept.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- features (Optional[datasets.Features], defaults to None) — Use a specific Features to store the cache file instead of the automatically generated one.
- disable_nullable (bool, defaults to False) — Disallow null values in the table.
- fn_kwargs (Dict, optional, defaults to None) — Keyword arguments to be passed to function.
- num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached shards are loaded sequentially.
- suffix_template (str) — If cache_file_name is specified, then this suffix will be added at the end of the base name of each. Defaults to "_{rank:05d}_of_{num_proc:05d}". For example, if cache_file_name is "processed.arrow", then for rank=1 and num_proc=4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
- desc (str, optional, defaults to None) — Meaningful description to be displayed alongside with the progress bar while mapping examples.
Apply a function to all the examples in the table (individually or in batches) and update the table. If your function returns a column that already exists, then it overwrites it.
You can specify whether the function should be batched or not with the batched parameter:
- If batched is False, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g. {"text": "Hello there !"}.
- If batched is True and batch_size is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}.
- If batched is True and batch_size is n > 1, then the function takes a batch of n examples as input and can return a batch with n examples, or with an arbitrary number of examples. Note that the last batch may have less than n examples. A batch is a dictionary, e.g. a batch of n examples is {"text": ["Hello there !"] * n}.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> def add_prefix(example):
... example["text"] = "Review: " + example["text"]
... return example
>>> ds = ds.map(add_prefix)
>>> ds[0:3]["text"]
['Review: compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .',
'Review: the soundtrack alone is worth the price of admission .',
'Review: rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .']
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
# set number of processors
>>> ds = ds.map(add_prefix, num_proc=4)
filter
< source >( function: typing.Optional[typing.Callable] = None with_indices = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None suffix_template: str = '_{rank:05d}_of_{num_proc:05d}' new_fingerprint: typing.Optional[str] = None desc: typing.Optional[str] = None )
Parameters
- function (Callable) — Callable with one of the following signatures:
  - function(example: Dict[str, Any]) -> bool if with_indices=False, batched=False
  - function(example: Dict[str, Any], indices: int) -> bool if with_indices=True, batched=False
  - function(example: Dict[str, List]) -> List[bool] if with_indices=False, batched=True
  - function(example: Dict[str, List], indices: List[int]) -> List[bool] if with_indices=True, batched=True

  If no function is provided, defaults to an always True function: lambda x: True.
- with_indices (bool, defaults to False) — Provide example indices to function. Note that in this case the signature of function should be def function(example, idx): ....
- input_columns (str or List[str], optional) — The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
- batched (bool, defaults to False) — Provide batch of examples to function.
- batch_size (int, optional, defaults to 1000) — Number of examples per batch provided to function if batched = True. If batched = False, one example per batch is passed to function. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- fn_kwargs (dict, optional) — Keyword arguments to be passed to function.
- num_proc (int, optional) — Number of processes for multiprocessing. By default it doesn’t use multiprocessing.
- suffix_template (str) — If cache_file_name is specified, then this suffix will be added at the end of the base name of each. For example, if cache_file_name is "processed.arrow", then for rank = 1 and num_proc = 4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix (default _{rank:05d}_of_{num_proc:05d}).
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
- desc (str, optional, defaults to None) — Meaningful description to be displayed alongside with the progress bar while filtering examples.
Apply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function.
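Example (a short sketch keeping only positive reviews; a batched variant returning a list of booleans works the same way):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.filter(lambda example: example["label"] == 1)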
select
< source >( indices: typing.Iterable keep_in_memory: bool = False indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters
- indices (range, list, iterable, ndarray or Series) — Range, list or 1D-array of integer indices for indexing. If the indices correspond to a contiguous range, the Arrow table is simply sliced. However passing a list of indices that are not contiguous creates indices mapping, which is much less efficient, but still faster than recreating an Arrow table made of the requested rows.
- keep_in_memory (bool, defaults to False) — Keep the indices mapping in memory instead of writing it to a cache file.
- indices_cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new dataset with rows selected following the list/array of indices.
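Example (a short sketch selecting the first four rows; a contiguous range keeps the fast path described above):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> small = ds.select(range(4))
>>> len(small)
4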
sort
< source >( column_names: typing.Union[str, typing.Sequence[str]] reverse: typing.Union[bool, typing.Sequence[bool]] = False kind = 'deprecated' null_placement: str = 'at_end' keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters
- column_names (Union[str, Sequence[str]]) — Column name(s) to sort by.
- reverse (Union[bool, Sequence[bool]], defaults to False) — If True, sort by descending order rather than ascending. If a single bool is provided, the value is applied to the sorting of all column names. Otherwise a list of bools with the same length and order as column_names must be provided.
- kind (str, optional) — Pandas algorithm for sorting selected in {quicksort, mergesort, heapsort, stable}. The default is quicksort. Note that both stable and mergesort use timsort under the covers and, in general, the actual implementation will vary with data type. The mergesort option is retained for backwards compatibility. Deprecated in 2.10.0: kind was deprecated in version 2.10.0 and will be removed in 3.0.0.
- null_placement (str, defaults to at_end) — Put None values at the beginning if at_start or first, or at the end if at_end or last. Added in 1.14.2
- keep_in_memory (bool, defaults to False) — Keep the sorted indices in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the sorted indices can be identified, use it instead of recomputing.
- indices_cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the sorted indices instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. Higher value gives smaller cache files, lower value consume less temporary memory.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new dataset sorted according to a single or multiple columns.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='validation')
>>> ds['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> sorted_ds = ds.sort('label')
>>> sorted_ds['label'][:10]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> another_sorted_ds = ds.sort(['label', 'text'], reverse=[True, False])
>>> another_sorted_ds['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
shuffle
< source >( seed: typing.Optional[int] = None generator: typing.Optional[numpy.random._generator.Generator] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters
- seed (int, optional) — A seed to initialize the default BitGenerator if generator=None. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
- generator (numpy.random.Generator, optional) — Numpy random Generator to use to compute the permutation of the dataset rows. If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
- keep_in_memory (bool, defaults to False) — Keep the shuffled indices in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the shuffled indices can be identified, use it instead of recomputing.
- indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the shuffled indices instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new Dataset where the rows are shuffled.
Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy’s default random generator (PCG64).
Shuffling takes the list of indices [0:len(my_dataset)]
and shuffles it to create an indices mapping.
However as soon as your Dataset has an indices mapping, the speed can become 10x slower.
This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren’t reading contiguous chunks of data anymore.
To restore the speed, you’d need to rewrite the entire dataset on your disk again using Dataset.flatten_indices(), which removes the indices mapping.
This may take a lot of time depending on the size of your dataset though:
my_dataset[0] # fast
my_dataset = my_dataset.shuffle(seed=42)
my_dataset[0] # up to 10x slower
my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data
my_dataset[0] # fast again
In this case, we recommend switching to an IterableDataset and leveraging its fast approximate shuffling method IterableDataset.shuffle().
It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal:
my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=128)
for example in my_iterable_dataset:  # fast
    pass
shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100)
for example in shuffled_iterable_dataset:  # as fast as before
    pass
train_test_split
< source >( test_size: typing.Union[float, int, NoneType] = None train_size: typing.Union[float, int, NoneType] = None shuffle: bool = True stratify_by_column: typing.Optional[str] = None seed: typing.Optional[int] = None generator: typing.Optional[numpy.random._generator.Generator] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None train_indices_cache_file_name: typing.Optional[str] = None test_indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 train_new_fingerprint: typing.Optional[str] = None test_new_fingerprint: typing.Optional[str] = None )
Parameters
-
test_size (
numpy.random.Generator
, optional) — Size of the test split Iffloat
, should be between0.0
and1.0
and represent the proportion of the dataset to include in the test split. Ifint
, represents the absolute number of test samples. IfNone
, the value is set to the complement of the train size. Iftrain_size
is alsoNone
, it will be set to0.25
. -
train_size (
numpy.random.Generator
, optional) — Size of the train split Iffloat
, should be between0.0
and1.0
and represent the proportion of the dataset to include in the train split. Ifint
, represents the absolute number of train samples. IfNone
, the value is automatically set to the complement of the test size. -
shuffle (
bool
, optional, defaults toTrue
) — Whether or not to shuffle the data before splitting. -
stratify_by_column (
str
, optional, defaults toNone
) — The column name of labels to be used to perform stratified split of data. -
seed (
int
, optional) — A seed to initialize the default BitGenerator ifgenerator=None
. IfNone
, then fresh, unpredictable entropy will be pulled from the OS. If anint
orarray_like[ints]
is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state. -
generator (
numpy.random.Generator
, optional) — Numpy random Generator to use to compute the permutation of the dataset rows. Ifgenerator=None
(default), usesnp.random.default_rng
(the default BitGenerator (PCG64) of NumPy). -
keep_in_memory (
bool
, defaults toFalse
) — Keep the splits indices in memory instead of writing it to a cache file. -
load_from_cache_file (
Optional[bool]
, defaults toTrue
if caching is enabled) — If a cache file storing the splits indices can be identified, use it instead of recomputing. -
train_cache_file_name (
str
, optional) — Provide the name of a path for the cache file. It is used to store the train split indices instead of the automatically generated cache file name. -
test_cache_file_name (
str
, optional) — Provide the name of a path for the cache file. It is used to store the test split indices instead of the automatically generated cache file name. -
writer_batch_size (
int
, defaults to1000
) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while runningmap
. -
train_new_fingerprint (
str
, optional, defaults toNone
) — The new fingerprint of the train set after transform. IfNone
, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments -
test_new_fingerprint (
str
, optional, defaults toNone
) — The new fingerprint of the test set after transform. IfNone
, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments
Return a dictionary (datasets.DatasetDict) with two random train and test subsets (train and test Dataset splits).
Splits are created from the dataset according to test_size, train_size and shuffle.
This method is similar to scikit-learn train_test_split.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.train_test_split(test_size=0.2, shuffle=True)
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 852
})
test: Dataset({
features: ['text', 'label'],
num_rows: 214
})
})
# set a seed
>>> ds = ds.train_test_split(test_size=0.2, seed=42)
# stratified split
>>> ds = load_dataset("imdb",split="train")
Dataset({
features: ['text', 'label'],
num_rows: 25000
})
>>> ds = ds.train_test_split(test_size=0.2, stratify_by_column="label")
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 20000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 5000
})
})
shard
< source >( num_shards: int index: int contiguous: bool = False keep_in_memory: bool = False indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 )
Parameters
- num_shards (int) — How many shards to split the dataset into.
- index (int) — Which shard to select and return.
- contiguous (bool, defaults to False) — Whether to select contiguous blocks of indices for shards.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the indices of each shard instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
Return the index-nth shard from dataset split into num_shards pieces.
This shards deterministically. dset.shard(n, i) will contain all elements of dset whose index mod n = i.
dset.shard(n, i, contiguous=True) will instead split dset into contiguous chunks, so it can be easily concatenated back together after processing. If len(dset) % n == l, then the first l shards will have length (len(dset) // n) + 1, and the remaining shards will have length (len(dset) // n). datasets.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) will return a dataset with the same order as the original.
Be sure to shard before using any randomizing operator (such as shuffle).
It is best if the shard operator is used early in the dataset pipeline.
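Example (a short sketch; the row count shown corresponds to the 1066-row validation split used in the other examples):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.shard(num_shards=4, index=0)
Dataset({
    features: ['text', 'label'],
    num_rows: 267
})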
to_tf_dataset
< source >( batch_size: typing.Optional[int] = None columns: typing.Union[str, typing.List[str], NoneType] = None shuffle: bool = False collate_fn: typing.Optional[typing.Callable] = None drop_remainder: bool = False collate_fn_args: typing.Union[typing.Dict[str, typing.Any], NoneType] = None label_cols: typing.Union[str, typing.List[str], NoneType] = None prefetch: bool = True num_workers: int = 0 num_test_batches: int = 20 )
Parameters
- batch_size (int, optional) — Size of batches to load from the dataset. Defaults to None, which implies that the dataset won’t be batched, but the returned dataset can be batched later with tf_dataset.batch(batch_size).
- columns (List[str] or str, optional) — Dataset column(s) to load in the tf.data.Dataset. Column names that are created by the collate_fn and that do not exist in the original dataset can be used.
- shuffle (bool, defaults to False) — Shuffle the dataset order when loading. Recommended True for training, False for validation/evaluation.
- drop_remainder (bool, defaults to False) — Drop the last incomplete batch when loading. Ensures that all batches yielded by the dataset will have the same length on the batch dimension.
- collate_fn (Callable, optional) — A function or callable object (such as a DataCollator) that will collate lists of samples into a batch.
- collate_fn_args (Dict, optional) — An optional dict of keyword arguments to be passed to the collate_fn.
- label_cols (List[str] or str, defaults to None) — Dataset column(s) to load as labels. Note that many models compute loss internally rather than letting Keras do it, in which case passing the labels here is optional, as long as they’re in the input columns.
- prefetch (bool, defaults to True) — Whether to run the dataloader in a separate thread and maintain a small buffer of batches for training. Improves performance by allowing data to be loaded in the background while the model is training.
- num_workers (int, defaults to 0) — Number of workers to use for loading the dataset. Only supported on Python versions >= 3.8.
- num_test_batches (int, defaults to 20) — Number of batches to use to infer the output signature of the dataset. The higher this number, the more accurate the signature will be, but the longer it will take to create the dataset.
Create a tf.data.Dataset
from the underlying Dataset. This tf.data.Dataset
will load and collate batches from
the Dataset, and is suitable for passing to methods like model.fit()
or model.predict()
. The dataset will yield
dicts
for both inputs and labels unless the dict
would contain only a single key, in which case a raw
tf.Tensor
is yielded instead.
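Example (a minimal sketch; tokenized_ds, data_collator and model are assumptions: a dataset that already contains "input_ids", "attention_mask" and "label" columns, a collator such as a Transformers DataCollatorWithPadding, and a compiled Keras model):
>>> tf_ds = tokenized_ds.to_tf_dataset(
...     columns=["input_ids", "attention_mask"],
...     label_cols=["label"],
...     batch_size=16,
...     shuffle=True,
...     collate_fn=data_collator,
... )
>>> model.fit(tf_ds, epochs=3)  # the tf.data.Dataset can be passed directly to Keras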
push_to_hub
< source >( repo_id: str config_name: str = 'default' split: typing.Optional[str] = None private: typing.Optional[bool] = False token: typing.Optional[str] = None branch: typing.Optional[str] = None max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[int] = None embed_external_files: bool = True )
Parameters
-
repo_id (
str
) — The ID of the repository to push to in the following format:<user>/<dataset_name>
or<org>/<dataset_name>
. Also accepts<dataset_name>
, which will default to the namespace of the logged-in user. -
config_name (
str
, defaults to “default”) — The configuration name of the dataset. -
split (
str
, optional) — The name of the split that will be given to that dataset. Defaults toself.split
. -
private (
bool
, optional, defaults toFalse
) — Whether the dataset repository should be set to private or not. Only affects repository creation: a repository that already exists will not be affected by that parameter. -
token (
str
, optional) — An optional authentication token for the Hugging Face Hub. If no token is passed, will default to the token saved locally when logging in withhuggingface-cli login
. Will raise an error if no token is passed and the user is not logged-in. -
branch (
str
, optional) — The git branch on which to push the dataset. This defaults to the default branch as specified in your repository, which defaults to"main"
. -
max_shard_size (
int
orstr
, optional, defaults to"500MB"
) — The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like"5MB"
). -
num_shards (
int
, optional) — Number of shards to write. By default the number of shards depends onmax_shard_size
.Added in 2.8.0
-
embed_external_files (
bool
, defaults toTrue
) — Whether to embed file bytes in the shards. In particular, this will remove local path information for Image and Audio fields and embed the file content in the Parquet files before the push.
Pushes the dataset to the hub as a Parquet dataset. The dataset is pushed using HTTP requests and does not require git or git-lfs to be installed.
The resulting Parquet files are self-contained by default. If your dataset contains Image or Audio
data, the Parquet files will store the bytes of your images or audio files.
You can disable this by setting embed_external_files
to False
.
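Example (a minimal sketch; the repository name is a placeholder and you need to be logged in, e.g. with huggingface-cli login):
>>> ds.push_to_hub("my-username/my-dataset")
>>> ds.push_to_hub("my-username/my-dataset", private=True, max_shard_size="200MB")  # private repo, smaller Parquet shards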
save_to_disk
< source >( dataset_path: typing.Union[str, bytes, os.PathLike] fs = 'deprecated' max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[int] = None num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None )
Parameters
-
dataset_path (
str
) — Path (e.g.dataset/train
) or remote URI (e.g.s3://my-bucket/dataset/train
) of the dataset directory where the dataset will be saved to. -
fs (
fsspec.spec.AbstractFileSystem
, optional) — Instance of the remote filesystem where the dataset will be saved to.Deprecated in 2.8.0
fs
was deprecated in version 2.8.0 and will be removed in 3.0.0. Please usestorage_options
instead, e.g.storage_options=fs.storage_options
-
max_shard_size (
int
orstr
, optional, defaults to"500MB"
) — The maximum size of the dataset shards to be saved locally. If expressed as a string, needs to be digits followed by a unit (like "50MB"
). -
num_shards (
int
, optional) — Number of shards to write. By default the number of shards depends onmax_shard_size
andnum_proc
.Added in 2.8.0
-
num_proc (
int
, optional) — Number of processes to use when saving the dataset shards. Multiprocessing is disabled by default. Added in 2.8.0
-
storage_options (
dict
, optional) — Key/value pairs to be passed on to the file-system backend, if any.Added in 2.8.0
Saves a dataset to a dataset directory, or in a filesystem using any implementation of fsspec.spec.AbstractFileSystem
.
All Image() and Audio() data are stored in the Arrow files. If you want to store paths or URLs instead, use the Value(“string”) type.
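Example (a minimal sketch; the local path, bucket name and credentials are placeholders):
>>> ds.save_to_disk("path/to/dataset/directory")
>>> ds.save_to_disk("path/to/dataset/directory", max_shard_size="1GB")  # cap the size of each Arrow shard
>>> ds.save_to_disk("s3://my-bucket/datasets/train", storage_options={"key": "<aws_key>", "secret": "<aws_secret>"})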
load_from_disk
< source >( dataset_path: str fs = 'deprecated' keep_in_memory: typing.Optional[bool] = None storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters
-
dataset_path (
str
) — Path (e.g."dataset/train"
) or remote URI (e.g. "s3://my-bucket/dataset/train"
) of the dataset directory where the dataset will be loaded from. -
fs (
fsspec.spec.AbstractFileSystem
, optional) — Instance of the remote filesystem from which the dataset will be loaded. Deprecated in 2.8.0
fs
was deprecated in version 2.8.0 and will be removed in 3.0.0. Please usestorage_options
instead, e.g.storage_options=fs.storage_options
-
keep_in_memory (
bool
, defaults toNone
) — Whether to copy the dataset in-memory. IfNone
, the dataset will not be copied in-memory unless explicitly enabled by settingdatasets.config.IN_MEMORY_MAX_SIZE
to nonzero. See more details in the improve performance section. -
storage_options (
dict
, optional) — Key/value pairs to be passed on to the file-system backend, if any.Added in 2.8.0
Returns
- If
dataset_path
is a path of a dataset directory, the dataset requested. - If
dataset_path
is a path of a dataset dict directory, adatasets.DatasetDict
with each split.
Loads a dataset that was previously saved using save_to_disk
from a dataset directory, or from a
filesystem using any implementation of fsspec.spec.AbstractFileSystem
.
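Example (a minimal sketch; the paths are placeholders and point to a directory created with save_to_disk):
>>> from datasets import Dataset
>>> ds = Dataset.load_from_disk("path/to/dataset/directory")
>>> ds = Dataset.load_from_disk("s3://my-bucket/datasets/train", storage_options={"key": "<aws_key>", "secret": "<aws_secret>"})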
flatten_indices
< source >( keep_in_memory: bool = False cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 features: typing.Optional[datasets.features.features.Features] = None disable_nullable: bool = False num_proc: typing.Optional[int] = None new_fingerprint: typing.Optional[str] = None )
Parameters
-
keep_in_memory (
bool
, defaults toFalse
) — Keep the dataset in memory instead of writing it to a cache file. -
cache_file_name (
str
, optional, defaultNone
) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. -
writer_batch_size (
int
, defaults to1000
) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while runningmap
. -
features (
Optional[datasets.Features]
, defaults toNone
) — Use a specific Features to store the cache file instead of the automatically generated one. -
disable_nullable (
bool
, defaults toFalse
) — Allow null values in the table. -
num_proc (
int
, optional, defaultNone
) — Max number of processes to use when generating the cache. Already cached shards are loaded sequentially. -
new_fingerprint (
str
, optional, defaults toNone
) — The new fingerprint of the dataset after transform. IfNone
, the new fingerprint is computed using a hash of the previous fingerprint and the transform arguments.
Create and cache a new Dataset by flattening the indices mapping.
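Example (a minimal sketch; ds is assumed to be an already loaded Dataset):
>>> subset = ds.select(range(0, len(ds), 2))  # select() keeps an indices mapping over the original data
>>> subset = subset.flatten_indices()  # rewrites the selected rows as a new contiguous Arrow table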
to_csv
< source >(
path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO]
batch_size: typing.Optional[int] = None
num_proc: typing.Optional[int] = None
**to_csv_kwargs
)
→
int
Parameters
-
path_or_buf (
PathLike
orFileOrBuffer
) — Either a path to a file or a BinaryIO. -
batch_size (
int
, optional) — Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
. -
num_proc (
int
, optional) — Number of processes for multiprocessing. By default it doesn’t use multiprocessing.batch_size
in this case defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
but feel free to make it 5x or 10x of the default value if you have sufficient compute power. -
**to_csv_kwargs (additional keyword arguments) —
Parameters to pass to pandas’s
pandas.DataFrame.to_csv
.Changed in 2.10.0
Now,
index
defaults toFalse
if not specified.If you would like to write the index, pass
index=True
and also set a name for the index column by passingindex_label
.
Returns
int
The number of characters or bytes written.
Exports the dataset to CSV.
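Example (a minimal sketch; the path is a placeholder):
>>> ds.to_csv("path/to/dataset.csv")
>>> ds.to_csv("path/to/dataset.csv", num_proc=4)  # extra keyword arguments are forwarded to pandas.DataFrame.to_csv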
to_pandas
< source >( batch_size: typing.Optional[int] = None batched: bool = False )
Parameters
-
batched (
bool
) — Set toTrue
to return a generator that yields the dataset as batches ofbatch_size
rows. Defaults toFalse
(returns the whole dataset at once). -
batch_size (
int
, optional) — The size (number of rows) of the batches ifbatched
isTrue
. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
.
Returns the dataset as a pandas.DataFrame
. Can also return a generator for large datasets.
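Example (a minimal sketch):
>>> df = ds.to_pandas()  # the whole dataset as a single DataFrame
>>> for df_batch in ds.to_pandas(batched=True, batch_size=1000):  # generator of DataFrames for large datasets
...     pass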
to_dict
< source >( batch_size: typing.Optional[int] = None batched = 'deprecated' )
Parameters
-
batched (
bool
) — Set toTrue
to return a generator that yields the dataset as batches ofbatch_size
rows. Defaults toFalse
(returns the whole dataset at once). Deprecated in 2.11.0
Use
.iter(batch_size=batch_size)
followed by.to_dict()
on the individual batches instead. -
batch_size (
int
, optional) — The size (number of rows) of the batches ifbatched
isTrue
. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
.
Returns the dataset as a Python dict. Can also return a generator for large datasets.
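Example (a minimal sketch; the "text" column is an assumption):
>>> d = ds.to_dict()  # {"column_name": [value, value, ...], ...}
>>> first_text = d["text"][0]  # columnar layout: one list of values per column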
to_json
< source >(
path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO]
batch_size: typing.Optional[int] = None
num_proc: typing.Optional[int] = None
**to_json_kwargs
)
→
int
Parameters
-
path_or_buf (
PathLike
orFileOrBuffer
) — Either a path to a file or a BinaryIO. -
batch_size (
int
, optional) — Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
. -
num_proc (
int
, optional) — Number of processes for multiprocessing. By default it doesn’t use multiprocessing.batch_size
in this case defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
but feel free to make it 5x or 10x of the default value if you have sufficient compute power. -
**to_json_kwargs (additional keyword arguments) —
Parameters to pass to pandas’s
pandas.DataFrame.to_json
.Changed in 2.11.0
Now,
index
defaults toFalse
iforient
is"split"
or"table"
.If you would like to write the index, pass
index=True
.
Returns
int
The number of characters or bytes written.
Export the dataset to JSON Lines or JSON.
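Example (a minimal sketch; the paths are placeholders and the defaults produce JSON Lines):
>>> ds.to_json("path/to/dataset.jsonl")  # one JSON object per line
>>> ds.to_json("path/to/dataset.json", lines=False)  # a single JSON array instead; extra kwargs go to pandas.DataFrame.to_json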
to_parquet
< source >(
path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO]
batch_size: typing.Optional[int] = None
**parquet_writer_kwargs
)
→
int
Parameters
-
path_or_buf (
PathLike
orFileOrBuffer
) — Either a path to a file or a BinaryIO. -
batch_size (
int
, optional) — Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
. -
**parquet_writer_kwargs (additional keyword arguments) —
Parameters to pass to PyArrow’s
pyarrow.parquet.ParquetWriter
.
Returns
int
The number of characters or bytes written.
Exports the dataset to Parquet.
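Example (a minimal sketch; the path is a placeholder):
>>> ds.to_parquet("path/to/dataset.parquet")
>>> ds.to_parquet("path/to/dataset.parquet", compression="zstd")  # extra keyword arguments are forwarded to pyarrow.parquet.ParquetWriter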
to_sql
< source >(
name: str
con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')]
batch_size: typing.Optional[int] = None
**sql_writer_kwargs
)
→
int
Parameters
-
name (
str
) — Name of SQL table. -
con (
str
orsqlite3.Connection
orsqlalchemy.engine.Connection
or sqlalchemy.engine.Engine
) — A URI string or a SQLite3/SQLAlchemy connection object used to write to a database. -
batch_size (
int
, optional) — Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE
. -
**sql_writer_kwargs (additional keyword arguments) —
Parameters to pass to pandas’s
pandas.DataFrame.to_sql
.Changed in 2.11.0
Now,
index
defaults toFalse
if not specified.If you would like to write the index, pass
index=True
and also set a name for the index column by passingindex_label
.
Returns
int
The number of records written.
Exports the dataset to a SQL database.
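Example (a minimal sketch using the standard-library sqlite3; the table and file names are placeholders):
>>> import sqlite3
>>> con = sqlite3.connect("my_dataset.db")
>>> ds.to_sql("my_table", con)
>>> ds.to_sql("my_table", "sqlite:///my_dataset.db")  # a URI string also works (this form requires SQLAlchemy)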
to_iterable_dataset
< source >( num_shards: typing.Optional[int] = 1 )
Parameters
-
num_shards (
int
, defaults to 1
) — Number of shards to define when instantiating the iterable dataset. This is especially useful for big datasets to be able to shuffle properly, and also to enable fast parallel loading using a PyTorch DataLoader or in distributed setups for example. Shards are defined using datasets.Dataset.shard(): it simply slices the data without writing anything on disk.
Get a datasets.IterableDataset from a map-style datasets.Dataset. This is equivalent to loading a dataset in streaming mode with datasets.load_dataset(), but much faster since the data is streamed from local files.
Contrary to map-style datasets, iterable datasets are lazy and can only be iterated over (e.g. using a for loop). Since they are read sequentially in training loops, iterable datasets are much faster than map-style datasets. All the transformations applied to iterable datasets like filtering or processing are done on-the-fly when you start iterating over the dataset.
Still, it is possible to shuffle an iterable dataset using datasets.IterableDataset.shuffle(). This is a fast approximate shuffling that works best if you have multiple shards and if you specify a buffer size that is big enough.
To get the best speed performance, make sure your dataset doesn’t have an indices mapping.
If this is the case, the data are not read contiguously, which can sometimes be slow.
You can use ds = ds.flatten_indices()
to write your dataset in contiguous chunks of data and have optimal speed before switching to an iterable dataset.
Example:
With lazy filtering and processing:
>>> ids = ds.to_iterable_dataset()
>>> ids = ids.filter(filter_fn).map(process_fn) # will filter and process on-the-fly when you start iterating over the iterable dataset
>>> for example in ids:
... pass
With sharding to enable efficient shuffling:
>>> ids = ds.to_iterable_dataset(num_shards=64) # the dataset is split into 64 shards to be iterated over
>>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer for fast approximate shuffling when you start iterating
>>> for example in ids:
... pass
With a PyTorch DataLoader:
>>> import torch
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.filter(filter_fn).map(process_fn)
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards to each worker to load, filter and process when you start iterating
>>> for example in ids:
... pass
With a PyTorch DataLoader and shuffling:
>>> import torch
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from the shuffled list of shards to each worker when you start iterating
>>> for example in ids:
... pass
In a distributed setup like PyTorch DDP with a PyTorch DataLoader and shuffling
>>> from datasets.distributed import split_dataset_by_node
>>> ids = ds.to_iterable_dataset(num_shards=512)
>>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> ids = split_dataset_by_node(ids, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
>>> for example in ids:
... pass
With shuffling and multiple epochs:
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.shuffle(buffer_size=10_000, seed=42) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> for epoch in range(n_epochs):
... ids.set_epoch(epoch) # will use effective_seed = seed + epoch to shuffle the shards and for the shuffle buffer when you start iterating
... for example in ids:
... pass
add_faiss_index
< source >( column: str index_name: typing.Optional[str] = None device: typing.Optional[int] = None string_factory: typing.Optional[str] = None metric_type: typing.Optional[int] = None custom_index: typing.Optional[ForwardRef('faiss.Index')] = None batch_size: int = 1000 train_size: typing.Optional[int] = None faiss_verbose: bool = False dtype = <class 'numpy.float32'> )
Parameters
-
column (
str
) — The column of the vectors to add to the index. -
index_name (
str
, optional) — Theindex_name
/identifier of the index. This is theindex_name
that is used to call get_nearest_examples() or search(). By default it corresponds tocolumn
. -
device (
Union[int, List[int]]
, optional) — If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. -
string_factory (
str
, optional) — This is passed to the index factory of Faiss to create the index. Default index class isIndexFlat
. -
metric_type (
int
, optional) — Type of metric. Ex:faiss.METRIC_INNER_PRODUCT
orfaiss.METRIC_L2
. -
custom_index (
faiss.Index
, optional) — Custom Faiss index that you already have instantiated and configured for your needs. -
batch_size (
int
) — Size of the batch to use while adding vectors to theFaissIndex
. Default value is1000
.Added in 2.4.0
-
train_size (
int
, optional) — If the index needs a training step, specifies how many vectors will be used to train the index. -
faiss_verbose (
bool
, defaults toFalse
) — Enable the verbosity of the Faiss index. -
dtype (
data-type
) — The dtype of the numpy arrays that are indexed. Default isnp.float32
.
Add a dense index using Faiss for fast retrieval.
By default the index is done over the vectors of the specified column.
You can specify device
if you want to run it on GPU (device
must be the GPU index).
You can find more information about Faiss, including the index string factory syntax, in the Faiss documentation.
Example:
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])})
>>> ds_with_embeddings.add_faiss_index(column='embeddings')
>>> # query
>>> scores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)
>>> # save index
>>> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> # load index
>>> ds.load_faiss_index('embeddings', 'my_index.faiss')
>>> # query
>>> scores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)
add_faiss_index_from_external_arrays
< source >( external_arrays: array index_name: str device: typing.Optional[int] = None string_factory: typing.Optional[str] = None metric_type: typing.Optional[int] = None custom_index: typing.Optional[ForwardRef('faiss.Index')] = None batch_size: int = 1000 train_size: typing.Optional[int] = None faiss_verbose: bool = False dtype = <class 'numpy.float32'> )
Parameters
-
external_arrays (
np.array
) — If you want to use arrays from outside the lib for the index, you can setexternal_arrays
. It will useexternal_arrays
to create the Faiss index instead of the arrays in the givencolumn
. -
index_name (
str
) — Theindex_name
/identifier of the index. This is theindex_name
that is used to call get_nearest_examples() or search(). -
device (Optional
Union[int, List[int]]
, optional) — If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. -
string_factory (
str
, optional) — This is passed to the index factory of Faiss to create the index. Default index class isIndexFlat
. -
metric_type (
int
, optional) — Type of metric. Ex: faiss.METRIC_INNER_PRODUCT
orfaiss.METRIC_L2
. -
custom_index (
faiss.Index
, optional) — Custom Faiss index that you already have instantiated and configured for your needs. -
batch_size (
int
, optional) — Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000.Added in 2.4.0
-
train_size (
int
, optional) — If the index needs a training step, specifies how many vectors will be used to train the index. -
faiss_verbose (
bool
, defaults to False) — Enable the verbosity of the Faiss index. -
dtype (
numpy.dtype
) — The dtype of the numpy arrays that are indexed. Default is np.float32.
Add a dense index using Faiss for fast retrieval.
The index is created using the vectors of external_arrays
.
You can specify device
if you want to run it on GPU (device
must be the GPU index).
You can find more information about Faiss, including the index string factory syntax, in the Faiss documentation.
save_faiss_index
< source >( index_name: str file: typing.Union[str, pathlib.PurePath] storage_options: typing.Optional[typing.Dict] = None )
Parameters
-
index_name (
str
) — The index_name/identifier of the index. This is the index_name that is used to call.get_nearest
or.search
. -
file (
str
) — The path to the serialized faiss index on disk or remote URI (e.g."s3://my-bucket/index.faiss"
). -
storage_options (
dict
, optional) — Key/value pairs to be passed on to the file-system backend, if any.Added in 2.11.0
Save a FaissIndex on disk.
load_faiss_index
< source >( index_name: str file: typing.Union[str, pathlib.PurePath] device: typing.Union[int, typing.List[int], NoneType] = None storage_options: typing.Optional[typing.Dict] = None )
Parameters
-
index_name (
str
) — The index_name/identifier of the index. This is the index_name that is used to call.get_nearest
or.search
. -
file (
str
) — The path to the serialized faiss index on disk or remote URI (e.g."s3://my-bucket/index.faiss"
). -
device (Optional
Union[int, List[int]]
) — If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. -
storage_options (
dict
, optional) — Key/value pairs to be passed on to the file-system backend, if any.Added in 2.11.0
Load a FaissIndex from disk.
If you want to do additional configuration, you can access the faiss index object by calling
.get_index(index_name).faiss_index
and adjust it to fit your needs.
add_elasticsearch_index
< source >( column: str index_name: typing.Optional[str] = None host: typing.Optional[str] = None port: typing.Optional[int] = None es_client: typing.Optional[ForwardRef('elasticsearch.Elasticsearch')] = None es_index_name: typing.Optional[str] = None es_index_config: typing.Optional[dict] = None )
Parameters
-
column (
str
) — The column of the documents to add to the index. -
index_name (
str
, optional) — Theindex_name
/identifier of the index. This is the index name that is used to call get_nearest_examples() or Dataset.search(). By default it corresponds tocolumn
. -
host (
str
, optional, defaults tolocalhost
) — Host of where ElasticSearch is running. -
port (
str
, optional, defaults to9200
) — Port of where ElasticSearch is running. -
es_client (
elasticsearch.Elasticsearch
, optional) — The elasticsearch client used to create the index if host and port areNone
. -
es_index_name (
str
, optional) — The elasticsearch index name used to create the index. -
es_index_config (
dict
, optional) — The configuration of the elasticsearch index. If not provided, a default configuration is used.
Add a text index using ElasticSearch for fast retrieval. This is done in-place.
Example:
>>> es_client = elasticsearch.Elasticsearch()
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> ds.add_elasticsearch_index(column='line', es_client=es_client, es_index_name="my_es_index")
>>> scores, retrieved_examples = ds.get_nearest_examples('line', 'my new query', k=10)
load_elasticsearch_index
< source >( index_name: str es_index_name: str host: typing.Optional[str] = None port: typing.Optional[int] = None es_client: typing.Optional[ForwardRef('Elasticsearch')] = None es_index_config: typing.Optional[dict] = None )
Parameters
-
index_name (
str
) — Theindex_name
/identifier of the index. This is the index name that is used to callget_nearest
orsearch
. -
es_index_name (
str
) — The name of elasticsearch index to load. -
host (
str
, optional, defaults tolocalhost
) — Host of where ElasticSearch is running. -
port (
str
, optional, defaults to9200
) — Port of where ElasticSearch is running. -
es_client (
elasticsearch.Elasticsearch
, optional) — The elasticsearch client used to create the index if host and port areNone
. -
es_index_config (
dict
, optional) — The configuration of the elasticsearch index. If not provided, a default configuration is used.
Load an existing text index using ElasticSearch for fast retrieval.
List the index_name
/identifiers of all the attached indexes.
drop_index
< source >( index_name: str )
Drop the index with the specified index_name.
search
< source >(
index_name: str
query: typing.Union[str, <built-in function array>]
k: int = 10
**kwargs
)
→
(scores, indices)
Parameters
-
index_name (
str
) — The name/identifier of the index. -
query (
Union[str, np.ndarray]
) — The query as a string ifindex_name
is a text index or as a numpy array ifindex_name
is a vector index. -
k (
int
) — The number of examples to retrieve.
Returns
(scores, indices)
A tuple of (scores, indices)
where:
- scores (
List[List[float]]
): the retrieval scores from either FAISS (IndexFlatL2
by default) or ElasticSearch of the retrieved examples - indices (
List[List[int]]
): the indices of the retrieved examples
Find the indices of the nearest examples in the dataset to the query.
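Example (a minimal sketch; it assumes a Faiss index named "embeddings" was added with add_faiss_index() and that embed() returns a query vector of the right dimension):
>>> scores, indices = ds.search("embeddings", embed("my new query"), k=5)  # 5 best matches
>>> best_match = ds[int(indices[0])]  # indices refer to rows of the dataset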
search_batch
< source >(
index_name: str
queries: typing.Union[typing.List[str], <built-in function array>]
k: int = 10
**kwargs
)
→
(total_scores, total_indices)
Parameters
-
index_name (
str
) — Theindex_name
/identifier of the index. -
queries (
Union[List[str], np.ndarray]
) — The queries as a list of strings ifindex_name
is a text index or as a numpy array ifindex_name
is a vector index. -
k (
int
) — The number of examples to retrieve per query.
Returns
(total_scores, total_indices)
A tuple of (total_scores, total_indices)
where:
- total_scores (
List[List[float]]
): the retrieval scores from either FAISS (IndexFlatL2
by default) or ElasticSearch of the retrieved examples per query - total_indices (
List[List[int]]
): the indices of the retrieved examples per query
Find the indices of the nearest examples in the dataset to the queries.
get_nearest_examples
< source >(
index_name: str
query: typing.Union[str, <built-in function array>]
k: int = 10
**kwargs
)
→
(scores, examples)
Parameters
-
index_name (
str
) — The index_name/identifier of the index. -
query (
Union[str, np.ndarray]
) — The query as a string ifindex_name
is a text index or as a numpy array ifindex_name
is a vector index. -
k (
int
) — The number of examples to retrieve.
Returns
(scores, examples)
A tuple of (scores, examples)
where:
- scores (
List[float]
): the retrieval scores from either FAISS (IndexFlatL2
by default) or ElasticSearch of the retrieved examples - examples (
dict
): the retrieved examples
Find the nearest examples in the dataset to the query.
get_nearest_examples_batch
< source >(
index_name: str
queries: typing.Union[typing.List[str], <built-in function array>]
k: int = 10
**kwargs
)
→
(total_scores, total_examples)
Parameters
-
index_name (
str
) — Theindex_name
/identifier of the index. -
queries (
Union[List[str], np.ndarray]
) — The queries as a list of strings ifindex_name
is a text index or as a numpy array ifindex_name
is a vector index. -
k (
int
) — The number of examples to retrieve per query.
Returns
(total_scores, total_examples)
A tuple of (total_scores, total_examples)
where:
- total_scores (
List[List[float]]
): the retrieval scores from either FAISS (IndexFlatL2
by default) or ElasticSearch of the retrieved examples per query - total_examples (
List[dict]
): the retrieved examples per query
Find the nearest examples in the dataset to the queries.
DatasetInfo object containing all the metadata in the dataset.
NamedSplit object corresponding to a named dataset split.
from_csv
< source >(