## General Description
MultiSetTransformerData is a large dataset designed to train and validate neural Symbolic Regression models. It was designed for the Multi-Set Symbolic Skeleton Prediction (MSSP) problem described in the paper "Univariate Skeleton Prediction in Multivariate Systems Using Transformers"; however, it can also be used to train generic SR models.

This dataset consists of artificially generated univariate symbolic skeletons; mathematical expressions are sampled from each skeleton, and data sets are in turn sampled from those expressions. This repository presents one dataset, **Q1**:

- **Q1**: Consists of mathematical expressions that use up to five unary and binary operators, and allows at most one level of nesting of unary operators (i.e., one unary operator may appear inside another, but deeper nesting is not allowed).
## Dataset Structure
Inside the Q1 folder, you will find a training set alongside its corresponding validation set. Each of these folders consists of a collection of HDF5 files, as shown below:
```
├── Q1
│   ├── training
│   │   ├── 0.h5
│   │   ├── 1.h5
│   │   └── ...
│   └── validation
│       ├── 0.h5
│       ├── 1.h5
│       └── ...
```
Each HDF5 file contains 5000 blocks and has the following structure:
{ "block_1": {
"X": "Support vector, shape (10000, 10)",
"Y": "Response vector, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
"block_2": {
"X": "Support, shape (10000, 10)",
"Y": "Response, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
...
}
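A quick way to verify this layout after downloading is to open a single file with h5py and inspect its groups. Below is a minimal sketch; the path is a hypothetical local location:

```python
import h5py

# Hypothetical local path; adjust to wherever the files were downloaded
path = "data/Q1/training/0.h5"

with h5py.File(path, "r") as hf:
    print(len(hf), "blocks")                    # expected: 5000
    first = next(iter(hf))                      # name of the first group
    group = hf[first]
    print(sorted(group.keys()))                 # ['X', 'Y', 'exprs', 'sampled_exprs', 'tokenized']
    print(group["X"].shape, group["Y"].shape)   # expected: (10000, 10) (10000, 10)
```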
More specifically, each block corresponds to one univariate symbolic skeleton (i.e., a function whose constant values are left undefined); for example, `c + c/(c*sin(c*x_1) + c)`.
From this skeleton, 10 random functions are sampled by assigning values to the constants; for example:

```
-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)
4.462 - 2.545/(3.157*sin(0.422*x_1) - 1.826)
...
```
Then, for the $i$-th function (where $i \in \{1, \dots, 10\}$), we sample a support vector `X[:, i]` of 10,000 elements whose values are drawn from a uniform distribution. The support vector `X[:, i]` is evaluated on the $i$-th function to obtain the response vector `Y[:, i]`.
In other words, a block contains input-output data generated from 10 different functions that share the same symbolic skeleton.
For instance, the figure below shows 10 sets of data generated from the symbolic skeleton `c + c/(c*sin(c*x_1) + c)`.

*(Figure: 10 data sets sampled from the skeleton `c + c/(c*sin(c*x_1) + c)`.)*
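To make the sampling procedure concrete, the sketch below regenerates one response column from one of the sampled expressions above. It assumes the expression strings parse with sympy; the uniform sampling range shown is a placeholder, not necessarily the one used to build the dataset:

```python
import numpy as np
import sympy as sp

# One of the expressions sampled from the skeleton above
expr_str = "-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)"

x1 = sp.Symbol("x_1")
f = sp.lambdify(x1, sp.sympify(expr_str), "numpy")

# Placeholder sampling range; the dataset's actual range is defined in the paper
X_col = np.random.uniform(-10.0, 10.0, size=10000)  # plays the role of X[:, i]
Y_col = f(X_col)                                    # plays the role of Y[:, i]
print(X_col.shape, Y_col.shape)                     # (10000,) (10000,)
```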
## Loading Data
Once the data is downloaded, it can be loaded using Python as follows:
```python
import glob
import os

import h5py


def open_h5(path):
    """Read all blocks stored in a single HDF5 file."""
    blocks = []
    with h5py.File(path, "r") as hf:
        # Iterate through the groups in the HDF5 file (group names are integers)
        for group_name in hf:
            group = hf[group_name]
            # Support and response matrices, shape (10000, 10)
            X = group["X"][:]
            Y = group["Y"][:]
            # Load 'tokenized' as a list of integers
            tokenized = list(group["tokenized"])
            # Load 'exprs' (the skeleton) as a string
            exprs = group["exprs"][()].tobytes().decode("utf-8")
            # Load 'sampled_exprs' as a list of expression strings
            sampled_exprs = [expr_str for expr_str in group["sampled_exprs"][:].astype(str)]
            blocks.append([X, Y, tokenized, exprs, sampled_exprs])
    return blocks


train_path = "data/Q1/training"
train_files = glob.glob(os.path.join(train_path, "*.h5"))

for tfile in train_files:
    # Read all blocks in this file
    blocks = open_h5(tfile)
    # Do stuff with your data
```
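Building on `open_h5`, one possible way to feed the blocks to a model is to wrap them in a framework-specific dataset. The sketch below uses PyTorch; the class name and the choice to keep everything in memory are illustrative, not part of this repository:

```python
import torch
from torch.utils.data import Dataset


class MSSPDataset(Dataset):  # hypothetical helper, not part of the dataset release
    """Flattens the blocks of a list of HDF5 files into one indexable dataset.

    Note: this loads every block into memory; for the full dataset,
    lazy per-file loading may be preferable.
    """

    def __init__(self, file_paths):
        self.blocks = []
        for path in file_paths:
            self.blocks.extend(open_h5(path))  # reuses open_h5 from above

    def __len__(self):
        return len(self.blocks)

    def __getitem__(self, idx):
        X, Y, tokenized, exprs, sampled_exprs = self.blocks[idx]
        return (
            torch.as_tensor(X, dtype=torch.float32),       # support sets, (10000, 10)
            torch.as_tensor(Y, dtype=torch.float32),       # response sets, (10000, 10)
            torch.as_tensor(tokenized, dtype=torch.long),  # target skeleton tokens
        )


# Example usage:
# dataset = MSSPDataset(train_files)
# X, Y, target = dataset[0]
```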
## Vocabulary and Expression Generation
The table below provides the vocabulary used to construct the expressions of this dataset.
We use a method that builds the expression tree recursively in a preorder fashion, which allows us to enforce certain conditions and constraints effectively. That is, we forbid certain combinations of operators and set a maximum limit on the nesting depth of unary operators within each other. For example, we avoid embedding the $\log$ operator within the $\exp$ operator, or vice versa, since such a composition could lead to direct simplification (e.g., $\log(\exp(x)) = x$). We also avoid combinations of operators that would generate extremely large values (e.g., nested exponentials). The table below shows the forbidden operators we considered for some specific parent operators.
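The sketch below illustrates the flavor of such a recursive preorder generator. The operator sets, stopping probabilities, and forbidden-combination table are illustrative placeholders, not the actual vocabulary or rules used to build Q1:

```python
import random

BINARY = ["+", "-", "*", "/"]
UNARY = ["sin", "cos", "exp", "log", "sqrt"]

# Illustrative forbidden child operators per parent (not the dataset's actual table)
FORBIDDEN = {"exp": {"exp", "log"}, "log": {"exp", "log"}}

MAX_UNARY_NESTING = 1  # e.g., sin(exp(x)) is allowed, sin(exp(tan(x))) is not


def gen(depth=0, unary_depth=0, parent=None):
    """Build a skeleton string recursively in preorder (operator chosen before operands)."""
    # Stop growing: emit a leaf (the variable or an undefined constant 'c')
    if depth >= 4 or random.random() < 0.3:
        return random.choice(["x_1", "c"])

    # Candidate operators, respecting the nesting limit and forbidden combinations
    candidates = list(BINARY)
    if unary_depth <= MAX_UNARY_NESTING:
        candidates += [u for u in UNARY if u not in FORBIDDEN.get(parent, set())]

    op = random.choice(candidates)
    if op in UNARY:
        return f"{op}(c*{gen(depth + 1, unary_depth + 1, parent=op)})"
    left = gen(depth + 1, unary_depth, parent=op)
    right = gen(depth + 1, unary_depth, parent=op)
    return f"({left} {op} {right})"


print(gen())  # e.g., "(c + sin(c*x_1))"
```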
## Citation
Use the following BibTeX entry to cite this repository:
```bibtex
@INPROCEEDINGS{MultiSetSR,
  author="Morales, Giorgio
  and Sheppard, John W.",
  editor="Bifet, Albert
  and Daniu{\v{s}}is, Povilas
  and Davis, Jesse
  and Krilavi{\v{c}}ius, Tomas
  and Kull, Meelis
  and Ntoutsi, Eirini
  and Puolam{\"a}ki, Kai
  and {\v{Z}}liobait{\.{e}}, Indr{\.{e}}",
  title="Univariate Skeleton Prediction in Multivariate Systems Using Transformers",
  booktitle="Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track",
  year="2024",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="107--125",
  isbn="978-3-031-70371-3"
}
```