---
license: mit
---

## General Description

MultiSetTransformerData is a large dataset designed for training and validating neural symbolic regression (SR) models. It was created to solve the Multi-Set Symbolic Skeleton Prediction (MSSP) problem described in the paper **"Univariate Skeleton Prediction in Multivariate Systems Using Transformers"**; however, it can be used for training generic SR models as well.

This dataset consists of artificially generated **univariate symbolic skeletons**; mathematical expressions are sampled from each skeleton, and data sets are in turn sampled from those expressions.

This repository presents one such dataset, **Q1**:

* **Q1**: Consists of mathematical expressions that use up to 5 unary and binary operators (e.g., \\(1 + 1 / (\sin(2x) + 3)\\) uses five operators). It allows up to one nested operator (e.g., \\(\sin(\exp(x))\\) is allowed but \\(\sin(\exp(x^2))\\) is not).
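
As a sanity check, such operator counts can be reproduced with `sympy`; this is a minimal sketch, and the dataset's own counting conventions may differ slightly:

```
from sympy import count_ops, sin, symbols

x = symbols('x')
expr = 1 + 1 / (sin(2 * x) + 3)
print(count_ops(expr))               # 5 (ADD, DIV, SIN, MUL, ADD)
print(count_ops(expr, visual=True))  # 2*ADD + DIV + MUL + SIN
```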

## Dataset Structure

The **Q1** folder contains a training set alongside its corresponding validation set.
Each of these folders consists of a collection of HDF5 files, as shown below:

```
├── Q1
│   ├── training
│   │   ├── 0.h5
│   │   ├── 1.h5
│   │   └── ...
│   └── validation
│       ├── 0.h5
│       ├── 1.h5
│       └── ...
```

Each HDF5 file contains 5000 **blocks** and has the following structure:

```
{ "block_1": {
      "X": "Support vector, shape (10000, 10)",
      "Y": "Response vector, shape (10000, 10)",
      "tokenized": "Symbolic skeleton expression tokenized using the vocabulary, list",
      "exprs": "Symbolic skeleton expression, str",
      "sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
  },
  "block_2": {
      "X": "Support vector, shape (10000, 10)",
      "Y": "Response vector, shape (10000, 10)",
      "tokenized": "Symbolic skeleton expression tokenized using the vocabulary, list",
      "exprs": "Symbolic skeleton expression, str",
      "sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
  },
  ...
}
```
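
For a quick look at a file's contents before writing a full loader, something like the following should work (the file path is an example):

```
import h5py

with h5py.File("Q1/training/0.h5", "r") as hf:
    print(len(hf))                 # number of blocks in the file (5000)
    first = hf[next(iter(hf))]     # first block group
    print(list(first.keys()))      # datasets stored in the block
    print(first["X"].shape, first["Y"].shape)  # (10000, 10) (10000, 10)
```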

More specifically, each block corresponds to one univariate symbolic skeleton (i.e., a function whose constant values are not defined); for example, `c + c/(c*sin(c*x_1) + c)`.
From this skeleton, 10 random functions are sampled; for example:

* `-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)`
* `4.462 - 2.545/(3.157*sin(0.422*x_1) - 1.826)`, ...

Then, for the \\(i\\)-th function (where \\(i \in \{0, 1, \dots, 9\}\\)), we sample a **support vector** `X[:, i]` of 10000 elements whose values are drawn from a uniform distribution \\(\mathcal{U}(-10, 10)\\).
The support vector `X[:, i]` is evaluated on the \\(i\\)-th function to obtain the response vector `Y[:, i]`.
In other words, a block contains input-output data generated from 10 **different functions that share the same symbolic skeleton**.
For instance, the following figure shows 10 sets of data generated from the symbolic skeleton `c + c/(c*sin(c*x_1) + c)`:

<p align="center">
  <img src="images/data_example.jpg" alt="Ten data sets generated from a common symbolic skeleton" width="600">
</p>
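
The generation of a single block can be sketched as follows; the constant-sampling range and the hard-coded skeleton are illustrative assumptions, not the dataset's exact settings:

```
import numpy as np
import sympy as sp

rng = np.random.default_rng(0)
x = sp.Symbol("x_1")

def sample_function():
    # Replace each placeholder c in c + c/(c*sin(c*x_1) + c) with a random
    # constant (the sampling range here is an assumption)
    c = rng.uniform(-5, 5, size=5)
    return c[0] + c[1] / (c[2] * sp.sin(c[3] * x) + c[4])

# One block: 10 functions sharing the same skeleton
X = rng.uniform(-10, 10, size=(10000, 10))  # support vectors ~ U(-10, 10)
Y = np.empty_like(X)
for i in range(10):
    f = sp.lambdify(x, sample_function(), "numpy")
    Y[:, i] = f(X[:, i])  # response vector of the i-th function
```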

## Loading Data

Once the data is downloaded, it can be loaded using Python as follows:

```
import os
import glob
import h5py


def open_h5(path):
    block = []
    with h5py.File(path, "r") as hf:
        # Iterate through the groups in the HDF5 file (group names are integers)
        for group_name in hf:
            group = hf[group_name]
            X = group["X"][:]
            Y = group["Y"][:]
            # Load 'tokenized' as a list of integers
            tokenized = list(group["tokenized"])
            # Load 'exprs' as a string
            exprs = group["exprs"][()].tobytes().decode("utf-8")
            # Load 'sampled_exprs' as a list of expression strings
            sampled_exprs = [expr_str for expr_str in group["sampled_exprs"][:].astype(str)]
            block.append([X, Y, tokenized, exprs, sampled_exprs])
    return block


train_path = 'data/Q1/training'
train_files = glob.glob(os.path.join(train_path, '*.h5'))
for tfile in train_files:
    # Read all blocks in the file
    block = open_h5(tfile)
    # Do stuff with your data
```
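
Each element returned by `open_h5` corresponds to one block and can be unpacked directly; for example:

```
X, Y, tokenized, exprs, sampled_exprs = block[0]
print(exprs)             # e.g., c + c/(c*sin(c*x_1) + c)
print(X.shape, Y.shape)  # (10000, 10) (10000, 10)
```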

## Vocabulary and Expression Generation

The table below lists the vocabulary used to construct the expressions of this dataset.

<p align="center">
  <img src="images/vocabulary.jpg" alt="Vocabulary used to construct the expressions" width="500">
</p>

We use a method that builds the expression tree recursively in a preorder fashion, which allows us to enforce certain conditions and constraints effectively.
That is, we forbid certain combinations of operators and set a maximum limit on the nesting depth of unary operators within each other.
For example, we avoid embedding the operator \\(\text{log}\\) within the operator \\(\text{exp}\\), or vice versa, since such a composition could lead to direct simplification (e.g., \\(\text{log}\left( \text{exp} (x) \right) = x\\)).
We also avoid combinations of operators that would generate extremely large values (e.g., \\(\text{exp}\left( \text{exp} (x) \right)\\) and \\(\text{sinh} \left( \text{sinh} (x) \right)\\)).
The table below shows the forbidden operators we considered for some specific parent operators.

<p align="center">
  <img src="images/forbidden_ops.jpg" alt="Forbidden operators for specific parent operators" width="500">
</p>
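
The sketch below illustrates this kind of constrained preorder generation; the operator sets and forbidden pairs are hypothetical stand-ins for the tables above, not the exact configuration used to build Q1:

```
import random

# Hypothetical operator sets and forbidden parent/child pairs (see the
# tables above for the actual vocabulary and constraints)
BINARY = ["+", "-", "*", "/"]
UNARY = ["sin", "cos", "exp", "log", "sinh"]
FORBIDDEN = {"exp": {"exp", "log", "sinh"}, "log": {"exp", "log"}, "sinh": {"sinh", "exp"}}

def gen(depth, parent=None, nesting=0, max_nesting=1):
    """Build a skeleton string recursively in preorder (node, then children)."""
    if depth == 0:
        return random.choice(["c", "x_1"])  # leaf: constant placeholder or variable
    op = random.choice(BINARY + UNARY)
    if op in UNARY:
        # Reject forbidden parent/child pairs and over-deep unary nesting
        if nesting >= max_nesting or op in FORBIDDEN.get(parent, set()):
            return random.choice(["c", "x_1"])
        return f"{op}({gen(depth - 1, op, nesting + 1, max_nesting)})"
    left = gen(depth - 1, op, nesting, max_nesting)
    right = gen(depth - 1, op, nesting, max_nesting)
    return f"({left} {op} {right})"

print(gen(depth=3))  # example output: ((c + sin(x_1)) * c)
```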

## Citation

Use the following BibTeX entry to cite this repository:

```
@INPROCEEDINGS{MultiSetSR,
  author    = "Giorgio Morales and John W. Sheppard",
  title     = "Univariate Skeleton Prediction in Multivariate Systems Using Transformers",
  booktitle = "Machine Learning and Knowledge Discovery in Databases",
  year      = "2024",
  location  = "Vilnius, Lithuania"
}
```