---
license: mit
---
## General Description
MultiSetTransformerData is a large dataset designed to train and validate neural Symbolic Regression (SR) models. It was created to solve the Multi-Set Symbolic Skeleton Prediction (MSSP) problem described in the paper **"Univariate Skeleton Prediction in Multivariate Systems Using Transformers"**; however, it can also be used to train generic SR models.
The dataset consists of artificially generated **univariate symbolic skeletons**, from which mathematical expressions are sampled; these expressions are in turn used to generate data sets.
In this repository, a dataset **Q1** is presented:
* **Q1**: Consists of mathematical expressions that use up to 5 unary and binary operators (e.g., \\(1 + 1 / (\sin(2x) + 3)\\) uses five operators). It allows up to one nested operator (e.g., \\(\sin( \exp(x))\\) is allowed but \\(\sin( \exp(x^2))\\) is not).
## Dataset Structure
The **Q1** folder contains a training set and its corresponding validation set.
Each of these folders holds a collection of HDF5 files, as shown below:
```
├── Q1
│   ├── training
│   │   ├── 0.h5
│   │   ├── 1.h5
│   │   └── ...
│   └── validation
│       ├── 0.h5
│       ├── 1.h5
│       └── ...
```
Each HDF5 file contains 5000 **blocks** and has the following structure:
```
{ "block_1": {
"X": "Support vector, shape (10000, 10)",
"Y": "Response vector, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
"block_2": {
"X": "Support, shape (10000, 10)",
"Y": "Response, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
...
}
```
More specifically, each block corresponds to one univariate symbolic skeleton (i.e., a function without defined constant values); for example, `c + c/(c*sin(c*x_1) + c)`.
From this skeleton, 10 random functions are sampled; for example:
* `-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)`
* `4.462 - 2.545/(3.157*sin(0.422*x_1) - 1.826)`, ...
Then, for the \\(i\\)-th function (where \\(i \in \{0, 1, \dots, 9\}\\)), we sample a **support vector** `X[:, i]` of 10000 elements whose values are drawn from a uniform distribution \\(\mathcal{U}(-10, 10)\\).
The support vector `X[:, i]` is evaluated on the \\(i\\)-th function to obtain the response vector `Y[:, i]`.
In other words, a block contains input-output data generated from 10 **different functions that share the same symbolic skeleton**.
For instance, the following figure shows 10 sets of data generated from the symbolic skeleton `c + c/(c*sin(c*x_1) + c)`:
<p align="center">
<img src="images/data_example.jpg" alt="alt text" width="600">
</p>
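To make the sampling procedure concrete, below is a minimal sketch of how one support/response column could be reproduced from a sampled expression. This assumes `numpy` and `sympy` and is purely illustrative; it is not the pipeline that generated the dataset:
```python
import numpy as np
import sympy as sp

# Illustrative only: evaluate one sampled expression on a fresh support
# vector, mirroring how each column pair X[:, i] / Y[:, i] is built.
x_1 = sp.Symbol('x_1')
expr = sp.sympify("-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)")
f = sp.lambdify(x_1, expr, modules="numpy")

X_col = np.random.uniform(-10, 10, size=10000)  # support vector ~ U(-10, 10)
Y_col = f(X_col)                                # response vector for this function
```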
## Loading Data
Once the data is downloaded, it can be loaded using Python as follows:
```python
import os
import glob
import h5py

def open_h5(path):
    blocks = []
    with h5py.File(path, "r") as hf:
        # Iterate through the groups in the HDF5 file (group names are integers)
        for group_name in hf:
            group = hf[group_name]
            X = group["X"][:]
            Y = group["Y"][:]
            # Load 'tokenized' as a list of integers
            tokenized = list(group["tokenized"])
            # Load 'exprs' as a string
            exprs = group["exprs"][()].tobytes().decode("utf-8")
            # Load 'sampled_exprs' as a list of expression strings
            sampled_exprs = [expr_str for expr_str in group["sampled_exprs"][:].astype(str)]
            blocks.append([X, Y, tokenized, exprs, sampled_exprs])
    return blocks

train_path = 'data/Q1/training'
train_files = glob.glob(os.path.join(train_path, '*.h5'))
for tfile in train_files:
    # Read the blocks stored in this file
    blocks = open_h5(tfile)
    # Do stuff with your data
```
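Each element of the returned list is one block. As a hypothetical follow-up, the stored strings can be parsed back into sympy expressions:
```python
import sympy as sp

# Hypothetical usage: inspect the first block of the first training file.
blocks = open_h5(train_files[0])
X, Y, tokenized, exprs, sampled_exprs = blocks[0]
print(exprs)  # skeleton string, e.g., c + c/(c*sin(c*x_1) + c)
sampled_fns = [sp.sympify(s) for s in sampled_exprs]  # ten sympy expressions
```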
## Vocabulary and Expression Generation
The table below provides the vocabulary used to construct the expressions of this dataset.
<p align="center">
<img src="images/vocabulary.jpg" alt="alt text" width="500">
</p>
We use a method that builds the expression tree recursively in a preorder fashion, which allows us to enforce certain conditions and constraints effectively.
In particular, we forbid certain combinations of operators and set a maximum limit on how deeply unary operators may be nested within each other.
For example, we avoid embedding the operator \\(\text{log}\\) within the operator \\(\text{exp}\\), or vice versa, since such a composition would lead to direct simplification (e.g., \\(\text{log}\left( \text{exp} (x) \right) = x\\)).
We also avoid combinations of operators that would generate extremely large values (e.g., \\(\text{exp}\left( \text{exp} (x) \right)\\) and \\(\text{sinh} \left( \text{sinh} (x) \right)\\)).
The table below shows the forbidden operators we considered for some specific parent operators.
<p align="center">
<img src="images/forbidden_ops.jpg" alt="alt text" width="500">
</p>
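As a rough illustration of this idea, the sketch below generates expression strings in preorder while skipping forbidden parent-child pairs and capping unary nesting. The operator sets, forbidden pairs, probabilities, and nesting limit here are placeholder assumptions; the actual vocabulary and constraints are those in the tables above:
```python
import random

UNARY = ["sin", "cos", "exp", "log"]          # example unary operators only
BINARY = ["+", "-", "*", "/"]                 # example binary operators only
FORBIDDEN = {"exp": {"exp", "log", "sinh"},   # example forbidden pairs only
             "log": {"log", "exp"}}

def generate(depth, parent=None, unary_nesting=0, max_nesting=2):
    # Leaves are the variable or a constant placeholder 'c'
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x_1", "c"])
    if unary_nesting < max_nesting and random.random() < 0.5:
        # Unary node: exclude operators forbidden under the current parent
        allowed = [op for op in UNARY if op not in FORBIDDEN.get(parent, set())]
        op = random.choice(allowed)
        return f"{op}({generate(depth - 1, op, unary_nesting + 1, max_nesting)})"
    op = random.choice(BINARY)
    left = generate(depth - 1, op, unary_nesting, max_nesting)
    right = generate(depth - 1, op, unary_nesting, max_nesting)
    return f"({left} {op} {right})"

print(generate(depth=3))  # e.g., (sin(x_1) + (c * x_1))
```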
## Citation
Use the following BibTeX entry to cite this repository:
```
@INPROCEEDINGS{MultiSetSR,
author="Morales, Giorgio
and Sheppard, John W.",
editor="Bifet, Albert
and Daniu{\v{s}}is, Povilas
and Davis, Jesse
and Krilavi{\v{c}}ius, Tomas
and Kull, Meelis
and Ntoutsi, Eirini
and Puolam{\"a}ki, Kai
and {\v{Z}}liobait{\.{e}}, Indr{\.{e}}",
title="Univariate Skeleton Prediction inΒ Multivariate Systems Using Transformers",
booktitle="Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="107--125",
isbn="978-3-031-70371-3"
}
```