---
datasets:
- BasStein/250000-randomfunctions-10d
language:
- en
library_name: tf-keras
license: apache-2.0
metrics:
- mse
tags:
- doe2vec
- exploratory-landscape-analysis
- autoencoders
co2_eq_emissions:
  emissions: 0.0363
  source: code carbon
  training_type: pre-training
  geographical_location: Leiden, The Netherlands
  hardware_used: 1 Tesla T4
---
## Model description
DoE2Vec is a model that transforms any design of experiments (function landscape) into a feature vector.
Each combination of input dimension and sample size requires its own model.
Model names follow the pattern `doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}`; this model, for example, is `doe2vec-d10-m8-ls24-VAE-kl0.001`.
The example below loads this Hugging Face model using the doe2vec package.
First install the package:
```zsh | |
pip install doe2vec | |
``` | |
Then import and load the model:
```python
from doe2vec import doe_model

# Load the pre-trained model matching this card:
# d = 10 input dimensions, m = 8 (the sample-size parameter),
# latent size 24, VAE with KL loss weight 0.001.
obj = doe_model(
    10,
    8,
    latent_dim=24,
    kl_weight=0.001,
    model_type="VAE",
)
obj.load_from_huggingface()

# Sanity-check the model by visualizing the latent clusters
# of the BBOB benchmark functions.
obj.plot_label_clusters_bbob()
```
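To encode a landscape of your own, evaluate the function on a matching design and feed the normalized response values to the encoder. The sketch below is illustrative only: the Sobol design, the min-max normalization, and the `encode` call are assumptions about the package interface, not a verified API.

```python
import numpy as np
from scipy.stats import qmc

# Sobol design matching this model: d = 10 dimensions, 2^8 = 256 points.
sampler = qmc.Sobol(d=10, scramble=False)
X = sampler.random_base2(m=8)  # shape (256, 10), in [0, 1]^10

# Example landscape: a simple sphere function evaluated on the design.
y = np.sum((X - 0.5) ** 2, axis=1)

# Min-max normalize the responses (assumption: the model expects
# function values rescaled to [0, 1]).
y = (y - y.min()) / (y.max() - y.min())

# Assumed encoding call; check the doe2vec API for the exact signature.
features = obj.encode(y.reshape(1, -1))
print(features.shape)  # expected: (1, 24), the latent feature vector
```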
## Intended uses & limitations
The model is intended to generate feature representations of optimization function landscapes.
These representations can then be used for downstream tasks such as automated optimization pipelines and meta-learning, as sketched below.
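As an illustration of such a downstream task, the feature vectors can drive a simple similarity search over a library of reference landscapes. Everything in this sketch is hypothetical: the feature arrays are random stand-ins for encoder outputs.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical stand-ins for encoder outputs (latent_dim = 24):
# feature vectors of 100 reference landscapes and one new problem.
rng = np.random.default_rng(0)
reference_features = rng.random((100, 24))
new_feature = rng.random((1, 24))

# Retrieve the most similar reference landscape; in a meta-learning
# pipeline, its best-known optimizer configuration could be reused.
nn = NearestNeighbors(n_neighbors=1).fit(reference_features)
dist, idx = nn.kneighbors(new_feature)
print(f"closest reference landscape: {idx[0, 0]} (distance {dist[0, 0]:.3f})")
```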
## Training procedure
The model is trained with a weighted KL loss combined with a mean squared error reconstruction loss.
It was trained on 250,000 randomly generated functions (see the dataset above) for 100 epochs.
- **Hardware:** 1x Tesla T4 GPU | |
- **Optimizer:** Adam | |
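The combined objective is a standard weighted VAE loss. The sketch below shows how such a loss is typically composed in tf-keras; it illustrates the technique and is not the model's actual training code.

```python
import tensorflow as tf

def vae_loss(y_true, y_pred, z_mean, z_log_var, kl_weight=0.001):
    # Mean squared error reconstruction term over the design points.
    mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    # KL divergence between the approximate posterior and a unit Gaussian.
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1
    )
    # The KL term is down-weighted (kl_weight = 0.001 for this model).
    return tf.reduce_mean(mse + kl_weight * kl)
```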