BasStein committed on
Commit 942a4d7
1 Parent(s): 6630f7d

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +51 -10
README.md CHANGED
@@ -1,25 +1,66 @@
  ---
  library_name: keras
  ---

  ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:

- | name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
- |------|---------------|-------|--------|--------|---------|---------|--------------------|
- | Adam | 0.0010000000474974513 | 0.0 | 0.8999999761581421 | 0.9990000128746033 | 1e-07 | False | float32 |

  ---
+ language:
+ - en
+ license: apache-2.0
  library_name: keras
+ tags:
+ - doe2vec
+ - exploratory-landscape-analysis
+ - autoencoders
+ datasets:
+ - BasStein/250000-randomfunctions-10d
+ metrics:
+ - mse
+ co2_eq_emissions:
+   emissions: 0.0363
+   source: "code carbon"
+   training_type: "pre-training"
+   geographical_location: "Leiden, The Netherlands"
+   hardware_used: "1 Tesla T4"
  ---

  ## Model description

+ DoE2Vec is a model that transforms any design of experiments (a function landscape) into a feature vector.
+ Different input dimensions or sample sizes require different models.
+ Each model name is built up as doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}; for example, the model loaded in the code below corresponds to doe2vec-d10-m8-ls24-VAE-kl0.001.

+ Example code for loading this Hugging Face model with the doe2vec package.
+
+ First install the package:
+
+ ```zsh
+ pip install doe2vec
+ ```
+
+ Then import and load the model:

+ ```python
+ from doe2vec import doe_model
+
+ # dimension 10, sample-size parameter 8, latent size 24, KL weight 0.001, VAE variant
+ obj = doe_model(
+     10,
+     8,
+     latent_dim=24,
+     kl_weight=0.001,
+     model_type="VAE",
+ )
+ obj.load_from_huggingface()
+ # test the model by visualising the latent clusters for the BBOB functions
+ obj.plot_label_clusters_bbob()
+ ```
+
+ ## Intended uses & limitations

+ The model is intended to generate feature representations of optimization function landscapes.
+ These representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning.
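+
+ As an illustration of such a downstream use, the sketch below turns one sampled landscape into its latent feature vector. This is a minimal sketch, not taken from the package documentation: it assumes that m=8 corresponds to 2^8 sample points and that the loaded model exposes an encode method; check the doe2vec package for the exact API.
+
+ ```python
+ import numpy as np
+ from doe2vec import doe_model
+
+ obj = doe_model(10, 8, latent_dim=24, kl_weight=0.001, model_type="VAE")
+ obj.load_from_huggingface()
+
+ # Placeholder: your own function evaluated on the model's sample points (assumed to be 2**8 of them).
+ y = np.random.rand(2 ** 8)
+ # Assumed API: encode the landscape into a 24-dimensional feature vector.
+ features = obj.encode([y])
+ ```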

  ## Training procedure

+ The model is trained with a weighted KL loss and a mean squared error reconstruction loss (see the loss sketch below).
+ It was trained on 250,000 randomly generated functions (see the dataset) for 100 epochs.

+ - **Hardware:** 1x Tesla T4 GPU
+ - **Optimizer:** Adam
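+
+ As a rough illustration of that objective, the sketch below shows a standard VAE loss with the KL term scaled by this model's kl_weight (0.001). It is an assumption-level sketch, not the doe2vec source code.
+
+ ```python
+ import tensorflow as tf
+
+ def weighted_vae_loss(x, x_hat, z_mean, z_log_var, kl_weight=0.001):
+     # Mean squared error between the sampled landscape and its reconstruction.
+     reconstruction = tf.reduce_mean(tf.square(x - x_hat), axis=-1)
+     # KL divergence between the learned latent distribution and a standard normal prior.
+     kl = -0.5 * tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
+     # The small kl_weight lets reconstruction accuracy dominate while still regularising the latent space.
+     return tf.reduce_mean(reconstruction + kl_weight * kl)
+ ```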
66