BasStein committed on
Commit b51800e
1 Parent(s): b8a6e31

Upload README.md with huggingface_hub

Files changed (1): README.md +46 -12
README.md CHANGED
@@ -1,25 +1,59 @@
  ---
  library_name: keras
- ---

  ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:

- | name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
- |------|---------------|-------|--------|--------|---------|---------|--------------------|
- | Adam | 0.0010000000474974513 | 0.0 | 0.8999999761581421 | 0.9990000128746033 | 1e-07 | False | float32 |

  ---
+ language:
+ - en
+ license: apache-2.0
  library_name: keras
+ tags:
+ - doe2vec
+ - exploratory-landscape-analysis
+ - autoencoders
+ datasets:
+ - doe2vec-d2-m8-ls24-VAE-kl0.001
+ metrics:
+ - mse

  ## Model description

+ DoE2Vec model that can transform any design of experiments (function landscape) into a feature vector.
+ For different input dimensions or sample sizes you need a different model.
+ Each model name is built up as doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}; the snippet below decodes this model's own name.
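
+ For example, this model's own name encodes dimension 2, sample-size parameter 8, latent size 24, a VAE, and KL loss weight 0.001. A small illustrative snippet (the regex is ours, not part of the package):

+ import re
+
+ # Decode a DoE2Vec model name into its parts (illustrative helper).
+ name = "doe2vec-d2-m8-ls24-VAE-kl0.001"
+ pattern = r"doe2vec-d(\d+)-m(\d+)-ls(\d+)-(AE|VAE)-kl([\d.]+)"
+ dim, m, latent, kind, kl = re.match(pattern, name).groups()
+ print(dim, m, latent, kind, kl)  # -> 2 8 24 VAE 0.001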

+ Example code for loading this Hugging Face model with the doe2vec package.

+ First, install the package:

+ pip install doe2vec

+ Then import and load the model:

+ from doe2vec import doe_model
+
+ # dim=2, m=8 (sample-size parameter), latent size 24, KL weight 0.001
+ obj = doe_model(
+     2,
+     8,
+     latent_dim=24,
+     kl_weight=0.001,
+     model_type="VAE",
+ )
+ obj.load_from_huggingface()
+ # test the model
+ obj.plot_label_clusters_bbob()
+
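+ As a follow-up sketch, the loaded model could then encode your own landscape. Note the assumptions here: the `encode` method name and the input convention (normalized function values on the model's sample design) are not confirmed by this card, so check the doe2vec documentation for the exact API.
+
+ import numpy as np
+
+ # Hypothetical usage: function values on a 2-d design -> latent features.
+ # `obj.encode` and the expected input shape are assumptions, not confirmed here.
+ X = np.random.rand(256, 2)               # 2-d sample locations in [0, 1]
+ y = np.sum(X**2, axis=1)                 # example objective (sphere function)
+ y = (y - y.min()) / (y.max() - y.min())  # normalize values to [0, 1]
+ features = obj.encode([y])               # latent feature vector of this landscape
+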
+ ## Intended uses & limitations

+ The model is intended to generate feature representations of optimization function landscapes.
+ These representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning.

+ ## Training procedure

+ The model is trained with a weighted KL loss and a mean squared error reconstruction loss.
+ It is trained on 250,000 randomly generated functions (see the dataset) for 100 epochs.
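+
+ For intuition, here is a minimal sketch of such a loss, assuming a standard Keras/TensorFlow VAE formulation (the actual implementation lives in the doe2vec package; the function below is illustrative, not the package's code):
+
+ import tensorflow as tf
+
+ # Weighted VAE objective: MSE reconstruction + kl_weight * KL divergence.
+ # kl_weight=0.001 matches the "kl0.001" in this model's name.
+ def vae_loss(x, x_hat, z_mean, z_log_var, kl_weight=0.001):
+     mse = tf.reduce_mean(tf.square(x - x_hat), axis=-1)
+     kl = -0.5 * tf.reduce_sum(
+         1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1
+     )
+     return tf.reduce_mean(mse + kl_weight * kl)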

+ co2_eq_emissions:
+   emissions: 0.0363
+   source: "code carbon"
+   training_type: "pre-training"
+   geographical_location: "Leiden, The Netherlands"
+   hardware_used: "1 Tesla T4"