sgbaird committed
Commit 7ebabb5 • Parent(s): b0c702c

Refactor description in app.py to improve readability

Files changed (1):
  1. app.py +66 -23
app.py CHANGED
@@ -251,29 +251,72 @@ iface = gr.Interface(
     datatype=["number"] * len(example_result),
   ),
   description="""
- `y1`, `y2`, `y3`, and `y4`, should all be minimized. `y1` and `y2` are
- correlated, whereas `y1` and `y2` are both anticorrelated with `y3`. `y1`,
- `y2`, and `y3` are stochastic (heteroskedastic, parameter-free noise),
- whereas `y4` is deterministic, but still considered 'black-box'. In other
- words, repeat calls with the same input arguments will result in different
- values for `y1`, `y2`, and `y3`, but the same value for `y4`.
-
- If `y1` is greater than 0.2, the result is considered "bad" no matter how good
- the other values are. If `y2` is greater than 0.7, the result is considered
- "bad" no matter how good the other values are. If `y3` is greater than 1800,
- the result is considered "bad" no matter how good the other values are. If `y4`
- is greater than 40e6, the result is considered "bad" no matter how good the
- other values are.
-
- `fidelity1` is a fidelity parameter. 0 is the lowest fidelity, and 1 is the
- highest fidelity. The higher the fidelity, typically the more expensive the
- evaluation. However, this also typically means higher quality and relevance
- to the optimization campaign goals. `fidelity1` and `y3` are
- correlated.
-
- Constraints:
- - `x19` should be less than `x20`.
- - `x6` and `x15` should sum to no more than 1.0.
+ ## Objectives
+
+ **Minimize `y1`, `y2`, `y3`, and `y4`**
+
+ ### Correlations
+
+ - `y1` and `y2` are correlated
+ - `y1` is anticorrelated with `y3`
+ - `y2` is anticorrelated with `y3`
+
+ ### Noise
+
+ `y1`, `y2`, and `y3` are stochastic with heteroskedastic, parameter-free
+ noise, whereas `y4` is deterministic, but still considered 'black-box'. In
+ other words, repeat calls with the same input arguments will result in
+ different values for `y1`, `y2`, and `y3`, but the same value for `y4`.
+
+ ### Objective thresholds
+
+ If `y1` is greater than 0.2, the result is considered "bad" no matter how
+ good the other values are. If `y2` is greater than 0.7, the result is
+ considered "bad" no matter how good the other values are. If `y3` is greater
+ than 1800, the result is considered "bad" no matter how good the other
+ values are. If `y4` is greater than 40e6, the result is considered "bad" no
+ matter how good the other values are.
+
+ ## Search Space
+
+ ### Fidelity
+
+ `fidelity1` is a fidelity parameter. The lowest fidelity is 0, and the
+ highest fidelity is 1. The higher the fidelity, the more expensive the
+ evaluation, and the higher the quality.
+
+ NOTE: `fidelity1` and `y3` are correlated.
+
+ ### Constraints
+
+ - x<sub>19</sub> < x<sub>20</sub>
+ - x<sub>6</sub> + x<sub>15</sub> ≤ 1.0
+
+ ### Parameter bounds
+
+ - 0 ≤ x<sub>i</sub> ≤ 1 for i ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+ 14, 15, 16, 17, 18, 19, 20}
+ - c<sub>1</sub> ∈ {c1_0, c1_1}
+ - c<sub>2</sub> ∈ {c2_0, c2_1}
+ - c<sub>3</sub> ∈ {c3_0, c3_1, c3_2}
+ - 0 ≤ fidelity1 ≤ 1
+
+ ## Notion of best
+
+ Thresholded Pareto front hypervolume vs. running cost for three different
+ budgets, and averaged over 10 search campaigns.
+
+ References:
+
+ (1) Baird, S. G.; Liu, M.; Sparks, T. D. High-Dimensional Bayesian
+ Optimization of 23 Hyperparameters over 100 Iterations for an
+ Attention-Based Network to Predict Materials Property: A Case Study on
+ CrabNet Using Ax Platform and SAASBO. Computational Materials Science
+ 2022, 211, 111505. https://doi.org/10.1016/j.commatsci.2022.111505.
+ (2) Baird, S. G.; Parikh, J. N.; Sparks, T. D. Materials Science
+ Optimization Benchmark Dataset for High-Dimensional, Multi-Objective,
+ Multi-Fidelity Optimization of CrabNet Hyperparameters. ChemRxiv March
+ 7, 2023. https://doi.org/10.26434/chemrxiv-2023-9s6r7.
   """,
 )
 iface.launch()
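The constraints and objective thresholds in the new description can be sketched as plain Python checks. This is an illustrative sketch only, not part of `app.py`: the helper names `is_feasible` and `is_bad` are hypothetical, and the dict-based input format is an assumption.

```python
# Illustrative sketch of the rules in the description above.
# `is_feasible` and `is_bad` are hypothetical helper names.

def is_feasible(x: dict) -> bool:
    """Check the two input constraints: x19 < x20 and x6 + x15 <= 1.0."""
    return x["x19"] < x["x20"] and x["x6"] + x["x15"] <= 1.0

def is_bad(y: dict) -> bool:
    """A result is 'bad' if ANY objective exceeds its threshold."""
    thresholds = {"y1": 0.2, "y2": 0.7, "y3": 1800.0, "y4": 40e6}
    return any(y[name] > limit for name, limit in thresholds.items())

# Example: a feasible input, and a result that violates only the y3 threshold.
x = {"x6": 0.3, "x15": 0.5, "x19": 0.1, "x20": 0.9}
print(is_feasible(x))  # True: x19 < x20 and x6 + x15 = 0.8 <= 1.0
y = {"y1": 0.1, "y2": 0.5, "y3": 2000.0, "y4": 1e6}
print(is_bad(y))       # True: y3 = 2000 > 1800
```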