# Active Learning For Level Set Estimation Using Randomized Straddle Algorithms

Anonymous authors
Paper under double-blind review

## Abstract
Level set estimation (LSE), the problem of identifying the set of input points where a function takes a value above (or below) a given threshold, is important in practical applications. When the function is expensive to evaluate and black-box, the straddle algorithm, a representative heuristic for LSE based on Gaussian process models, and its extensions with theoretical guarantees have been developed. However, many existing methods include a confidence parameter, $\beta_t^{1/2}$, that must be specified by the user. Methods that choose $\beta_t^{1/2}$ heuristically do not provide theoretical guarantees. In contrast, theoretically guaranteed values of $\beta_t^{1/2}$ must be increased depending on the number of iterations and candidate points; they are conservative and do not perform well in practice. In this study, we propose a novel method, the randomized straddle algorithm, in which $\beta_t$ in the straddle algorithm is replaced by a random sample from the chi-squared distribution with two degrees of freedom. The confidence parameter in the proposed method does not require adjustment, does not depend on the number of iterations and candidate points, and is not conservative. Furthermore, we show that the proposed method has theoretical guarantees that depend on the sample complexity and the number of iterations. Finally, we validate the applicability of the proposed method through numerical experiments using synthetic and real data.
## 1 Introduction

In various practical applications, including engineering, level set estimation (LSE), the estimation of the region where the value of a function is above (or below) a given threshold $\theta$, is important. A specific example of LSE is the estimation of defective regions in materials for quality control. For instance, in silicon ingots, which are used in solar cells, the carrier lifetime value, a measure of the ingot's quality, is observed at each point on the ingot's surface before shipping, allowing identification of regions that can or cannot be used as solar cells. Since many functions encountered in practical applications, such as the carrier lifetime in the silicon ingot example, are black-box functions with high evaluation costs, it is desirable to identify the desired region without performing an exhaustive search of these black-box functions.

Bayesian optimization (BO) (Shahriari et al., 2015) is a powerful tool for optimizing black-box functions with high evaluation costs. BO predicts black-box functions using surrogate models and adaptively observes the function values based on criteria called acquisition functions (AFs). Many studies have focused on BO, particularly on developing new AFs. Among these, BO based on the AF known as the Gaussian process upper confidence bound (GP-UCB) (Srinivas et al., 2010) offers a theoretical guarantee for finding the optimal solution and is a useful method that is flexible and extendable to various problem settings. GP-UCB-based methods have been proposed in various settings, such as the LSE algorithm (Gotovos et al., 2013), multi-fidelity BO (Kandasamy et al., 2016; 2017), multi-objective BO (Zuluaga et al., 2016; Inatsu et al., 2024), high-dimensional BO (Kandasamy et al., 2015; Rolland et al., 2018), parallel BO (Contal et al., 2013), cascade BO (Kusakawa et al., 2022), and robust BO (Kirschner et al., 2020). These GP-UCB-based methods, like the original GP-UCB-based BO, provide some theoretical guarantee of optimality in each problem setting. However, GP-UCB and its related methods require the user to specify a confidence parameter, $\beta_t^{1/2}$, to adjust the trade-off between exploration and exploitation, where $t$ is the number of iterations in BO. As a theoretical value for GP-UCB, Srinivas et al. (2010) propose that $\beta_t^{1/2}$ should increase with the iteration $t$, but this value is conservative, and Takeno et al. (2023) have pointed out that it results in poor practical performance. Recently, however, Takeno et al. (2023) proposed IRGP-UCB, an AF that randomizes $\beta_t$ in GP-UCB by replacing it with a random sample from a two-parameter exponential distribution. IRGP-UCB does not require parameter tuning, and the realized values from the exponential distribution are less conservative than the theoretical values in GP-UCB, resulting in better practical performance. Furthermore, it has been shown that IRGP-UCB provides a tighter bound for the Bayesian regret, one of the optimality measures in BO, than existing methods. However, it is not clear whether IRGP-UCB can be extended to various methods, including LSE. This study proposes a new method for LSE based on the randomization used in IRGP-UCB.
## 1.1 Related Work

GPs (Rasmussen & Williams, 2005) are often used as surrogate models in BO, and methods using GPs for LSE have also been proposed. A representative heuristic using GPs is the straddle heuristic by Bryan et al. (2005). The straddle method balances the trade-off between the absolute value of the difference between the GP model's predicted mean and the threshold value, and the uncertainty of the prediction. However, no theoretical analysis has been performed on this method. An extension of the straddle heuristic to cases where the black-box function is a composite function was proposed by Bryan & Schneider (2008), but this too is a heuristic method that lacks theoretical analysis.

As a GP-UCB-based method using GPs, Gotovos et al. (2013) proposed the LSE algorithm. The LSE algorithm uses the same confidence parameter $\beta_t^{1/2}$ as GP-UCB and is based on the degree of violation from the threshold relative to the confidence interval determined by the GP prediction model. It has been shown that the LSE algorithm returns an $\epsilon$-accurate solution for the true set with high probability. Bogunovic et al. (2016) proposed the truncated variance reduction (TRUVAR) method, which can handle both BO and LSE. TRUVAR also accounts for situations where the observation cost varies across observation points and is designed to maximize the reduction in uncertainty in the uncertain set for each observation point per unit cost. Additionally, Shekhar & Javidi (2019) proposed a chaining-based method, which handles the case where the input space is continuous. As an expected improvement-based method, Zanette et al. (2019) proposed the maximum improvement for level-set estimation (MILE) method. MILE is an algorithm that selects the input point with the highest expected number of points estimated to be in the super-level set, one step ahead, based on data observation.

LSE methods have also been proposed for different settings of black-box functions. For example, Letham et al. (2022) introduced a method for cases where the observation of the black-box function is binary. In the robust BO framework, where the inputs of black-box functions are subject to uncertainty, LSE methods for various robust measures have been developed. Iwazaki et al. (2020) proposed LSE for probability threshold robustness measures, and Inatsu et al. (2021) introduced LSE for distributionally robust probability threshold robustness measures, both of which are acquisition functions based on MILE. Additionally, Hozumi et al. (2023) proposed a straddle-based method within the framework of transfer learning, where a large amount of data for similar functions is available alongside the primary black-box function to be classified. Inatsu et al. (2020) introduced a MILE-based method for the LSE problem in settings where the uncertainty of the input changes depending on the cost. Mason et al. (2022) addressed the LSE problem in the context where the black-box function is an element of a reproducing kernel Hilbert space.

The straddle method, LSE algorithm, TRUVAR, chaining-based algorithm, and MILE, which have been proposed under settings similar to those considered in this study, have the following issues. The straddle method is not an acquisition function proposed based on GP-UCB, but it includes the confidence parameter $\beta_t^{1/2}$, which is essentially the same as in GP-UCB. However, the value of this parameter is determined heuristically, resulting in a method without theoretical guarantees. The LSE algorithm and TRUVAR have been theoretically analyzed, but, like GP-UCB, they require increasing the theoretical value of the confidence parameter according to the iteration $t$, which makes them conservative. The chaining-based algorithm can handle continuous spaces through discretization, but it involves many adjustment parameters. The recommended theoretical values depend on model parameters, including kernel parameters of the surrogate model, and are known only for specific settings. MILE is designed for cases with a finite number of candidate points and does not support continuous settings like the chaining-based algorithm.

![2_image_0.png](2_image_0.png)

Figure 1: Comparison of the confidence parameter $\beta_t^{1/2}$ in the randomized straddle and LSE algorithms. The left-hand side figure shows the histogram of $\beta_t^{1/2}$ when $\beta_t$ is sampled 1,000,000 times from the chi-squared distribution with two degrees of freedom. The red line in the center and right figures denotes $\mathbb{E}[\beta_t^{1/2}] = \sqrt{2\pi}/2 \approx 1.25$, the shaded area denotes the 95% confidence interval of $\beta_t^{1/2}$, and the black line denotes the theoretical value of $\beta_t^{1/2}$ in the LSE algorithm given by $\beta_t^{1/2} = \sqrt{2\log(|\mathcal{X}|\pi^2 t^2/(6\delta))}$, where $\delta = 0.05$. The figure in the center shows the behavior of $\beta_t^{1/2}$ as the number of iterations $t$ increases when the number of candidate points $|\mathcal{X}|$ is fixed at 1000, whereas the figure on the right shows the behavior of $\beta_t^{1/2}$ as the number of candidate points $|\mathcal{X}|$ increases when the number of iterations $t$ is fixed at 100.
## 1.2 Contribution

This study proposes a novel straddle AF called the *randomized straddle*, which introduces the confidence parameter randomization technique used in IRGP-UCB and solves the problems described in Section 1.1. Figure 1 shows a comparison of the confidence parameters in the proposed AF and those in the LSE algorithm. The contributions of this study are as follows:

- This study proposes a randomized straddle AF, which replaces $\beta_t$ in the straddle heuristic with a random sample from the chi-squared distribution with two degrees of freedom. We emphasize that, unlike the LSE algorithm, the confidence parameter in the randomized straddle does not need to increase with the iteration $t$. Additionally, $\beta_t^{1/2}$ in the LSE algorithm depends on the number of candidate points $|\mathcal{X}|$, and $\beta_t^{1/2}$ increases as $|\mathcal{X}|$ increases, while $\beta_t^{1/2}$ in the randomized straddle does not depend on $|\mathcal{X}|$ and can be applied even when $\mathcal{X}$ is an infinite set. Furthermore, the expected value of the realized value of $\beta_t^{1/2}$ in the randomized straddle is $\sqrt{2\pi}/2 \approx 1.25$, which is less conservative than the theoretical value in the LSE algorithm (a short numerical check of this value is sketched after this list).

- We show that the randomized straddle guarantees that the expected loss for misclassification in LSE converges to 0. In particular, for the misclassification loss $r_t = \frac{1}{|\mathcal{X}|}\sum_{\boldsymbol{x}\in\mathcal{X}} l_t(\boldsymbol{x})$, the randomized straddle guarantees $\mathbb{E}[r_t] = O(\sqrt{\gamma_t/t})$, where $l_t(\boldsymbol{x})$ is 0 if the input point $\boldsymbol{x}$ is correctly classified and $|f(\boldsymbol{x}) - \theta|$ if misclassified, and $\gamma_t$ is the maximum information gain, which is a commonly used sample complexity measure.

- Additionally, we conducted numerical experiments using synthetic and real data, which confirmed that the proposed method has performance equal to or better than existing methods.
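As a quick numerical check (not part of the original experiments), the value $\mathbb{E}[\beta_t^{1/2}] = \sqrt{2\pi}/2$ can be reproduced by direct sampling from the chi-squared distribution with two degrees of freedom, mirroring the histogram in Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = rng.chisquare(df=2, size=1_000_000)   # beta_t ~ chi-squared with 2 degrees of freedom
print(np.sqrt(beta).mean())                  # approximately sqrt(2*pi)/2 ~ 1.2533
print(np.sqrt(2 * np.pi) / 2)                # exact value for comparison
```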
## 2 Preliminary

Let $f: \mathcal{X} \to \mathbb{R}$ be an expensive-to-evaluate black-box function, where $\mathcal{X} \subset \mathbb{R}^d$ is a finite set, or an infinite compact set with positive Lebesgue measure $\mathrm{Vol}(\mathcal{X})$. Also let $\theta \in \mathbb{R}$ be a known threshold given by the user.

The aim of this study is to efficiently identify subsets $H^*$ and $L^*$ of $\mathcal{X}$ defined as $H^* = \{\boldsymbol{x} \in \mathcal{X} \mid f(\boldsymbol{x}) \geq \theta\}$ and $L^* = \{\boldsymbol{x} \in \mathcal{X} \mid f(\boldsymbol{x}) < \theta\}$.

For each iteration $t \geq 1$, we can query $\boldsymbol{x}_t \in \mathcal{X}$, and $f(\boldsymbol{x}_t)$ is observed with noise as $y_t = f(\boldsymbol{x}_t) + \varepsilon_t$, where $\varepsilon_t$ follows the normal distribution with mean 0 and variance $\sigma^2_{\mathrm{noise}}$. In this study, we assume that $f$ is a sample path from a GP $\mathcal{GP}(0, k)$, where $\mathcal{GP}(0, k)$ is the zero-mean GP with a kernel function $k(\cdot, \cdot)$. Moreover, we assume that $k(\cdot, \cdot)$ is a positive-definite kernel that satisfies $k(\boldsymbol{x}, \boldsymbol{x}) \leq 1$ for all $\boldsymbol{x} \in \mathcal{X}$, and that $f, \varepsilon_1, \ldots, \varepsilon_t$ are mutually independent.

**Gaussian Process Model** We use a GP surrogate model $\mathcal{GP}(0, k)$ for the black-box function. Given a dataset $\mathcal{D}_t = \{(\boldsymbol{x}_j, y_j)\}_{j=1}^t$, where $t \geq 1$ is the number of iterations, the posterior distribution of $f$ is again a GP. Then, its posterior mean $\mu_t(\boldsymbol{x})$ and posterior variance $\sigma^2_t(\boldsymbol{x})$ can be calculated as:

$$\begin{aligned}
\mu_{t}(\boldsymbol{x}) &= \boldsymbol{k}_{t}(\boldsymbol{x})^{\top}(\boldsymbol{K}_{t}+\sigma_{\mathrm{noise}}^{2}\boldsymbol{I}_{t})^{-1}\boldsymbol{y}_{t},\\
\sigma_{t}^{2}(\boldsymbol{x}) &= k(\boldsymbol{x},\boldsymbol{x})-\boldsymbol{k}_{t}(\boldsymbol{x})^{\top}(\boldsymbol{K}_{t}+\sigma_{\mathrm{noise}}^{2}\boldsymbol{I}_{t})^{-1}\boldsymbol{k}_{t}(\boldsymbol{x}),
\end{aligned}\tag{1}$$

where $\boldsymbol{k}_t(\boldsymbol{x})$ is the $t$-dimensional vector whose $i$-th element is $k(\boldsymbol{x}, \boldsymbol{x}_i)$, $\boldsymbol{y}_t = (y_1, \ldots, y_t)^{\top}$, $\boldsymbol{K}_t$ is the $t \times t$ matrix whose $(j, k)$-th element is $k(\boldsymbol{x}_j, \boldsymbol{x}_k)$, $\boldsymbol{I}_t$ is the $t \times t$ identity matrix, and the superscript $\top$ indicates the transpose of vectors or matrices. In addition, we define $\mathcal{D}_0 = \emptyset$, $\mu_0(\boldsymbol{x}) = 0$ and $\sigma^2_0(\boldsymbol{x}) = k(\boldsymbol{x}, \boldsymbol{x})$.
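The posterior update in equation 1 can be computed directly with standard linear-algebra routines. The following minimal sketch assumes a NumPy environment; the function names and the Gaussian-kernel hyperparameters are placeholders, not part of the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma_f2=1.0, L=2.0):
    """Gaussian kernel k(x, x') = sigma_f^2 * exp(-||x - x'||^2 / L)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f2 * np.exp(-sq / L)

def gp_posterior(X_train, y_train, X_test, sigma_noise2=1e-6):
    """Posterior mean and variance from equation 1."""
    K = rbf_kernel(X_train, X_train)
    k_star = rbf_kernel(X_train, X_test)          # k_t(x) for each test point
    A = K + sigma_noise2 * np.eye(len(X_train))   # K_t + sigma_noise^2 * I_t
    alpha = np.linalg.solve(A, y_train)
    mu = k_star.T @ alpha
    v = np.linalg.solve(A, k_star)
    var = rbf_kernel(X_test, X_test).diagonal() - np.einsum("ij,ij->j", k_star, v)
    return mu, np.maximum(var, 0.0)
```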
## 3 Proposed Method

In this section, we describe a method for estimating $H^*$ and $L^*$ based on the GP posterior and an AF for determining the next evaluation.

## 3.1 Level Set Estimation

First, we propose a method to estimate $H^*$ and $L^*$. While an existing study (Gotovos et al., 2013) proposes an estimation method using the lower and upper bounds of a credible interval of $f(\boldsymbol{x})$, this study proposes an estimation method using the posterior mean instead of the credible interval.

Definition 3.1 (Level Set Estimation). For each $t \geq 1$, we estimate $H^*$ and $L^*$ as:

$$H_t = \{\boldsymbol{x} \in \mathcal{X} \mid \mu_{t-1}(\boldsymbol{x}) \geq \theta\}, \quad L_t = \{\boldsymbol{x} \in \mathcal{X} \mid \mu_{t-1}(\boldsymbol{x}) < \theta\}. \tag{2}$$

By Definition 3.1, any $\boldsymbol{x} \in \mathcal{X}$ belongs to either $H_t$ or $L_t$, and $H_t \cup L_t = \mathcal{X}$. Therefore, unlike the existing study (Gotovos et al., 2013), this study does not define an unknown set.
## 3.2 Acquisition Function

In this section, we propose an AF for determining the next point to be evaluated. For each $t \geq 1$ and $\boldsymbol{x} \in \mathcal{X}$, we define the upper bound $\mathrm{ucb}_{t-1}(\boldsymbol{x})$ and lower bound $\mathrm{lcb}_{t-1}(\boldsymbol{x})$ of the credible interval of $f(\boldsymbol{x})$ as

$$\mathrm{ucb}_{t-1}(\boldsymbol{x}) = \mu_{t-1}(\boldsymbol{x}) + \beta_t^{1/2}\sigma_{t-1}(\boldsymbol{x}), \quad \mathrm{lcb}_{t-1}(\boldsymbol{x}) = \mu_{t-1}(\boldsymbol{x}) - \beta_t^{1/2}\sigma_{t-1}(\boldsymbol{x}),$$

where $\beta_t^{1/2} \geq 0$ is a user-specified confidence parameter. Here, the straddle heuristic $\mathrm{STR}_{t-1}(\boldsymbol{x})$ proposed by Bryan et al. (2005) is defined as:

$$\mathrm{STR}_{t-1}(\boldsymbol{x})=\beta_{t}^{1/2}\sigma_{t-1}(\boldsymbol{x})-|\mu_{t-1}(\boldsymbol{x})-\theta|.$$

Thus, by using $\mathrm{ucb}_{t-1}(\boldsymbol{x})$ and $\mathrm{lcb}_{t-1}(\boldsymbol{x})$, $\mathrm{STR}_{t-1}(\boldsymbol{x})$ can be rewritten as $\mathrm{STR}_{t-1}(\boldsymbol{x}) = \min\{\mathrm{ucb}_{t-1}(\boldsymbol{x}) - \theta, \theta - \mathrm{lcb}_{t-1}(\boldsymbol{x})\}$.

We consider sampling $\beta_t$ of the straddle heuristic from a probability distribution. In the framework of black-box function maximization, Takeno et al. (2023) use a sample from a two-parameter exponential distribution as the confidence parameter of the original GP-UCB. The two-parameter exponential distribution considered by Takeno et al. (2023) can be expressed as $2\log(|\mathcal{X}|/2) + s_t$, where $s_t$ follows the chi-squared distribution with two degrees of freedom. Therefore, we use a similar argument and consider $\beta_t$ of the straddle heuristic as a sample from the chi-squared distribution with two degrees of freedom, and propose the following randomized straddle AF.

Definition 3.2 (Randomized Straddle). For each $t \geq 1$, let $\beta_t$ be a sample from the chi-squared distribution with two degrees of freedom, where $\beta_1, \ldots, \beta_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Then, the randomized straddle $a_{t-1}(\boldsymbol{x})$ is defined as follows:

$$a_{t-1}(\boldsymbol{x})=\max\{\min\{\mathrm{ucb}_{t-1}(\boldsymbol{x})-\theta,\theta-\mathrm{lcb}_{t-1}(\boldsymbol{x})\},0\}.\tag{3}$$

Hence, using $a_{t-1}(\boldsymbol{x})$, the next point to be evaluated is selected by $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} a_{t-1}(\boldsymbol{x})$. Takeno et al. (2023) add a constant $2\log(|\mathcal{X}|/2)$, which depends on the number of elements in $\mathcal{X}$, to the sample from the chi-squared distribution with two degrees of freedom. In contrast, the random sample proposed in this study does not require the addition of such a constant. As a result, the confidence parameter in the randomized straddle does not depend on the number of iterations $t$ or the number of candidate points.

The only difference between the straddle heuristic $\mathrm{STR}_{t-1}(\boldsymbol{x})$ and equation 3 is that $\beta_t^{1/2}$ is randomized and that equation 3 performs a max operation with 0. We describe in Section 4 how this modification leads to theoretical guarantees. Finally, we give the pseudocode of the proposed algorithm in Algorithm 1.

**Algorithm 1** Active Learning for Level Set Estimation Using Randomized Straddle Algorithms

- Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$
- for $t = 1, 2, \ldots, T$ do
  - Compute $\mu_{t-1}(\boldsymbol{x})$ and $\sigma^2_{t-1}(\boldsymbol{x})$ for each $\boldsymbol{x} \in \mathcal{X}$ by equation 1
  - Estimate $H_t$ and $L_t$ by equation 2
  - Generate $\beta_t$ from the chi-squared distribution with two degrees of freedom
  - Compute $\mathrm{ucb}_{t-1}(\boldsymbol{x})$, $\mathrm{lcb}_{t-1}(\boldsymbol{x})$ and $a_{t-1}(\boldsymbol{x})$
  - Select the next evaluation point $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} a_{t-1}(\boldsymbol{x})$
  - Observe $y_t = f(\boldsymbol{x}_t) + \varepsilon_t$ at the point $\boldsymbol{x}_t$
  - Update the GP by adding the observed data
- end for
- Output: Return $H_T$ and $L_T$ as the estimated sets
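As a rough illustration of Definition 3.2 and one iteration of Algorithm 1 on a finite candidate set, a sketch is given below. It reuses the hypothetical `gp_posterior` helper from Section 2; it is illustrative only and is not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_straddle_step(X_cand, X_train, y_train, theta):
    """One iteration of the randomized straddle on a finite candidate set."""
    mu, var = gp_posterior(X_train, y_train, X_cand)   # equation 1
    H_t = X_cand[mu >= theta]                          # level set estimate, equation 2
    beta_t = rng.chisquare(df=2)                       # beta_t ~ chi-squared with 2 dof
    half_width = np.sqrt(beta_t) * np.sqrt(var)
    ucb, lcb = mu + half_width, mu - half_width
    acq = np.maximum(np.minimum(ucb - theta, theta - lcb), 0.0)  # equation 3
    return X_cand[np.argmax(acq)], H_t
```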
## 4 Theoretical Analysis

In this section, we give theoretical guarantees for the proposed method. First, we define the loss $l_t(\boldsymbol{x})$ for each $\boldsymbol{x} \in \mathcal{X}$ and $t \geq 1$ as

$$l_{t}(\boldsymbol{x})=\begin{cases}0 & \text{if } \boldsymbol{x}\in H^{*},\ \boldsymbol{x}\in H_{t},\\ 0 & \text{if } \boldsymbol{x}\in L^{*},\ \boldsymbol{x}\in L_{t},\\ f(\boldsymbol{x})-\theta & \text{if } \boldsymbol{x}\in H^{*},\ \boldsymbol{x}\in L_{t},\\ \theta-f(\boldsymbol{x}) & \text{if } \boldsymbol{x}\in L^{*},\ \boldsymbol{x}\in H_{t}.\end{cases}$$

Then, the loss $r(H_t, L_t)$ for the estimated sets $H_t$ and $L_t$ is defined as¹:

$$r(H_{t},L_{t})=\begin{cases}\frac{1}{|\mathcal{X}|}\sum_{\boldsymbol{x}\in\mathcal{X}}l_{t}(\boldsymbol{x}) & \text{if } \mathcal{X} \text{ is finite},\\ \frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}l_{t}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} & \text{if } \mathcal{X} \text{ is infinite},\end{cases}\ \equiv r_{t}.$$

We also define the cumulative loss as $R_t = \sum_{i=1}^t r_i$. Let $\gamma_t$ be the maximum information gain, which is one of the indicators for measuring the sample complexity. The maximum information gain $\gamma_t$ is often used in the theoretical analysis of BO and LSE using GPs (Srinivas et al., 2010; Gotovos et al., 2013), and $\gamma_t$ is given by

$$\gamma_{t}=\frac{1}{2}\max_{\tilde{\boldsymbol{x}}_{1},\ldots,\tilde{\boldsymbol{x}}_{t}}\log\det(\boldsymbol{I}_{t}+\sigma_{\mathrm{noise}}^{-2}\tilde{\boldsymbol{K}}_{t}),$$

where $\tilde{\boldsymbol{K}}_t$ is the $t \times t$ matrix whose $(j, k)$-th element is $k(\tilde{\boldsymbol{x}}_j, \tilde{\boldsymbol{x}}_k)$. Then, the following theorem holds.

Theorem 4.1. Assume that $f$ follows $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\boldsymbol{x}, \boldsymbol{x}) \leq 1$ for any $\boldsymbol{x} \in \mathcal{X}$. For each $t \geq 1$, let $\beta_t$ be a sample from the chi-squared distribution with two degrees of freedom, where $\beta_1, \ldots, \beta_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Then, the following inequality holds:

$$\mathbb{E}[R_{t}]\leq\sqrt{C_{1}t\gamma_{t}},$$

where $C_1 = 4/\log(1 + \sigma_{\mathrm{noise}}^{-2})$, and the expectation is taken over all randomness, including $f$, $\varepsilon_t$ and $\beta_t$.

From Theorem 4.1, the following theorem holds.

Theorem 4.2. Under the assumptions of Theorem 4.1, the following inequality holds:

$$\mathbb{E}[r_{t}]\leq\sqrt{\frac{C_{1}\gamma_{t}}{t}},$$

where $C_1$ is given in Theorem 4.1.

By the definition of the loss $l_t(\boldsymbol{x})$, $l_t(\boldsymbol{x})$ represents how far $f(\boldsymbol{x})$ is from the threshold when $\boldsymbol{x}$ is misclassified, and $r_t$ represents the average value of $l_t(\boldsymbol{x})$ across all candidate points. Under mild assumptions, it is known that $\gamma_t$ is sublinear (Srinivas et al., 2010). Therefore, by Theorem 4.1, it is guaranteed that $R_t$ is also sublinear in the expected value sense. Furthermore, by Theorem 4.2, it is guaranteed that $r_t$ converges to 0 in the expected value sense. Finally, it is challenging to directly compare the proposed method with GP-based methods such as the LSE algorithm and TRUVAR in terms of theoretical analysis. This difficulty arises because, first, the proposed method and these methods use different approaches to estimate $H^*$ and $L^*$, and second, the criteria for evaluating the quality of the estimated sets differ. However, it is important to note that the proposed method has theoretical guarantees, and the confidence parameter $\beta_t^{1/2}$ does not depend on the number of iterations $t$ or the input space $\mathcal{X}$, making it applicable whether $\mathcal{X}$ is finite or infinite. Additionally, since $\mathbb{E}[\beta_t^{1/2}] = \sqrt{2\pi}/2 \approx 1.25$, the realized values of $\beta_t^{1/2}$ are not conservative. To the best of our knowledge, no existing method satisfies all of these properties. Moreover, we confirm in Section 5 that the practical performance of the proposed method is equal to or better than that of existing methods.
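To make the quantities above concrete, the sketch below computes the misclassification loss $r_t$ on a finite candidate set and the information-gain term $\frac{1}{2}\log\det(I_t + \sigma_{\mathrm{noise}}^{-2}K_t)$ for the points actually queried; computing the exact maximum $\gamma_t$ over all point sets is generally intractable and is not attempted here. This is an illustrative sketch, not part of the paper.

```python
import numpy as np

def misclassification_loss(f_vals, mu_vals, theta):
    """r_t = (1/|X|) * sum of l_t(x) over a finite candidate set."""
    in_H_star = f_vals >= theta          # true super-level set membership
    in_H_t = mu_vals >= theta            # estimated super-level set (equation 2)
    misclassified = in_H_star != in_H_t
    return np.mean(np.where(misclassified, np.abs(f_vals - theta), 0.0))

def information_gain(K_queried, sigma_noise2=1e-6):
    """(1/2) log det(I + sigma^-2 K) for the queried points (a lower bound on gamma_t)."""
    t = K_queried.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(t) + K_queried / sigma_noise2)
    return 0.5 * logdet
```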
## 5 Numerical Experiments

We confirm the practical performance of the proposed method using synthetic functions and real-world data.

## 5.1 Synthetic Data Experiments When X Is Finite

In this section, the input space $\mathcal{X}$ was defined as a set of grid points that uniformly cut the region $[l_1, u_1] \times [l_2, u_2]$ into $50 \times 50$. In all experiments, we used the following Gaussian kernel:

$$k(\boldsymbol{x},\boldsymbol{x}^{\prime})=\sigma_{f}^{2}\exp\left(-\frac{\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\|_{2}^{2}}{L}\right).$$

¹ The discussion of the case where the loss is defined based on the maximum value $r(H_t, L_t) = \max_{\boldsymbol{x}\in\mathcal{X}} l_t(\boldsymbol{x})$ is given in Appendix A.

As black-box functions, we considered the following three synthetic functions:

Case 1 The black-box function $f(x_1, x_2)$ is a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is given by $k(\boldsymbol{x}, \boldsymbol{x}') = \exp(-\|\boldsymbol{x} - \boldsymbol{x}'\|_2^2/2)$.

Case 2 The black-box function $f(x_1, x_2)$ is the following sinusoidal function: $f(x_1, x_2) = \sin(10x_1) + \cos(4x_2) - \cos(3x_1 x_2)$.

Case 3 The black-box function $f(x_1, x_2)$ is the following shifted negative Himmelblau function: $f(x_1, x_2) = -(x_1^2 + x_2 - 11)^2 - (x_1 + x_2^2 - 7)^2 + 100$.

Furthermore, we used the normal distribution with mean 0 and variance $\sigma^2_{\mathrm{noise}}$ for the observation noise.

The threshold $\theta$ and the parameters used for each setting are summarized in Table 1. The settings for the sinusoidal and Himmelblau functions are the same as those used in Zanette et al. (2019). The performance was evaluated using the loss $r_t$ and $\mathrm{Fscore}_t$, where $\mathrm{Fscore}_t$ is the F-score calculated by

$$\mathrm{Pre}_{t}=\frac{|H_{t}\cap H^{*}|}{|H_{t}|},\quad\mathrm{Rec}_{t}=\frac{|H_{t}\cap H^{*}|}{|H^{*}|},\quad\mathrm{Fscore}_{t}=\frac{2\times\mathrm{Pre}_{t}\times\mathrm{Rec}_{t}}{\mathrm{Pre}_{t}+\mathrm{Rec}_{t}}.$$
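As an illustration of the Case 3 function and the F-score above, a minimal sketch (not the authors' experiment code) might look as follows.

```python
import numpy as np

def himmelblau_shifted(x1, x2):
    """Case 3: shifted negative Himmelblau function."""
    return -(x1**2 + x2 - 11)**2 - (x1 + x2**2 - 7)**2 + 100.0

def f_score(mu_vals, f_vals, theta):
    """Precision, recall, and F-score of the estimated super-level set H_t."""
    H_t = mu_vals >= theta
    H_star = f_vals >= theta
    tp = np.sum(H_t & H_star)
    pre = tp / max(np.sum(H_t), 1)
    rec = tp / max(np.sum(H_star), 1)
    return 2 * pre * rec / (pre + rec) if (pre + rec) > 0 else 0.0

# 50 x 50 grid over [-5, 5]^2, as in the Case 3 setting
g = np.linspace(-5, 5, 50)
X1, X2 = np.meshgrid(g, g)
F = himmelblau_shifted(X1, X2)
```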
Then, we compared the following six AFs:

(Random) Select $\boldsymbol{x}_t$ by using random sampling.

(US) Perform uncertainty sampling, that is, $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} \sigma^2_{t-1}(\boldsymbol{x})$.

(Straddle) Perform the straddle heuristic proposed by Bryan et al. (2005), that is, $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} \mathrm{STR}_{t-1}(\boldsymbol{x})$.

(LSE) Perform the LSE algorithm using the LSE AF $a^{(\mathrm{LSE})}_{t-1}(\boldsymbol{x})$ proposed by Gotovos et al. (2013), that is, $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} a^{(\mathrm{LSE})}_{t-1}(\boldsymbol{x})$.

(MILE) Perform the MILE algorithm proposed by Zanette et al. (2019), that is, $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} a^{(\mathrm{MILE})}_{t-1}(\boldsymbol{x})$, where $a^{(\mathrm{MILE})}_{t-1}(\boldsymbol{x})$ is the same as the robust MILE, another AF proposed by Zanette et al. (2019), with the tuning parameters $\epsilon$ and $\gamma$ set to 0 and $-\infty$, respectively.

(Proposed) Select $\boldsymbol{x}_t$ by using equation 3, that is, $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x} \in \mathcal{X}} a_{t-1}(\boldsymbol{x})$.
In all experiments, the classification rules were the same for all six methods, and only the AF was changed. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for MILE and Straddle, and $\beta_t^{1/2} = \sqrt{2\log(2500 \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Under this setup, one initial point was taken at random and the algorithm was run until the number of iterations reached 300. This simulation was repeated 100 times, and the average $r_t$ and $\mathrm{Fscore}_t$ at each iteration were calculated, where in Case 1, $f$ was generated for each simulation from $\mathcal{GP}(0, k)$.

As shown in Fig. 2, the proposed method consistently performs as well as or better than the comparison methods in all three cases, in terms of both the loss $r_t$ and the $\mathrm{Fscore}_t$.
Table 1: Experimental parameters for each setting in Section 5.1.

| Black-box function | $l_1$ | $u_1$ | $l_2$ | $u_2$ | $\sigma_f^2$ | $L$ | $\sigma_{\mathrm{noise}}^2$ | $\theta$ |
|---|---|---|---|---|---|---|---|---|
| GP sample path | −5 | 5 | −5 | 5 | 1 | 2 | $10^{-6}$ | 0.5 |
| Sinusoidal function | 0 | 1 | 0 | 2 | $\exp(2)$ | $2\exp(-3)$ | $\exp(-2)$ | 1 |
| Himmelblau's function | −5 | 5 | −5 | 5 | $\exp(8)$ | 2 | $\exp(4)$ | 0 |

![7_image_0.png](7_image_0.png)

Figure 2: Averages of the loss $r_t$ and $\mathrm{Fscore}_t$ for each AF over 100 simulations across different settings when the input space is finite. The top row shows $r_t$, and the bottom row shows $\mathrm{Fscore}_t$. Error bars represent six times the standard error.

## 5.2 Synthetic Data Experiments When X Is Infinite

In this section, we used the region $[-5, 5]^5 \subset \mathbb{R}^5$ as $\mathcal{X}$ and the same kernel as in Section 5.1. As black-box functions, we used the following three synthetic functions:

Case 1 The black-box function $f(x_1, x_2, x_3, x_4, x_5)$ is the following shifted negative sphere function:

$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=41.65518-\left(\sum_{d=1}^{5}x_{d}^{2}\right).$$

Case 2 The black-box function $f(x_1, x_2, x_3, x_4, x_5)$ is the following shifted negative Rosenbrock function:

$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=53458.91-\left[\sum_{d=1}^{4}\left\{100(x_{d+1}-x_{d}^{2})^{2}+(1-x_{d})^{2}\right\}\right].$$

Case 3 The black-box function $f(x_1, x_2, x_3, x_4, x_5)$ is the following shifted negative Styblinski-Tang function:

$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=-20.8875-\frac{\sum_{d=1}^{5}(x_{d}^{4}-16x_{d}^{2}+5x_{d})}{2}.$$

Additionally, we used the normal distribution with mean 0 and variance $\sigma^2_{\mathrm{noise}}$ for the observation noise. The threshold $\theta$ and parameters used for each setting are summarized in Table 2. The performance was evaluated using $r_t$ and $\mathrm{Fscore}_t$. For each simulation, 100,000 points were randomly selected from $[-5, 5]^5$ and used as the input point set $\tilde{\mathcal{X}}$ to calculate $r_t$ and $\mathrm{Fscore}_t$. The values of $r_t$ and $\mathrm{Fscore}_t$ on $\tilde{\mathcal{X}}$ were calculated as approximations of the true values. As AFs, we compared the five methods used in Section 5.1, except for MILE, which does not handle continuous settings. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for Straddle, and $\beta_t^{1/2} = \sqrt{2\log(10^{15} \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Here, the original LSE algorithm uses the intersection of $\mathrm{ucb}_{t-1}(\boldsymbol{x})$ and $\mathrm{lcb}_{t-1}(\boldsymbol{x})$ over the previous iterations, given below, to calculate the AF:

$$\tilde{\mathrm{ucb}}_{t-1}(\boldsymbol{x})=\min_{1\leq i\leq t}\mathrm{ucb}_{i-1}(\boldsymbol{x}),\quad\tilde{\mathrm{lcb}}_{t-1}(\boldsymbol{x})=\max_{1\leq i\leq t}\mathrm{lcb}_{i-1}(\boldsymbol{x}).$$

Conversely, we did not perform this operation in the infinite set setting, and instead calculated the AF using $\tilde{\mathrm{ucb}}_{t-1}(\boldsymbol{x}) = \mathrm{ucb}_{t-1}(\boldsymbol{x})$ and $\tilde{\mathrm{lcb}}_{t-1}(\boldsymbol{x}) = \mathrm{lcb}_{t-1}(\boldsymbol{x})$. Under this setup, one initial point was chosen at random and the algorithm was run for 500 iterations. This simulation was repeated 100 times, and the average $r_t$ and $\mathrm{Fscore}_t$ at each iteration were calculated.

Table 2: Experimental parameters for each setting in Section 5.2.

| Black-box function | $\sigma_f^2$ | $L$ | $\sigma_{\mathrm{noise}}^2$ | $\theta$ |
|---|---|---|---|---|
| Sphere | 900 | 40 | $10^{-6}$ | 9.6 |
| Rosenbrock | $30000^2$ | 40 | $10^{-6}$ | 14800 |
| Styblinski-Tang | $75^2$ | 40 | $10^{-6}$ | 12.3 |

From Fig. 3, it can be confirmed that the proposed method has performance equal to or better than the comparison methods in terms of both $r_t$ and $\mathrm{Fscore}_t$ in the sphere function setting. In the case of the Rosenbrock function setting, the proposed method exhibited performance equivalent to or better than the comparison methods in terms of $r_t$. Moreover, in terms of $\mathrm{Fscore}_t$, the Random method showed the best performance up to 250 iterations, but the proposed method matched or outperformed the comparison methods by the end of the iterations. In the Styblinski-Tang function setting, Random performed best in terms of $r_t$ and $\mathrm{Fscore}_t$ up to around 300 iterations, but the proposed method equaled or surpassed the comparison methods by the final iterations.
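For reference, the three 5-dimensional benchmark functions used in this section can be written compactly as follows (a sketch, vectorized over a batch of points; the sampling of the evaluation set mirrors the setup described above).

```python
import numpy as np

def sphere_shifted(X):
    """Case 1: shifted negative sphere function, X has shape (n, 5)."""
    return 41.65518 - np.sum(X**2, axis=1)

def rosenbrock_shifted(X):
    """Case 2: shifted negative Rosenbrock function."""
    term = 100.0 * (X[:, 1:] - X[:, :-1]**2)**2 + (1.0 - X[:, :-1])**2
    return 53458.91 - np.sum(term, axis=1)

def styblinski_tang_shifted(X):
    """Case 3: shifted negative Styblinski-Tang function."""
    return -20.8875 - 0.5 * np.sum(X**4 - 16.0 * X**2 + 5.0 * X, axis=1)

# Monte Carlo evaluation set, as in Section 5.2
X_eval = np.random.default_rng(0).uniform(-5.0, 5.0, size=(100_000, 5))
```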
## 5.3 Real-World Data Experiments

In this section, we conducted experiments using the carrier lifetime value, a measure of the quality performance of silicon ingots used in solar cells (Kutsukake et al., 2015). The data we used include the two-dimensional coordinates $\boldsymbol{x} = (x_1, x_2) \in \mathbb{R}^2$ of the sample surface and the carrier lifetime values $\tilde{f}(\boldsymbol{x}) \in [0.091587, 7.4613]$ at each coordinate, where $x_1 \in \{2a + 6 \mid 1 \leq a \leq 89\}$, $x_2 \in \{2a + 6 \mid 1 \leq a \leq 74\}$ and $|\mathcal{X}| = 89 \times 74 = 6586$. In quality evaluation, identifying defective regions, known as red zones, areas where the value of $\tilde{f}(\boldsymbol{x})$ falls below a certain threshold, is crucial. In this experiment, the threshold was set to 3, and we focused on identifying regions where $\tilde{f}(\boldsymbol{x})$ is 3 or less. We considered $f(\boldsymbol{x}) = -\tilde{f}(\boldsymbol{x}) + 3$ as the black-box function and performed experiments with $\theta = 0$. Additionally, the experiment was conducted assuming there was no noise in the observations. Moreover, to stabilize the posterior distribution calculation, $\sigma^2_{\mathrm{noise}} = 10^{-6}$ was used in the calculation. We used the following Matérn 3/2 kernel:

$$k(\boldsymbol{x},\boldsymbol{x}^{\prime})=4\left(1+\frac{\sqrt{3}\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\|_{2}}{25}\right)\exp\left(-\frac{\sqrt{3}\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\|_{2}}{25}\right).$$

The performance was evaluated using the loss $r_t$ and $\mathrm{Fscore}_t$. As AFs, we compared the six methods used in Section 5.1. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for MILE and Straddle, and $\beta_t^{1/2} = \sqrt{2\log(6586 \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Under this setup, one initial point was chosen at random and the algorithm was run for 200 iterations. Because the observation noise was set to 0, the experiment was conducted under the setting that a point that had been observed once would not be observed thereafter. This simulation was repeated 100 times, and the average $r_t$ and $\mathrm{Fscore}_t$ at each iteration were calculated.

As shown in Fig. 4, the proposed method demonstrates performance that is equal to or better than the comparison methods in terms of both the loss $r_t$ and $\mathrm{Fscore}_t$.

![9_image_0.png](9_image_0.png)

Figure 3: Averages of the loss $r_t$ and $\mathrm{Fscore}_t$ for each AF over 100 simulations for each setting when the input space is infinite. The top row shows $r_t$, the bottom row shows $\mathrm{Fscore}_t$, and each error bar length represents six times the standard error.

![9_image_1.png](9_image_1.png)

Figure 4: Averages of the loss $r_t$ and $\mathrm{Fscore}_t$ for each AF over 100 simulations using the carrier lifetime data. The left figure shows $r_t$, while the right figure shows $\mathrm{Fscore}_t$, with error bars representing six times the standard error.
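The Matérn 3/2 kernel used in this section (with the output scale 4 and length scale 25 stated above) can be transcribed directly; the sketch below is illustrative only.

```python
import numpy as np

def matern32_kernel(A, B, sigma_f2=4.0, length=25.0):
    """Matern 3/2 kernel used in the real-data experiment (Section 5.3)."""
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))  # Euclidean distances
    s = np.sqrt(3.0) * d / length
    return sigma_f2 * (1.0 + s) * np.exp(-s)
```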
## 6 Conclusion

In this study, we proposed a novel method called the randomized straddle algorithm, an extension of the straddle algorithm for LSE problems in black-box functions. The proposed method replaces the value of $\beta_t$ in the straddle algorithm with a random sample from the chi-squared distribution with two degrees of freedom, performing LSE based on the GP posterior mean. Through these modifications, we proved that the expected value of the loss of the estimated sets is $O(\sqrt{\gamma_t/t})$. Compared to existing methods, the proposed approach offers three key advantages. First, most theoretical analyses of existing methods involve confidence parameters that depend on the number of candidate points and iterations, whereas such terms are not present in the proposed method. Second, existing methods either do not apply to continuous search spaces or require discretization, with parameters for discretization often being unknown. In contrast, the proposed method is applicable to continuous search spaces without requiring algorithmic adjustments, providing the same theoretical guarantees as for finite search spaces. Third, while confidence parameters in existing methods tend to be overly conservative, the expected value of the confidence parameter in the proposed method is $\sqrt{2\pi}/2 \approx 1.25$, which is not excessively conservative. Furthermore, numerical experiments demonstrated that the performance of the proposed method is equal to or better than that of existing methods. This indicates that the proposed method performs comparably to heuristic methods while offering the added benefit of theoretical guarantees. However, it is important to develop methods that provide both theoretical guarantees and practical applicability for other evaluation measures of classification performance, such as $\mathrm{Fscore}_t$. Addressing these issues will be the focus of future work.
## References

Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, and Volkan Cevher. Truncated variance reduction: A unified approach to Bayesian optimization and level-set estimation. *Advances in Neural Information Processing Systems*, 29, 2016.

Brent Bryan and Jeff Schneider. Actively learning level-sets of composite functions. In *Proceedings of the 25th International Conference on Machine Learning*, pp. 80–87, 2008.

Brent Bryan, Robert C Nichol, Christopher R Genovese, Jeff Schneider, Christopher J Miller, and Larry Wasserman. Active learning for identifying function threshold boundaries. *Advances in Neural Information Processing Systems*, 18, 2005.

Emile Contal, David Buffoni, Alexandre Robicquet, and Nicolas Vayatis. Parallel Gaussian process optimization with upper confidence bound and pure exploration. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 225–240. Springer, 2013.

Alkis Gotovos, Nathalie Casati, Gregory Hitz, and Andreas Krause. Active learning for level set estimation. In *Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence*, pp. 1344–1350, 2013.

Shota Hozumi, Kentaro Kutsukake, Kota Matsui, Syunya Kusakawa, Toru Ujihara, and Ichiro Takeuchi. Adaptive defective area identification in material surface using active transfer learning-based level set estimation. *arXiv preprint arXiv:2304.01404*, 2023.

Yu Inatsu, Masayuki Karasuyama, Keiichi Inoue, and Ichiro Takeuchi. Active learning for level set estimation under input uncertainty and its extensions. *Neural Computation*, 32(12):2486–2531, 2020. doi: 10.1162/neco_a_01332. URL https://doi.org/10.1162/neco_a_01332.

Yu Inatsu, Shogo Iwazaki, and Ichiro Takeuchi. Active learning for distributionally robust level-set estimation. In *International Conference on Machine Learning*, pp. 4574–4584. PMLR, 2021.

Yu Inatsu, Shion Takeno, Hiroyuki Hanada, Kazuki Iwata, and Ichiro Takeuchi. Bounding box-based multi-objective Bayesian optimization of risk measures under input uncertainty. In *Proceedings of The 27th International Conference on Artificial Intelligence and Statistics*, volume 238 of *Proceedings of Machine Learning Research*, pp. 4564–4572. PMLR, 2024. URL https://proceedings.mlr.press/v238/inatsu24a.html.

Shogo Iwazaki, Yu Inatsu, and Ichiro Takeuchi. Bayesian experimental design for finding reliable level set under input uncertainty. *IEEE Access*, 8:203982–203993, 2020.

Kirthevasan Kandasamy, Jeff Schneider, and Barnabás Póczos. High dimensional Bayesian optimisation and bandits via additive models. In *International Conference on Machine Learning*, pp. 295–304. PMLR, 2015.

Kirthevasan Kandasamy, Gautam Dasarathy, Junier B Oliva, Jeff Schneider, and Barnabás Póczos. Gaussian process bandit optimisation with multi-fidelity evaluations. *Advances in Neural Information Processing Systems*, 29, 2016.

Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnabás Póczos. Multi-fidelity Bayesian optimisation with continuous approximations. In *International Conference on Machine Learning*, pp. 1799–1808. PMLR, 2017.

Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, and Andreas Krause. Distributionally robust Bayesian optimization. In *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 2174–2184. PMLR, 2020. URL https://proceedings.mlr.press/v108/kirschner20a.html.

Shunya Kusakawa, Shion Takeno, Yu Inatsu, Kentaro Kutsukake, Shogo Iwazaki, Takashi Nakano, Toru Ujihara, Masayuki Karasuyama, and Ichiro Takeuchi. Bayesian optimization for cascade-type multistage processes. *Neural Computation*, 34(12):2408–2431, 2022.

Kentaro Kutsukake, Momoko Deura, Yutaka Ohno, and Ichiro Yonenaga. Characterization of silicon ingots: Mono-like versus high-performance multicrystalline. *Japanese Journal of Applied Physics*, 54(8S1):08KD10, 2015.

Benjamin Letham, Phillip Guan, Chase Tymms, Eytan Bakshy, and Michael Shvartsman. Look-ahead acquisition functions for Bernoulli level set estimation. In *International Conference on Artificial Intelligence and Statistics*, pp. 8493–8513. PMLR, 2022.

Blake Mason, Lalit Jain, Subhojyoti Mukherjee, Romain Camilleri, Kevin Jamieson, and Robert Nowak. Nearly optimal algorithms for level set estimation. In *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, pp. 7625–7658. PMLR, 2022. URL https://proceedings.mlr.press/v151/mason22a.html.

Carl Edward Rasmussen and Christopher K. I. Williams. *Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)*. The MIT Press, 2005. ISBN 026218253X.

Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, and Volkan Cevher. High-dimensional Bayesian optimization via additive models with overlapping groups. In *International Conference on Artificial Intelligence and Statistics*, pp. 298–307. PMLR, 2018.

Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2015.

Shubhanshu Shekhar and Tara Javidi. Multiscale Gaussian process level set estimation. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3283–3291. PMLR, 2019.

Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In *Proceedings of the 27th International Conference on Machine Learning*, pp. 1015–1022, 2010.

Shion Takeno, Yu Inatsu, and Masayuki Karasuyama. Randomized Gaussian process upper confidence bound with tighter Bayesian regret bounds. In *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 33490–33515. PMLR, 2023. URL https://proceedings.mlr.press/v202/takeno23a.html.

Andrea Zanette, Junzi Zhang, and Mykel J Kochenderfer. Robust super-level set estimation using Gaussian processes. In *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part II 18*, pp. 276–291. Springer, 2019.

Marcela Zuluaga, Andreas Krause, et al. e-PAL: An active learning approach to the multi-objective optimization problem. *Journal of Machine Learning Research*, 17(104):1–32, 2016.
## A Extension To Max-Value Loss

In this section, we consider the following max-value loss defined based on the maximum value of $l_t(\boldsymbol{x})$:

$$r(H_{t},L_{t})=\max_{\boldsymbol{x}\in\mathcal{X}}l_{t}(\boldsymbol{x})\equiv\tilde{r}_{t}.$$

When $\mathcal{X}$ is finite, we need to modify the definition of the AF and the estimated sets returned at the end of the algorithm. Conversely, if $\mathcal{X}$ is an infinite set, the definitions of $H_t$ and $L_t$ should be modified in addition to the above. Therefore, we discuss the finite and infinite cases separately.

## A.1 Proposed Method For Max-Value Loss When X Is Finite

When $\mathcal{X}$ is finite, we propose the following AF with a modified distribution for $\beta_t$.

Definition A.1 (Randomized Straddle for Max-value Loss). For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$. Then, the randomized straddle AF for the max-value loss, $\tilde{a}_{t-1}(\boldsymbol{x})$, is defined as:

$$\tilde{a}_{t-1}(\boldsymbol{x})=\max\{\min\{\mathrm{ucb}_{t-1}(\boldsymbol{x})-\theta,\theta-\mathrm{lcb}_{t-1}(\boldsymbol{x})\},0\}.\tag{4}$$

By using $\tilde{a}_{t-1}(\boldsymbol{x})$, the next point to be evaluated is selected by $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x}\in\mathcal{X}}\tilde{a}_{t-1}(\boldsymbol{x})$. Additionally, we change the estimated sets returned at the end of iteration $T$ in the algorithm to the following instead of $H_T$ and $L_T$:

Definition A.2. For each $t$, define

$$\hat{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\mathbb{E}_{t}[\tilde{r}_{i}],\tag{5}$$

where $\mathbb{E}_t[\cdot]$ represents the conditional expectation given $\mathcal{D}_{t-1}$. Then, at the end of iteration $T$, we define $H_{\hat{T}}$ and $L_{\hat{T}}$ to be the estimated sets.

Finally, we give the pseudocode of the proposed algorithm in Algorithm 2.

**Algorithm 2** Randomized Straddle Algorithms for Max-value Loss in the Finite Setting

- Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$
- for $t = 1, 2, \ldots, T$ do
  - Compute $\mu_{t-1}(\boldsymbol{x})$ and $\sigma^2_{t-1}(\boldsymbol{x})$ for each $\boldsymbol{x} \in \mathcal{X}$ by equation 1
  - Estimate $H_t$ and $L_t$ by equation 2
  - Generate $\xi_t$ from the chi-squared distribution with two degrees of freedom
  - Compute $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$, $\mathrm{ucb}_{t-1}(\boldsymbol{x})$, $\mathrm{lcb}_{t-1}(\boldsymbol{x})$ and $\tilde{a}_{t-1}(\boldsymbol{x})$
  - Select the next evaluation point $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x}\in\mathcal{X}}\tilde{a}_{t-1}(\boldsymbol{x})$
  - Observe $y_t = f(\boldsymbol{x}_t) + \varepsilon_t$ at the point $\boldsymbol{x}_t$
  - Update the GP by adding the observed data
- end for
- Output: Return $H_{\hat{T}}$ and $L_{\hat{T}}$ as the estimated sets, where $\hat{T}$ is given by equation 5

## A.1.1 Theoretical Analysis For Max-Value Loss When X Is Finite

For the max-value loss, the following theorem holds under Algorithm 2.

Theorem A.1. Let $f$ be a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\boldsymbol{x}, \boldsymbol{x}) \leq 1$ for any $\boldsymbol{x} \in \mathcal{X}$. For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$. Then, the following holds for $\tilde{R}_t = \sum_{i=1}^t \tilde{r}_i$:

$$\mathbb{E}[\tilde{R}_{t}]\leq\sqrt{\tilde{C}_{1}t\gamma_{t}},$$

where $\tilde{C}_1 = (4 + 4\log(|\mathcal{X}|))/\log(1 + \sigma_{\mathrm{noise}}^{-2})$, and the expectation is taken over all randomness, including $f$, $\varepsilon_t$ and $\beta_t$.

From Theorem A.1, the following theorem holds.

Theorem A.2. Under the assumptions of Theorem A.1, the following inequality holds:

$$\mathbb{E}[\tilde{r}_{\hat{t}}]\leq\sqrt{\frac{\tilde{C}_{1}\gamma_{t}}{t}},$$

where $\hat{t}$ and $\tilde{C}_1$ are given in equation 5 and Theorem A.1, respectively.

Comparing Theorems 4.1 and A.1, when considering the max-value loss, $\beta_t$ should be $2\log(|\mathcal{X}|)$ larger than in the case of $r_t$, and the constant that appears in the upper bound of the expected value of the cumulative loss has the relationship $\tilde{C}_1 = (1 + \log(|\mathcal{X}|))C_1$. Note that while the upper bound for $r_t$ does not depend on $\mathcal{X}$, the bound for the max-value loss depends on the logarithm of the number of elements in $\mathcal{X}$. Also, when comparing Theorems 4.2 and A.2, it is not necessary to consider $\hat{t}$ for $r_t$, whereas it is necessary to consider $\hat{t}$ for the max-value loss. For the max-value loss, it is difficult to analytically derive $\mathbb{E}_t[\tilde{r}_i]$, and hence, it is also difficult to precisely calculate $\hat{t}$. Nevertheless, because the posterior distribution of $f$ given $\mathcal{D}_{t-1}$ is again a GP, we can generate $M$ sample paths from the GP posterior distribution, calculate the realization $\tilde{r}_i^{(j)}$ of $\tilde{r}_i$ from each sample path $f^{(j)}$, and calculate the estimate $\check{t}$ of $\hat{t}$ as

$$\check{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\frac{1}{M}\sum_{j=1}^{M}\tilde{r}_{i}^{(j)}.$$
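A minimal sketch of this Monte Carlo estimate of $\check{t}$ is given below. It assumes that the $M$ posterior sample paths have already been drawn on the candidate set (for example with `numpy.random.multivariate_normal` from the posterior mean and covariance); the function and argument names are ours, not the paper's.

```python
import numpy as np

def estimate_t_check(H_flags_per_iter, f_samples, theta):
    """Estimate t-check as argmin over i of the Monte Carlo average max-value loss.

    H_flags_per_iter: list over iterations i of boolean arrays (x in H_i) on the candidate set.
    f_samples: array of shape (M, |X|), posterior sample paths of f on the candidate set.
    """
    avg_losses = []
    for in_H_i in H_flags_per_iter:
        in_H_star = f_samples >= theta                  # membership under each sample path
        miscls = in_H_star != in_H_i[None, :]
        loss = np.where(miscls, np.abs(f_samples - theta), 0.0).max(axis=1)  # max-value loss
        avg_losses.append(loss.mean())
    return int(np.argmin(avg_losses)) + 1               # 1-indexed iteration
```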
## A.2 Proposed Method For Max-Value Loss When X Is Infinite

In this section, we assume that the input space $\mathcal{X} \subset \mathbb{R}^d$ is a compact set and satisfies $\mathcal{X} \subset [0, r]^d$, where $r > 0$. Furthermore, we make the following additional assumption for $f$:

Assumption A.1. Let $f$ be differentiable with probability 1. Assume that positive constants $a, b$ exist such that

$$\mathbb{P}\left(\sup_{\boldsymbol{x}\in\mathcal{X}}\left|\frac{\partial f}{\partial x_{j}}\right|>L\right)\leq a\exp\left(-\left(\frac{L}{b}\right)^{2}\right),\quad j\in[d],$$

where $x_j$ is the $j$-th element of $\boldsymbol{x}$ and $[d] \equiv \{1, \ldots, d\}$.

Next, we provide an LSE method based on the discretization of the input space.

## A.2.1 Level Set Estimation For Max-Value Loss When X Is Infinite

For each $t \geq 1$, let $\mathcal{X}_t$ be a finite subset of $\mathcal{X}$. Also, for any $\boldsymbol{x} \in \mathcal{X}$, let $[\boldsymbol{x}]_t$ be the element of $\mathcal{X}_t$ that has the shortest $L_1$ distance from $\boldsymbol{x}$². Then, we define $H_t$ and $L_t$ as

$$H_{t}=\{\boldsymbol{x}\in\mathcal{X}\mid\mu_{t-1}([\boldsymbol{x}]_{t})\geq\theta\},\ L_{t}=\{\boldsymbol{x}\in\mathcal{X}\mid\mu_{t-1}([\boldsymbol{x}]_{t})<\theta\}.\tag{6}$$

## A.2.2 Acquisition Function for Max-value Loss when X is Infinite

We define a randomized straddle AF based on $\mathcal{X}_t$:

Definition A.3. For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = 2\log(|\mathcal{X}_t|) + \xi_t$. Then, the randomized straddle AF for the max-value loss when $\mathcal{X}$ is infinite, $\check{a}_{t-1}(\boldsymbol{x})$, is defined as:

$$\check{a}_{t-1}(\boldsymbol{x})=\max\{\min\{\mathrm{ucb}_{t-1}(\boldsymbol{x})-\theta,\theta-\mathrm{lcb}_{t-1}(\boldsymbol{x})\},0\}.$$

The next point to be evaluated is selected by $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x}\in\mathcal{X}}\check{a}_{t-1}(\boldsymbol{x})$. Finally, we give the pseudocode of the proposed algorithm in Algorithm 3.

² If there are multiple $\boldsymbol{x} \in \mathcal{X}_t$ with the shortest $L_1$ distance, we determine a unique one. For example, we first choose the option with the smallest first component. If a unique determination is not possible, we then select the option with the smallest second component. This process is repeated up to the $d$-th component to achieve a unique determination.
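The mapping $\boldsymbol{x} \mapsto [\boldsymbol{x}]_t$ with the lexicographic tie-breaking rule of footnote 2 can be sketched as follows (an illustration only; the function and variable names are ours).

```python
import numpy as np

def nearest_in_Xt(x, Xt):
    """Return the element of the finite set Xt closest to x in L1 distance,
    breaking ties lexicographically (smallest first component, then second, ...)."""
    d1 = np.abs(Xt - x).sum(axis=1)                 # L1 distances to every grid point
    candidates = Xt[np.isclose(d1, d1.min())]       # all minimizers
    order = np.lexsort(candidates.T[::-1])          # lexicographic order over components
    return candidates[order[0]]
```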
**Algorithm 3** Randomized Straddle Algorithms for Max-value Loss in the Infinite Setting

- Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$, discretized sets $\mathcal{X}_1, \ldots, \mathcal{X}_T$
- for $t = 1, 2, \ldots, T$ do
  - Compute $\mu_{t-1}(\boldsymbol{x})$ and $\sigma^2_{t-1}(\boldsymbol{x})$ for each $\boldsymbol{x} \in \mathcal{X}$ by equation 1
  - Estimate $H_t$ and $L_t$ by equation 6
  - Generate $\xi_t$ from the chi-squared distribution with two degrees of freedom
  - Compute $\beta_t = \xi_t + 2\log(|\mathcal{X}_t|)$, $\mathrm{ucb}_{t-1}(\boldsymbol{x})$, $\mathrm{lcb}_{t-1}(\boldsymbol{x})$ and $\check{a}_{t-1}(\boldsymbol{x})$
  - Select the next evaluation point $\boldsymbol{x}_t = \arg\max_{\boldsymbol{x}\in\mathcal{X}}\check{a}_{t-1}(\boldsymbol{x})$
  - Observe $y_t = f(\boldsymbol{x}_t) + \varepsilon_t$ at the point $\boldsymbol{x}_t$
  - Update the GP by adding the observed data
- end for
- Output: Return $H_{\hat{T}}$ and $L_{\hat{T}}$ as the estimated sets, where $\hat{T} = \arg\min_{1\leq i\leq T}\mathbb{E}_T[\tilde{r}_i]$

## A.2.3 Theoretical Analysis For Max-Value Loss When X Is Infinite

Under Algorithm 3, the following theorem holds.

Theorem A.3. Let $\mathcal{X} \subset [0, r]^d$ be a compact set with $r > 0$. Assume that $f$ is a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\boldsymbol{x}, \boldsymbol{x}) \leq 1$ for any $\boldsymbol{x} \in \mathcal{X}$. Also assume that Assumption A.1 holds. Moreover, for each $t \geq 1$, let $\tau_t = \lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil$, and let $\mathcal{X}_t$ be a finite subset of $\mathcal{X}$ satisfying $|\mathcal{X}_t| = \tau_t^d$ and

$$\|\boldsymbol{x}-[\boldsymbol{x}]_{t}\|_{1}\leq\frac{dr}{\tau_{t}},\quad\boldsymbol{x}\in\mathcal{X}.$$

Suppose that $\xi_t$ is a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = 2d\log(\lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil) + \xi_t$. Then, the following holds for $\tilde{R}_t = \sum_{i=1}^t \tilde{r}_i$:

$$\mathbb{E}[\tilde{R}_{t}]\leq\frac{\pi^{2}}{6}+\sqrt{\check{C}_{1}t\gamma_{t}(2+s_{t})},$$

where $\check{C}_1 = 2/\log(1 + \sigma_{\mathrm{noise}}^{-2})$ and $s_t = 2d\log(\lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil)$, and the expectation is taken over all randomness, including $f$, $\varepsilon_t$ and $\beta_t$.

From Theorem A.3, the following holds.

Theorem A.4. Under the assumptions of Theorem A.3, define

$$\hat{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\mathbb{E}_{t}[\tilde{r}_{i}].$$

Then, the following holds:

$$\mathbb{E}[\tilde{r}_{\hat{t}}]\leq\frac{\pi^{2}}{6t}+\sqrt{\frac{\check{C}_{1}\gamma_{t}(2+s_{t})}{t}},$$

where $\check{C}_1$ and $s_t$ are given in Theorem A.3.
## B Proofs B.1 Proof Of Theorem 4.1
|
469 |
+
|
470 |
+
Proof. Let $\delta \in (0, 1)$. For any $t \geq 1$, $\mathcal{D}_{t-1}$ and $\mathbf{x} \in \mathcal{X}$, from the proof of Lemma 5.1 in Srinivas et al. (2010), the following holds with probability at least $1 - \delta$:
$$\operatorname{lcb}_{t-1,\delta}(\mathbf{x})\equiv\mu_{t-1}(\mathbf{x})-\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\leq f(\mathbf{x})\leq\mu_{t-1}(\mathbf{x})+\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\equiv\operatorname{ucb}_{t-1,\delta}(\mathbf{x}),\tag{7}$$
where $\beta_\delta = 2\log(1/\delta)$. Here, we consider the case where $\mathbf{x} \in H_t$. If $\mathbf{x} \in H^\ast$, we have $l_t(\mathbf{x}) = 0$. In contrast, if $\mathbf{x} \in L^\ast$, noting that $\operatorname{lcb}_{t-1,\delta}(\mathbf{x}) \leq f(\mathbf{x})$ by equation 7, we get
$$l_{t}(\mathbf{x})=\theta-f(\mathbf{x})\leq\theta-\operatorname{lcb}_{t-1,\delta}(\mathbf{x}).$$
Moreover, the inequality $\mu_{t-1}(\mathbf{x}) \geq \theta$ holds because $\mathbf{x} \in H_t$. Hence, from the definition of $\operatorname{lcb}_{t-1,\delta}(\mathbf{x})$ and $\operatorname{ucb}_{t-1,\delta}(\mathbf{x})$, we obtain $\theta - \operatorname{lcb}_{t-1,\delta}(\mathbf{x}) \leq \operatorname{ucb}_{t-1,\delta}(\mathbf{x}) - \theta$.

Therefore, we get

$$l_{t}(\mathbf{x})\leq\theta-\operatorname{lcb}_{t-1,\delta}(\mathbf{x})=\min\{\operatorname{ucb}_{t-1,\delta}(\mathbf{x})-\theta,\ \theta-\operatorname{lcb}_{t-1,\delta}(\mathbf{x})\}$$

$$\leq\max\{\min\{\operatorname{ucb}_{t-1,\delta}(\mathbf{x})-\theta,\ \theta-\operatorname{lcb}_{t-1,\delta}(\mathbf{x})\},0\}\equiv a_{t-1,\delta}(\mathbf{x}).$$
Similarly, we consider the case where $\mathbf{x} \in L_t$. If $\mathbf{x} \in L^\ast$, we obtain $l_t(\mathbf{x}) = 0$. Thus, because $a_{t-1,\delta}(\mathbf{x}) \geq 0$, we get $l_t(\mathbf{x}) \leq a_{t-1,\delta}(\mathbf{x})$. Moreover, if $\mathbf{x} \in H^\ast$, noting that $f(\mathbf{x}) \leq \operatorname{ucb}_{t-1,\delta}(\mathbf{x})$ by equation 7, we obtain

$$l_t(\mathbf{x}) = f(\mathbf{x}) - \theta \leq \operatorname{ucb}_{t-1,\delta}(\mathbf{x}) - \theta.$$
Here, the inequality $\mu_{t-1}(\mathbf{x}) < \theta$ holds because $\mathbf{x} \in L_t$. Therefore, from the definition of $\operatorname{lcb}_{t-1,\delta}(\mathbf{x})$ and $\operatorname{ucb}_{t-1,\delta}(\mathbf{x})$, we obtain $\operatorname{ucb}_{t-1,\delta}(\mathbf{x}) - \theta \leq \theta - \operatorname{lcb}_{t-1,\delta}(\mathbf{x})$.

Thus, the following inequality holds:

$$l_t(\mathbf{x}) \leq \operatorname{ucb}_{t-1,\delta}(\mathbf{x}) - \theta = \min\{\operatorname{ucb}_{t-1,\delta}(\mathbf{x}) - \theta,\ \theta - \operatorname{lcb}_{t-1,\delta}(\mathbf{x})\} \leq a_{t-1,\delta}(\mathbf{x}).$$
Therefore, for all cases, the inequality $l_t(\mathbf{x}) \leq a_{t-1,\delta}(\mathbf{x})$ holds. This indicates that the following inequality holds with probability at least $1 - \delta$:

$$l_{t}(\mathbf{x})\leq a_{t-1,\delta}(\mathbf{x})\leq\operatorname*{max}_{\tilde{\mathbf{x}}\in{\mathcal{X}}}a_{t-1,\delta}({\tilde{\mathbf{x}}}).\tag{8}$$
Next, we consider the conditional distribution of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. Note that this distribution does not depend on $\beta_\delta$. Let $F_{t-1}(\cdot)$ be a distribution function of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. Then, from equation 8 we have

$$F_{t-1}\left(\operatorname*{max}_{\tilde{\mathbf{x}}\in{\mathcal{X}}}a_{t-1,\delta}({\tilde{\mathbf{x}}})\right)\geq1-\delta.$$
Hence, by considering the generalized inverse function of $F_{t-1}(\cdot)$ for both sides, the following inequality holds:

$$F_{t-1}^{-1}(1-\delta)\leq\operatorname*{max}_{\tilde{\mathbf{x}}\in{\mathcal{X}}}a_{t-1,\delta}(\tilde{\mathbf{x}}).$$
Here, if $\delta$ follows the uniform distribution on the interval $(0, 1)$, then $1 - \delta$ follows the same distribution. In this case, the distribution of $F_{t-1}^{-1}(1-\delta)$ is equal to the distribution of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. This implies that

$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\mathbb{E}_{\delta}\left[\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}a_{t-1,\delta}(\mathbf{x})\right],$$
where $\mathbb{E}_\delta[\cdot]$ means the expectation with respect to $\delta$. Furthermore, because $2\log(1/\delta)$ and $\beta_t$ follow the chi-squared distribution with two degrees of freedom, the following holds:

$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$
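The step from the expectation over $\delta$ to the expectation over $\beta_t$ uses the fact that $2\log(1/\delta)$ with $\delta \sim \mathrm{Uniform}(0,1)$ follows exactly the chi-squared distribution with two degrees of freedom. A quick Monte Carlo sanity check of this identity (our own illustration, using SciPy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
delta = rng.uniform(size=200_000)
samples = 2.0 * np.log(1.0 / delta)          # 2*log(1/delta) with delta ~ U(0,1)

# Compare against the chi-squared distribution with 2 degrees of freedom.
ks = stats.kstest(samples, cdf="chi2", args=(2,))
print(ks.statistic)                           # close to 0: the two distributions agree
print(samples.mean())                         # close to E[chi2_2] = 2
```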
Thus, if $\mathcal{X}$ is finite, from the definition of $r_t$ we obtain

$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\right]=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right]=\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$
Similarly, if $\mathcal{X}$ is infinite, from the definition of $r_t$ and the non-negativity of $l_t(\mathbf{x})$, using Fubini's theorem we get

$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}l_{t}(\mathbf{x})\,\mathrm{d}\mathbf{x}\right]=\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\,\mathrm{d}\mathbf{x}\leq\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right]\mathrm{d}\mathbf{x}=\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$

Therefore, the inequality $\mathbb{E}_t[r_t] \leq \mathbb{E}_{\beta_t}[a_{t-1}(\mathbf{x}_t)]$ holds in both cases. Moreover, from the definition of $a_{t-1}(\mathbf{x})$, the following inequality holds:

$$a_{t-1}(\mathbf{x}_{t})\leq\beta_{t}^{1/2}\sigma_{t-1}(\mathbf{x}_{t}).$$
Hence, we get the following inequality:

$$\mathbb{E}[R_{t}]=\mathbb{E}\left[\sum_{i=1}^{t}r_{i}\right]\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right]\quad\text{(Cauchy–Schwarz inequality)}$$

$$\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\quad\text{(Hölder's inequality)}$$

$$=\sqrt{2t}\,\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\quad(\mathbb{E}[\beta_{i}]=2)$$

$$\leq\sqrt{2t}\,\sqrt{\mathbb{E}\left[\frac{2}{\log(1+\sigma_{\mathrm{noise}}^{-2})}\gamma_{t}\right]}=\sqrt{C_{1}t\gamma_{t}},$$
where the last inequality is derived by the proof of Lemma 5.4 in Srinivas et al. (2010).
We first give three lemmas to prove Theorem 4.2; Theorem 4.2 then follows from Lemmas B.1 and B.3.

Lemma B.1. Under the assumptions of Theorem 4.1, let
$${\hat{t}}={\underset{1\leq i\leq t}{\operatorname{arg\,min}}}\operatorname{\mathbb{E}}_{t}[r_{i}].$$
Then, the following inequality holds:

$$\mathbb{E}[r_{\hat{t}}]\leq{\sqrt{\frac{C_{1}\gamma_{t}}{t}}}.$$
Proof. From the definition of $\hat{t}$, the inequality $\mathbb{E}_t[r_{\hat{t}}] \leq \frac{1}{t}\sum_{i=1}^{t}\mathbb{E}_t[r_i]$ holds. Therefore, we obtain

$$\mathbb{E}[r_{\hat{t}}]\leq{\frac{\sum_{i=1}^{t}\mathbb{E}[r_{i}]}{t}}={\frac{\mathbb{E}\left[\sum_{i=1}^{t}r_{i}\right]}{t}}={\frac{\mathbb{E}[R_{t}]}{t}}.$$
By combining this and Theorem 4.1, we get the desired result.
Lemma B.2. For any t ≥ 1, i ≤ t and x ∈ X , the expectation Et[li(x)] can be calculated as follows:
$$\mathbb{E}_{t}[l_{i}(\mathbf{x})]={\left\{\begin{array}{l l}{\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right]}&{\mathrm{if~}\mathbf{x}\in L_{i},}\\ {\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]}&{\mathrm{if~}\mathbf{x}\in H_{i},}\end{array}\right.}$$

where $\alpha = \frac{\mu_{t-1}(\mathbf{x})-\theta}{\sigma_{t-1}(\mathbf{x})}$, and $\phi(z)$ and $\Phi(z)$ are the density and distribution function of the standard normal distribution, respectively.
Proof. From the definition of $l_i(\mathbf{x})$, if $\mathbf{x} \in L_i$, $l_i(\mathbf{x})$ can be expressed as $l_i(\mathbf{x}) = (f(\mathbf{x})-\theta)\,\mathbb{1}[f(\mathbf{x}) \geq \theta]$, where $\mathbb{1}[\cdot]$ is the indicator function, which takes 1 if the condition in brackets holds and 0 otherwise. Furthermore, the conditional distribution of $f(\mathbf{x})$ given $\mathcal{D}_{t-1}$ is the normal distribution with mean $\mu_{t-1}(\mathbf{x})$ and variance $\sigma^{2}_{t-1}(\mathbf{x})$. Thus, from the definition of $\mathbb{E}_t[\cdot]$, the following holds:
$$\mathbb{E}_{t}[l_{i}(\mathbf{x})]=\int_{\theta}^{\infty}(y-\theta)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y$$

$$=\int_{\theta}^{\infty}\sigma_{t-1}(\mathbf{x})\left(\frac{y-\mu_{t-1}(\mathbf{x})}{\sigma_{t-1}(\mathbf{x})}+\frac{\mu_{t-1}(\mathbf{x})-\theta}{\sigma_{t-1}(\mathbf{x})}\right)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y$$

$$=\int_{-\alpha}^{\infty}\sigma_{t-1}(\mathbf{x})\,(z+\alpha)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)\mathrm{d}z$$

$$=\sigma_{t-1}(\mathbf{x})\int_{-\alpha}^{\infty}(z+\alpha)\,\phi(z)\,\mathrm{d}z=\sigma_{t-1}(\mathbf{x})\left\{[-\phi(z)]_{-\alpha}^{\infty}+\alpha(1-\Phi(-\alpha))\right\}$$

$$=\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right].$$
Similarly, if $\mathbf{x} \in H_i$, $l_i(\mathbf{x})$ can be expressed as $l_i(\mathbf{x}) = (\theta - f(\mathbf{x}))\,\mathbb{1}[f(\mathbf{x}) < \theta]$. Then, we obtain
$$\mathbb{E}_{t}[l_{i}(\mathbf{x})]=\int_{-\infty}^{\theta}(\theta-y)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y$$

$$=\int_{-\infty}^{\theta}\sigma_{t-1}(\mathbf{x})\left(\frac{\theta-\mu_{t-1}(\mathbf{x})}{\sigma_{t-1}(\mathbf{x})}+\frac{\mu_{t-1}(\mathbf{x})-y}{\sigma_{t-1}(\mathbf{x})}\right)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y$$

$$=\int_{\infty}^{\alpha}\sigma_{t-1}(\mathbf{x})\,(z-\alpha)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)(-1)\,\mathrm{d}z$$

$$=\sigma_{t-1}(\mathbf{x})\int_{\alpha}^{\infty}(z-\alpha)\,\phi(z)\,\mathrm{d}z=\sigma_{t-1}(\mathbf{x})\left\{[-\phi(z)]_{\alpha}^{\infty}-\alpha(1-\Phi(\alpha))\right\}$$

$$=\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right].$$
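As a numerical sanity check of the closed form in Lemma B.2 (our own illustration, with arbitrary values for the posterior mean, standard deviation and threshold):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, theta = 0.4, 0.7, 0.0            # hypothetical posterior mean/std and threshold
alpha = (mu - theta) / sigma

# Case x in H_i (mu >= theta): the loss is (theta - f(x)) * 1[f(x) < theta].
f_samples = rng.normal(mu, sigma, size=1_000_000)
mc = np.mean(np.maximum(theta - f_samples, 0.0))

closed_form = sigma * (norm.pdf(alpha) - alpha * (1.0 - norm.cdf(alpha)))
print(mc, closed_form)                       # the two values should agree to about 3 decimals
```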
Lemma B.3. Under the assumptions of Theorem 4.1, the equality $\hat{t} = t$ holds.
Proof. Let $\mathbf{x} \in \mathcal{X}$. If $\mathbf{x} \in H_t$, the inequality $\mu_{t-1}(\mathbf{x}) \geq \theta$ holds. This implies that $\alpha \geq 0$. Hence, from Lemma B.2 we obtain

$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]=\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right].$$

Thus, since $\alpha \geq 0$, the following inequality holds:

$$\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]\leq\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right].$$

Therefore, from the definition of $\mathbb{E}_t[l_i(\mathbf{x})]$, we get

$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]=\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]\leq\mathbb{E}_{t}[l_{i}(\mathbf{x})].$$

Similarly, if $\mathbf{x} \in L_t$, using the same argument we have

$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]=\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right]\leq\mathbb{E}_{t}[l_{i}(\mathbf{x})].$$

Here, if $\mathcal{X}$ is finite, from the definition of $r_i$ we obtain

$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\right]=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{i}(\mathbf{x})]=\mathbb{E}_{t}[r_{i}].$$

Similarly, if $\mathcal{X}$ is infinite, by using the same argument and Fubini's theorem, we get $\mathbb{E}_{t}[r_{t}]\leq\mathbb{E}_{t}[r_{i}]$. Therefore, for all cases the inequality $\mathbb{E}_{t}[r_{t}]\leq\mathbb{E}_{t}[r_{i}]$ holds. This implies that $\hat{t} = t$.
From Lemmas B.1 and B.3, we get Theorem 4.2. $\square$
Proof of Theorem A.1. Let $\delta \in (0, 1)$. For any $t \geq 1$ and $\mathcal{D}_{t-1}$, from the proof of Lemma 5.1 in Srinivas et al. (2010), with probability at least $1 - \delta$, the following holds for any $\mathbf{x} \in \mathcal{X}$:
$$\operatorname{lcb}_{t-1,\delta}(\mathbf{x})\equiv\mu_{t-1}(\mathbf{x})-\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\leq f(\mathbf{x})\leq\mu_{t-1}(\mathbf{x})+\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\equiv\operatorname{ucb}_{t-1,\delta}(\mathbf{x}),$$
where $\beta_\delta = 2\log(|\mathcal{X}|/\delta)$. Here, by using the same argument as in the proof of Theorem 4.1, the inequality $l_t(\mathbf{x}) \leq \tilde{a}_{t-1,\delta}(\mathbf{x})$ holds. Hence, the following holds with probability at least $1 - \delta$:
$$\tilde{r}_{t}=\max_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\leq\max_{\mathbf{x}\in\mathcal{X}}\tilde{a}_{t-1,\delta}(\mathbf{x}).\tag{9}$$
Next, we consider the conditional distribution of r˜t given Dt−1. Note that this distribution does not depend on βδ. Let Ft−1(·) be a distribution function of r˜t given Dt−1. Then, from equation 9, we obtain
$$F_{t-1}\left(\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}{\tilde{a}}_{t-1,\delta}(\mathbf{x})\right)\geq1-\delta.$$
Therefore, by taking the generalized inverse function for both sides, we get
$$F_{t-1}^{-1}(1-\delta)\leq\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}{\tilde{a}}_{t-1,\delta}(\mathbf{x}).$$
Here, if $\delta$ follows the uniform distribution on the interval $(0, 1)$, $1 - \delta$ follows the same distribution. Furthermore, since the distribution of $F_{t-1}^{-1}(1-\delta)$ is equal to the conditional distribution of $\tilde{r}_t$ given $\mathcal{D}_{t-1}$, we have
$$\mathbb{E}_{t}[\tilde{r}_{t}]\leq\mathbb{E}_{\delta}\left[\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}\tilde{a}_{t-1,\delta}(\mathbf{x})\right].$$
Moreover, noting that $2\log(|\mathcal{X}|/\delta)$ and $\beta_t$ follow the same distribution, we obtain
$$\mathbb{E}_{t}[\tilde{r}_{t}]\leq\mathbb{E}_{\beta_{t}}\left[\tilde{a}_{t-1}(\mathbf{x}_{t})\right].$$
Additionally, from the definition of $\tilde{a}_{t-1}(\mathbf{x})$, the following inequality holds:
$$\tilde{a}_{t-1}(\mathbf{x}_{t})\leq\beta_{t}^{1/2}\sigma_{t-1}(\mathbf{x}_{t})$$
Therefore, since $\mathbb{E}[\beta_t] = 2 + 2\log(|\mathcal{X}|)$, the following inequality holds:
$$\mathbb{E}[{\tilde{R}}_{t}]=\mathbb{E}\left[\sum_{i=1}^{t}{\tilde{r}}_{i}\right]\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right]$$

$$\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\leq\sqrt{t(2+2\log(|\mathcal{X}|))}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}$$

$$\leq\sqrt{t(2+2\log(|\mathcal{X}|))}\sqrt{\mathbb{E}\left[\frac{2}{\log(1+\sigma_{\mathrm{noise}}^{-2})}\gamma_{t}\right]}=\sqrt{\tilde{C}_{1}t\gamma_{t}}.$$
Proof. Theorem A.2 is proved by using the same argument as in the proof of Lemma B.1.
Proof of Theorem A.3. Let $\mathbf{x} \in \mathcal{X}$. If $\mathbf{x} \in H^\ast \cap H_t$ or $\mathbf{x} \in L^\ast \cap L_t$, the equality $l_t(\mathbf{x}) = 0$ holds. Hence, the following inequality holds:

$$l_t(\mathbf{x}) \leq l_t([\mathbf{x}]_t) \leq l_t([\mathbf{x}]_t) + |f(\mathbf{x}) - f([\mathbf{x}]_t)|.$$
We consider the case where $\mathbf{x} \in H^\ast$ and $\mathbf{x} \in L_t$, that is, $l_t(\mathbf{x}) = f(\mathbf{x}) - \theta$. Here, since $\mathbf{x} \in L_t$, the inequality $\mu_{t-1}([\mathbf{x}]_t) < \theta$ holds. This implies that $[\mathbf{x}]_t \in L_t$. If $[\mathbf{x}]_t \in H^\ast$, noting that $l_t([\mathbf{x}]_t) = f([\mathbf{x}]_t) - \theta$, we get

$$l_t(\mathbf{x}) = f(\mathbf{x}) - \theta = f(\mathbf{x}) - f([\mathbf{x}]_t) + f([\mathbf{x}]_t) - \theta \leq f([\mathbf{x}]_t) - \theta + |f(\mathbf{x}) - f([\mathbf{x}]_t)| = l_t([\mathbf{x}]_t) + |f(\mathbf{x}) - f([\mathbf{x}]_t)|.$$

Similarly, if $[\mathbf{x}]_t \in L^\ast$, noting that $f([\mathbf{x}]_t) < \theta$ and $0 \leq l_t([\mathbf{x}]_t)$, we obtain

$$l_t(\mathbf{x}) = f(\mathbf{x}) - \theta = f([\mathbf{x}]_t) - \theta + f(\mathbf{x}) - f([\mathbf{x}]_t) \leq 0 + f(\mathbf{x}) - f([\mathbf{x}]_t) \leq l_t([\mathbf{x}]_t) + |f(\mathbf{x}) - f([\mathbf{x}]_t)|.$$
Next, we consider the case where $\mathbf{x} \in L^\ast$ and $\mathbf{x} \in H_t$, that is, $l_t(\mathbf{x}) = \theta - f(\mathbf{x})$. Here, since $\mathbf{x} \in H_t$, the inequality $\mu_{t-1}([\mathbf{x}]_t) \geq \theta$ holds. This implies that $[\mathbf{x}]_t \in H_t$. If $[\mathbf{x}]_t \in L^\ast$, noting that $l_t([\mathbf{x}]_t) = \theta - f([\mathbf{x}]_t)$, we have

$$l_t(\mathbf{x}) = \theta - f(\mathbf{x}) = \theta - f([\mathbf{x}]_t) + f([\mathbf{x}]_t) - f(\mathbf{x}) \leq l_t([\mathbf{x}]_t) + |f(\mathbf{x}) - f([\mathbf{x}]_t)|.$$

Similarly, if $[\mathbf{x}]_t \in H^\ast$, noting that $f([\mathbf{x}]_t) \geq \theta$ and $0 \leq l_t([\mathbf{x}]_t)$, we get

$$l_t(\mathbf{x}) = \theta - f(\mathbf{x}) = \theta - f([\mathbf{x}]_t) + f([\mathbf{x}]_t) - f(\mathbf{x}) \leq 0 + f([\mathbf{x}]_t) - f(\mathbf{x}) \leq l_t([\mathbf{x}]_t) + |f(\mathbf{x}) - f([\mathbf{x}]_t)|.$$

Therefore, for all cases the following inequality holds:
$$l_{t}(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Here, let $L_{\max} = \sup_{j\in[d]}\sup_{\mathbf{x}\in\mathcal{X}}\left|\frac{\partial f}{\partial x_{j}}\right|$. Then, the following holds:
$$|f(\mathbf{x})-f([\mathbf{x}]_{t})|\leq L_{\operatorname*{max}}\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq L_{\operatorname*{max}}{\frac{d r}{\tau_{t}}}.$$
Thus, noting that
$$l_{t}(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+L_{\operatorname*{max}}{\frac{d r}{\tau_{t}}}$$
we obtain
$$\tilde{r}_{t}=\max_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\leq L_{\max}\frac{dr}{\tau_{t}}+\max_{\mathbf{x}\in\mathcal{X}}l_{t}([\mathbf{x}]_{t})= L_{\max}\frac{dr}{\tau_{t}}+\max_{\tilde{\mathbf{x}}\in\mathcal{X}_{t}}l_{t}(\tilde{\mathbf{x}})\equiv L_{\max}\frac{dr}{\tau_{t}}+\check{r}_{t}.$$
In addition, from Lemma H.1 in Takeno et al. (2023), the following inequality holds:
$$\mathbb{E}[L_{\operatorname*{max}}]\leq b({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2).$$
Hence, we get
$$\mathbb{E}\left[L_{\max}{\frac{d r}{\tau_{t}}}\right]\leq{\frac{b({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2)}{\tau_{t}}}d r={\frac{b({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2)}{\lceil b d r t^{2}({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2)\rceil}}d r\leq{\frac{b({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2)}{b d r t^{2}({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2)}}d r={\frac{1}{t^{2}}}.$$
Therefore, the following inequality holds:
$$\mathbb{E}[{\tilde{R}}_{t}]=\mathbb{E}\left[\sum_{i=1}^{t}{\tilde{r}}_{i}\right]\leq\sum_{i=1}^{t}{\frac{1}{i^{2}}}+\mathbb{E}\left[\sum_{i=1}^{t}{\check{r}}_{i}\right]\leq{\frac{\pi^{2}}{6}}+\mathbb{E}\left[\sum_{i=1}^{t}{\check{r}}_{i}\right].$$
Here, $\check{r}_i$ is the maximum value of the loss $l_i(\tilde{\mathbf{x}})$ restricted to $\mathcal{X}_i$, and since $\mathcal{X}_i$ is a finite set, by replacing $\mathcal{X}$ with $\mathcal{X}_i$ in the proof of Theorem A.1 and performing the same proof, we obtain $\mathbb{E}_i[\check{r}_i] \leq \mathbb{E}_\delta[\max_{\tilde{\mathbf{x}}\in\mathcal{X}_i} \check{a}_{i-1,\delta}(\tilde{\mathbf{x}})]$.
Furthermore, since the next point to be evaluated is selected from X , the following inequality holds:
$$\mathbb{E}_{i}[\check{r}_{i}]\leq\mathbb{E}_{\delta}\left[\max_{\tilde{\mathbf{x}}\in\mathcal{X}_{i}}\check{a}_{i-1,\delta}(\tilde{\mathbf{x}})\right]\leq\mathbb{E}_{\delta}\left[\max_{\mathbf{x}\in\mathcal{X}}\check{a}_{i-1,\delta}(\mathbf{x})\right].$$
Therefore, we have
$$\mathbb{E}\left[\sum_{i=1}^{t}{\check{r}}_{i}\right]\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right]\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}$$

$$\leq\sqrt{t\,\mathbb{E}[\beta_{t}]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\leq\sqrt{t\left(2+2d\log(\lceil bdrt^{2}(\sqrt{\log(ad)}+\sqrt{\pi}/2)\rceil)\right)}\sqrt{\mathbb{E}\left[\check{C}_{1}\gamma_{t}\right]}=\sqrt{\check{C}_{1}t\gamma_{t}(2+s_{t})}.$$

$\square$
Proof. Theorem A.4 is proved by using the same argument as in the proof of Lemma B.1.
N8M2yqRicS/N8M2yqRicS_meta.json
ADDED
@@ -0,0 +1,25 @@
{
  "languages": null,
  "filetype": "pdf",
  "toc": [],
  "pages": 22,
  "ocr_stats": {
    "ocr_pages": 0,
    "ocr_failed": 0,
    "ocr_success": 0,
    "ocr_engine": "none"
  },
  "block_stats": {
    "header_footer": 22,
    "code": 0,
    "table": 2,
    "equations": {
      "successful_ocr": 79,
      "unsuccessful_ocr": 5,
      "equations": 84
    }
  },
  "postprocess_stats": {
    "edit": {}
  }
}