# Active Learning For Level Set Estimation Using Randomized Straddle Algorithms
Anonymous authors

Paper under double-blind review
## Abstract
Level set estimation (LSE), the problem of identifying the set of input points where a function takes a value above (or below) a given threshold, is important in practical applications.
When the function is expensive to evaluate and black-box, the straddle algorithm, a representative heuristic for LSE based on Gaussian process models, and its extensions with theoretical guarantees have been developed. However, many existing methods include a confidence parameter, $\beta_t^{1/2}$, that must be specified by the user. Methods that choose $\beta_t^{1/2}$ heuristically do not provide theoretical guarantees. In contrast, theoretically guaranteed values of $\beta_t^{1/2}$ need to be increased depending on the number of iterations and candidate points; they are conservative and do not perform well in practice. In this study, we propose a novel method, the randomized straddle algorithm, in which $\beta_t$ in the straddle algorithm is replaced by a random sample from the chi-squared distribution with two degrees of freedom.
The confidence parameter in the proposed method does not require adjustment, does not depend on the number of iterations and candidate points, and is not conservative. Furthermore, we show that the proposed method has theoretical guarantees that depend on the sample complexity and the number of iterations. Finally, we validate the applicability of the proposed method through numerical experiments using synthetic and real data.
## 1 Introduction
In various practical applications, including engineering, level set estimation (LSE), the estimation of the region where the value of a function is above (or below) a given threshold $\theta$, is important. A specific example of LSE is the estimation of defective regions in materials for quality control. For instance, in silicon ingots, which are used in solar cells, the carrier lifetime value, a measure of the ingot's quality, is observed at each point on the ingot's surface before shipping, allowing identification of regions that can or cannot be used as solar cells. Since many functions encountered in practical applications, such as the carrier lifetime in the silicon ingot example, are black-box functions with high evaluation costs, it is desirable to identify the desired region without performing an exhaustive search of these black-box functions.
Bayesian optimization (BO) (Shahriari et al., 2015) is a powerful tool for optimizing black-box functions with high evaluation costs. BO predicts black-box functions using surrogate models and adaptively observes the function values based on a criterion called acquisition functions (AFs). Many studies have focused on BO,
particularly on developing new AFs. Among these, BO based on the AF known as the Gaussian process upper confidence bound (GP-UCB) (Srinivas et al., 2010) offers a theoretical guarantee for finding the optimal solution and is a useful method that is flexible and extendable to various problem settings. GP-UCB-based methods have been proposed in various settings, such as the LSE algorithm (Gotovos et al., 2013), multi-fidelity BO (Kandasamy et al., 2016; 2017), multi-objective BO (Zuluaga et al., 2016; Inatsu et al., 2024),
high-dimensional BO (Kandasamy et al., 2015; Rolland et al., 2018), parallel BO (Contal et al., 2013), cascade BO (Kusakawa et al., 2022), and robust BO (Kirschner et al., 2020). These GP-UCB-based methods, like the original GP-UCB-based BO, provide some theoretical guarantee for optimality in each problem setting. However, GP-UCB and its related methods require the user to specify a confidence parameter, $\beta_t^{1/2}$, to adjust the trade-off between exploration and exploitation, where $t$ is the number of iterations in BO. As a theoretical value for GP-UCB, Srinivas et al. (2010) propose that $\beta_t^{1/2}$ should increase with the iteration $t$, but this value is conservative, and Takeno et al. (2023) have pointed out that it results in poor practical performance. Recently, however, Takeno et al. (2023) proposed IRGP-UCB, an AF that randomizes $\beta_t$ in GP-UCB by replacing it with a random sample from a two-parameter exponential distribution. IRGP-UCB does not require parameter tuning, and the realized values from the exponential distribution are less conservative than the theoretical values in GP-UCB, resulting in better practical performance. Furthermore, it has been shown that IRGP-UCB provides a tighter bound for the Bayesian regret, one of the optimality measures in BO, than existing methods. However, it is not clear whether IRGP-UCB can be extended to various methods, including LSE. This study proposes a new method for LSE based on the randomization used in IRGP-UCB.
## 1.1 Related Work
GPs (Rasmussen & Williams, 2005) are often used as surrogate models in BO, and methods using GPs for LSE have also been proposed. A representative heuristic using GPs is the straddle heuristic by Bryan et al. (2005). The straddle method balances the trade-off between the absolute value of the difference between the GP model's predicted mean and the threshold value, and the uncertainty of the prediction. However, no theoretical analysis has been performed on this method. An extension of the straddle heuristic to cases where the black-box function is a composite function was proposed by Bryan & Schneider (2008), but this too is a heuristic method that lacks theoretical analysis.
As a GP-UCB-based method using GPs, Gotovos et al. (2013) proposed the LSE algorithm. The LSE algorithm uses the same confidence parameter $\beta_t^{1/2}$ as GP-UCB and is based on the degree of violation from the threshold relative to the confidence interval determined by the GP prediction model. It has been shown that the LSE algorithm returns an $\epsilon$-accurate solution for the true set with high probability. Bogunovic et al. (2016) proposed the truncated variance reduction (TRUVAR) method, which can handle both BO
and LSE. TRUVAR also accounts for situations where the observation cost varies across observation points and is designed to maximize the reduction in uncertainty in the uncertain set for each observation point per unit cost. Additionally, Shekhar & Javidi (2019) proposed a chaining-based method, which handles the case where the input space is continuous. As an expected improvement-based method, Zanette et al. (2019)
proposed the maximum improvement for level-set estimation (MILE) method. MILE is an algorithm that selects the input point with the highest expected number of points estimated to be in the super-level set, one step ahead, based on data observation.
LSE methods have also been proposed for different settings of black-box functions. For example, Letham et al. (2022) introduced a method for cases where the observation of the black-box function is binary. In the robust BO framework, where the inputs of black-box functions are subject to uncertainty, LSE methods for various robust measures have been developed. Iwazaki et al. (2020) proposed LSE for probability threshold robustness measures, and Inatsu et al. (2021) introduced LSE for distributionally robust probability threshold robustness measures, both of which are acquisition functions based on MILE. Additionally, Hozumi et al.
(2023) proposed a straddle-based method within the framework of transfer learning, where a large amount of data for similar functions is available alongside the primary black-box function to be classified. Inatsu et al. (2020) introduced a MILE-based method for the LSE problem in settings where the uncertainty of the input changes depending on the cost. Mason et al. (2022) addressed the LSE problem in the context where the black-box function is an element of a reproducing kernel Hilbert space.
The straddle method, LSE algorithm, TRUVAR, chaining-based algorithm, and MILE, which have been proposed under settings similar to those considered in this study, have the following issues. The straddle method is not an acquisition function proposed based on GP-UCB, but it includes the confidence parameter $\beta_t^{1/2}$, which is essentially the same as in GP-UCB. However, the value of this parameter is determined heuristically, resulting in a method without theoretical guarantees. The LSE algorithm and TRUVAR have been theoretically analyzed, but, like GP-UCB, they require increasing the theoretical value of the confidence parameter according to the iteration $t$, which makes them conservative. The chaining-based algorithm can handle continuous spaces through discretization, but it involves many adjustment parameters. The recommended theoretical values depend on model parameters, including kernel parameters of the surrogate
![2_image_0.png](2_image_0.png)
Figure 1: Comparison of the confidence parameter $\beta_t^{1/2}$ in the randomized straddle and LSE algorithms. The left-hand side figure shows the histogram of $\beta_t^{1/2}$ when $\beta_t$ is sampled 1,000,000 times from the chi-squared distribution with two degrees of freedom. The red line in the center and right figures denotes $\mathbb{E}[\beta_t^{1/2}] = \sqrt{2\pi}/2 \approx 1.25$, the shaded area denotes the 95% confidence interval of $\beta_t^{1/2}$, and the black line denotes the theoretical value of $\beta_t^{1/2}$ in the LSE algorithm given by $\beta_t^{1/2} = \sqrt{2\log(|\mathcal{X}|\pi^2 t^2/(6\delta))}$, where $\delta = 0.05$. The figure in the center shows the behavior of $\beta_t^{1/2}$ as the number of iterations $t$ increases when the number of candidate points $|\mathcal{X}|$ is fixed at 1000, whereas the figure on the right shows the behavior of $\beta_t^{1/2}$ as the number of candidate points $|\mathcal{X}|$ increases when the number of iterations $t$ is fixed at 100.
model, and are known only for specific settings. MILE is designed for cases with a finite number of candidate points and does not support continuous settings like the chaining-based algorithm.
## 1.2 Contribution
This study proposes a novel straddle AF called the *randomized straddle*, which introduces the confidence parameter randomization technique used in IRGP-UCB and solves the problems described in Section 1.1. Figure 1 shows a comparison of the confidence parameters in the proposed AF and those in the LSE algorithm. The contributions of this study are as follows:
- This study proposes a randomized straddle AF, which replaces $\beta_t$ in the straddle heuristic with a random sample from the chi-squared distribution with two degrees of freedom. We emphasize that, unlike the LSE algorithm, the confidence parameter in the randomized straddle does not need to increase with the iteration $t$. Additionally, $\beta_t^{1/2}$ in the LSE algorithm depends on the number of candidate points $|\mathcal{X}|$, and $\beta_t^{1/2}$ increases as $|\mathcal{X}|$ increases, while $\beta_t^{1/2}$ in the randomized straddle does not depend on $|\mathcal{X}|$ and can be applied even when $\mathcal{X}$ is an infinite set. Furthermore, the expected value of the realized value of $\beta_t^{1/2}$ in the randomized straddle is $\sqrt{2\pi}/2 \approx 1.25$, which is less conservative than the theoretical value in the LSE algorithm.
- We show that the randomized straddle guarantees that the expected loss for misclassification in LSE converges to 0. In particular, for the misclassification loss $r_t = \frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}} l_t(\mathbf{x})$, the randomized straddle guarantees $\mathbb{E}[r_t] = O(\sqrt{\gamma_t/t})$, where $l_t(\mathbf{x})$ is 0 if the input point $\mathbf{x}$ is correctly classified and $|f(\mathbf{x}) - \theta|$ if misclassified, and $\gamma_t$ is the maximum information gain, a commonly used sample complexity measure.
- Additionally, we conducted numerical experiments using synthetic and real data, which confirmed that the proposed method has performance equal to or better than existing methods.
## 2 Preliminary
Let $f : \mathcal{X} \to \mathbb{R}$ be an expensive-to-evaluate black-box function, where $\mathcal{X} \subset \mathbb{R}^d$ is a finite set, or an infinite compact set with positive Lebesgue measure $\mathrm{Vol}(\mathcal{X})$. Also let $\theta \in \mathbb{R}$ be a known threshold given by the user. The aim of this study is to efficiently identify the subsets $H^*$ and $L^*$ of $\mathcal{X}$ defined as
$$H^* = \{\mathbf{x} \in \mathcal{X} \mid f(\mathbf{x}) \geq \theta\}, \quad L^* = \{\mathbf{x} \in \mathcal{X} \mid f(\mathbf{x}) < \theta\}.$$
For each iteration $t \geq 1$, we can query $\mathbf{x}_t \in \mathcal{X}$, and $f(\mathbf{x}_t)$ is observed with noise as $y_t = f(\mathbf{x}_t) + \varepsilon_t$, where $\varepsilon_t$ follows the normal distribution with mean 0 and variance $\sigma^2_{\mathrm{noise}}$. In this study, we assume that $f$ is a sample path from a GP $\mathcal{GP}(0, k)$, where $\mathcal{GP}(0, k)$ is the zero-mean GP with a kernel function $k(\cdot, \cdot)$. Moreover, we assume that $k(\cdot, \cdot)$ is a positive-definite kernel that satisfies $k(\mathbf{x}, \mathbf{x}) \leq 1$ for all $\mathbf{x} \in \mathcal{X}$, and that $f, \varepsilon_1, \ldots, \varepsilon_t$ are mutually independent.
Gaussian Process Model We use a GP surrogate model $\mathcal{GP}(0, k)$ for the black-box function. Given a dataset $\mathcal{D}_t = \{(\mathbf{x}_j, y_j)\}_{j=1}^t$, where $t \geq 1$ is the number of iterations, the posterior distribution of $f$ is again a GP. Then, its posterior mean $\mu_t(\mathbf{x})$ and posterior variance $\sigma^2_t(\mathbf{x})$ can be calculated as:
$$\begin{aligned}
\mu_{t}(\mathbf{x})&=\mathbf{k}_{t}(\mathbf{x})^{\top}(\mathbf{K}_{t}+\sigma_{\mathrm{noise}}^{2}\mathbf{I}_{t})^{-1}\mathbf{y}_{t},\\
\sigma_{t}^{2}(\mathbf{x})&=k(\mathbf{x},\mathbf{x})-\mathbf{k}_{t}(\mathbf{x})^{\top}(\mathbf{K}_{t}+\sigma_{\mathrm{noise}}^{2}\mathbf{I}_{t})^{-1}\mathbf{k}_{t}(\mathbf{x}),
\end{aligned} \tag{1}$$
where $\mathbf{k}_t(\mathbf{x})$ is the $t$-dimensional vector whose $i$-th element is $k(\mathbf{x}, \mathbf{x}_i)$, $\mathbf{y}_t = (y_1, \ldots, y_t)^\top$, $\mathbf{K}_t$ is the $t \times t$ matrix whose $(j, k)$-th element is $k(\mathbf{x}_j, \mathbf{x}_k)$, $\mathbf{I}_t$ is the $t \times t$ identity matrix, and the superscript $\top$ indicates the transpose of vectors or matrices. In addition, we define $\mathcal{D}_0 = \emptyset$, $\mu_0(\mathbf{x}) = 0$ and $\sigma^2_0(\mathbf{x}) = k(\mathbf{x}, \mathbf{x})$.
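For reference, the following is a minimal numpy sketch of how the posterior quantities in equation 1 can be computed; the Gaussian kernel form used later in the experiments, the function names, and the default hyperparameters are illustrative assumptions, not part of the paper.

```python
import numpy as np

def gaussian_kernel(A, B, variance=1.0, lengthscale=2.0):
    # k(x, x') = variance * exp(-||x - x'||_2^2 / lengthscale)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-sq_dists / lengthscale)

def gp_posterior(X_train, y_train, X_test, noise_var=1e-6):
    # Posterior mean mu_t(x) and variance sigma_t^2(x) of equation (1),
    # given D_t = {(x_j, y_j)}_{j=1}^t.
    K = gaussian_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    k_star = gaussian_kernel(X_test, X_train)            # rows are k_t(x)^T
    mu = k_star @ np.linalg.solve(K, y_train)            # k_t(x)^T (K_t + s^2 I)^{-1} y_t
    v = np.linalg.solve(K, k_star.T)
    var = gaussian_kernel(X_test, X_test).diagonal() - np.sum(k_star * v.T, axis=1)
    return mu, var

# Example usage with dummy data:
# rng = np.random.default_rng(0)
# X_train = rng.uniform(-5, 5, size=(10, 2)); y_train = rng.normal(size=10)
# X_test = rng.uniform(-5, 5, size=(5, 2))
# mu, var = gp_posterior(X_train, y_train, X_test)
```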
## 3 Proposed Method
In this section, we describe a method for estimating $H^*$ and $L^*$ based on the GP posterior, and an AF for determining the next evaluation point.
## 3.1 Level Set Estimation
First, we propose a method to estimate $H^*$ and $L^*$. While an existing study (Gotovos et al., 2013) proposes an estimation method using the lower and upper bounds of a credible interval of $f(\mathbf{x})$, this study proposes an estimation method using the posterior mean instead of the credible interval.
Definition 3.1 (Level Set Estimation). For each $t \geq 1$, we estimate $H^*$ and $L^*$ as:
$$H_t = \{\mathbf{x} \in \mathcal{X} \mid \mu_{t-1}(\mathbf{x}) \geq \theta\}, \quad L_t = \{\mathbf{x} \in \mathcal{X} \mid \mu_{t-1}(\mathbf{x}) < \theta\}. \tag{2}$$
By Definition 3.1, any $\mathbf{x} \in \mathcal{X}$ belongs to either $H_t$ or $L_t$, and $H_t \cup L_t = \mathcal{X}$. Therefore, the unknown set used in the existing study (Gotovos et al., 2013) is not defined in this study.
## 3.2 Acquisition Function
In this section, we propose an AF for determining the next point to be evaluated. For each $t \geq 1$ and $\mathbf{x} \in \mathcal{X}$, we define the upper bound $\mathrm{ucb}_{t-1}(\mathbf{x})$ and lower bound $\mathrm{lcb}_{t-1}(\mathbf{x})$ of the credible interval of $f(\mathbf{x})$ as
$$\mathrm{ucb}_{t-1}(\mathbf{x}) = \mu_{t-1}(\mathbf{x}) + \beta_t^{1/2}\sigma_{t-1}(\mathbf{x}), \quad \mathrm{lcb}_{t-1}(\mathbf{x}) = \mu_{t-1}(\mathbf{x}) - \beta_t^{1/2}\sigma_{t-1}(\mathbf{x}),$$
where $\beta_t^{1/2} \geq 0$ is a user-specified confidence parameter. Here, the straddle heuristic $\mathrm{STR}_{t-1}(\mathbf{x})$ proposed by Bryan et al. (2005) is defined as:
$$\mathrm{STR}_{t-1}(\mathbf{x})=\beta_{t}^{1/2}\sigma_{t-1}(\mathbf{x})-|\mu_{t-1}(\mathbf{x})-\theta|.$$
Algorithm 1 Active Learning for Level Set Estimation Using Randomized Straddle Algorithms
Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$
for $t = 1, 2, \ldots, T$ do
  Compute $\mu_{t-1}(\mathbf{x})$ and $\sigma^2_{t-1}(\mathbf{x})$ for each $\mathbf{x} \in \mathcal{X}$ by equation 1
  Estimate $H_t$ and $L_t$ by equation 2
  Generate $\beta_t$ from the chi-squared distribution with two degrees of freedom
  Compute $\mathrm{ucb}_{t-1}(\mathbf{x})$, $\mathrm{lcb}_{t-1}(\mathbf{x})$ and $a_{t-1}(\mathbf{x})$
  Select the next evaluation point $\mathbf{x}_t$ by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} a_{t-1}(\mathbf{x})$
  Observe $y_t = f(\mathbf{x}_t) + \varepsilon_t$ at the point $\mathbf{x}_t$
  Update the GP by adding the observed data
end for
Output: Return $H_T$ and $L_T$ as the estimated sets

Thus, by using $\mathrm{ucb}_{t-1}(\mathbf{x})$ and $\mathrm{lcb}_{t-1}(\mathbf{x})$, $\mathrm{STR}_{t-1}(\mathbf{x})$ can be rewritten as
$$\mathrm{STR}_{t-1}(\mathbf{x}) = \min\{\mathrm{ucb}_{t-1}(\mathbf{x}) - \theta,\ \theta - \mathrm{lcb}_{t-1}(\mathbf{x})\}.$$
We consider sampling $\beta_t$ of the straddle heuristic from a probability distribution. In the framework of black-box function maximization, Takeno et al. (2023) uses a sample from a two-parameter exponential distribution as the confidence parameter of the original GP-UCB. The two-parameter exponential distribution considered by Takeno et al. (2023) can be expressed as $2\log(|\mathcal{X}|/2) + s_t$, where $s_t$ follows the chi-squared distribution with two degrees of freedom. Therefore, we use a similar argument and consider $\beta_t$ of the straddle heuristic as a sample from the chi-squared distribution with two degrees of freedom, and propose the following randomized straddle AF.
Definition 3.2 (Randomized Straddle). For each $t \geq 1$, let $\beta_t$ be a sample from the chi-squared distribution with two degrees of freedom, where $\beta_1, \ldots, \beta_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Then, the randomized straddle $a_{t-1}(\mathbf{x})$ is defined as follows:
$$a_{t-1}(\mathbf{x})=\max\{\min\{\mathrm{ucb}_{t-1}(\mathbf{x})-\theta,\ \theta-\mathrm{lcb}_{t-1}(\mathbf{x})\},0\}. \tag{3}$$
Hence, using $a_{t-1}(\mathbf{x})$, the next point to be evaluated is selected by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} a_{t-1}(\mathbf{x})$. Takeno et al. (2023) adds a constant $2\log(|\mathcal{X}|/2)$, which depends on the number of elements in $\mathcal{X}$, to the sample from the chi-squared distribution with two degrees of freedom. In contrast, the random sample proposed in this study does not require the addition of such a constant. As a result, the confidence parameter in the randomized straddle does not depend on the number of iterations $t$ or the number of candidate points.
The only difference between the straddle heuristic $\mathrm{STR}_{t-1}(\mathbf{x})$ and equation 3 is that $\beta_t^{1/2}$ is randomized, and equation 3 performs a max operation with 0. We describe in Section 4 that this modification leads to theoretical guarantees. Finally, we give the pseudocode of the proposed algorithm in Algorithm 1.
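The following is a minimal sketch of one iteration of Algorithm 1, assuming the posterior mean and standard deviation over a finite candidate set have already been computed (for example, with the `gp_posterior` sketch in Section 2); all variable names are illustrative.

```python
import numpy as np

def randomized_straddle_step(mu, sigma, X_cand, theta, rng):
    # mu, sigma: posterior mean and std over the finite candidate set X_cand
    beta_t = rng.chisquare(df=2)                        # beta_t ~ chi-squared with 2 d.o.f.
    ucb = mu + np.sqrt(beta_t) * sigma                  # ucb_{t-1}(x)
    lcb = mu - np.sqrt(beta_t) * sigma                  # lcb_{t-1}(x)
    acq = np.maximum(np.minimum(ucb - theta, theta - lcb), 0.0)   # equation (3)
    H_t = X_cand[mu >= theta]                           # estimated super-level set, equation (2)
    L_t = X_cand[mu < theta]                            # estimated sub-level set
    x_next = X_cand[np.argmax(acq)]                     # next evaluation point x_t
    return x_next, H_t, L_t

# Example usage with precomputed posterior quantities:
# rng = np.random.default_rng(0)
# x_next, H_t, L_t = randomized_straddle_step(mu, sigma, X_cand, theta=0.5, rng=rng)
```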
## 4 Theoretical Analysis
In this section, we give theoretical guarantees for the proposed model. First, we define the loss lt(x) for each x ∈ X and t ≥ 1 as
$$l_{t}(\mathbf{x})={\left\{\begin{array}{l l}{0}&{{\mathrm{if~}}\mathbf{x}\in H^{*},\mathbf{x}\in H_{t},}\\ {0}&{{\mathrm{if~}}\mathbf{x}\in L^{*},\mathbf{x}\in L_{t},}\\ {f(\mathbf{x})-\theta}&{{\mathrm{if~}}\mathbf{x}\in H^{*},\mathbf{x}\in L_{t},}\\ {\theta-f(\mathbf{x})}&{{\mathrm{if~}}\mathbf{x}\in L^{*},\mathbf{x}\in H_{t}}\end{array}\right.}$$
Then, the loss $r(H_t, L_t)$ for the estimated sets $H_t$ and $L_t$ is defined as follows$^1$:
$$r(H_{t},L_{t})=\begin{cases}\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})&\text{if }\mathcal{X}\text{ is finite},\\ \frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}l_{t}(\mathbf{x})\,\mathrm{d}\mathbf{x}&\text{if }\mathcal{X}\text{ is infinite},\end{cases}\ \equiv r_{t}.$$
We also define the cumulative loss as $R_t = \sum_{i=1}^t r_i$. Let $\gamma_t$ be the maximum information gain, one of the standard indicators for measuring sample complexity. The maximum information gain $\gamma_t$ is often used in theoretical analyses of BO and LSE using GPs (Srinivas et al., 2010; Gotovos et al., 2013), and $\gamma_t$ is given by
$$\gamma_{t}=\frac{1}{2}\max_{\tilde{\mathbf{x}}_{1},\ldots,\tilde{\mathbf{x}}_{t}}\log\det(\mathbf{I}_{t}+\sigma_{\mathrm{noise}}^{-2}\tilde{\mathbf{K}}_{t}),$$
where $\tilde{\mathbf{K}}_t$ is the $t \times t$ matrix whose $(j, k)$-th element is $k(\tilde{\mathbf{x}}_j, \tilde{\mathbf{x}}_k)$. Then, the following theorem holds.
Theorem 4.1. Assume that $f$ follows $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\mathbf{x}, \mathbf{x}) \leq 1$ for any $\mathbf{x} \in \mathcal{X}$. For each $t \geq 1$, let $\beta_t$ be a sample from the chi-squared distribution with two degrees of freedom, where $\beta_1, \ldots, \beta_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Then, the following inequality holds:
$$\mathbb{E}[R_{t}]\leq\sqrt{C_{1}t\gamma_{t}},$$
where $C_1 = 4/\log(1 + \sigma_{\mathrm{noise}}^{-2})$, and the expectation is taken with respect to all randomness, including $f$, $\varepsilon_t$, and $\beta_t$.
From Theorem 4.1, the following theorem holds. Theorem 4.2. Under the assumptions of Theorem 4.1, the following inequality holds:
$$\mathbb{E}[r_{t}]\leq{\sqrt{\frac{C_{1}\gamma_{t}}{t}}},$$
where C1 is given in Theorem 4.1.
By the definition of the loss $l_t(\mathbf{x})$, $l_t(\mathbf{x})$ represents how far $f(\mathbf{x})$ is from the threshold when $\mathbf{x}$ is misclassified, and $r_t$ represents the average value of $l_t(\mathbf{x})$ across all candidate points. Under mild assumptions, it is known that $\gamma_t$ is sublinear (Srinivas et al., 2010). Therefore, by Theorem 4.1, it is guaranteed that $R_t$ is also sublinear in the expected value sense. Furthermore, by Theorem 4.2, it is guaranteed that $r_t$ converges to 0 in the expected value sense. Finally, it is challenging to directly compare the proposed method with GP-based methods such as the LSE algorithm and TRUVAR in terms of theoretical analysis. This difficulty arises because, first, the proposed method and these methods use different approaches to estimate $H^*$ and $L^*$, and second, the criteria for evaluating the quality of the estimated sets differ. However, it is important to note that the proposed method has theoretical guarantees, and the confidence parameter $\beta_t^{1/2}$ does not depend on the number of iterations $t$ or the input space $\mathcal{X}$, making it applicable whether $\mathcal{X}$ is finite or infinite. Additionally, since $\mathbb{E}[\beta_t^{1/2}] = \sqrt{2\pi}/2 \approx 1.25$, the realized values of $\beta_t^{1/2}$ are not conservative. To the best of our knowledge, no existing method satisfies all of these properties. Moreover, we confirm in Section 5 that the practical performance of the proposed method is equal to or better than that of existing methods.
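As a supplementary illustration of the sample-complexity term, the quantity $\frac{1}{2}\log\det(\mathbf{I}_t + \sigma_{\mathrm{noise}}^{-2}\tilde{\mathbf{K}}_t)$ can be evaluated for any chosen point set, and $\gamma_t$ can be approximated by a greedy selection over a finite candidate set; the sketch below uses this common heuristic, which is not part of the proposed method, and all names are illustrative.

```python
import numpy as np

def info_gain(K, noise_var):
    # 0.5 * log det(I + noise_var^{-1} K) for a kernel matrix K
    _, logdet = np.linalg.slogdet(np.eye(K.shape[0]) + K / noise_var)
    return 0.5 * logdet

def greedy_gamma(X_cand, t, kernel, noise_var=1e-6):
    # Greedily pick t points from X_cand to approximately maximize the information gain.
    chosen = []
    for _ in range(t):
        gains = []
        for i in range(len(X_cand)):
            idx = chosen + [i]
            gains.append(info_gain(kernel(X_cand[idx], X_cand[idx]), noise_var))
        chosen.append(int(np.argmax(gains)))
    return info_gain(kernel(X_cand[chosen], X_cand[chosen]), noise_var)
```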
## 5 Numerical Experiments
We confirm the practical performance of the proposed method using synthetic functions and real-world data.
## 5.1 Synthetic Data Experiments When X **Is Finite**
In this section, the input space $\mathcal{X}$ was defined as a set of grid points that uniformly divide the region $[l_1, u_1] \times [l_2, u_2]$ into a $50 \times 50$ grid. In all experiments, we used the following Gaussian kernel:
$$k(\mathbf{x},\mathbf{x}^{\prime})=\sigma_{f}^{2}\exp\left(-{\frac{\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}}{L}}\right)$$
$^1$The discussion of the case where the loss is defined based on the maximum value, $r(H_t, L_t) = \max_{\mathbf{x}\in\mathcal{X}} l_t(\mathbf{x})$, is given in Appendix A.
As black-box functions, we considered the following three synthetic functions:
Case 1 The black-box function $f(x_1, x_2)$ is a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is given by $k(\mathbf{x}, \mathbf{x}') = \exp(-\|\mathbf{x} - \mathbf{x}'\|_2^2/2)$.
Case 2 The black-box function f(x1, x2) is the following sinusoidal function:
f(x1, x2) = sin(10x1) + cos(4x2) − cos(3x1x2).
Case 3 The black-box function f(x1, x2) is the following shifted negative Himmelblau function:
$$f(x_1, x_2) = -(x_1^2 + x_2 - 11)^2 - (x_1 + x_2^2 - 7)^2 + 100.$$
Furthermore, we used the normal distribution with mean 0 and variance $\sigma^2_{\mathrm{noise}}$ for the observation noise.
The threshold θ and the parameters used for each setting are summarized in Table 1. The settings for the sinusoidal and Himmelblau functions are the same as those used in Zanette et al. (2019). The performance was evaluated using the loss rt and Fscoret, where Fscoret is the F-score calculated by
$$\text{Pre}_{t}={\frac{|H_{t}\cap H^{*}|}{|H_{t}|}},\text{Rec}_{t}={\frac{|H_{t}\cap H^{*}|}{|H^{*}|}},\text{Fscore}_{t}={\frac{2\times\text{Pre}_{t}\times\text{Rec}_{t}}{\text{Pre}_{t}+\text{Rec}_{t}}}.$$
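A minimal sketch of how these evaluation metrics can be computed on a finite candidate set is given below; the input arrays (true function values and posterior means on $\mathcal{X}$) and the function names are illustrative assumptions.

```python
import numpy as np

def loss_rt(f_true, mu, theta):
    # Average of l_t(x): |f(x) - theta| on misclassified points, 0 otherwise.
    misclassified = (f_true >= theta) != (mu >= theta)
    return float(np.mean(np.abs(f_true - theta) * misclassified))

def f_score(f_true, mu, theta):
    # F-score of the estimated super-level set H_t against H*.
    H_star, H_t = f_true >= theta, mu >= theta
    tp = np.sum(H_t & H_star)
    if np.sum(H_t) == 0 or np.sum(H_star) == 0 or tp == 0:
        return 0.0
    prec, rec = tp / np.sum(H_t), tp / np.sum(H_star)
    return float(2 * prec * rec / (prec + rec))
```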
Then, we compared the following six AFs:
(Random) Select xt by using random sampling.
(US) Perform uncertainty sampling, that is, $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \sigma^2_{t-1}(\mathbf{x})$.
(Straddle) Perform the straddle heuristic proposed by Bryan et al. (2005), that is, $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \mathrm{STR}_{t-1}(\mathbf{x})$.
(LSE) Perform the LSE algorithm using the LSE AF $a^{(\mathrm{LSE})}_{t-1}(\mathbf{x})$ proposed by Gotovos et al. (2013), that is, $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} a^{(\mathrm{LSE})}_{t-1}(\mathbf{x})$.
(MILE) Perform the MILE algorithm proposed by Zanette et al. (2019), that is, $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} a^{(\mathrm{MILE})}_{t-1}(\mathbf{x})$, where $a^{(\mathrm{MILE})}_{t-1}(\mathbf{x})$ is the same as the robust MILE, another AF proposed by Zanette et al. (2019), with the tuning parameters and $\gamma$ set to 0 and $-\infty$, respectively.
(Proposed) Select $\mathbf{x}_t$ by using equation 3, that is, $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} a_{t-1}(\mathbf{x})$.
In all experiments, the classification rules were the same for all six methods, and only the AF was changed. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for MILE and Straddle, and $\beta_t^{1/2} = \sqrt{2\log(2500 \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Under this setup, one initial point was taken at random and the algorithm was run until the number of iterations reached 300. This simulation was repeated 100 times, and the average $r_t$ and $\mathrm{Fscore}_t$ at each iteration were calculated, where in Case 1, $f$ was generated for each simulation from $\mathcal{GP}(0, k)$.
As shown in Fig. 2, the proposed method consistently performs as well as or better than the comparison methods in all three cases, in terms of both the loss rt and the Fscoret.
## 5.2 Synthetic Data Experiments When X **Is Infinite**
In this section, we used the region $[-5, 5]^5 \subset \mathbb{R}^5$ as $\mathcal{X}$ and the same kernel as in Section 5.1. As black-box functions, we used the following three synthetic functions:
Case 1 The black-box function f(x1, x2, x3, x4, x5) is the following shifted negative sphere function:
$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=41.65518-\left(\sum_{d=1}^{5}x_{d}^{2}\right).$$
Table 1: Experimental parameters for each setting in Section 5.1

| Black-box function | $l_1$ | $u_1$ | $l_2$ | $u_2$ | $\sigma_f^2$ | $L$ | $\sigma_{\mathrm{noise}}^2$ | $\theta$ |
|---|---|---|---|---|---|---|---|---|
| GP sample path | $-5$ | $5$ | $-5$ | $5$ | $1$ | $2$ | $10^{-6}$ | $0.5$ |
| Sinusoidal function | $0$ | $1$ | $0$ | $2$ | $\exp(2)$ | $2\exp(-3)$ | $\exp(-2)$ | $1$ |
| Himmelblau's function | $-5$ | $5$ | $-5$ | $5$ | $\exp(8)$ | $2$ | $\exp(4)$ | $0$ |
![7_image_0.png](7_image_0.png)
Figure 2: Averages for the loss rt and Fscoret for each AF over 100 simulations across different settings when the input space is finite. The top row shows rt, and the bottom row shows Fscoret. Error bars represent six times the standard error.
Case 2 The black-box function f(x1, x2, x3, x4, x5) is the following shifted negative Rosenbrock function:
$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=53458.91-\left[\sum_{d=1}^{4}\left\{100(x_{d+1}-x_{d}^{2})^{2}+(1-x_{d})^{2}\right\}\right].$$
Case 3 The black-box function f(x1, x2, x3, x4, x5) is the following shifted negative Styblinski-Tang function:
$$f(x_{1},x_{2},x_{3},x_{4},x_{5})=-20.8875-\frac{\sum_{d=1}^{5}(x_{d}^{4}-16x_{d}^{2}+5x_{d})}{2}.$$
Additionally, we used the normal distribution with mean 0 and variance σ 2 noise for the observation noise. The threshold θ and parameters used for each setting are summarized in Table 2. The performance was evaluated using rt and Fscoret. For each simulation, 100,000 points were randomly selected from [−5, 5]5, which were used as the input point set X˜ to calculate rt and Fscoret. The values of rt and Fscoret in X˜ were calculated as approximations of the true values. As AFs, we compared five methods used in Section 5.1, except for
Table 2: Experimental parameters for each setting in Section 5.2

| Black-box function | $\sigma_f^2$ | $L$ | $\sigma_{\mathrm{noise}}^2$ | $\theta$ |
|---|---|---|---|---|
| Sphere | $900$ | $40$ | $10^{-6}$ | $9.6$ |
| Rosenbrock | $30000^2$ | $40$ | $10^{-6}$ | $14800$ |
| Styblinski-Tang | $75^2$ | $40$ | $10^{-6}$ | $12.3$ |
MILE, which does not handle continuous settings. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for Straddle, and $\beta_t^{1/2} = \sqrt{2\log(10^{15} \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Here, the original LSE algorithm uses the intersection of $\mathrm{ucb}_{t-1}(\mathbf{x})$ and $\mathrm{lcb}_{t-1}(\mathbf{x})$ over the previous iterations, given below, to calculate the AF:
$$\tilde{\mathrm{ucb}}_{t-1}(\mathbf{x})=\min_{1\leq i\leq t}\mathrm{ucb}_{i-1}(\mathbf{x}),\quad\tilde{\mathrm{lcb}}_{t-1}(\mathbf{x})=\max_{1\leq i\leq t}\mathrm{lcb}_{i-1}(\mathbf{x}).$$
Conversely, we did not perform this operation in the infinite set setting, and calculated the AF instead using $\tilde{\mathrm{ucb}}_{t-1}(\mathbf{x}) = \mathrm{ucb}_{t-1}(\mathbf{x})$ and $\tilde{\mathrm{lcb}}_{t-1}(\mathbf{x}) = \mathrm{lcb}_{t-1}(\mathbf{x})$. Under this setup, one initial point was chosen at random and the algorithm was run for 500 iterations. This simulation was repeated 100 times and the average $r_t$ and $\mathrm{Fscore}_t$ at each iteration were calculated.
From Fig. 3, it can be confirmed that the proposed method has performance equal to or better than the comparison methods in terms of both $r_t$ and $\mathrm{Fscore}_t$ in the sphere function setting. In the case of the Rosenbrock function setting, the proposed method exhibited performance equivalent to or better than the comparison methods in terms of $r_t$. Moreover, in terms of $\mathrm{Fscore}_t$, the Random method showed the best performance up to 250 iterations, but the proposed method matched or outperformed the comparison methods by the end of the iterations. In the Styblinski-Tang function setting, Random performed best in terms of $r_t$ and $\mathrm{Fscore}_t$ up to around 300 iterations, but the proposed method equaled or surpassed the comparison methods by the final iterations.
## 5.3 Real-World Data Experiments
In this section, we conducted experiments using the carrier lifetime value, a measure of the quality of silicon ingots used in solar cells (Kutsukake et al., 2015). The data we used include the two-dimensional coordinates $\mathbf{x} = (x_1, x_2) \in \mathbb{R}^2$ of the sample surface and the carrier lifetime values $\tilde{f}(\mathbf{x}) \in [0.091587, 7.4613]$ at each coordinate, where $x_1 \in \{2a + 6 \mid 1 \leq a \leq 89\}$, $x_2 \in \{2a + 6 \mid 1 \leq a \leq 74\}$ and $|\mathcal{X}| = 89 \times 74 = 6586$. In quality evaluation, identifying defective regions, known as red zones, where the value of $\tilde{f}(\mathbf{x})$ falls below a certain threshold, is crucial. In this experiment, the threshold was set to 3, and we focused on identifying regions where $\tilde{f}(\mathbf{x})$ is 3 or less. We considered $f(\mathbf{x}) = -\tilde{f}(\mathbf{x}) + 3$ as the black-box function and performed experiments with $\theta = 0$. Additionally, the experiment was conducted assuming there was no noise in the observations. Moreover, to stabilize the posterior distribution calculation, $\sigma^2_{\mathrm{noise}} = 10^{-6}$ was used in the calculation. We used the following Matérn 3/2 kernel:
$$k(\mathbf{x},\mathbf{x}^{\prime})=4\left(1+{\frac{{\sqrt{3}}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}}{25}}\right)\exp\left(-{\frac{{\sqrt{3}}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}}{25}}\right)$$
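For reference, a minimal numpy sketch of this Matérn 3/2 kernel (output variance 4, lengthscale 25) is given below; the function name is an illustrative assumption.

```python
import numpy as np

def matern32_kernel(A, B, variance=4.0, lengthscale=25.0):
    # k(x, x') = variance * (1 + sqrt(3)*||x - x'||_2 / lengthscale) * exp(-sqrt(3)*||x - x'||_2 / lengthscale)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))   # pairwise Euclidean distances
    s = np.sqrt(3.0) * d / lengthscale
    return variance * (1.0 + s) * np.exp(-s)
```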
The performance was evaluated using the loss $r_t$ and $\mathrm{Fscore}_t$. As AFs, we compared the six methods used in Section 5.1. We used $\beta_t^{1/2} = 3$ as the confidence parameter required for MILE and Straddle, and $\beta_t^{1/2} = \sqrt{2\log(6586 \times \pi^2 t^2/(6 \times 0.05))}$ for LSE. Under this setup, one initial point was chosen at random and the algorithm was run for 200 iterations. Because the observation noise was set to 0, the experiment was conducted under the setting that a point that had been observed once would not be observed thereafter.
This simulation was repeated 100 times and the average rt and Fscoret at each iteration were calculated.
As shown in Fig. 4, the proposed method demonstrates performance that is equal to or better than the comparison methods in terms of both loss rt and Fscoret.
![9_image_0.png](9_image_0.png)
Figure 3: Averages of the loss $r_t$ and $\mathrm{Fscore}_t$ for each AF over 100 simulations for each setting when the input space is infinite. The top row shows $r_t$, the bottom row shows $\mathrm{Fscore}_t$, and each error bar represents six times the standard error.
![9_image_1.png](9_image_1.png)
Figure 4: Averages of the loss rt and Fscoret for each AF over 100 simulations using the carrier lifetime data. The left figure shows rt, while the right figure shows Fscoret, with error bars representing six times the standard error.
## 6 Conclusion
In this study, we proposed a novel method called the randomized straddle algorithm, an extension of the straddle algorithm for LSE problems in black-box functions. The proposed method replaces the value of $\beta_t$ in the straddle algorithm with a random sample from the chi-squared distribution with two degrees of freedom, performing LSE based on the GP posterior mean. Through these modifications, we proved that the expected value of the loss in the estimated sets is $O(\sqrt{\gamma_t/t})$. Compared to existing methods, the proposed approach offers three key advantages. First, most theoretical analyses of existing methods involve confidence parameters that depend on the number of candidate points and iterations, whereas such terms are not present in the proposed method. Second, existing methods either do not apply to continuous search spaces or require discretization, with parameters for discretization often being unknown. In contrast, the proposed method is applicable to continuous search spaces without requiring algorithmic adjustments, providing the same theoretical guarantees as for finite search spaces. Third, while confidence parameters in existing methods tend to be overly conservative, the expected value of the confidence parameter in the proposed method is $\sqrt{2\pi}/2 \approx 1.25$, which is not excessively conservative. Furthermore, numerical experiments demonstrated that the performance of the proposed method is equal to or better than that of existing methods. This indicates that the proposed method performs comparably to heuristic methods while offering the added benefit of theoretical guarantees. However, it is important to develop methods that provide both theoretical guarantees and practical applicability for other evaluation measures of classification performance, such as $\mathrm{Fscore}_t$. Addressing these issues will be the focus of future work.
## References
Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, and Volkan Cevher. Truncated variance reduction:
A unified approach to bayesian optimization and level-set estimation. *Advances in neural information* processing systems, 29, 2016.
Brent Bryan and Jeff Schneider. Actively learning level-sets of composite functions. In *Proceedings of the* 25th international conference on Machine learning, pp. 80–87, 2008.
Brent Bryan, Robert C Nichol, Christopher R Genovese, Jeff Schneider, Christopher J Miller, and Larry Wasserman. Active learning for identifying function threshold boundaries. Advances in neural information processing systems, 18, 2005.
Emile Contal, David Buffoni, Alexandre Robicquet, and Nicolas Vayatis. Parallel gaussian process optimization with upper confidence bound and pure exploration. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 225–240. Springer, 2013.
Alkis Gotovos, Nathalie Casati, Gregory Hitz, and Andreas Krause. Active learning for level set estimation.
In *Proceedings of the Twenty-Third international joint conference on Artificial Intelligence*, pp. 1344–1350, 2013.
Shota Hozumi, Kentaro Kutsukake, Kota Matsui, Syunya Kusakawa, Toru Ujihara, and Ichiro Takeuchi.
Adaptive defective area identification in material surface using active transfer learning-based level set estimation. *arXiv preprint arXiv:2304.01404*, 2023.
Yu Inatsu, Masayuki Karasuyama, Keiichi Inoue, and Ichiro Takeuchi. Active Learning for Level Set Estimation Under Input Uncertainty and Its Extensions. *Neural Computation*, 32(12):2486–2531, 12 2020.
ISSN 0899-7667. doi: 10.1162/neco_a_01332. URL https://doi.org/10.1162/neco_a_01332.
Yu Inatsu, Shogo Iwazaki, and Ichiro Takeuchi. Active learning for distributionally robust level-set estimation. In *International Conference on Machine Learning*, pp. 4574–4584. PMLR, 2021.
Yu Inatsu, Shion Takeno, Hiroyuki Hanada, Kazuki Iwata, and Ichiro Takeuchi. Bounding box-based multiobjective Bayesian optimization of risk measures under input uncertainty. In Sanjoy Dasgupta, Stephan Mandt, and Yingzhen Li (eds.), *Proceedings of The 27th International Conference on Artificial Intelligence* and Statistics, volume 238 of *Proceedings of Machine Learning Research*, pp. 4564–4572. PMLR, 02–04 May 2024. URL https://proceedings.mlr.press/v238/inatsu24a.html.
Shogo Iwazaki, Yu Inatsu, and Ichiro Takeuchi. Bayesian experimental design for finding reliable level set under input uncertainty. *IEEE Access*, 8:203982–203993, 2020.
Kirthevasan Kandasamy, Jeff Schneider, and Barnabás Póczos. High dimensional bayesian optimisation and bandits via additive models. In *International conference on machine learning*, pp. 295–304. PMLR, 2015.
Kirthevasan Kandasamy, Gautam Dasarathy, Junier B Oliva, Jeff Schneider, and Barnabás Póczos. Gaussian process bandit optimisation with multi-fidelity evaluations. Advances in neural information processing systems, 29, 2016.
Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnabás Póczos. Multi-fidelity bayesian optimisation with continuous approximations. In *International conference on machine learning*, pp. 1799–
1808. PMLR, 2017.
Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, and Andreas Krause. Distributionally robust bayesian optimization. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 2174–2184. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/
kirschner20a.html.
Shunya Kusakawa, Shion Takeno, Yu Inatsu, Kentaro Kutsukake, Shogo Iwazaki, Takashi Nakano, Toru Ujihara, Masayuki Karasuyama, and Ichiro Takeuchi. Bayesian optimization for cascade-type multistage processes. *Neural Computation*, 34(12):2408–2431, 2022.
Kentaro Kutsukake, Momoko Deura, Yutaka Ohno, and Ichiro Yonenaga. Characterization of silicon ingots: Mono-like versus high-performance multicrystalline. *Japanese Journal of Applied Physics*, 54(8S1):
08KD10, 2015.
Benjamin Letham, Phillip Guan, Chase Tymms, Eytan Bakshy, and Michael Shvartsman. Look-ahead acquisition functions for bernoulli level set estimation. In International Conference on Artificial Intelligence and Statistics, pp. 8493–8513. PMLR, 2022.
Blake Mason, Lalit Jain, Subhojyoti Mukherjee, Romain Camilleri, Kevin Jamieson, and Robert Nowak.
Nearly optimal algorithms for level set estimation. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*,
volume 151 of *Proceedings of Machine Learning Research*, pp. 7625–7658. PMLR, 28–30 Mar 2022. URL
https://proceedings.mlr.press/v151/mason22a.html.
Carl Edward Rasmussen and Christopher K. I. Williams. *Gaussian Processes for Machine Learning (Adaptive* Computation and Machine Learning). The MIT Press, 2005. ISBN 026218253X.
Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, and Volkan Cevher. High-dimensional bayesian optimization via additive models with overlapping groups. In *International conference on artificial intelligence and* statistics, pp. 298–307. PMLR, 2018.
Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2015.
Shubhanshu Shekhar and Tara Javidi. Multiscale gaussian process level set estimation. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3283–3291. PMLR, 2019.
Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, pp. 1015–1022, 2010.
Shion Takeno, Yu Inatsu, and Masayuki Karasuyama. Randomized Gaussian process upper confidence bound with tighter Bayesian regret bounds. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference* on Machine Learning, volume 202 of *Proceedings of Machine Learning Research*, pp. 33490–33515. PMLR,
23–29 Jul 2023. URL https://proceedings.mlr.press/v202/takeno23a.html.
Andrea Zanette, Junzi Zhang, and Mykel J Kochenderfer. Robust super-level set estimation using gaussian processes. In *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML*
PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part II 18, pp. 276–291. Springer, 2019.
Marcela Zuluaga, Andreas Krause, et al. ε-PAL: An active learning approach to the multi-objective optimization problem. *Journal of Machine Learning Research*, 17(104):1–32, 2016.
Algorithm 2 Randomized Straddle Algorithms for Max-value Loss in the Finite Setting
Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$
for $t = 1, 2, \ldots, T$ do
  Compute $\mu_{t-1}(\mathbf{x})$ and $\sigma^2_{t-1}(\mathbf{x})$ for each $\mathbf{x} \in \mathcal{X}$ by equation 1
  Estimate $H_t$ and $L_t$ by equation 2
  Generate $\xi_t$ from the chi-squared distribution with two degrees of freedom
  Compute $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$, $\mathrm{ucb}_{t-1}(\mathbf{x})$, $\mathrm{lcb}_{t-1}(\mathbf{x})$ and $\tilde{a}_{t-1}(\mathbf{x})$
  Select the next evaluation point $\mathbf{x}_t$ by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \tilde{a}_{t-1}(\mathbf{x})$
  Observe $y_t = f(\mathbf{x}_t) + \varepsilon_t$ at the point $\mathbf{x}_t$
  Update the GP by adding the observed data
end for
Output: Return $H_{\hat{T}}$ and $L_{\hat{T}}$ as the estimated sets, where $\hat{T}$ is given by equation 5
## A Extension To Max-Value Loss
In this section, we consider the following max-value loss, defined based on the maximum value of $l_t(\mathbf{x})$:
$$r(H_t, L_t) = \max_{\mathbf{x}\in\mathcal{X}} l_t(\mathbf{x}) \equiv \tilde{r}_t.$$
When X is finite, we need to modify the definition of the AF and the estimated sets returned at the end of the algorithm. Conversely, if X is an infinite set, the definitions of Ht and Lt should be modified in addition to the above. Therefore, we discuss the finite and infinite cases separately.
## A.1 Proposed Method For Max-Value Loss When X **Is Finite**
When $\mathcal{X}$ is finite, we propose the following AF, in which the distribution that $\beta_t$ follows is modified. Definition A.1 (Randomized Straddle for Max-value Loss). For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$. Then, the randomized straddle AF for the max-value loss, $\tilde{a}_{t-1}(\mathbf{x})$, is defined as:
$$\tilde{a}_{t-1}(\mathbf{x}) = \max\{\min\{\mathrm{ucb}_{t-1}(\mathbf{x}) - \theta,\ \theta - \mathrm{lcb}_{t-1}(\mathbf{x})\}, 0\}. \tag{4}$$
By using $\tilde{a}_{t-1}(\mathbf{x})$, the next point to be evaluated is selected by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \tilde{a}_{t-1}(\mathbf{x})$. Additionally, we change the estimated sets returned at the end of iteration $T$ in the algorithm to the following, instead of $H_T$ and $L_T$:
Definition A.2. For each $t$, define
$$\hat{t} = \operatorname*{arg\,min}_{1 \leq i \leq t} \mathbb{E}_t[\tilde{r}_i], \tag{5}$$
where $\mathbb{E}_t[\cdot]$ represents the conditional expectation given $\mathcal{D}_{t-1}$. Then, at the end of iteration $T$, we define $H_{\hat{T}}$ and $L_{\hat{T}}$ to be the estimated sets.
Finally, we give the pseudocode of the proposed algorithm in Algorithm 2.
## A.1.1 Theoretical Analysis For Max-Value Loss When X **Is Finite**
For the max-value loss, the following theorem holds under Algorithm 2.
Theorem A.1. Let $f$ be a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\mathbf{x}, \mathbf{x}) \leq 1$ for any $\mathbf{x} \in \mathcal{X}$. For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = \xi_t + 2\log(|\mathcal{X}|)$. Then, the following holds for $\tilde{R}_t = \sum_{i=1}^t \tilde{r}_i$:
$$\mathbb{E}[\tilde{R}_{t}]\leq\sqrt{\tilde{C}_{1}t\gamma_{t}},$$
where $\tilde{C}_1 = (4 + 4\log(|\mathcal{X}|))/\log(1 + \sigma_{\mathrm{noise}}^{-2})$ and the expectation is taken with respect to all randomness, including $f$, $\varepsilon_t$, and $\beta_t$.
From Theorem A.1, the following theorem holds. Theorem A.2. Under the assumptions of Theorem A.1, the following inequality holds:
$$\mathbb{E}[\tilde{r}_{\hat{t}}]\leq\sqrt{\frac{\tilde{C}_{1}\gamma_{t}}{t}},$$
where $\hat{t}$ and $\tilde{C}_1$ are given in equation 5 and Theorem A.1, respectively.
Comparing Theorems 4.1 and A.1, when considering the max-value loss, $\beta_t$ should be $2\log(|\mathcal{X}|)$ larger than in the case of $r_t$, and the constant that appears in the upper bound of the expected value of the cumulative loss satisfies $\tilde{C}_1 = (1 + \log(|\mathcal{X}|))C_1$. Note that while the upper bound for $r_t$ does not depend on $\mathcal{X}$, the bound for the max-value loss depends on the logarithm of the number of elements in $\mathcal{X}$. Also, when comparing Theorems 4.2 and A.2, it is not necessary to consider $\hat{t}$ for $r_t$, whereas it is necessary to consider $\hat{t}$ for the max-value loss. For the max-value loss, it is difficult to analytically derive $\mathbb{E}_t[\tilde{r}_i]$, and hence it is also difficult to compute $\hat{t}$ exactly. Nevertheless, because the posterior distribution of $f$ given $\mathcal{D}_{t-1}$ is again a GP, we can generate $M$ sample paths from the GP posterior distribution, calculate the realization $\tilde{r}_i^{(j)}$ of $\tilde{r}_i$ from each sample path $f^{(j)}$, and calculate the estimate $\check{t}$ of $\hat{t}$ as
$$\check{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\,\frac{1}{M}\sum_{j=1}^{M}\tilde{r}_{i}^{(j)}.$$
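A minimal sketch of this Monte Carlo estimate on a finite set $\mathcal{X}$ is given below, assuming the posterior means $\mu_{i-1}$ have been stored for each iteration and that $M$ sample paths have already been drawn from the GP posterior given $\mathcal{D}_{t-1}$; all names are illustrative.

```python
import numpy as np

def estimate_t_check(mu_history, f_samples, theta):
    # mu_history: shape (t, |X|), mu_history[i] = posterior mean mu_{i-1} on X
    # f_samples:  shape (M, |X|), sample paths f^{(j)} from the GP posterior given D_{t-1}
    avg_max_loss = []
    for mu_i in mu_history:
        pred_high = mu_i >= theta                       # x in H_i (classification at iteration i)
        true_high = f_samples >= theta                  # x in H* under each sample path
        misclassified = true_high != pred_high          # broadcasts over the M samples
        loss = np.abs(f_samples - theta) * misclassified
        avg_max_loss.append(loss.max(axis=1).mean())    # (1/M) sum_j max_x l_i^{(j)}(x)
    return int(np.argmin(avg_max_loss)) + 1             # t-check (1-indexed)
```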
## A.2 Proposed Method For Max-Value Loss When X **Is Infinite**
In this section, we assume that the input space $\mathcal{X} \subset \mathbb{R}^d$ is a compact set and satisfies $\mathcal{X} \subset [0, r]^d$, where $r > 0$. Furthermore, we assume the following additional assumption for $f$:
Assumption A.1. Let $f$ be differentiable with probability 1. Assume that positive constants $a, b$ exist such that
$$\mathbb{P}\left(\sup_{\mathbf{x}\in\mathcal{X}}\left|\frac{\partial f}{\partial x_{j}}\right|>L\right)\leq a\exp\left(-\left(\frac{L}{b}\right)^{2}\right),\quad j\in[d],$$
where $x_j$ is the $j$-th element of $\mathbf{x}$ and $[d]\equiv\{1,\ldots,d\}$.
Next, we provide an LSE method based on the discretization of the input space.
## A.2.1 Level Set Estimation For Max-Value Loss When X **Is Infinite**
For each $t \geq 1$, let $\mathcal{X}_t$ be a finite subset of $\mathcal{X}$. Also, for any $\mathbf{x} \in \mathcal{X}$, let $[\mathbf{x}]_t$ be the element of $\mathcal{X}_t$ that has the shortest L1 distance from $\mathbf{x}$.$^2$ Then, we define $H_t$ and $L_t$ as
$$H_{t}=\{\mathbf{x}\in\mathcal{X}\mid\mu_{t-1}([\mathbf{x}]_{t})\geq\theta\},\quad L_{t}=\{\mathbf{x}\in\mathcal{X}\mid\mu_{t-1}([\mathbf{x}]_{t})<\theta\}. \tag{6}$$
## A.2.2 Acquisition Function for Max-value Loss when X **is Infinite**
We define a randomized straddle AF based on $\mathcal{X}_t$: Definition A.3. For each $t \geq 1$, let $\xi_t$ be a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = 2\log(|\mathcal{X}_t|) + \xi_t$. Then, the randomized straddle AF for the max-value loss when $\mathcal{X}$ is infinite, $\check{a}_{t-1}(\mathbf{x})$, is defined as:
$$\check{a}_{t-1}(\mathbf{x}) = \max\{\min\{\mathrm{ucb}_{t-1}(\mathbf{x}) - \theta,\ \theta - \mathrm{lcb}_{t-1}(\mathbf{x})\}, 0\}.$$
The next point to be evaluated is selected by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \check{a}_{t-1}(\mathbf{x})$. Finally, we give the pseudocode of the proposed algorithm in Algorithm 3.
$^2$If there are multiple $\mathbf{x} \in \mathcal{X}_t$ with the shortest L1 distance, we choose one of them uniquely as follows. We first choose the option with the smallest first component. If a unique determination is not possible, we then select the option with the smallest second component. This process is repeated up to the $d$-th component to achieve a unique determination.
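A minimal sketch of this nearest-grid-point map $[\mathbf{x}]_t$, including the lexicographic tie-breaking described in the footnote, is given below; the variable names are illustrative.

```python
import numpy as np

def nearest_grid_point(x, grid):
    # grid: (|X_t|, d) array of discretization points; x: (d,) query point
    dists = np.abs(grid - x).sum(axis=1)                    # L1 distances ||x - x'||_1
    ties = np.flatnonzero(np.isclose(dists, dists.min()))   # all nearest grid points
    # lexicographic tie-breaking: smallest 1st component, then 2nd, ..., then d-th
    best = min(ties, key=lambda i: tuple(grid[i]))
    return grid[best]                                       # [x]_t
```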
Algorithm 3 Randomized Straddle Algorithms for Max-value Loss in the Infinite Setting
Input: GP prior $\mathcal{GP}(0, k)$, threshold $\theta \in \mathbb{R}$, discretized sets $\mathcal{X}_1, \ldots, \mathcal{X}_T$
for $t = 1, 2, \ldots, T$ do
  Compute $\mu_{t-1}(\mathbf{x})$ and $\sigma^2_{t-1}(\mathbf{x})$ for each $\mathbf{x} \in \mathcal{X}$ by equation 1
  Estimate $H_t$ and $L_t$ by equation 6
  Generate $\xi_t$ from the chi-squared distribution with two degrees of freedom
  Compute $\beta_t = \xi_t + 2\log(|\mathcal{X}_t|)$, $\mathrm{ucb}_{t-1}(\mathbf{x})$, $\mathrm{lcb}_{t-1}(\mathbf{x})$ and $\check{a}_{t-1}(\mathbf{x})$
  Select the next evaluation point $\mathbf{x}_t$ by $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} \check{a}_{t-1}(\mathbf{x})$
  Observe $y_t = f(\mathbf{x}_t) + \varepsilon_t$ at the point $\mathbf{x}_t$
  Update the GP by adding the observed data
end for
Output: Return $H_{\hat{T}}$ and $L_{\hat{T}}$ as the estimated sets, where $\hat{T} = \arg\min_{1\leq i\leq T} \mathbb{E}_T[\tilde{r}_i]$
## A.2.3 Theoretical Analysis For Max-Value Loss When X **Is Infinite**
Under Algorithm 3, the following theorem holds.
Theorem A.3. Let $\mathcal{X} \subset [0, r]^d$ be a compact set with $r > 0$. Assume that $f$ is a sample path from $\mathcal{GP}(0, k)$, where $k(\cdot, \cdot)$ is a positive-definite kernel satisfying $k(\mathbf{x}, \mathbf{x}) \leq 1$ for any $\mathbf{x} \in \mathcal{X}$. Also assume that Assumption A.1 holds. Moreover, for each $t \geq 1$, let $\tau_t = \lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil$, and let $\mathcal{X}_t$ be a finite subset of $\mathcal{X}$ satisfying $|\mathcal{X}_t| = \tau_t^d$ and
$$\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq\frac{dr}{\tau_{t}},\quad\mathbf{x}\in\mathcal{X}.$$
Suppose that $\xi_t$ is a random sample from the chi-squared distribution with two degrees of freedom, where $\xi_1, \ldots, \xi_t, \varepsilon_1, \ldots, \varepsilon_t, f$ are mutually independent. Define $\beta_t = 2d\log(\lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil) + \xi_t$. Then, the following holds for $\tilde{R}_t = \sum_{i=1}^t \tilde{r}_i$:
$$\mathbb{E}[\tilde{R}_{t}]\leq\frac{\pi^{2}}{6}+\sqrt{\check{C}_{1}t\gamma_{t}(2+s_{t})},$$
where $\check{C}_1 = 2/\log(1 + \sigma_{\mathrm{noise}}^{-2})$ and $s_t = 2d\log(\lceil bdrt^2(\sqrt{\log(ad)} + \sqrt{\pi}/2)\rceil)$, and the expectation is taken with respect to all randomness, including $f$, $\varepsilon_t$, and $\beta_t$.
From Theorem A.3, the following holds. Theorem A.4. Under the assumptions of Theorem A.3, define
$$\hat{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\mathbb{E}_{t}[\tilde{r}_{i}].$$
Then, the following holds:
$$\mathbb{E}[\tilde{r}_{\hat{t}}]\leq\frac{\pi^{2}}{6t}+\sqrt{\frac{\check{C}_{1}\gamma_{t}(2+s_{t})}{t}},$$
where $\check{C}_1$ and $s_t$ are given in Theorem A.3.
## B Proofs

## B.1 Proof Of Theorem 4.1
Proof. Let $\delta \in (0, 1)$. For any $t \geq 1$, $\mathcal{D}_{t-1}$ and $\mathbf{x} \in \mathcal{X}$, from the proof of Lemma 5.1 in Srinivas et al. (2010), the following holds with probability at least $1 - \delta$:
$$\mathrm{lcb}_{t-1,\delta}(\mathbf{x})\equiv\mu_{t-1}(\mathbf{x})-\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\leq f(\mathbf{x})\leq\mu_{t-1}(\mathbf{x})+\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\equiv\mathrm{ucb}_{t-1,\delta}(\mathbf{x}), \tag{7}$$
where $\beta_\delta = 2\log(1/\delta)$. Here, we consider the case where $\mathbf{x} \in H_t$. If $\mathbf{x} \in H^*$, we have $l_t(\mathbf{x}) = 0$. In contrast, if $\mathbf{x} \in L^*$, noting that $\mathrm{lcb}_{t-1,\delta}(\mathbf{x}) \leq f(\mathbf{x})$ by equation 7, we get
$$l_{t}(\mathbf{x})=\theta-f(\mathbf{x})\leq\theta-\mathrm{lcb}_{t-1,\delta}(\mathbf{x}).$$
Moreover, the inequality $\mu_{t-1}(\mathbf{x}) \geq \theta$ holds because $\mathbf{x} \in H_t$. Hence, from the definition of $\mathrm{lcb}_{t-1,\delta}(\mathbf{x})$ and $\mathrm{ucb}_{t-1,\delta}(\mathbf{x})$, we obtain $\theta - \mathrm{lcb}_{t-1,\delta}(\mathbf{x}) \leq \mathrm{ucb}_{t-1,\delta}(\mathbf{x}) - \theta$. Therefore, we get
$$l_{t}(\mathbf{x})\leq\theta-\mathrm{lcb}_{t-1,\delta}(\mathbf{x})=\min\{\mathrm{ucb}_{t-1,\delta}(\mathbf{x})-\theta,\ \theta-\mathrm{lcb}_{t-1,\delta}(\mathbf{x})\}\leq\max\{\min\{\mathrm{ucb}_{t-1,\delta}(\mathbf{x})-\theta,\ \theta-\mathrm{lcb}_{t-1,\delta}(\mathbf{x})\},0\}\equiv a_{t-1,\delta}(\mathbf{x}).$$
Similarly, we consider the case where $\mathbf{x} \in L_t$. If $\mathbf{x} \in L^*$, we obtain $l_t(\mathbf{x}) = 0$. Thus, because $a_{t-1,\delta}(\mathbf{x}) \geq 0$, we get $l_t(\mathbf{x}) \leq a_{t-1,\delta}(\mathbf{x})$. Moreover, if $\mathbf{x} \in H^*$, noting that $f(\mathbf{x}) \leq \mathrm{ucb}_{t-1,\delta}(\mathbf{x})$ by equation 7, we obtain
$$l_{t}(\mathbf{x})=f(\mathbf{x})-\theta\leq\mathrm{ucb}_{t-1,\delta}(\mathbf{x})-\theta.$$
Here, the inequality $\mu_{t-1}(\mathbf{x}) < \theta$ holds because $\mathbf{x} \in L_t$. Therefore, from the definition of $\mathrm{lcb}_{t-1,\delta}(\mathbf{x})$ and $\mathrm{ucb}_{t-1,\delta}(\mathbf{x})$, we obtain $\mathrm{ucb}_{t-1,\delta}(\mathbf{x}) - \theta \leq \theta - \mathrm{lcb}_{t-1,\delta}(\mathbf{x})$. Thus, the following inequality holds:
$$l_{t}(\mathbf{x})\leq\mathrm{ucb}_{t-1,\delta}(\mathbf{x})-\theta=\min\{\mathrm{ucb}_{t-1,\delta}(\mathbf{x})-\theta,\ \theta-\mathrm{lcb}_{t-1,\delta}(\mathbf{x})\}\leq a_{t-1,\delta}(\mathbf{x}).$$
Therefore, for all cases, the inequality $l_t(\mathbf{x}) \leq a_{t-1,\delta}(\mathbf{x})$ holds. This indicates that the following inequality holds with probability at least $1 - \delta$:
$$l_{t}(\mathbf{x})\leq a_{t-1,\delta}(\mathbf{x})\leq\max_{\tilde{\mathbf{x}}\in\mathcal{X}}a_{t-1,\delta}(\tilde{\mathbf{x}}). \tag{8}$$
Next, we consider the conditional distribution of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. Note that this distribution does not depend on $\beta_\delta$. Let $F_{t-1}(\cdot)$ be the distribution function of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. Then, from equation 8 we have
$$F_{t-1}\left(\max_{\tilde{\mathbf{x}}\in\mathcal{X}}a_{t-1,\delta}(\tilde{\mathbf{x}})\right)\geq1-\delta.$$
Hence, by applying the generalized inverse function of $F_{t-1}(\cdot)$ to both sides, the following inequality holds:
$$F_{t-1}^{-1}(1-\delta)\leq\max_{\tilde{\mathbf{x}}\in\mathcal{X}}a_{t-1,\delta}(\tilde{\mathbf{x}}).$$
Here, if $\delta$ follows the uniform distribution on the interval $(0, 1)$, then $1 - \delta$ follows the same distribution. In this case, the distribution of $F_{t-1}^{-1}(1-\delta)$ is equal to the distribution of $l_t(\mathbf{x})$ given $\mathcal{D}_{t-1}$. This implies that
$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\mathbb{E}_{\delta}\left[\max_{\mathbf{x}\in\mathcal{X}}a_{t-1,\delta}(\mathbf{x})\right],$$
where $\mathbb{E}_\delta[\cdot]$ means the expectation with respect to $\delta$. Furthermore, because $2\log(1/\delta)$ and $\beta_t$ follow the chi-squared distribution with two degrees of freedom, the following holds:
$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$
Thus, if $\mathcal{X}$ is finite, from the definition of $r_t$ we obtain
$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\right]=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right]=\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$
Similarly, if $\mathcal{X}$ is infinite, from the definition of $r_t$ and the non-negativity of $l_t(\mathbf{x})$, using Fubini's theorem we get
$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}l_{t}(\mathbf{x})\mathrm{d}\mathbf{x}\right]=\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\mathrm{d}\mathbf{x}\leq\frac{1}{\mathrm{Vol}(\mathcal{X})}\int_{\mathcal{X}}\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right]\mathrm{d}\mathbf{x}=\mathbb{E}_{\beta_{t}}\left[a_{t-1}(\mathbf{x}_{t})\right].$$
Therefore, the inequality $\mathbb{E}_t[r_t] \leq \mathbb{E}_{\beta_t}[a_{t-1}(\mathbf{x}_t)]$ holds in both cases. Moreover, from the definition of $a_{t-1}(\mathbf{x})$, the following inequality holds:
$$a_{t-1}(\mathbf{x}_{t})\leq\beta_{t}^{1/2}\sigma_{t-1}(\mathbf{x}_{t}).$$
Hence, we get the following inequality:
$$\begin{aligned}
\mathbb{E}[R_{t}] &= \mathbb{E}\left[\sum_{i=1}^{t}r_{i}\right]\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right] \\
&\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right] && \text{(Cauchy--Schwarz inequality)} \\
&\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]} && \text{(H\"older's inequality)} \\
&=\sqrt{2t}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]} && (\mathbb{E}[\beta_{i}]=2) \\
&\leq\sqrt{2t}\sqrt{\mathbb{E}\left[\frac{2\gamma_{t}}{\log(1+\sigma_{\mathrm{noise}}^{-2})}\right]}=\sqrt{C_{1}t\gamma_{t}},
\end{aligned}$$
where the last inequality is derived by the proof of Lemma 5.4 in Srinivas et al. (2010).
We first give three lemmas to prove Theorem 4.2. Theorem 4.2 is proved by Lemmas B.1 and B.3. Lemma B.1. Under the assumptions of Theorem 4.1, let
$$\hat{t}=\operatorname*{arg\,min}_{1\leq i\leq t}\mathbb{E}_{t}[r_{i}].$$
Then, the following inequality holds:
$$\mathbb{E}[r_{\hat{t}}]\leq\sqrt{\frac{C_{1}\gamma_{t}}{t}}.$$
Proof. From the definition of $\hat{t}$, the inequality $\mathbb{E}_t[r_{\hat{t}}] \leq \frac{\sum_{i=1}^t \mathbb{E}_t[r_i]}{t}$ holds. Therefore, we obtain
$$\mathbb{E}[r_{\hat{t}}]\leq\frac{\sum_{i=1}^{t}\mathbb{E}[r_{i}]}{t}=\frac{\mathbb{E}\left[\sum_{i=1}^{t}r_{i}\right]}{t}=\frac{\mathbb{E}[R_{t}]}{t}.$$
By combining this and Theorem 4.1, we get the desired result.
Lemma B.2. For any $t \geq 1$, $i \leq t$ and $\mathbf{x} \in \mathcal{X}$, the expectation $\mathbb{E}_t[l_i(\mathbf{x})]$ can be calculated as follows:
$$\mathbb{E}_{t}[l_{i}(\mathbf{x})]=\begin{cases}\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right]&\text{if }\mathbf{x}\in L_{i},\\ \sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]&\text{if }\mathbf{x}\in H_{i},\end{cases}$$
where $\alpha = \frac{\mu_{t-1}(\mathbf{x})-\theta}{\sigma_{t-1}(\mathbf{x})}$, and $\phi(z)$ and $\Phi(z)$ are the density and distribution function of the standard normal distribution, respectively.
Proof. From the definition of $l_{i}(\mathbf{x})$, if $\mathbf{x}\in L_{i}$, $l_{i}(\mathbf{x})$ can be expressed as $l_{i}(\mathbf{x})=(f(\mathbf{x})-\theta)\mathbb{1}[f(\mathbf{x})\geq\theta]$, where $\mathbb{1}[\cdot]$ is the indicator function, which takes 1 if the condition $\cdot$ holds and 0 otherwise. Furthermore, the conditional distribution of $f(\mathbf{x})$ given $\mathcal{D}_{t-1}$ is the normal distribution with mean $\mu_{t-1}(\mathbf{x})$ and variance $\sigma_{t-1}^{2}(\mathbf{x})$. Thus, from the definition of $\mathbb{E}_{t}[\cdot]$, the following holds:
$$\begin{aligned}\mathbb{E}_{t}[l_{i}(\mathbf{x})]&=\int_{\theta}^{\infty}(y-\theta)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y\\&=\int_{\theta}^{\infty}\sigma_{t-1}(\mathbf{x})\left(\frac{y-\mu_{t-1}(\mathbf{x})}{\sigma_{t-1}(\mathbf{x})}+\frac{\mu_{t-1}(\mathbf{x})-\theta}{\sigma_{t-1}(\mathbf{x})}\right)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y\\&=\int_{-\alpha}^{\infty}\sigma_{t-1}(\mathbf{x})(z+\alpha)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)\mathrm{d}z\\&=\sigma_{t-1}(\mathbf{x})\int_{-\alpha}^{\infty}(z+\alpha)\phi(z)\mathrm{d}z=\sigma_{t-1}(\mathbf{x})\left\{[-\phi(z)]_{-\alpha}^{\infty}+\alpha(1-\Phi(-\alpha))\right\}\\&=\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right].\end{aligned}$$
Similarly, if $\mathbf{x}\in H_{i}$, $l_{i}(\mathbf{x})$ can be expressed as $l_{i}(\mathbf{x})=(\theta-f(\mathbf{x}))\mathbb{1}[f(\mathbf{x})<\theta]$. Then, we obtain
$$\begin{aligned}\mathbb{E}_{t}[l_{i}(\mathbf{x})]&=\int_{-\infty}^{\theta}(\theta-y)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y\\&=\int_{-\infty}^{\theta}\sigma_{t-1}(\mathbf{x})\left(\frac{\theta-\mu_{t-1}(\mathbf{x})}{\sigma_{t-1}(\mathbf{x})}+\frac{\mu_{t-1}(\mathbf{x})-y}{\sigma_{t-1}(\mathbf{x})}\right)\frac{1}{\sqrt{2\pi\sigma_{t-1}^{2}(\mathbf{x})}}\exp\left(-\frac{(y-\mu_{t-1}(\mathbf{x}))^{2}}{2\sigma_{t-1}^{2}(\mathbf{x})}\right)\mathrm{d}y\\&=\int_{\infty}^{\alpha}\sigma_{t-1}(\mathbf{x})(z-\alpha)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)(-1)\mathrm{d}z\\&=\sigma_{t-1}(\mathbf{x})\int_{\alpha}^{\infty}(z-\alpha)\phi(z)\mathrm{d}z=\sigma_{t-1}(\mathbf{x})\left\{[-\phi(z)]_{\alpha}^{\infty}-\alpha(1-\Phi(\alpha))\right\}\\&=\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right].\end{aligned}$$
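The closed forms in Lemma B.2 are expectations of one-sided Gaussian losses and can be validated by Monte Carlo. In the sketch below (illustrative only), `mu`, `sigma`, and `theta` are arbitrary values standing in for $\mu_{t-1}(\mathbf{x})$, $\sigma_{t-1}(\mathbf{x})$, and the threshold $\theta$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu, sigma, theta = 0.3, 0.8, 0.0           # arbitrary illustrative values
alpha = (mu - theta) / sigma

f = rng.normal(mu, sigma, size=1_000_000)  # f(x) | D_{t-1} ~ N(mu, sigma^2)

# Case x in L_i: l_i(x) = (f(x) - theta) * 1[f(x) >= theta]
mc_L = np.mean(np.maximum(f - theta, 0.0))
cf_L = sigma * (norm.pdf(-alpha) + alpha * (1 - norm.cdf(-alpha)))

# Case x in H_i: l_i(x) = (theta - f(x)) * 1[f(x) < theta]
mc_H = np.mean(np.maximum(theta - f, 0.0))
cf_H = sigma * (norm.pdf(alpha) - alpha * (1 - norm.cdf(alpha)))

print(mc_L, cf_L)   # should agree up to Monte Carlo error
print(mc_H, cf_H)
```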
Lemma B.3. Under the assumptions of Theorem 4.1, the equality $\hat{t}=t$ holds.
Proof. Let x ∈ X . If x ∈ Ht, the inequality µt−1(x) ≥ θ holds. This implies that α ≥ 0. Hence, from
Lemma B.2 we obtain
Et[lt(x)] = σt−1(x) [φ(α) − α {1 − Φ(α)}] .
Thus, since $\alpha\geq0$, the following inequality holds:
$$\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]\leq\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right].$$
Therefore, from Lemma B.2, whichever of $L_{i}$ and $H_{i}$ contains $\mathbf{x}$, we get
$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]=\sigma_{t-1}(\mathbf{x})\left[\phi(\alpha)-\alpha\left\{1-\Phi(\alpha)\right\}\right]\leq\mathbb{E}_{t}[l_{i}(\mathbf{x})].$$
Similarly, if $\mathbf{x}\in L_{t}$, using the same argument we have
$$\mathbb{E}_{t}[l_{t}(\mathbf{x})]=\sigma_{t-1}(\mathbf{x})\left[\phi(-\alpha)+\alpha\left\{1-\Phi(-\alpha)\right\}\right]\leq\mathbb{E}_{t}[l_{i}(\mathbf{x})].$$
Here, if $\mathcal{X}$ is finite, from the definition of $r_{i}$ we obtain
$$\mathbb{E}_{t}[r_{t}]=\mathbb{E}_{t}\left[\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\right]=\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{t}(\mathbf{x})]\leq\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{E}_{t}[l_{i}(\mathbf{x})]=\mathbb{E}_{t}[r_{i}].$$
Similarly, if $\mathcal{X}$ is infinite, by using the same argument and Fubini's theorem, we get $\mathbb{E}_{t}[r_{t}]\leq\mathbb{E}_{t}[r_{i}]$. Therefore, in both cases the inequality $\mathbb{E}_{t}[r_{t}]\leq\mathbb{E}_{t}[r_{i}]$ holds for every $i\leq t$. This implies that $\hat{t}=t$.
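The comparison used in the proof of Lemma B.3, namely $\phi(\alpha)-\alpha\{1-\Phi(\alpha)\}\leq\phi(-\alpha)+\alpha\{1-\Phi(-\alpha)\}$ for $\alpha\geq0$ (their difference equals $\alpha$), can be confirmed numerically as follows; the grid of $\alpha$ values is arbitrary and purely illustrative.

```python
import numpy as np
from scipy.stats import norm

alpha = np.linspace(0.0, 5.0, 501)                         # alpha >= 0
small = norm.pdf(alpha) - alpha * (1 - norm.cdf(alpha))    # branch used for E_t[l_t(x)] / sigma_{t-1}(x)
large = norm.pdf(-alpha) + alpha * (1 - norm.cdf(-alpha))  # the other branch of Lemma B.2

assert np.all(small <= large + 1e-12)
print(float(np.max(small - large)))  # should be <= 0; large - small equals alpha exactly
```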
From Lemmas B.1 and B.3, we get Theorem 4.2. $\square$
Proof. Let δ ∈ (0, 1). For any t ≥ 1 and Dt−1, from the proof of Lemma 5.1 in Srinivas et al. (2010), with probability at least 1 − δ, the following holds for any x ∈ X :
$\operatorname{lcb}_{t-1,\delta}(\mathbf{x})\equiv\mu_{t-1}(\mathbf{x})-\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\leq f(\mathbf{x})\leq\mu_{t-1}(\mathbf{x})+\beta_{\delta}^{1/2}\sigma_{t-1}(\mathbf{x})\equiv\operatorname{ucb}_{t-1,\delta}(\mathbf{x})$,
where $\beta_{\delta}=2\log(|\mathcal{X}|/\delta)$. Here, by using the same argument as in the proof of Theorem 4.1, the inequality $l_{t}(\mathbf{x})\leq\tilde{a}_{t-1,\delta}(\mathbf{x})$ holds. Hence, the following holds with probability at least $1-\delta$:
$$\tilde{r}_{t}=\max_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\leq\max_{\mathbf{x}\in\mathcal{X}}\tilde{a}_{t-1,\delta}(\mathbf{x}).\tag{9}$$
Next, we consider the conditional distribution of $\tilde{r}_{t}$ given $\mathcal{D}_{t-1}$. Note that this distribution does not depend on $\beta_{\delta}$. Let $F_{t-1}(\cdot)$ be the distribution function of $\tilde{r}_{t}$ given $\mathcal{D}_{t-1}$. Then, from equation 9, we obtain
$$F_{t-1}\left(\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}{\tilde{a}}_{t-1,\delta}(\mathbf{x})\right)\geq1-\delta.$$
Therefore, by taking the generalized inverse function for both sides, we get
$$F_{t-1}^{-1}(1-\delta)\leq\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}{\tilde{a}}_{t-1,\delta}(\mathbf{x}).$$
Here, if $\delta$ follows the uniform distribution on the interval $(0,1)$, $1-\delta$ follows the same distribution. Furthermore, since the distribution of $F_{t-1}^{-1}(1-\delta)$ is equal to the conditional distribution of $\tilde{r}_{t}$ given $\mathcal{D}_{t-1}$, we have
$$\mathbb{E}_{t}[\tilde{r}_{t}]\leq\mathbb{E}_{\delta}\left[\operatorname*{max}_{\mathbf{x}\in{\mathcal{X}}}\tilde{a}_{t-1,\delta}(\mathbf{x})\right].$$
Moreover, noting that $2\log(|\mathcal{X}|/\delta)$ and $\beta_{t}$ follow the same distribution, we obtain
$$\mathbb{E}_{t}[\tilde{r}_{t}]\leq\mathbb{E}_{\beta_{t}}\left[\tilde{a}_{t-1}(\mathbf{x}_{t})\right].$$
Additionally, from the definition of $\tilde{a}_{t-1}(\mathbf{x})$, the following inequality holds:
$$\tilde{a}_{t-1}(\mathbf{x}_{t})\leq\beta_{t}^{1/2}\sigma_{t-1}(\mathbf{x}_{t})$$
Therefore, since $\mathbb{E}[\beta_{t}]=2+2\log(|\mathcal{X}|)$, the following inequality holds:
$$\begin{aligned}\mathbb{E}[\tilde{R}_{t}]=\mathbb{E}\left[\sum_{i=1}^{t}\tilde{r}_{i}\right]&\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right]\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\\&\leq\sqrt{t(2+2\log(|\mathcal{X}|))}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\leq\sqrt{t(2+2\log(|\mathcal{X}|))}\sqrt{\mathbb{E}\left[\frac{2}{\log(1+\sigma_{\mathrm{noise}}^{-2})}\gamma_{t}\right]}=\sqrt{\tilde{C}_{1}t\gamma_{t}}.\end{aligned}$$
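The step $\mathbb{E}[\beta_{t}]=2+2\log(|\mathcal{X}|)$ for $\beta_{t}=2\log(|\mathcal{X}|/\delta)$ with $\delta\sim U(0,1)$ admits a one-line Monte Carlo check; the candidate-set size below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n_candidates = 500                       # arbitrary |X|
delta = rng.uniform(size=200_000)
beta = 2.0 * np.log(n_candidates / delta)

print(beta.mean(), 2.0 + 2.0 * np.log(n_candidates))  # should be close
```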
Proof. Theorem A.2 is proved by using the same argument as in the proof of Lemma B.1.
Proof. Let $\mathbf{x}\in\mathcal{X}$. If $\mathbf{x}\in H^{*}\cap H_{t}$ or $\mathbf{x}\in L^{*}\cap L_{t}$, the equality $l_{t}(\mathbf{x})=0$ holds. Hence, the following inequality holds:
$$l_{t}(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
We consider the case where $\mathbf{x}\in H^{*}$ and $\mathbf{x}\in L_{t}$, that is, $l_{t}(\mathbf{x})=f(\mathbf{x})-\theta$. Here, since $\mathbf{x}\in L_{t}$, the inequality $\mu_{t-1}([\mathbf{x}]_{t})<\theta$ holds. This implies that $[\mathbf{x}]_{t}\in L_{t}$. If $[\mathbf{x}]_{t}\in H^{*}$, noting that $l_{t}([\mathbf{x}]_{t})=f([\mathbf{x}]_{t})-\theta$, we get
$$l_{t}(\mathbf{x})=f(\mathbf{x})-\theta=f(\mathbf{x})-f([\mathbf{x}]_{t})+f([\mathbf{x}]_{t})-\theta\leq f([\mathbf{x}]_{t})-\theta+|f(\mathbf{x})-f([\mathbf{x}]_{t})|=l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Similarly, if $[\mathbf{x}]_{t}\in L^{*}$, noting that $f([\mathbf{x}]_{t})<\theta$ and $0\leq l_{t}([\mathbf{x}]_{t})$, we obtain
$$l_{t}(\mathbf{x})=f(\mathbf{x})-\theta=f([\mathbf{x}]_{t})-\theta+f(\mathbf{x})-f([\mathbf{x}]_{t})\leq0+f(\mathbf{x})-f([\mathbf{x}]_{t})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Next, we consider the case where $\mathbf{x}\in L^{*}$ and $\mathbf{x}\in H_{t}$, that is, $l_{t}(\mathbf{x})=\theta-f(\mathbf{x})$. Here, since $\mathbf{x}\in H_{t}$, the inequality $\mu_{t-1}([\mathbf{x}]_{t})\geq\theta$ holds. This implies that $[\mathbf{x}]_{t}\in H_{t}$. If $[\mathbf{x}]_{t}\in L^{*}$, noting that $l_{t}([\mathbf{x}]_{t})=\theta-f([\mathbf{x}]_{t})$, we have
$$l_{t}(\mathbf{x})=\theta-f(\mathbf{x})=\theta-f([\mathbf{x}]_{t})+f([\mathbf{x}]_{t})-f(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Similarly, if $[\mathbf{x}]_{t}\in H^{*}$, noting that $f([\mathbf{x}]_{t})\geq\theta$ and $0\leq l_{t}([\mathbf{x}]_{t})$, we get
$$l_{t}(\mathbf{x})=\theta-f(\mathbf{x})=\theta-f([\mathbf{x}]_{t})+f([\mathbf{x}]_{t})-f(\mathbf{x})\leq0+f([\mathbf{x}]_{t})-f(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Therefore, for all cases the following inequality holds:
$$l_{t}(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+|f(\mathbf{x})-f([\mathbf{x}]_{t})|.$$
Here, let $L_{\max}=\sup_{j\in[d]}\sup_{\mathbf{x}\in\mathcal{X}}\left|\frac{\partial f}{\partial x_{j}}\right|$. Then, the following holds:
$$|f(\mathbf{x})-f([\mathbf{x}]_{t})|\leq L_{\operatorname*{max}}\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq L_{\operatorname*{max}}{\frac{d r}{\tau_{t}}}.$$
Thus, noting that
$$l_{t}(\mathbf{x})\leq l_{t}([\mathbf{x}]_{t})+L_{\operatorname*{max}}{\frac{d r}{\tau_{t}}}$$
we obtain
$$\tilde{r}_{t}=\max_{\mathbf{x}\in\mathcal{X}}l_{t}(\mathbf{x})\leq L_{\max}\frac{dr}{\tau_{t}}+\max_{\mathbf{x}\in\mathcal{X}}l_{t}([\mathbf{x}]_{t})\leq L_{\max}\frac{dr}{\tau_{t}}+\max_{\tilde{\mathbf{x}}\in\mathcal{X}_{t}}l_{t}(\tilde{\mathbf{x}})\equiv L_{\max}\frac{dr}{\tau_{t}}+\check{r}_{t}.$$
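The discretization step only needs the $\ell_{1}$ rounding bound $\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq dr/\tau_{t}$. The sketch below assumes, for illustration only, that $[\cdot]_{t}$ rounds each coordinate to a uniform grid with $\tau_{t}$ points per dimension on a cube of side length $r$; this is a stand-in for the paper's exact construction of $\mathcal{X}_{t}$.

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, tau = 3, 2.0, 50           # dimension, cube side length, grid points per dimension (arbitrary)
h = r / (tau - 1)                # spacing of a uniform grid with tau points on [0, r]

x = rng.uniform(0.0, r, size=(10_000, d))
x_grid = np.round(x / h) * h     # nearest grid point, coordinate-wise (a stand-in for [x]_t)

err = np.abs(x - x_grid).sum(axis=1)          # ell_1 rounding error
assert np.all(err <= d * r / tau + 1e-12)     # consistent with ||x - [x]_t||_1 <= d r / tau_t
print(err.max(), d * r / tau)
```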
In addition, from Lemma H.1 in Takeno et al. (2023), the following inequality holds:
$$\mathbb{E}[L_{\operatorname*{max}}]\leq b({\sqrt{\log(a d)}}+{\sqrt{\pi}}/2).$$
Hence, we get
$$\mathbb{E}\left[L_{\max}\frac{dr}{\tau_{t}}\right]\leq\frac{b(\sqrt{\log(ad)}+\sqrt{\pi}/2)}{\tau_{t}}dr=\frac{b(\sqrt{\log(ad)}+\sqrt{\pi}/2)}{\lceil bdrt^{2}(\sqrt{\log(ad)}+\sqrt{\pi}/2)\rceil}dr\leq\frac{b(\sqrt{\log(ad)}+\sqrt{\pi}/2)}{bdrt^{2}(\sqrt{\log(ad)}+\sqrt{\pi}/2)}dr=\frac{1}{t^{2}}.$$
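With $\tau_{t}=\lceil bdrt^{2}(\sqrt{\log(ad)}+\sqrt{\pi}/2)\rceil$, the bound $\mathbb{E}[L_{\max}\,dr/\tau_{t}]\leq1/t^{2}$ is elementary arithmetic; the constants in the sketch below are arbitrary illustrative choices, not values prescribed by the paper.

```python
import math

a, b, d, r = 1.0, 1.0, 3, 2.0             # arbitrary illustrative constants
c = math.sqrt(math.log(a * d)) + math.sqrt(math.pi) / 2

for t in range(1, 6):
    tau_t = math.ceil(b * d * r * t**2 * c)
    bound = b * c * d * r / tau_t         # E[L_max] * d r / tau_t with E[L_max] <= b * c
    assert bound <= 1.0 / t**2 + 1e-12
    print(t, bound, 1.0 / t**2)
```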
Therefore, the following inequality holds:
$$\mathbb{E}[\tilde{R}_{t}]=\mathbb{E}\left[\sum_{i=1}^{t}\tilde{r}_{i}\right]\leq\sum_{i=1}^{t}\frac{1}{i^{2}}+\mathbb{E}\left[\sum_{i=1}^{t}\check{r}_{i}\right]\leq\frac{\pi^{2}}{6}+\mathbb{E}\left[\sum_{i=1}^{t}\check{r}_{i}\right].$$
Here, $\check{r}_{i}$ is the maximum value of the loss $l_{i}(\tilde{\mathbf{x}})$ restricted to $\mathcal{X}_{i}$. Since $\mathcal{X}_{i}$ is a finite set, by replacing $\mathcal{X}$ with $\mathcal{X}_{i}$ in the proof of Theorem A.1 and repeating the same argument, we obtain $\mathbb{E}_{i}[\check{r}_{i}]\leq\mathbb{E}_{\delta}[\max_{\tilde{\mathbf{x}}\in\mathcal{X}_{i}}\check{a}_{i-1,\delta}(\tilde{\mathbf{x}})]$. Furthermore, since $\mathcal{X}_{i}\subset\mathcal{X}$ and the next point to be evaluated is selected from $\mathcal{X}$, the following inequality holds:
$$\mathbb{E}_{i}[\check{r}_{i}]\leq\mathbb{E}_{\delta}\left[\max_{\tilde{\mathbf{x}}\in\mathcal{X}_{i}}\check{a}_{i-1,\delta}(\tilde{\mathbf{x}})\right]\leq\mathbb{E}_{\delta}\left[\max_{\mathbf{x}\in\mathcal{X}}\check{a}_{i-1,\delta}(\mathbf{x})\right].$$
Therefore, we have
$$\begin{aligned}\mathbb{E}\left[\sum_{i=1}^{t}\check{r}_{i}\right]&\leq\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}^{1/2}\sigma_{i-1}(\mathbf{x}_{i})\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^{t}\beta_{i}\right)^{1/2}\left(\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right)^{1/2}\right]\leq\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\beta_{i}\right]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\\&\leq\sqrt{t\,\mathbb{E}[\beta_{t}]}\sqrt{\mathbb{E}\left[\sum_{i=1}^{t}\sigma_{i-1}^{2}(\mathbf{x}_{i})\right]}\leq\sqrt{t\left(2+2d\log\left(\lceil bdrt^{2}(\sqrt{\log(ad)}+\sqrt{\pi}/2)\rceil\right)\right)}\sqrt{\mathbb{E}\left[\check{C}_{1}\gamma_{t}\right]}=\sqrt{\check{C}_{1}t\gamma_{t}(2+s_{t})}.\end{aligned}$$
$\square$
Proof. Theorem A.4 is proved by using the same argument as in the proof of Lemma B.1. |