Add files using upload-large-folder tool
- 20240721/1909.05006v5.json +0 -0
- 20240721/2102.09111v2.json +202 -0
- 20240721/2206.00851v4.json +476 -0
- 20240721/2209.10517v13.json +555 -0
- 20240721/2210.12777v4.json +0 -0
- 20240721/2212.00250v3.json +0 -0
- 20240721/2212.04687v2.json +0 -0
- 20240721/2301.11290v3.json +301 -0
- 20240721/2301.12195v3.json +158 -0
- 20240721/2302.12246v5.json +0 -0
- 20240721/2303.10460v2.json +0 -0
- 20240721/2303.11884v2.json +170 -0
- 20240721/2304.06372v3.json +0 -0
- 20240721/2305.07408v3.json +194 -0
- 20240721/2306.06871v4.json +0 -0
- 20240721/2306.13421v2.json +0 -0
- 20240721/2307.16601v2.json +0 -0
- 20240721/2308.02785v2.json +111 -0
- 20240721/2308.07867v2.json +101 -0
- 20240721/2308.09718v2.json +0 -0
- 20240721/2309.13289v3.json +0 -0
- 20240721/2311.07172v2.json +410 -0
- 20240721/2311.08919v3.json +0 -0
- 20240721/2311.17101v2.json +0 -0
- 20240721/2312.02175v2.json +113 -0
- 20240721/2312.06646v4.json +545 -0
- 20240721/2312.08224v2.json +0 -0
- 20240721/2312.09863v2.json +0 -0
- 20240721/2312.14024v3.json +0 -0
- 20240721/2402.03119v2.json +0 -0
- 20240721/2402.10698v2.json +0 -0
- 20240721/2402.11111v2.json +0 -0
- 20240721/2402.14646v2.json +0 -0
- 20240721/2402.16399v2.json +251 -0
- 20240721/2402.16832v2.json +365 -0
- 20240721/2402.17553v3.json +165 -0
- 20240721/2402.18919v3.json +0 -0
- 20240721/2403.00957v2.json +132 -0
- 20240721/2403.01915v2.json +0 -0
- 20240721/2403.05016v2.json +664 -0
- 20240721/2403.05018v2.json +159 -0
- 20240721/2403.08495v4.json +0 -0
- 20240721/2403.11437v3.json +144 -0
- 20240721/2403.12422v2.json +0 -0
- 20240721/2403.17222v2.json +65 -0
- 20240721/2404.00801v2.json +0 -0
- 20240721/2404.02059v3.json +0 -0
- 20240721/2404.07988v2.json +0 -0
- 20240721/2404.12228v3.json +0 -0
- 20240721/2404.13903v3.json +0 -0
20240721/1909.05006v5.json
ADDED
The diff for this file is too large to render. See raw diff
20240721/2102.09111v2.json
ADDED
@@ -0,0 +1,202 @@
{
"title": "Online Optimization and Ambiguity-based Learning of Distributionally Uncertain Dynamic Systems",
"abstract": "This paper proposes a novel approach to construct data-driven online solutions to\noptimization\nproblems (P)\nsubject to a class of distributionally uncertain dynamical\nsystems. The introduced framework allows for the simultaneous learning of distributional system uncertainty via a parameterized, control-dependent ambiguity set using a finite historical data set, and its use to make online decisions with probabilistic regret function bounds. Leveraging the merits of Machine Learning, the main technical approach relies on the theory of Distributional Robust Optimization (DRO), to hedge against uncertainty and provide less conservative results than standard Robust Optimization approaches.\nStarting from recent results that describe ambiguity sets via parameterized, and control-dependent empirical distributions as well as ambiguity radii, we first present a tractable reformulation of the corresponding optimization problem while maintaining the probabilistic guarantees. We then specialize these problems to the cases of 1) optimal one-stage control of distributionally uncertain nonlinear systems, and 2) resource allocation under distributional uncertainty. A novelty of this work is that it\nextends DRO to online optimization problems subject to a distributionally uncertain dynamical system constraint, handled via a control-dependent ambiguity set that leads to\nonline-tractable optimization with probabilistic guarantees on regret bounds.\nFurther, we introduce an online version of the\nNesterov\u2019s accelerated-gradient algorithm, and analyze its performance to solve this class of problems via dissipativity theory.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Online optimization has attracted significant attention from various\nfields, including Machine Learning,\nInformation Theory, Robotics and Smart Power Systems; see [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###] and references therein.\nA basic online optimization setting\ninvolves the minimization of time-varying convex loss functions,\nresulting into Online Convex Programming (OCP). Typically, loss\nobjectives in OCP are functions of non-stationary stochastic\nprocesses [4 ###reference_b4###, 5 ###reference_b5###]. Regret minimization aims to deal with non-stationarity by reducing the\ndifference between an optimal decision made with information in\nhindsight, and one made as information is increasingly revealed.\nThus, several online algorithms and techniques are aimed at minimizing\nvarious types of regret functions [6 ###reference_b6###, 7 ###reference_b7###]. More recently, and with the aim of further reducing\nthe cost, regret-based OCP has integrated prediction models of loss\nfunctions [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nHowever, exact models of evolving loss functions may not be available,\nwhile alternative data-based approximate models may require large\namounts of data that are hard to obtain.\nThis motivates the need of developing new learning algorithms for loss\nfunctions that can employ finite data sets, while guaranteeing a\nprecise performance of the corresponding optimization.\nLiterature Review.\nDue to recent advances in Data Science and Machine Learning, the question of learning system models as well as distributional uncertainty from data is gaining significant attention. From the early work on Systems Identification [12 ###reference_b12###], Willem\u2019s Behavioral Theory and\nfundamental lemma [13 ###reference_b13###, 14 ###reference_b14###] have been recently leveraged to learn linear, time-invariant system models in predictive control applications [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. The aforementioned works rely on the use of Hankel system representations of the LTI system, and may be subject or not to additional uncertainty. In particular, the work [19 ###reference_b19###] leverages the behavioral theory to obtain sub-linear regret bounds for the online optimization of discrete-time unknown but deterministic linear systems. Other approaches to learn LTI systems from input-output data employ concentration inequalities and finite samples, and include, for example, [20 ###reference_b20###], exploiting least squares and the Ho-Kalman algorithm, [21 ###reference_b21###], using subspace identification techniques for LTI systems subject to unknown Gaussian disturbances, and [22 ###reference_b22###], resorting to Lasso-like methods that exploit the sparse representation of LTI systems.\nOn the other hand, classical online optimization relies on Sample Averaging Approximation (SAA) (with bootstrap) to derive optimal value and/or policy approximations. However, SAA usually requires large amounts of data to provide good approximations of the stochastic cost, which leads to non-robust solutions to unseen data.\nIn contrast, recent developments on measure-of-concentration results [23 ###reference_b23###] have lead to a new type of Distributionally Robust Optimization (DRO) [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###], which aims to bridge this gap. 
Particularly, the DRO framework enables finite-sample, performance-guaranteed optimization under distributional uncertainty [24 ###reference_b24###, 25 ###reference_b25###], and paves the way to dealing with the control and estimation of system dynamics subject to distributional uncertainty.\nMotivated by this, the works [27 ###reference_b27###, 28 ###reference_b28###] consider the time evolution of Wasserstein ambiguity sets and their updates under streaming data for estimation. However, the nominal dynamic constraints defined in these problems are assumed to be known, while in practice, these models also need to be identified. The previous work [29 ###reference_b29###] proposes a method for integrating the learning of an unknown and nominal parameterized system dynamics with Wasserstein ambiguity sets. These ambiguity sets are given by a parameter and control-dependent ambiguity ball center as well as a corresponding radius. Taking this as a starting point, and motivated by the direct use of these ambiguity sets in a type of \u201cdistributionally robust control\u201d, here we further extend this setup in connection with online optimization problems.\nPrecisely, what distinguishes this work from other approaches is the focus on learning the transition system dynamics itself via control-dependent ambiguity sets. The control method is derived from an online optimization method [6 ###reference_b6###], and, therefore, it does not aim to calculate exactly an optimal control, but to find an approximate solution that leads to a low instantaneous regret function value w.r.t. standard, online and regret-based optimization problems. Finally, this manuscript connects with the topic online optimization using decision-dependent distributions [30 ###reference_b30###, 31 ###reference_b31###], where the uncertainty distribution changes with the decision variable. As these problems are intractable, [30 ###reference_b30###, 31 ###reference_b31###] solve for alternative stable solutions, or optimal solutions wrt to the distribution they induce. In addition to this, and while [30 ###reference_b30###, 31 ###reference_b31###] can handle dynamic systems, a main difference with this work is that a dynamic system structure that is being learned is not exploited, which can help reduce uncertainty more effectively.\nStatement of Contributions. In this\nwork, we propose a novel approach to solve a class of online optimization problems subject\nto distributionally uncertain dynamical systems. Our end goal is to produce an online controller that results in bounded instantaneous regrets with high confidence. Our proposed framework is\nunique in that it enables the online learning of the underlying nominal system,\nmaintains online-problem tractability, and simultaneously provides\nfinite-sample, probabilistic guarantee bounds on the resulting regret. This is achieved by\nconsidering a worst-case-system formulation that employs\nnovel parameterized and control-dependent, Wasserstein ambiguity sets. Our learning method precisely consists of updating this ambiguity set.\nThe proposed formulation is valid for a wide class\nof problems, including but not limited to 1) a class of optimal control\nproblems subject to distributionally uncertain dynamical system, and 2)\nonline resource allocation under distributional uncertainty. 
To do this, we first obtain tractable problem reformulations for these two cases, which results in online, non-smooth convex optimization problems. For each of these categories, and for smoothed-out versions of these problems, we propose online control algorithm dynamics that extend Nesterov\u2019s accelerated-gradient method. Adapting dissipativity theory, we prove an optimal first-order convergence rate for these algorithms under smoothness and convexity assumptions. This result is crucial to guarantee that the online controller can provide probabilistic guarantees on its regret bounds via the control-dependent ambiguity set. We finish our work by quantifying these dynamic regret bounds, and by explicitly characterizing the effect of learning parameters with finite historical samples."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Notations",
"text": "We denote by m, , \nand m\u00d7n the -dimensional real space, nonnegative\northant, nonnegative integer-orthant space, and the space\nof matrices, respectively. The transpose of a column vector\n is\n, and is a\nshorthand for . We index vectors\nwith subscripts, i.e., with , and given we denote its\n component by . We denote by \nand the -norm and -norm,\nrespectively. The inner product of m is given as , ; thus,\n.\nThe gradient of a real-valued function is denoted as\n and is the\npartial derivative w.r.t. . In what follows,\n. A function is\n-strongly convex, if for any \nthere exists such that\n. The function is convex if . We call a vector \na subgradient of at and denote by the subgradient set. If is\ndifferentiable at , then . Finally, the operation\n\nprojects the set onto\n under the Euclidean\nnorm. We write , where , and\n if ,\notherwise .\nEndow n with the Borel -algebra , and let be the set of probability measures (or distributions) over . The set of probability distributions with bounded first moments is .\nWe use the Wasserstein\nmetric [32 ###reference_b32###] to define a distance in\n, and the dual version of the\n-Wasserstein metric , is defined by\n,\nwhere is the space of all Lipschitz functions with\nLipschitz constant 1. We denote a closed Wasserstein ball of radius\n (also called an ambiguity set) centered at a distribution by\n. The Dirac measure\nat is a distribution in denoted by\n. Given , we have , if ,\notherwise .\nA random vector with probability distribution is\nsub-Gaussian if there are positive constants such that .\nEquivalently, a zero-mean random vector is sub-Gaussian if for any we have for some ."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Problem Statement, Motivation, and Approach based on Ambiguity Set Learning",
"text": "We\nstart by introducing a class of online optimization problems, where\nthe objective function is time-varying according to an\nunknown dynamical system subject to an unknown\ndisturbance. Consider a dynamical system that evolves according to\nunknown stochastic dynamics\nwhere is an online decision or control action at\ntime , is a\nmeasurable, but unknown state transition function, and , is an unknown, random, disturbance vector. Due to the Markov assumption, can be described by an unknown transition probability measure , conditioned on the system state and control at time . Denote by ,\n an\na-priori selected, measurable loss\nfunction. Assume that is compact,\nand we are interested in selecting that minimizes the loss\nThis objective value is inaccessible since the state distribution is unknown,\nand its evolution is highly dependent on the\nsystem, disturbance, and as well as on the decisions taken.\nIn this work, we aim to propose an effective online optimization and\nlearning algorithm which tracks the minimizers of the time-varying\nobjective function with low regret in high\nprobability. Thus, at each time , we aim to\nfind that minimizes the loss\nin the immediate future at\nThis problem formulation is similar to a one-stage optimization problems with unknown system transitions [33 ###reference_b33###].\nThe expectation operator with respect to is conditional on the historical realizations , , the adopted decisions , , the yet-to-be-learned unknown dynamical system , and realizations , . We will identify which, by the Markovian property, satisfies .\nAt time , let denote an optimizer of Problem (P ###reference_###) and consider the instantaneous regret\nwhich is the loss incurred if the selected is different from an optimal decision. Our goal will be to develop a robust online algorithm which ensures a probabilistic bound on the regret. That is, with high probability , the regret is upper bounded by a sum of terms,\na first one depending on the initial condition ; a second one depending on the instantaneous variation of the loss of (P ###reference_###); and a third term related to how well the unknown system and the uncertainty are characterized; please see Theorem V.1 ###reference_theorem1###. While the second and third terms are inherent to the system, the effect of the second one can be reduced by considering a predicted loss of the system [11 ###reference_b11###]. In this work, we aim to bound the third term and minimize it by estimating the distribution via an ambiguity set of distributions. We will show that, as historical data are assimilated over time, this third term asymptotically decays to zero.\nThis is achieved under the following assumption {assumption}[Independent and stationary sub-Gaussian\ndistributions] The vectors , , are i.i.d. with and zero-mean sub-Gaussian111That is, for all unit vector , we have , . Equivalently, , ..\nSub-Gaussian distributions include Gaussian random variables and all distributions of bounded support.\nExample 1 (Vehicle path planning and tracking): A two-wheeled vehicle moves in an unknown\n2D environment. Assume that an\naccessible path-planner provides a control signal for the\nvehicle to track a desired reference trajectory under ideal\nconditions, see Fig. 1 ###reference_###. Fig. 2 ###reference_### shows two examples where, first, the vehicle\nimplements a series of lane changes, and, second, navigates through a planned circular/loopy route. Since both the environment\nand dynamics are uncertain, exact tracking is rare. 
Our goal is to learn the real-time road conditions and, by solving the online problem (P), derive a control signal that enables path following, minimizing the tracking error with high probability. Example 2 (Online resource allocation in the stock market): An agent aims to achieve a target profit in a highly-fluctuating trading market. Thus, it actively allocates wealth to multiple risky assets while trying to balance resources among assets. As asset prices are uncertain, modeling the return rate of each asset is especially challenging. To address this, an agent can aim to learn the real-time returns responsively, estimate the distributions of immediate returns, and then allocate wealth wisely to maximize the expected profit with high probability. This problem fits the proposed formulation, resulting in online, balanced resource allocation with low regrets."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Online Constructions of Ambiguity Sets",
"text": "Our main approach to obtain a suitable control signal is\nbased on learning a set of distributions or ambiguity set that\ncharacterizes system uncertainty.\nMore precisely, we employ the dynamic\nambiguity set proposed\nin [29 ###reference_b29###]. The set contains a\nclass of distributions, which is, in high probability, large enough to\ninclude the unknown under certain conditions. Thus, we\ncan use it to formulate a robust version of the problem at each time\ninstant . Such characterization enables an online-tractable\nreformulation of (P ###reference_###) later.\nWe summarize next the construction of these ambiguity sets . First, we assume the following on the unknown .\n{assumption}[System parametrization] Given , the system can be expressed as\nwhere is an unknown parameter, and , , is a set of linearly independent known basis functions or predictors chosen a priori.\nNow, given arbitrary , the set is a Wasserstein ball centered at a parametric-dependent distribution for each ; that is,\nHere, will be a time-varying function which depends on a number of measurements, and a confidence . More precisely,\nsee the footnote222\n, with , , being the state measurements at time , , and being the past input at , ., where , for . If , then provides an outcome , for each . For a general , the value provides \u201capproximated\u201d outcomes , for each .\nThen, we claim the probabilistic guarantee of by a selection of the parameter and for any .\nLet Assumptions III ###reference_### and III-A ###reference_### hold.\nFor a given , historical data and , , we select as in (2 ###reference_###) where is selected in [29 ###reference_b29###, Theorem 2 (Learning of )]333In [29 ###reference_b29###, Theorem 2], the value plays the role of in this work.. Then, for given and a confidence-related value\n, a radius can be chosen such that\nHere, the left-hand-side expression is a shorthand for the probability of the event and denotes the probability measure defined\non the -fold product of , which evaluates the probability that the selection of samples define an ambiguity ball which contains the true distribution. In particular, the confidence value is\nwhere is a data-dependent positive constant and is a user selected parameter. Further, the radius is\nwhere and are positive constants, and\nwhich bounds the variation of predicted system trajectories.\n\nIdea of the Proof. The probabilistic guarantees (3 ###reference_###) are a consequence of Lemma 1, Theorem 1, Theorem 2 and Eqn. (7) in [29 ###reference_b29###] with Assumptions III ###reference_### and III-A ###reference_###. Precisely, we achieve this by upper bounding the metric using plus .\nThen, the first distance is handled via [29 ###reference_b29###, Lemma 1] using standard measure of concentration results444Lemma 1 in [29 ###reference_b29###] makes use of a stronger Assumption III ###reference_###, which requires to be white. However, this can be relaxed to the current assumption by multiplying the upper bound in the lemma with a constant associated with noise whitening via an appropriate linear transformation., contributing to the first two terms of the radius in (4 ###reference_###).\nNext, the second distance can be bounded in terms of the difference via [29 ###reference_b29###, Theorem 1], contributing to the third term in . Notice that the third term depends on Assumption III-A ###reference_### and the selected parameter which relies on the selection of via [29 ###reference_b29###, Theorem 2 (Learning of )]. 
The confidence value is achieved by Assumption III ###reference_### applying to the same procedure as in [29 ###reference_b29###, Theorem 2], which essentially bounds in probability. Precisely, by Assumption III ###reference_###, we have , , resulting in , analogous to [34 ###reference_b34###, Lemma 2]. Then, with the proof similar to [34 ###reference_b34###, Theorem IV.2], we achieve\nBy selecting\nwe follow the proof [34 ###reference_b34###, Theorem IV.2] to achieve\nBy bound propagation, we have\nwith and is selected as in [29 ###reference_b29###, Theorem 2].\nFinally, the combination of all the above considerations complete the proof.\nTheorem III.1 ###reference_theorem1### provides a methodology to construct online ambiguity sets with guarantees in probability. In general, is strictly smaller than 1 unless there is a way of making . This is implemented in [29 ###reference_b29###] via an online learning algorithm which leads to via Eqn. (7) in the same work. Notice how these constructions are related to the decision variable and,\nin the following, we leverage the probabilistic\ncharacterization\n of the\ndistribution for solutions\nto (P ###reference_###)."
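A hedged sketch of the ambiguity-set construction described in this subsection, under illustrative assumptions: the parameter is estimated by ordinary least squares on the chosen basis functions, the ball center is the empirical distribution of control-dependent one-step predictions shifted by the fitted residuals, and the radius is a stand-in combining an N^(-1/2) concentration term with a parameter-error allowance. The paper's actual selections of the parameter and the radius follow [29, Theorem 2] and Eq. (4), which are not reproduced here.

```python
import numpy as np

def learn_parameter(X, U, X_next, phi):
    """Least-squares surrogate for the learning rule of [29, Theorem 2]:
    fit alpha in x+ = alpha^T phi(x, u) + w from historical data."""
    Phi = np.stack([phi(x, u) for x, u in zip(X, U)])     # N x q features
    alpha, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)  # q x n parameter
    resid = X_next - Phi @ alpha                          # disturbance proxies
    return alpha, resid

def ambiguity_set(alpha, resid, x_t, u, phi, c1=1.0, c2=0.1, beta=0.95):
    """Control-dependent ambiguity ball: center samples are the nominal
    prediction at (x_t, u) shifted by each residual; the radius is a
    hypothetical surrogate for the radius formula in (4)."""
    N = len(resid)
    center_samples = phi(x_t, u) @ alpha + resid          # hat{x}_{t+1}^i
    eps = c1 * np.sqrt(np.log(1.0 / (1.0 - beta)) / N) + c2 / np.sqrt(N)
    return center_samples, eps
```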
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV A Tractable Problem Reformulation and Its Specialization to Two Problem Classes",
"text": "In this section, we start by describing how to deal with\nthe unknown in Problem (P ###reference_###), via\nambiguity sets, which results in (P1 ###reference_2###).\nBy doing this, the solution of (P1 ###reference_2###) provides\nguarantees on the performance\nof (P ###reference_###). Unfortunately, this results into an\nonline intractable problem. Thus, we find a tractable\nreformulation (P2 ###reference_5###) which is equivalent to (P1 ###reference_2###)\nunder certain conditions. After this, we focus the rest of\nour work on two problem sub-classes, which allows us to present and\nanalyze the online algorithms for these problems in the following\nsection. Formally, let us consider\nwhere, for a fixed and\n, it holds that\n with high\nprobability. This results in\nObserve that, the probability measure and the bound coincides\nwith that in (3 ###reference_###) and notice how the value\n changes for various data-set sizes \nin Theorem III.1 ###reference_theorem1###.\nThe solution and the objective value of (P1 ###reference_2###)\nensure that, when we select to be the decision\nfor (P ###reference_###), the expected loss of (P ###reference_###)\nis no worse than that from (P1 ###reference_2###) with high\nprobability.\nThe\nformulation (P1 ###reference_2###) still requires expensive online computations\ndue to its semi-infinite inner optimization problem.\nThus, we propose an equivalent reformulation of (P1 ###reference_2###) for a\nclass of loss functions as in the following assumption.\n[Lipschitz loss\nfunctions] Consider the loss function\n,\n. There\nexists a Lipschitz function \nsuch that for each , it holds that\n for any .\nWith this, we obtain the following upper bound:\nLet Assumption IV ###reference_### hold. Then, for each ,\n, , and , we have\nwhere the empirical distribution\n and scalar\n are described as in Section III-A ###reference_###.\nHereafter, see the appendix for all proofs.\nNext, we claim that the upper bound in Lemma IV.1 ###reference_lemma1### is\ntight if the following assumption holds.\n{assumption}[Convex and gradient-accessible\nfunctions] The loss function is\nconvex in for each\n. Further, for each time\n with given and , there is a system prediction for some such that exists and\n is equal to at . \n\nThe above statement enables the following theorem.\nLet\nAssumptions IV ###reference_###\nand IV.1 ###reference_lemma1### hold. Let\n denote the support of the distribution\n. Then, if\n, (P1 ###reference_2###) is equivalent to the\nfollowing problem\n\nWe note that\nAssumption IV ###reference_### on the Lipschitz requirement of\nloss function is mild. In fact, many engineering problems take\nstate values in a compact set, which then only requires the loss\n to be continuous. Assumption IV.1 ###reference_lemma1###\nessentially requires accessible partial gradients (in\n) of loss functions\n. For simple loss functions , e.g. linear, quadratic, etc,\nits partial gradient can be readily evaluated. Notice that when\nAssumption IV.1 ###reference_lemma1### fails, Problem (P2 ###reference_5###)\nstill serves as a relaxation problem of (P1 ###reference_2###), providing\na solution with a valid upper bound.\nNotice that the tractability of solutions to (P2 ###reference_5###) now depend\non: 1) the choice of the loss function and the associated\nLipschitz function , and 2) the decision space . 
To\nbe able to further analyze (P2 ###reference_5###) and further\nevaluate Assumption IV.1 ###reference_lemma1### on gradient-accessible functions,\nwe\nwill impose further structure on the system as follows:\n{assumption}[Locally Lipschitz, control-affine system and\nbasis functions]\nThe system is locally Lipschitz in \nand affine in , i.e.,\nfor some unknown , , and . Similarly, for each , the\nbasis function is selected to be\nfor some known locally Lipschitz functions and\n.\n\n\n{assumption}[Convex decision oracle]\n The set is convex and compact. Furthermore, the projection\noperation of onto ,\n, admits computation complexity.\nFor simplicity of the discussion, we rewrite (P2 ###reference_5###) as\nwhere represents the objective function of (P2 ###reference_5###), depending on variables , , , , and , which are kept fixed in the optimization. Then, Assumption IV ###reference_7### allows an explicit expression of w.r.t. and\nAssumption IV ###reference_7### characterizes the convex feasible set of (P2 ###reference_5###). Note that is locally Lipschitz in .555This can be verified by the local Lipschitz condition on , , and finite composition of local Lipschitz functions are locally Lipschitz.\nIn the following, we consider two classes of general problems\nin the form of (P2 ###reference_5###): 1) an optimal control problem under the\nuncertainty; 2) an online resource allocation problem with a\nswitch. These problems leverage the probabilistic characterization of\nthe system and common loss functions . Then, we\npropose an online algorithm to achieve tractable solutions with a\nprobabilistic regret bound in the next section.\nProblem 1: (Optimal control under uncertainty) We\nconsider a problem in form (P ###reference_###), where the system is unknown and is to be optimally controlled. In particular, we employ the\nfollowing separable loss function\nwith the cost for the immediate control and the\noptimal cost-to-go function. We assume that both and \nare convex, and in addition, is Lipschitz continuous with a\nconstant , resulting in\n. Then, by selecting the\nambiguity radius and center of\n as in Section III-A ###reference_###,\nthe objective\nfunction of (P2 ###reference_5###) becomes\nwhere , are affine in , for\neach , , as\nand parameters , and are selected as in [29 ###reference_b29###, Section IV].\nIntuitively, is the projected outcome of the random variable and quantifies the variation of predictor with respect to its previous value.\nNotice that the objective function is convex in \nand therefore online problems (P2 ###reference_5###) are tractable. In\naddition, if has a constant gradient almost everywhere,\nthen Assumption IV.1 ###reference_lemma1### on accessible gradients holds and (P2 ###reference_5###) is\nequivalent to (P1 ###reference_2###).\nProblem 2: (Online resource allocation) We consider\nan online resource allocation problem with a\nswitch, where a decision maker aims to make online resource allocation decisions in an uncertain environment.\nThis problem is in form (P ###reference_###) and its objective\nis\nwhere is an affine feature map selected in advance. The\ndecision maker updates the decision online when , otherwise switches\noff. Notice that this type of objective functions appears in many classification problems.\nIn particular, we assume that the system is\nindependent from the allocation variable, i.e., . 
See Section VI-B ###reference_### for a more\nexplicit problem formulation involving resource allocation with an assignment switch.\nThen, problem (P2 ###reference_5###) has the objective function\nwhere time-dependent parameters , are\nwith , and as in [29 ###reference_b29###, Section IV].\nWe characterize the function by subgradients of the loss function .\nConsider , where is differentiable in . Then,\nthe function is\nwhere the set contains\nall the subgradients of at , given any in\nadvance, i.e.,\nwhere\nIn particular, if for some matrix ,\nthen\n. If is contained in a compact set , then\nwhere is the Lipschitz constant of on .\nLemma IV.2 ###reference_lemma2### indicates that, given a properly selected\nfeature mapping , the objective is convex in and therefore\nonline problems (P2 ###reference_5###) are convex and tractable. In addition,\nif is a linear map almost everywhere, then\nAssumption IV.1 ###reference_lemma1### on accessible gradients holds and (P2 ###reference_5###) is equivalent\nto (P1 ###reference_2###)."
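A hedged numerical reading of the upper bound in Lemma IV.1: the worst-case expected loss over the Wasserstein ball is bounded by the empirical mean of the loss at the center samples plus the radius times the loss's Lipschitz constant. The loss below is a generic Lipschitz example, not the paper's.

```python
import numpy as np

def dro_upper_bound(loss, lip, center_samples, eps):
    """Lemma IV.1-style bound: sup over the eps-Wasserstein ball of E[loss]
    <= empirical mean at the ball center + eps * Lipschitz constant."""
    return np.mean([loss(xi) for xi in center_samples]) + eps * lip

samples = np.random.default_rng(2).normal(size=(200, 2))   # ambiguity-ball center
ub = dro_upper_bound(lambda x: np.abs(x).sum(), lip=np.sqrt(2),
                     center_samples=samples, eps=0.05)     # l1 loss is sqrt(2)-Lipschitz in l2
print(ub)
```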
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Online Algorithms",
"text": "Online convex problems (P2 ###reference_5###) are non-smooth due to the\nnormed regularization terms in . To achieve fast, online solutions,\nwe propose a two-step procedure. First, we follow [35 ###reference_b35###, 36 ###reference_b36###] to obtain a smooth version of (P2 ###reference_5###),\ncalled (P2\u2032 ###reference_1###). Then, we extend\nthe Nesterov\u2019s accelerated-gradient method [37 ###reference_b37###]\u2014known to achieve an optimal\nfirst-order convergence rate for smooth and offline convex\nproblems\u2014to solve the problem (P2\u2032 ###reference_1###).\nFinally, we quantify the dynamic regret [4 ###reference_b4###]\nof online decisions w.r.t. solutions of (P1 ###reference_2###) in probability. \nStep 1: (Smooth approximation of (P2 ###reference_5###)) To\nsimplify the discussion, let us use the generic notation\n for a convex and potentially non-smooth\nfunction, which can represent any particular component of the\nobjective function of (P2 ###reference_5###) at time .\nWe call a\nconvex function smoothable on \nif there exists such that, for every , there is a\ncontinuously differentiable convex function\n satisfying \n(1) , for all . \n(2) There exists such\nthat has a Lipschitz gradient over with Lipschitz constant , i.e.,\n\nTo obtain a smooth approximation of , we follow the\nMoreau proximal approximation technique [35 ###reference_b35###],\ndescribed as in the following lemma.\nGiven\na convex function and any , let\nus denote by the set of subgradients of at ,\nrespectively. Let . Then, \nis smoothable with parameters , where the smoothed\nversion is the Moreau approximation:\nIn addition, if is -strongly convex with some , then\n is -strongly convex. And further, the\nminimization of over is\nequivalent to that of over in the sense that the set of minimizers of two problems\nare the same.\nFrom the definition of the smoothable function, we know that: 1) a\npositive linear combination of smoothable functions is smoothable666If is smoothable with\nparameter and with parameter , then\n is smoothable with parameter , for any ., and 2) the composition of a smoothable\nfunction with a linear transformation is smoothable777Let\n be a linear transformation and\nlet . Let \nbe a smoothable function with parameter . Then, the\nfunction , is smoothable with parameter , where . If , then is the\n norm. . These properties enable us to smooth each\ncomponent of , i.e., , , and ,\nwhich results in a smooth approximation of (P2 ###reference_5###) via the\ncorresponding as follows\nNote that is locally Lipschitz and minimizers\nof (P2\u2032 ###reference_1###) are that of (P2 ###reference_5###). We provide in the\nfollowing lemma explicit expressions of (P2\u2032 ###reference_1###) for the two problem classes.\nProblem 1: Consider the following loss function\nwhere is a smoothed -norm function888\nThe -norm function: \nConsider ,\n, and . Clearly, is\ndifferentiable almost everywhere, except at the origin. Then, \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nwith the smoothing parameter .\n, with . Then,\nthe objective function is\nwhere , are affine in , defined as in\nSection IV ###reference_###. 
In addition, we have the smoothing parameter of , , where\nwith denoting the maximum singular value of the\nmatrix, and\nProblem 2:\nLet us select the feature map to be\nthe identity map with the dimension , and consider\nresulting in\nwhere , parameters , are as in Section IV ###reference_###, and\nfunctions and are the smoothed switch function999\nThe Switch function: \nConsider , , which is differentiable almost everywhere. For a given ,\nwe compute\n\n\n\n\n\n\n\n\n\n\n\n\n\nGiven that\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\nresulting in\n\n\n\n\n\n\n\nwith the smoothing parameter .\n\nand -norm function, respectively. Note that has the smoothing parameter .\nStep 2: (Solution to (P2\u2032 ###reference_1###) as a\ndynamical system) To solve (P2\u2032 ###reference_1###) online, we propose a\ndynamical system extending the Nesterov\u2019s accelerated-gradient\nmethod by adapting gradients of the time-varying objective function.\nIn particular, let , , be solutions\nof (P2\u2032 ###reference_1###) and let us consider the solution system with\nsome and ,\nas\nwhere with positive parameters \nand being those define . We denote by\n the derivative of w.r.t. its second\nargument and denote by the projection of\n onto as in Assumption IV ###reference_7### on convex decision oracle. Note that, the gradient function can be computed in closed form for problems of interest, see, e.g., Appendix -A ###reference_### for those of the proposed problems. Further, we\nselect the moment coefficient as in\nAppendix -B ###reference_###. In the following, we leverage\nAppendix -B ###reference_### on the stability analysis of the\nsolution system (5 ###reference_###) for a regret bound between online\ndecisions and optimal solutions of (P1 ###reference_2###).\nGiven any , let us\ndenote by and the decision\ngenerated by (5 ###reference_###) and an optimal solution which solves\nthe online Problem (P1 ###reference_2###), respectively. Consider the dynamic\nregret to be the difference of the cost expected to incur if we\nimplement instead of , defined\nas\nThen, the regret is bounded in probability as\nfollows\nwhere depends on the system state at time\n, and depends on the variation of the optimal objective values in , i.e.,\nwhere is the optimal\nobjective value of (P2 ###reference_5###), or equivalently that\nof (P1 ###reference_2###). Further, is the variation bound of \nw.r.t. time, and\nthe rest of the parameters are the same as before.\nFurthermore, if all historical data are assimilated for the decision , then, we have\nwith a given, arbitrary confidence value.\nTheorem V.1 ###reference_theorem1### quantifies the dynamic regret of online\ndecisions w.r.t. solutions to (P1 ###reference_2###) in high\nprobability. Notice that, the regret bound is dominated by terms: , and , which\nmainly depend on three factors: the data-driven parameters\n, and of the solution\nsystem (5 ###reference_###), the variation over optimal objective values, and the parameters , ,\n and that are related to the system and environment\nlearning. In practice, a small regret bound is determined by 1) an\neffective learning procedure which contributes to small\n; 2) a proper selection of the loss function \nwhich results in smoothing procedure with a small parameter ;\nand 3) the problem structure leading to small variations of the optimal objectives values. 
Furthermore, when we use all the historical data for the objective gradients in the solution system (5 ###reference_###), the effect of system ambiguity learning is negligible asymptotically.\nOnline Procedure:\nOur online algorithm is summarized in the Algorithm 1 ###reference_###."
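To make Steps 1 and 2 concrete, here is a minimal sketch assuming the non-smooth term is an l1-type penalty smoothed by its Moreau envelope (the Huber function), and using the textbook momentum schedule (k-1)/(k+2) with an identity projection; the paper's coefficients, projection, and the exact solution system (5) are simplified away.

```python
import numpy as np

def huber(z, mu):
    """Moreau envelope of |.| with parameter mu (smoothed l1 component)."""
    a = np.abs(z)
    return np.where(a <= mu, z**2 / (2 * mu), a - mu / 2)

def huber_grad(z, mu):
    """Gradient of the Moreau envelope; Lipschitz with constant 1/mu."""
    return np.where(np.abs(z) <= mu, z / mu, np.sign(z))

def online_nesterov(grad_t, project, u0, T, L):
    """One accelerated-gradient update per time step against the
    time-varying gradient grad_t(t, y): a simplified stand-in for (5)."""
    u, u_prev = u0.copy(), u0.copy()
    for t in range(1, T + 1):
        beta = (t - 1) / (t + 2)                   # Nesterov momentum
        y = u + beta * (u - u_prev)                # extrapolation point
        u_prev = u
        u = project(y - (1.0 / L) * grad_t(t, y))  # projected gradient step
    return u

# Example: track the minimizer of f_t(u) = sum_i huber(u_i - c_i(t), mu).
mu = 0.1
c = lambda t: np.array([np.sin(0.01 * t), 1.0])    # drifting target (illustrative)
g = lambda t, y: huber_grad(y - c(t), mu)          # gradient of f_t at y
u_T = online_nesterov(g, project=lambda v: v, u0=np.zeros(2), T=500, L=1.0 / mu)
print(u_T)
```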
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Implementation",
"text": "In this section, we apply our algorithm to the introduced motivating examples, resulting in online-tractable, effective system learning with guaranteed, regret-bounded performance in high probability."
},
{
"section_id": "6.1",
"parent_section_id": "6",
"section_name": "VI-A Optimal control of an uncertain nonlinear system",
"text": "We consider the two-wheel vehicle driving under various road conditions, and our goal is to learn one-step prediction of the system state distribution and leverage for path tracking under various unknown road zones. In particular, we represent the two-wheel vehicle as a differential-drive robot subject to uncertainty [38 ###reference_b38###]:\nwhere components of states represent vehicle position and orientation on the 2-D plane. We take the discretization parameter and assume subGaussian\nuncertainty to be a zero-mean, mixture of\nGaussian and Uniform distributions with .\nThe intermediate variable\n depends on the wheel radius m, the\ndistance between wheels m, the controlled left-right wheel speed\n, and an unknown parameter , which depends on the wheel quality and road conditions. For simplicity, we assume that the planner adapts the system (6 ###reference_###) with and , and the vehicle can move over three types of road zones, the regular zone with , the slippery zone with , and the sandy zone with\n, where locations of these zones are described in\nFig. 2 ###reference_###.\nTo adapt the proposed approach, we consider Problem (P ###reference_###) with the following loss function\nwhere are signals generated by the planner, and we select the parameter for components which are not smooth. In addition, we assume and utilize basis functions in form of (6 ###reference_###), with\n, and\nNote that the ground truth parameter in the regular zone, in the slippery zone,\nand in the sandy zone.\nAt each time , we have access to model sets\n and as well as the real-time data set with size , which corresponds to the moving time window of order 0.1 second. For the system learning algorithm, notions of norm and inner product are those defined on the vector space .\nWe employ our online optimization and learning algorithm for the\ncharacterization of the uncertain vehicle states, learning of the unknown road-condition parameter , and control towards planned behaviors in real time. The achieved system behaviors are demonstrated in Fig 3 ###reference_###, contrasted with the case without the proposed approach, as in Fig. 2 ###reference_###. In the following, we analyze each case separately and notice how the proposed approach strikes balance between the given planned control and the actual control which reduces the weighted tracking error in road uncertainty.\n###figure_4### ###figure_5### Example (Lane-changing behavior adaptation) In this scenario,\nwe assume the initial system state . Further, the\nvehicle can access path plan in Fig. 2 ###reference_###(a) and as\nwell as the suggested wheel speed plan as the gray signal in\nFig. 4 ###reference_###(a). To demonstrate the learning effect of the\nalgorithm, we show in Fig. 5 ###reference_### components \nand of , where\nthe black lines indicate value of the ground truth\n on the planned trajectory and the gray lines\nrepresent the learned, real-time estimate of and \nat the actual vehicle position. Notice that is\ninaccessible in practice, and from this case study, the proposed\napproach indeed learns the system dynamics effectively. See,\ne.g. [29 ###reference_b29###] for more analysis regarding to the effect\nof the learning behavior and ambiguity sets characterization on the\nselection of and .\nAs the proposed loss function measures the weighted tracking\nerror, the resulting control system trajectory in\nFig. 3 ###reference_###(a) already reveals the effectiveness of\nthe method and as well as the low regrets in probability. 
On the other\nhand, because the system is highly non-linear and uncertain,\nevaluating the actual optimal objective value of Problem (P ###reference_###)\nis difficult. Therefore, it\u2019s very challenging to evaluate the regret\n in practice, even though the its probabilistic bounded is\nproved. Here, we provide in Fig. 4 ###reference_###(b) the realized loss\n and as well as the realized objective value of\nProblem (P2 ###reference_5###), where the loss reveals one possible\nobjective value of (P ###reference_###), and the objective value\nof (P2 ###reference_5###) serves as an upper-bound of that of (P ###reference_###) in\nhigh probability. In addition, notice that the derived (black) control signal in\nFig. 4 ###reference_###(a) has undesirable, high-oscillatory\nbehavior. This is because the chosen loss function is only\nlocally convex in . When the system disturbances are\nsignificant, the proposed approach then revealed certain degradation\nand control being oscillatory. Nevertheless, a desirable system\nbehavior in Fig. 3 ###reference_###(a) is achieved.\n###figure_6### ###figure_7### (a) (b)\n###figure_8### ###figure_9### Example (Circular route tracking)\nIn this scenario, we consider . We omit the details as the analysis shares the same spirit as the last lane-changing example."
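A hedged simulation sketch of the uncertain differential-drive model in (6). The wheel radius, wheel separation, discretization step, and road factors below are placeholder values (the numbers are elided in this extraction), and scaling the nominal kinematics by the unknown road parameter a is one plausible reading of the parametrization.

```python
import numpy as np

def step(state, omega_l, omega_r, a, r=0.1, L=0.5, dt=0.01, rng=None):
    """One Euler step of an uncertain differential-drive robot; r (wheel
    radius), L (wheel separation), and dt are hypothetical values."""
    x, y, th = state
    v = a * (r / 2.0) * (omega_l + omega_r)      # road-scaled forward speed
    om = a * (r / L) * (omega_r - omega_l)       # road-scaled turn rate
    w = rng.normal(0.0, 0.005, size=3) if rng else np.zeros(3)  # sub-Gaussian noise
    return np.array([x + dt * v * np.cos(th) + w[0],
                     y + dt * v * np.sin(th) + w[1],
                     th + dt * om + w[2]])

rng = np.random.default_rng(1)
s = np.zeros(3)
for k in range(1000):                 # drive straight, crossing a slippery patch
    a = 1.0 if k < 500 else 0.7       # road factor drops mid-run (illustrative)
    s = step(s, 10.0, 10.0, a, rng=rng)
print(s)
```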
},
{
"section_id": "6.2",
"parent_section_id": "6",
"section_name": "VI-B Online resource allocation problem",
"text": "We consider an online resource\nallocation problem where an agent or decision maker aims to 1) achieve\nat least target profit under uncertainty, and 2) allocate resources as\nuniformly as possible. To do this, the agent distributes available\nresources, e.g., wealth, time, energy or human resources, to various\nprojects or assets. In particular, for the trading-market motivating example, let us consider that the agent\ntries to make an online allocation of a unit\nwealth to three assets. At each time , the agent receives random\nreturn rates of assets from some unknown and uncertain dynamics\nwhere is a stepsize, the vector is randomly\ngenerated, unknown and piecewise constant, and the uncertainty vector\n is assumed to be sub-Gaussian with . Note that\nthis model can serve to characterize a wide class of dynamic (linear and\nnonlinear) systems. In addition, we assume that the third asset is\nvalue preserved, i.e., the third component of and \nare zero and . Over time, an example of the resulting unit return\nrates is demonstrated in Fig. 6 ###reference_###. Then, we\ndenote by and the\ntarget profit and the predicted instantaneous profit, respectively. Note that\nthe decision maker aims to obtain at least a profit and\nallocate resources online for this purpose. In particular, the\ndecision maker implements an allocation online if , otherwise does nothing. This results\nin (P ###reference_###) with the loss function\nand set a unit simplex. We propose basis functions\nwhere and .\nAt each , we assume that only historical data are available for online resource allocations. Applying the proposed\nprobabilistic characterization of as\nin (P1 ###reference_2###), we equivalently write it as in\nform (P2\u2032 ###reference_1###), where\nwith functions and , and\nreal-time data and determined as in\nProblem 2. We claim that has a time-dependent\nLipschitz gradient constant in given by , and we use in the\nsolution system (5 ###reference_###) to compute the online decisions.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### Fig. 7 ###reference_### shows the real-time evolution and of the\nparameter , while the\nbehavior of can be similarly characterized. In these\nfigures, black lines and are\ndetermined by the unknown signal while gray lines and are those\ncomputed as in [29 ###reference_b29###]. Note that\n represents the unknown\ndynamics and they are not accessible in reality. It can be seen\nthat the proposed method effectively learns .\nFig. 8 ###reference_### demonstrates the online resource\nallocation obtained by implementing (5 ###reference_###) and the achieved\nreal-time profit . The decision\n starts from the uniform allocation\n and is then adjusted to approach the\ntarget profit . Once the target is achieved, the agent then\nmaintains the profit while trying to balance the allocation if\npossible. When the return rate is low/unbalanced,\nas in Fig. 6 ###reference_###, the agent tries to improve\nand achieve the target profit by allocating resources more\naggressively. Though did not appear in the current\nscenario, in case that the return rate is high and the target\nprofit value is achieved, the agent focuses on balancing the\nallocation while maintaining the profit. If both the target\nprofit and allocation balance are achieved, then the agent stops\nre-allocating resources and monitors the return rate until\nthe switch turns on, e.g., when the near future profit prediction\ndrops below again. 
In addition, notice how the target profit was achieved with the proposed control strategy, as demonstrated in Fig. 8, which contrasts with the uniform allocation case in Fig. 6. Fig. 9 demonstrates the evaluation of the time-varying loss as well as the realized objective value of Problem (P2). Due to the unknown time-varying distributions, the evaluation of the objective values of Problem (P) is intractable, and the realized loss of (P2) serves as a high-confidence upper bound of that of (P). Nevertheless, the target profit is achieved with low regret in high confidence, as revealed in Fig. 8."
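A hedged sketch of the switched allocation rule described in this subsection: when the predicted profit x·u falls below the target r0 = 1.3, take a projected gradient step on a hinge-plus-balance loss; otherwise the switch is off and the allocation is kept. The simplex projection is the standard sorting-based algorithm, and the balance weight lam is a hypothetical tuning knob, not a value from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the unit simplex (sorting method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def allocate(u, x_pred, r0=1.3, lam=0.1, eta=0.5):
    """Switched update: re-allocate only when the predicted profit
    x_pred @ u misses the target r0; the term lam * ||u - 1/n||^2
    nudges the allocation toward uniform."""
    n = len(u)
    if x_pred @ u >= r0:
        return u                                     # switch off: keep allocation
    grad = -x_pred + 2 * lam * (u - np.ones(n) / n)  # subgradient with hinge active
    return project_simplex(u - eta * grad)
```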
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "VII Conclusions",
"text": "In this paper, we proposed a unified solution framework for online\nlearning and optimization problems in form of (P ###reference_###). The\nproposed method allowed us to learn an unknown and uncertain dynamic\nsystem, while providing a characterization of the system\nwith online-quantifiable probabilistic guarantees that certify the\nperformance of online decisions. The approach provided tractable,\nonline convex version of (P ###reference_###), via a series of equivalent\nreformulation techniques. We explicitly demonstrated the framework via\ntwo problem classes conforming to (P ###reference_###): an optimal control\nproblem under uncertainty and an online resource allocation\nproblem. These two problem classes resulted in explicit, online and\nnon-smooth convex optimization problems. We extended Nesterov\u2019s\naccelerated-gradient method to an online fashion and provided a\nsolution system for online decision generation of (P ###reference_###). The\nquality of the online decisions were analytically certified via a\nprobabilistic regret bound, which revealed its relation to the\nlearning parameters and ambiguity sets. Two motivating examples applying the proposed framework were empirically tested, demonstrating the effectiveness of the proposed framework with the bounded regret guarantees in probability.\nWe leave the relaxation of assumptions and the comparison of this work with other methods as the future work."
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {},
|
| 68 |
+
"image_paths": {
|
| 69 |
+
"1": {
|
| 70 |
+
"figure_path": "2102.09111v2_figure_1.png",
|
| 71 |
+
"caption": "Figure 1: A two-wheeled vehicle model with\n(x,y)\u22082superscript2\ud835\udc65\ud835\udc66absent(x,y)\\in^{2}( italic_x , italic_y ) \u2208 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT the position of the center and \u03b8\ud835\udf03\\thetaitalic_\u03b8 the\ndirection.",
|
| 72 |
+
"url": "http://arxiv.org/html/2102.09111v2/x1.png"
|
| 73 |
+
},
|
| 74 |
+
"2(a)": {
|
| 75 |
+
"figure_path": "2102.09111v2_figure_2(a).png",
|
| 76 |
+
"caption": "Figure 2: The (gray) planned trajectory and\n(black) actual system trajectory in various road zones, with the\nsystem state \ud835\udc99=(x,y,\u03b8)\u22082\u00d7[\u2212\u03c0,\u03c0)\\boldsymbol{x}=(x,y,\\theta)\\in^{2}\\times[-\\pi,\\pi)bold_italic_x = ( italic_x , italic_y , italic_\u03b8 ) \u2208 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT \u00d7 [ - italic_\u03c0 , italic_\u03c0 ). The red region indicates sandy zone while the blue\nregion indicates the slippery zone. Due to unknown road\nconditions, the actual system trajectories deviate from planned\ntrajectories.",
|
| 77 |
+
"url": "http://arxiv.org/html/2102.09111v2/x2.png"
|
| 78 |
+
},
|
| 79 |
+
"2(b)": {
|
| 80 |
+
"figure_path": "2102.09111v2_figure_2(b).png",
|
| 81 |
+
"caption": "Figure 2: The (gray) planned trajectory and\n(black) actual system trajectory in various road zones, with the\nsystem state \ud835\udc99=(x,y,\u03b8)\u22082\u00d7[\u2212\u03c0,\u03c0)\\boldsymbol{x}=(x,y,\\theta)\\in^{2}\\times[-\\pi,\\pi)bold_italic_x = ( italic_x , italic_y , italic_\u03b8 ) \u2208 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT \u00d7 [ - italic_\u03c0 , italic_\u03c0 ). The red region indicates sandy zone while the blue\nregion indicates the slippery zone. Due to unknown road\nconditions, the actual system trajectories deviate from planned\ntrajectories.",
|
| 82 |
+
"url": "http://arxiv.org/html/2102.09111v2/x3.png"
|
| 83 |
+
},
|
| 84 |
+
"3(a)": {
|
| 85 |
+
"figure_path": "2102.09111v2_figure_3(a).png",
|
| 86 |
+
"caption": "Figure 3: An example of the (gray) planned trajectory and (black) controlled system trajectory in various road zones, with the system state \ud835\udc99=(x,y,\u03b8)\ud835\udc99\ud835\udc65\ud835\udc66\ud835\udf03\\boldsymbol{x}=(x,y,\\theta)bold_italic_x = ( italic_x , italic_y , italic_\u03b8 ). The red region indicates sandy zone while the blue region indicates the slippery zone. With the implemented control, the vehicle follows the planned path with low regrets in high probability.",
|
| 87 |
+
"url": "http://arxiv.org/html/2102.09111v2/x4.png"
|
| 88 |
+
},
|
| 89 |
+
"3(b)": {
|
| 90 |
+
"figure_path": "2102.09111v2_figure_3(b).png",
|
| 91 |
+
"caption": "Figure 3: An example of the (gray) planned trajectory and (black) controlled system trajectory in various road zones, with the system state \ud835\udc99=(x,y,\u03b8)\ud835\udc99\ud835\udc65\ud835\udc66\ud835\udf03\\boldsymbol{x}=(x,y,\\theta)bold_italic_x = ( italic_x , italic_y , italic_\u03b8 ). The red region indicates sandy zone while the blue region indicates the slippery zone. With the implemented control, the vehicle follows the planned path with low regrets in high probability.",
|
| 92 |
+
"url": "http://arxiv.org/html/2102.09111v2/x5.png"
|
| 93 |
+
},
|
| 94 |
+
"4(a)": {
|
| 95 |
+
"figure_path": "2102.09111v2_figure_4(a).png",
|
| 96 |
+
"caption": "Figure 4: (a) The (gray) control signal provided by the planner and an example of the (black) control signal derived from the proposed approach. (b) The realized loss \u2113\u2113\\ellroman_\u2113 and the achieved objective of (P2).",
|
| 97 |
+
"url": "http://arxiv.org/html/2102.09111v2/x6.png"
|
| 98 |
+
},
|
| 99 |
+
"4(b)": {
|
| 100 |
+
"figure_path": "2102.09111v2_figure_4(b).png",
|
| 101 |
+
"caption": "Figure 4: (a) The (gray) control signal provided by the planner and an example of the (black) control signal derived from the proposed approach. (b) The realized loss \u2113\u2113\\ellroman_\u2113 and the achieved objective of (P2).",
|
| 102 |
+
"url": "http://arxiv.org/html/2102.09111v2/x7.png"
|
| 103 |
+
},
|
| 104 |
+
"5(a)": {
|
| 105 |
+
"figure_path": "2102.09111v2_figure_5(a).png",
|
| 106 |
+
"caption": "Figure 5: The component \u03b11subscript\ud835\udefc1\\alpha_{1}italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12subscript\ud835\udefc2\\alpha_{2}italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the real-time parameter \ud835\udf36:=(\u03b11,\u03b12,\u03b13)assign\ud835\udf36subscript\ud835\udefc1subscript\ud835\udefc2subscript\ud835\udefc3\\boldsymbol{\\alpha}:=(\\alpha_{1},\\alpha_{2},\\alpha_{3})bold_italic_\u03b1 := ( italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) in the learning procedure.",
|
| 107 |
+
"url": "http://arxiv.org/html/2102.09111v2/x8.png"
|
| 108 |
+
},
|
| 109 |
+
"5(b)": {
|
| 110 |
+
"figure_path": "2102.09111v2_figure_5(b).png",
|
| 111 |
+
"caption": "Figure 5: The component \u03b11subscript\ud835\udefc1\\alpha_{1}italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12subscript\ud835\udefc2\\alpha_{2}italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the real-time parameter \ud835\udf36:=(\u03b11,\u03b12,\u03b13)assign\ud835\udf36subscript\ud835\udefc1subscript\ud835\udefc2subscript\ud835\udefc3\\boldsymbol{\\alpha}:=(\\alpha_{1},\\alpha_{2},\\alpha_{3})bold_italic_\u03b1 := ( italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) in the learning procedure.",
|
| 112 |
+
"url": "http://arxiv.org/html/2102.09111v2/x9.png"
|
| 113 |
+
},
|
| 114 |
+
"6(a)": {
|
| 115 |
+
"figure_path": "2102.09111v2_figure_6(a).png",
|
| 116 |
+
"caption": "Figure 6: An example of random returns\n\ud835\udc99=(x1,x2,x3)\ud835\udc99subscript\ud835\udc651subscript\ud835\udc652subscript\ud835\udc653\\boldsymbol{x}=(x_{1},x_{2},x_{3})bold_italic_x = ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ), where returns of the first two assets\nx1subscript\ud835\udc651x_{1}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, x2\u2208[0,+\u221e)subscript\ud835\udc6520x_{2}\\in[0,+\\infty)italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT \u2208 [ 0 , + \u221e ) are highly fluctuating and the third is\nvalue-preserving with return x3\u22611subscript\ud835\udc6531x_{3}\\equiv 1italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT \u2261 1. Without asset allocation, agent does not achieve the goal profit r0=1.3subscript\ud835\udc5f01.3r_{0}=1.3italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 1.3 and has a chance of losing assets.",
|
| 117 |
+
"url": "http://arxiv.org/html/2102.09111v2/x10.png"
|
| 118 |
+
},
|
| 119 |
+
"6(b)": {
|
| 120 |
+
"figure_path": "2102.09111v2_figure_6(b).png",
|
| 121 |
+
"caption": "Figure 6: An example of random returns\n\ud835\udc99=(x1,x2,x3)\ud835\udc99subscript\ud835\udc651subscript\ud835\udc652subscript\ud835\udc653\\boldsymbol{x}=(x_{1},x_{2},x_{3})bold_italic_x = ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ), where returns of the first two assets\nx1subscript\ud835\udc651x_{1}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, x2\u2208[0,+\u221e)subscript\ud835\udc6520x_{2}\\in[0,+\\infty)italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT \u2208 [ 0 , + \u221e ) are highly fluctuating and the third is\nvalue-preserving with return x3\u22611subscript\ud835\udc6531x_{3}\\equiv 1italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT \u2261 1. Without asset allocation, agent does not achieve the goal profit r0=1.3subscript\ud835\udc5f01.3r_{0}=1.3italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 1.3 and has a chance of losing assets.",
|
| 122 |
+
"url": "http://arxiv.org/html/2102.09111v2/x11.png"
|
| 123 |
+
},
|
| 124 |
+
"7(a)": {
|
| 125 |
+
"figure_path": "2102.09111v2_figure_7(a).png",
|
| 126 |
+
"caption": "Figure 7: The component \u03b11subscript\ud835\udefc1\\alpha_{1}italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12subscript\ud835\udefc2\\alpha_{2}italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the real-time parameter \ud835\udf36:=(\u03b11,\u03b12,\u03b13)assign\ud835\udf36subscript\ud835\udefc1subscript\ud835\udefc2subscript\ud835\udefc3\\boldsymbol{\\alpha}:=(\\alpha_{1},\\alpha_{2},\\alpha_{3})bold_italic_\u03b1 := ( italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) in learning, where the values \u03b11\u22c6subscriptsuperscript\ud835\udefc\u22c61\\alpha^{\\star}_{1}italic_\u03b1 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12\u22c6subscriptsuperscript\ud835\udefc\u22c62\\alpha^{\\star}_{2}italic_\u03b1 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT are the online-inaccessible ground truth. Notice the responsive behavior of the proposed learning algorithm.",
|
| 127 |
+
"url": "http://arxiv.org/html/2102.09111v2/x12.png"
|
| 128 |
+
},
|
| 129 |
+
"7(b)": {
|
| 130 |
+
"figure_path": "2102.09111v2_figure_7(b).png",
|
| 131 |
+
"caption": "Figure 7: The component \u03b11subscript\ud835\udefc1\\alpha_{1}italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12subscript\ud835\udefc2\\alpha_{2}italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of the real-time parameter \ud835\udf36:=(\u03b11,\u03b12,\u03b13)assign\ud835\udf36subscript\ud835\udefc1subscript\ud835\udefc2subscript\ud835\udefc3\\boldsymbol{\\alpha}:=(\\alpha_{1},\\alpha_{2},\\alpha_{3})bold_italic_\u03b1 := ( italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) in learning, where the values \u03b11\u22c6subscriptsuperscript\ud835\udefc\u22c61\\alpha^{\\star}_{1}italic_\u03b1 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03b12\u22c6subscriptsuperscript\ud835\udefc\u22c62\\alpha^{\\star}_{2}italic_\u03b1 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT are the online-inaccessible ground truth. Notice the responsive behavior of the proposed learning algorithm.",
|
| 132 |
+
"url": "http://arxiv.org/html/2102.09111v2/x13.png"
|
| 133 |
+
},
|
| 134 |
+
"8(a)": {
|
| 135 |
+
"figure_path": "2102.09111v2_figure_8(a).png",
|
| 136 |
+
"caption": "Figure 8: Real-time resource allocation \ud835\udc96\ud835\udc96\\boldsymbol{u}bold_italic_u and profit \u27e8\ud835\udc96,\ud835\udc99\u27e9\ud835\udc96\ud835\udc99\\langle\\boldsymbol{u},\\boldsymbol{x}\\rangle\u27e8 bold_italic_u , bold_italic_x \u27e9. Notice how the decision \ud835\udc96=(u1,u2,u3)\ud835\udc96subscript\ud835\udc621subscript\ud835\udc622subscript\ud835\udc623\\boldsymbol{u}=(u_{1},u_{2},u_{3})bold_italic_u = ( italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) respects constraints and how the allocation tries to balance the assets when the goal profit r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is met.",
|
| 137 |
+
"url": "http://arxiv.org/html/2102.09111v2/x14.png"
|
| 138 |
+
},
|
| 139 |
+
"8(b)": {
|
| 140 |
+
"figure_path": "2102.09111v2_figure_8(b).png",
|
| 141 |
+
"caption": "Figure 8: Real-time resource allocation \ud835\udc96\ud835\udc96\\boldsymbol{u}bold_italic_u and profit \u27e8\ud835\udc96,\ud835\udc99\u27e9\ud835\udc96\ud835\udc99\\langle\\boldsymbol{u},\\boldsymbol{x}\\rangle\u27e8 bold_italic_u , bold_italic_x \u27e9. Notice how the decision \ud835\udc96=(u1,u2,u3)\ud835\udc96subscript\ud835\udc621subscript\ud835\udc622subscript\ud835\udc623\\boldsymbol{u}=(u_{1},u_{2},u_{3})bold_italic_u = ( italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) respects constraints and how the allocation tries to balance the assets when the goal profit r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is met.",
|
| 142 |
+
"url": "http://arxiv.org/html/2102.09111v2/x15.png"
|
| 143 |
+
},
|
| 144 |
+
"9": {
|
| 145 |
+
"figure_path": "2102.09111v2_figure_9.png",
|
| 146 |
+
"caption": "Figure 9: The realized loss \u2113\u2113\\ellroman_\u2113 and the achieved objective of (P2).",
|
| 147 |
+
"url": "http://arxiv.org/html/2102.09111v2/x16.png"
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
"validation": true,
|
| 151 |
+
"references": [
|
| 152 |
+
{
|
| 153 |
+
"1": {
|
| 154 |
+
"title": "Prentice Hall, 1999.",
|
| 155 |
+
"author": "L. Ljung, System identification.",
|
| 156 |
+
"venue": null,
|
| 157 |
+
"url": null
|
| 158 |
+
}
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"2": {
|
| 162 |
+
"title": "Athena scientific, 2012.",
|
| 163 |
+
"author": "D. Bertsekas, Dynamic programming and optimal control: Volume I, vol. 4.",
|
| 164 |
+
"venue": null,
|
| 165 |
+
"url": null
|
| 166 |
+
}
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"3": {
|
| 170 |
+
"title": "Springer Science & Business Media, 2013.",
|
| 171 |
+
"author": "Y. Nesterov, Introductory lectures on convex optimization: A basic\ncourse, vol. 87.",
|
| 172 |
+
"venue": null,
|
| 173 |
+
"url": null
|
| 174 |
+
}
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"4": {
|
| 178 |
+
"title": "Cambridge University Press, 2006.",
|
| 179 |
+
"author": "S. M. LaValle, Planning algorithms.",
|
| 180 |
+
"venue": null,
|
| 181 |
+
"url": null
|
| 182 |
+
}
|
| 183 |
+
},
|
| 184 |
+
{
|
| 185 |
+
"5": {
|
| 186 |
+
"title": "Springer, 1998.",
|
| 187 |
+
"author": "R. T. Rockafellar and R. J.-B. Wets, Variational analysis.",
|
| 188 |
+
"venue": null,
|
| 189 |
+
"url": null
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"6": {
|
| 194 |
+
"title": "Springer, 2011.",
|
| 195 |
+
"author": "H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator\ntheory in Hilbert spaces, vol. 408.",
|
| 196 |
+
"venue": null,
|
| 197 |
+
"url": null
|
| 198 |
+
}
|
| 199 |
+
}
|
| 200 |
+
],
|
| 201 |
+
"url": "http://arxiv.org/html/2102.09111v2"
}
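The records added in this commit share a fixed per-paper schema (title, abstract, sections, figures, validation, references, url), as visible above. Below is a minimal, hypothetical Python sketch — not part of the dataset itself — for loading one of these JSON files and walking its figure entries; the local directory layout `20240721/` is taken from this commit, and the enclosing `"figures"` key name is assumed from the schema of similar records.

```python
import json
from pathlib import Path

# Hypothetical helper: load one per-paper record added in this commit.
def load_record(path: Path) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

record = load_record(Path("20240721") / "2102.09111v2.json")

# Top-level fields observed in this diff.
print(record["title"])
print(record["url"])                      # arXiv HTML source of the record
print("validated:", record["validation"])

# Each figure entry maps a panel id (e.g. "3(a)") to figure_path, caption, url.
# The "figures" key is an assumption; adjust if the actual record differs.
for panel, fig in record.get("figures", {}).items():
    print(panel, "->", fig["figure_path"], "|", fig["caption"][:60])
```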
20240721/2206.00851v4.json
ADDED
@@ -0,0 +1,476 @@
{
"title": "Finite Element Complexes in Two Dimensions",
"abstract": "In this study, two-dimensional finite element complexes with various levels of smoothness, including the de Rham complex, the curldiv complex, the elasticity complex, and the divdiv complex, are systematically constructed. Smooth scalar finite elements in two dimensions are developed based on a non-overlapping decomposition of the simplicial lattice and the Bernstein basis of the polynomial space, with the order of differentiability at vertices being greater than twice that at edges. Finite element de Rham complexes with different levels of smoothness are devised using smooth finite elements with smoothness parameters that satisfy certain relations. Finally, finite element elasticity complexes and finite element divdiv complexes are derived from finite element de Rham complexes by using the Bernstein-Gelfand-Gelfand (BGG) framework. This study is the first work to construct finite element complexes in a systematic way. Moreover, the novel tools developed in this work, such as the non-overlapping decomposition of the simplicial lattice and the discrete BGG construction, can be useful for further research in this field.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "1. Introduction",
"text": "Hilbert complexes play a fundamental role in the theoretical analysis and the design of stable numerical methods for partial differential equations [5 ###reference_b5###, 2 ###reference_b2###, 3 ###reference_b3###, 12 ###reference_b12###]. Recently in [7 ###reference_b7###] Arnold and Hu have developed a systematical approach to derive new complexes from well-understood differential complexes such as the de Rham complex involving Sobolev spaces. In this work we shall construct two-dimensional finite element complexes with various smoothness in a systematic way, including finite element de Rham complexes, finite element elasticity complexes, and finite element divdiv complexes etc.\nWe first construct smooth finite elements in two dimensions by a geometric approach, in which the simplicial lattice as the multi-index set with sum is employed. The smoothness (order of differentiability) at vertices and edges are specified by parameters and , respectively. Let be a triangulation of a domain and denote by . When and , we construct -continuous finite element spaces using a non-overlapping decomposition (partition) of the simplicial lattice and the Bernstein basis of polynomial space , where is the barycentric coordinate. Notice that the -continuity implies .\nWe then move to the finite element de Rham complexes with various smoothness which include discrete versions of the de Rham complex, for ,\nand one with mixed regularities, for ,\nwhere\nObviously (1 ###reference_###) is a special case of (2 ###reference_###) and also known as the Stokes complex.\nGiven three integer vectors , , satisfying and , for sufficiently large,\nwe devise finite element de Rham complexes of various smoothness\nwhich is a conforming discretization of the de Rham complex (2 ###reference_###). The finite element de Rham complex (3 ###reference_###) with and has been developed recently in [28 ###reference_b28###].\nWe refer to\n[33 ###reference_b33###, 24 ###reference_b24###] for some nonconforming Stokes complexes modified from conforming finite element de Rham complexes.\nBy rotation of the vector field and differential operators, we also obtain the finite element de Rham complex involving operators:\nin which the space can find applications in the discretization of Maxwell equation or the fourth-order curl problems.\nSeveral existing finite element de Rham complexes in two dimensions are special examples of (3 ###reference_###) or (4 ###reference_###), and summarized in Table 1 ###reference_###.\nBased on finite element de Rham complexes, we use the Bernstein-Gelfand-Gelfand (BGG) framework [7 ###reference_b7###] to construct more finite element complexes. For and satisfying and polynomial degree sufficiently large,\nwe design the BGG diagram\nwhich leads to the finite element elasticity complex\nFor , , and , we build the BGG diagram\nwhich leads to the finite element divdiv complex\nwhere . We refer to Section 5 ###reference_### for details. By a refinement of the BGG diagram, the finite element divdiv complexes presented in [29 ###reference_b29###] and [13 ###reference_b13###] with are also covered.\nSeveral existing finite element complexes in two dimensions can be viewed as special cases of (5 ###reference_###) or (6 ###reference_###), and are summarized in Table 2 ###reference_###. 
However, discrete elasticity complexes and rot\u2009rot complexes based on the Clough-Tocher split in [20 ###reference_b20###] are constructed using piece-wise polynomials as shape functions, which are not covered by (5 ###reference_###) and (6 ###reference_###).\nThe rest of this paper is organized as follows. The de Rham complex and BGG framework are reviewed in Section 2 ###reference_###.\nIn Section 3 ###reference_### the geometric decomposition of -conforming finite elements in two dimensions is studied. Finite element de Rham complexes with various smoothness are constructed in Section 4 ###reference_###.\nMore finite element complexes based on the BGG approach are developed in Section 5 ###reference_###."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "2. Preliminaries on Hilbert complexes",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "2.1. Notation",
"text": "For scalar function , denote\nwhere is the rotation clock-wisely, , and\nThen .\nFor vector function , denote\nFor tensor function , denote\nBy direct calculation, we have"
},
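The inline formulas of this notation section were lost in extraction. As a hedged reconstruction, the following LaTeX block records the standard two-dimensional conventions consistent with the surrounding text (rotated gradient as curl, scalar rot); the authors' exact sign choices cannot be confirmed from what survives.

```latex
% Standard two-dimensional operators (assumed sign conventions).
\[
  \nabla u = \begin{pmatrix} \partial_x u \\ \partial_y u \end{pmatrix}, \qquad
  \operatorname{curl} u = \begin{pmatrix} \partial_y u \\ -\partial_x u \end{pmatrix}, \qquad
  \operatorname{div}\boldsymbol v = \partial_x v_1 + \partial_y v_2, \qquad
  \operatorname{rot}\boldsymbol v = \partial_x v_2 - \partial_y v_1 .
\]
% The complex property referenced by ``Then .'' is then the pair of identities
\[
  \operatorname{rot}(\nabla u) = 0, \qquad \operatorname{div}(\operatorname{curl} u) = 0 .
\]
```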
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "2.2. Hilbert complex and exact sequence",
"text": "A Hilbert complex is a sequence of Hilbert spaces connected by a sequence of closed densely defined linear operators\nsatisfying the property .\nWe will abbreviate Hilbert complexes as complexes. The complex is called an exact sequence if and for . Therefore if (8 ###reference_###) is exact, is injective and is surjective. To save notation, we usually skip the trivial space in the beginning of the complex and use the embedding to indicate is injective. For more background on Hilbert complexes, we refer to [3 ###reference_b3###].\nWhen the Hilbert spaces are finite-dimensional, to verify the exactness, we rely on the following result on the dimension count.\nLet\nbe a complex, where are finite-dimensional linear spaces for . Assume , and\nIf either or , then complex (9 ###reference_###) is exact.\nGiven the identity (10 ###reference_###) and the relation , we prove the equivalence of and by dimension count. By ,\nThen it follows from (10 ###reference_###) that\nas required.\n\u220e"
},
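The statement and proof of Lemma 2.1 above lost their displayed formulas. A hedged reconstruction of the three-space case of the dimension-count argument (standard linear algebra; the paper's general version follows the same pattern) is:

```latex
% Three-term case: a complex  0 -> V_0 --d_0--> V_1 --d_1--> V_2 -> 0
% with d_1 d_0 = 0, d_0 injective, and the dimension identity
\[
  \dim V_0 - \dim V_1 + \dim V_2 = 0 .
\]
% Rank--nullity gives  dim ker d_1 = dim V_1 - dim im d_1 , so if d_1 is
% surjective then
\[
  \dim \ker d_1 = \dim V_1 - \dim V_2 = \dim V_0 = \dim \operatorname{im} d_0 ,
\]
% and the inclusion  im d_0 \subseteq ker d_1  is forced to be an equality:
% the complex is exact. The converse direction runs the same count backwards.
```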
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "2.3. The de Rham complex",
"text": "For a domain , the de Rham complex is\nWhen is simply connected, the de Rham complex (11 ###reference_###) is exact. By changing smoothness of the Sobolev spaces, we obtain the version (2 ###reference_###).\nRestricted to one triangle, a polynomial de Rham complex is, for integer ,\nwhere denotes the set of real valued polynomials defined on of degree less than or equal to , and for being vector space , tensor space , or symmetric tensor space .\nThe following identity\ncan be verified directly. The relation is due to the fact: if , then , and in two dimensions is a rotation of . Therefore complex (12 ###reference_###) is exact by Lemma 2.1 ###reference_theorem1###."
},
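The dimension identity behind the exactness of the polynomial complex (12) did not survive extraction; under the standard 2D convention (a sketch, assuming the complex runs from scalars via curl into vectors and then div), the count is:

```latex
% Dimension count for the polynomial de Rham complex in 2D (assumed form):
%   R -> P_{k+1}(T) --curl--> P_k(T;R^2) --div--> P_{k-1}(T) -> 0
\[
  \dim \mathbb P_r(T) = \tfrac{(r+1)(r+2)}{2}
  \;\Longrightarrow\;
  \dim \mathbb P_{k+1} - \dim \mathbb P_k(T;\mathbb R^2) + \dim \mathbb P_{k-1}
  = \tfrac{(k+2)(k+3)}{2} - (k+1)(k+2) + \tfrac{k(k+1)}{2} = 1 ,
\]
% which matches the one-dimensional kernel (the constants) in the first slot,
% so exactness follows from the dimension-count lemma.
```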
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "2.4. Bernstein-Gelfand-Gelfand construction",
"text": "Eastwood\u2019s work [21 ###reference_b21###] established the relationship between the elasticity complex and the de Rham complex via the Bernstein-Gelfand-Gelfand (BGG) construction [9 ###reference_b9###]. Arnold, Falk, and Winther [4 ###reference_b4###] expanded upon this connection by replicating the same construction in the discrete setting, which they used to reconstruct the finite element elasticity complex from the finite element de Rham complexes, as previously introduced in [8 ###reference_b8###]. While a systematic BGG construction has been developed more recently in [7 ###reference_b7###], our focus in this work is limited to two-dimensional complexes, so we will rely on specific examples rather than the abstract framework in [7 ###reference_b7###].\nWe stack two de Rham complexes into the BGG diagram\nwhich leads to the elasticity complex\nBy rotation, we also have the Hessian complex\nTo provide a more effective explanation of how (15 ###reference_###) is derived from (14 ###reference_###), we present a step-by-step breakdown of the process.\nThe anti-commutativity is exactly the first identity in (7 ###reference_###), by which we can change to as follows. For , by the exactness of the bottom complex in (14 ###reference_###), there exists satisfying . Then apply the top complex in (14 ###reference_###) to find satisfying . Set . Clearly . By the anti-commutativity, we have , i.e. . This explains the div stability .\nThe relation of these functions is summarized below:\nThe composition of two operators leads to . The null space consists of .\nThe BGG diagram\nwill lead to the divdiv complex\nand, again by rotation, the strain complex\nwhere and .\nThe anti-commutativities in (16 ###reference_###) are for and for .\nFor , by the exactness of the bottom complex in (16 ###reference_###), there exists satisfying . Then apply the top complex in (16 ###reference_###) to find s.t. . Set with .\nBy the anti-community, .\nHence , and , i.e. . This explains .\nThe chase of the diagram is summarized below:\nThe null space is given by .\nWe shall construct finite element counterparts of the BGG diagrams (14 ###reference_###)-(16 ###reference_###), and derive several finite element elasticity and divdiv complexes.\nThe first step is to design finite element de Rham complexes of different smoothness."
},
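As a concrete instance of the BGG mechanism described above, the classical 2D derivation (standard material, e.g. from the Arnold–Falk–Winther line of work, stated here under my own sign conventions rather than the authors' lost ones) turns a scalar potential into a divergence-free symmetric stress via the Airy operator:

```latex
% Airy stress operator J: scalar fields -> symmetric matrix fields,
\[
  J q =
  \begin{pmatrix}
    \partial_{yy} q & -\partial_{xy} q \\
    -\partial_{xy} q & \partial_{xx} q
  \end{pmatrix},
  \qquad
  \operatorname{div} (J q)
  = \begin{pmatrix}
      \partial_x \partial_{yy} q - \partial_y \partial_{xy} q \\
      -\partial_x \partial_{xy} q + \partial_y \partial_{xx} q
    \end{pmatrix}
  = \boldsymbol 0 ,
\]
% so  P_1 -> H^2 --J--> H(div; S) --div--> L^2(R^2) -> 0  is the 2D
% elasticity (Airy) complex obtained from two stacked de Rham complexes.
```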
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Smooth Finite Elements in Two Dimensions",
"text": "In this section, we shall construct -continuous finite elements on two-dimensional triangular grids, firstly constructed by Bramble and Zl\u00e1mal [10 ###reference_b10###], by a decomposition of the simplicial lattice.\nWe use a pair of integers for the smoothness at vertices and at edges, respectively. Value means no continuity. To be -continuous, is the minimum requirement for edges and for vertices. The polynomial degree . For a vector and a constant , means for all components , and . Define"
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "3.1. Simplicial lattice",
"text": "For two non-negative integers , we will use the multi-index notation , meaning with integer . The length of is . The sum (absolute value) of a multi-index is for and the factorial is . Denote\nA simplicial lattice of degree in two dimensions is a multi-index set of length with fixed sum , i.e.,\nAn element is called a node of the lattice.\nWe can embed the simplicial lattice into a triangle with vertices . Given , the barycentric coordinate of is given by\n, and the geometric embedding is\nThe left side of Fig. 1 ###reference_### illustrates the embedding of a two-dimensional simplicial lattice within a reference triangle with vertices , while the right side shows the embedding of the same lattice into an equilateral triangle.\n###figure_1### ###figure_2### A simplicial lattice is, by definition, an algebraic set. Through the geometric embedding , we can apply operators for the geometric simplex . For example, for a subset , we use to denote the portion of lattice nodes whose geometric embedding is inside ."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "3.2. Bernstein basis",
"text": "It holds that\nLet be a triangle with vertices and be the barycentric coordinate.\nThe Bernstein basis of is\nFor a subset , we define\nBy establishing a one-to-one mapping between the lattice node and the corresponding Bernstein polynomial , we can analyze polynomial properties through the simplicial lattice. In fact, all lattice nodes serve as interpolation nodes for the -th order Lagrange element."
},
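Since the displayed lattice and Bernstein formulas above are missing, here is a small self-contained Python sketch of the objects Sections 3.1–3.2 define: the two-dimensional simplicial lattice of degree k (multi-indices of length 3 with sum k) and the associated Bernstein polynomials. The function names are my own; only the underlying definitions come from the text.

```python
from math import factorial

# Simplicial lattice T^2_k: multi-indices a = (a0, a1, a2) with |a| = k.
def lattice(k: int):
    return [(a0, a1, k - a0 - a1)
            for a0 in range(k + 1) for a1 in range(k + 1 - a0)]

# Bernstein polynomial B_a = k!/(a0! a1! a2!) * l0^a0 * l1^a1 * l2^a2,
# evaluated at barycentric coordinates lam = (l0, l1, l2).
def bernstein(a, lam):
    k = sum(a)
    coef = factorial(k) / (factorial(a[0]) * factorial(a[1]) * factorial(a[2]))
    return coef * lam[0] ** a[0] * lam[1] ** a[1] * lam[2] ** a[2]

k = 5
nodes = lattice(k)
assert len(nodes) == (k + 1) * (k + 2) // 2          # = dim P_k in 2D

lam = (0.2, 0.3, 0.5)                                 # any barycentric point
# Partition of unity: sum_a B_a = (l0 + l1 + l2)^k = 1.
assert abs(sum(bernstein(a, lam) for a in nodes) - 1.0) < 1e-12
```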
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "3.3. Sub-simplicial lattices and distance",
"text": "We adopt the notation of [6 ###reference_b6###] and define as the set of all sub-simplices of , and as the set of all sub-simplices of dimension , where . A sub-simplex is determined by choosing vertices from the vertices of . We will overload the notation for both the geometric simplex and the algebraic set of indices. As an algebraic set, is a subset of indices, and also\nis the -dimensional simplex spanned by the vertices . We also use notation for the edge formed by vertices and for .\nFor and , we let denote the sub-simplex of opposite to . When treating as a subset of , so that , i.e., is the complementary set of . Geometrically,\nrepresents the -dimensional simplex spanned by vertices not contained in . When is a vertex , we simply write as . Note that can be identified as the zero level set of the barycentric coordinate associated with the index set , i.e., .\nGiven a sub-simplex , through the geometric embedding , we define the prolongation/extension operator as follows:\nFor example, for\n, when , the extension , and when , the extension . The geometric embedding justifies the notation .\nWith a slight abuse of notation, for a node , we still use the same notation to denote . Then we have the following direct decomposition\nBased on (17 ###reference_###), we can write a Bernstein polynomial as\nwhere is the bubble function on and also denoted by .\nThe bubble polynomial of is\nGeometrically as the bubble polynomial space vanished on the boundary, it is generated by the interior lattice nodes only. In Fig. 1 ###reference_###, consists of the nodes inside the red triangle, and for is in the blue trapezoid region."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "3.4. Derivative and distance",
"text": "Given , we define the distance of a node to as\nWe define the lattice tube of with radius as\nwhich contains lattice nodes at most distance away from . Define\nThen by definition,\nWe have the following characterization of lattice nodes in .\nFor lattice node ,\nBy definition of and the fact .\n\u220e\nFor each vertex and an integer , the tube\nis isomorphic to a simplicial lattice of degree . In Fig. 1 ###reference_###, consists of lattice nodes in the green triangle which itself can be treated as a smaller simplicial lattice . For an edge , is a trapezoid of height with base .\nRecall that in [6 ###reference_b6###] a smooth function is said to vanish to order on if for all satisfying . The following result shows that the vanishing order of a Bernstein polynomial on a sub-simplex is the distance .\nLet be a sub-simplex of . For , and , i.e., , then\nFor , we write . When , the derivative will contain a factor with and . Therefore as for .\n\u220e"
},
{
"section_id": "3.5",
"parent_section_id": "3",
"section_name": "3.5. Derivatives at vertices",
"text": "Consider a function . The set of derivatives of order up to can be written as\nNotice that the multi-index is not in . We can add a component with value to form a simplicial lattice of degree , which can be used to determine the derivatives at that vertex.\nLet . The polynomial space\nis uniquely determined by the DoFs\nWithout loss of generality, consider . Define map which induces a one-to-one map from to . So the dimension of matches the number of DoFs (18 ###reference_###). It suffices to show that for if DoFs (18 ###reference_###) vanish, then .\nRecall the multivariate calculus result\nwhere is the Kronecker delta function.\nWhen the triangle is the reference triangle, is the origin and . So we conclude that the homogenous polynomial space\n is determined by DoFs Running , we then finish the proof when the triangle is the reference triangle.\nFor a general triangle, instead of changing to the reference triangle, we shall use the barycentric coordinate.\nClearly forms a basis of .\nChoose another basis of , being dual to , i.e., for . Indeed is the edge vector as is orthogonal to for . We can express the derivatives in this non-orthogonal basis and denote by with .\nBy the duality , , we have the generalization of (19 ###reference_###)\nBy the chain rule, it is easy to show that the vanishing is equivalent to the vanishing . So we will work with .\nA Bernstein basis of is given by .\nAssume with and for all satisfying . We shall prove by induction.\nFor , as , we conclude . Assume for all satisfying , i.e., . By Lemma 3.2 ###reference_theorem2###, the derivative vanishes at for all satisfying . Hence, for , using (20 ###reference_###),\nwhich implies for all , .\nRunning , we conclude .\n\u220e"
},
{
"section_id": "3.6",
"parent_section_id": "3",
"section_name": "3.6. Normal derivatives on edges",
"text": "Given an edge , we identify lattice nodes to determine the normal derivative up to order\nBy Lemma 3.2 ###reference_theorem2###, if the lattice node is away from the edge, then the corresponding Bernstein polynomial will have vanishing normal derivatives up to order .\nWe have used lattice nodes to determine the derivatives at vertices.\nWe will use for the normal derivative.\nLet and . Let be an edge of a triangle .\nThe polynomial function space is determined by DoFs\nWithout loss of generality, take . By definition , where recall that\nconsists of lattice nodes parallel to and with distance . Define the map which is one-to-one between and .\nNow we use the requirement to figure out the bound of the components. Using Lemma 3.1 ###reference_theorem1###, we derive from that . Together with , we get the lower bound . Similarly .\nTherefore\nDefine the one-to-one mapping\nWith the help of this one-to-one mapping, we shall prove the polynomial function space is determined by DoFs\nTake a with coefficients . By the chain rule and the fact , in the non-zero terms of , the derivative in will all apply to , so\nNoting that is a constant and the bubble polynomial is always positive in the interior of , the vanishing DoF (21 ###reference_###) means for all .\nIt follows from Lemma 3.2 ###reference_theorem2### that for and . That is the matrix\nis lower block triangular as follows.\nSince we have proved each block matrix is invertible, then the whole lower block triangular matrix is invertible which is equivalent to the unisolvence.\n\u220e"
},
{
"section_id": "3.7",
"parent_section_id": "3",
"section_name": "3.7. Geometric decompositions of the simplicial lattice",
"text": "Inside a triangle, a vertex will be shared by two edges and to have enough lattice nodes for each edge, is required; see Fig. 2 ###reference_###(b).\nLet , , and nonnegative integer . Let be a triangle. Then it holds that\nwhere\nwith cardinality\nThis leads to the decomposition of the polynomial space\nAs , the sets are disjoint.\nWe then show that the sets are disjoints.\nA node implies and implies . Therefore , i.e., . Repeat the argument for each pair of edges to conclude are disjoint.\nFor a given edge , the vertex is opposite to and . As , we conclude and consequently .\nThen decompositions (22 ###reference_###) and (23 ###reference_###) follow.\n\u220e\n###figure_3### ###figure_4### Denote by\nand call it the polynomial bubble space, which will play an important role in our construction of finite element de Rham complexes. Polynomials in will have vanishing derivatives up to order , and more precisely\nLet and . Then\n, when ;\n, when .\nThe first statement has been proved in [14 ###reference_b14###]. We can prove the second statement by verifying the following inequality directly\n\u220e"
},
{
"section_id": "3.8",
"parent_section_id": "3",
"section_name": "3.8. Smooth finite elements in two dimensions",
"text": "We are in the position to present -finite elements on a triangulation.\nLet , , and nonnegative integer . Let be a triangle. The shape function space is determined by the DoFs\nBy the decomposition (23 ###reference_###) of , the dimension of matches the number of DoFs. Let satisfy all the DoFs (24a ###reference_.1###)-(24c ###reference_.3###) vanish.\nThanks to Lemma 3.3 ###reference_theorem3###, Lemma 3.4 ###reference_theorem4### and Lemma 3.5 ###reference_theorem5###, it follows from the vanishing DoFs (24a ###reference_.1###) and (24b ###reference_.2###) that . Then holds from the vanishing DoF (24c ###reference_.3###).\n\u220e\nWhen and , this is known as Argyris element [1 ###reference_b1###, 34 ###reference_b34###].\nWhen , and , -continuous finite elements have been constructed in [10 ###reference_b10###, 35 ###reference_b35###], see also [31 ###reference_b31###, Section 8.1] and the references therein, whose DoFs are different from (24b ###reference_.2###)-(24c ###reference_.3###). Here DoFs (24a ###reference_.1###)-(24c ###reference_.3###) are firstly constructed in [28 ###reference_b28###]. The DoFs in [31 ###reference_b31###], also called nodal minimal determining sets in the spline literature, are the point evaluation of functions and their derivatives at some nodes. While DoFs (24b ###reference_.2###)-(24c ###reference_.3###) are in the integral form, which is beneficial to the unisolvence of the DoFs and the construction of the finite element de Rham complexes. Smooth finite elements with the DoFs in the integral form on simplexes in arbitrary dimension were firstly constructed in [28 ###reference_b28###].\nWith mesh , define the global -continuous finite element space\nSince , the single-valued DoFs (24a ###reference_.1###) and (24b ###reference_.2###) will imply .\nThe finite element space admits the following geometric decomposition\nThe dimension of is\nIn particular, denote by the minimum degree case: with , which is firstly constructed in [10 ###reference_b10###],\nand the dimension is\nWhen , there is no interior moments as is not large enough."
},
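The concrete numbers in the dimension formulas above were lost in extraction. As a worked instance (standard, and consistent with the Argyris remark in the text, though the general formula cannot be reconstructed from what survives), the lowest-order C^1 case reads:

```latex
% Lowest-order C^1 case: degree k = 5, vertex smoothness 2, edge
% smoothness 1 -- the Argyris element. Local DoFs on one triangle:
\[
  \underbrace{3 \times 6}_{\text{value, gradient, Hessian at 3 vertices}}
  \; + \;
  \underbrace{3 \times 1}_{\text{one normal derivative per edge}}
  \; = \; 21 \; = \; \dim \mathbb P_5(T),
\]
% and globally  dim = 6|V| + |E|  over a triangulation with vertex set V
% and edge set E, matching the geometric decomposition of the space.
```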
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Finite Element de Rham Complexes",
"text": "In this section we shall construct finite element spaces with appropriate DoFs which make the global finite element complexes (25 ###reference_###) exact\nSpace is denoted as to emphasize it is considered as a subspace of although it might be continuous when .\nUnlike the classical FEEC [5 ###reference_b5###], additional smoothness on lower sub-simplexes (vertices and edges for a two-dimensional triangulation) will be imposed, which are described by three vectors and with the subscript referring to the -form for . Each consists of two parameters for the smoothness at vertices and edges, respectively, and for .\nThe finite element de Rham complexes constructed in [28 ###reference_b28###] are exactly complex (25 ###reference_###) with and .\nWe shall consider the general case and ."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "4.1. Continuous vector finite element space and decay smoothness",
"text": "We first consider a simple case in which the smoothness parameters are decreased by :\nAs , , and is at least in . In this case, (25 ###reference_###) is also called a discrete Stokes complex. The vector element is -conforming and can be used as the velocity space in discretization of Stokes equation.\nLet , and let .\nWrite\nThe coefficients are presented in the following table\nThe dimension related to and , can be verified directly. For the column of , by removing the same , we compute\nWith these two identities, the third column is an easy consequence of (13 ###reference_###).\n\u220e\nAs a corollary, we obtain the following polynomial bubble complex.\nLet , and let . The polynomial bubble complex\nis exact, where is the -projection onto the constant space.\nClearly we have . For , apply complex (12 ###reference_###) to get with . As , we have , which means is constant.\nHence by subtracting a constant, we can choose to satisfy , as a result . This proves .\nThanks to the last column of the table in Lemma 4.1 ###reference_theorem1###,\nwhich together with Lemma 2.1 ###reference_theorem1### concludes the exactness of bubble complex (26 ###reference_###).\n\u220e\nLet , and let . The finite element complex\nis exact.\nBy construction (27 ###reference_###) is a complex, and\nBy Lemma 4.1 ###reference_theorem1### and the Euler\u2019s formula,\nTherefore the exactness of complex (27 ###reference_###) follows from Lemma 2.1 ###reference_theorem1###.\n\u220e\nThe two-dimensional finite element de Rham complexes constructed by Falk and Neilan [22 ###reference_b22###] correspond to the case , and :\nTo fit the space, we skip in the notation and write as column vectors.\nThe vector element is and thus the previous finite element is for which the lowest degree is the Argyris element with shape function space . The last one is discontinuous but continuous at vertices. If we want to use a continuous element for the pressure, i.e., , then and , which may find an application in the strain gradient elasticity problem [17 ###reference_b17###, 32 ###reference_b32###]. Later on, we will relax the relation and construct relative low degree Stokes pair with continuous pressure elements.\nNotice that the pair and are not allowed since cannot define a element. Indeed the div stability for Stokes pair is more subtle and not covered in our framework."
},
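The dimension counts of Theorem 4.3 and later results invoke Euler's formula, whose displayed form did not survive extraction. The version used for a triangulation of a simply connected planar domain (standard, restated here as a hedged gap-fill) is:

```latex
% Euler's formula for a triangulation T_h of a simply connected polygonal
% domain, with #V vertices, #E edges, and #T triangles:
\[
  \#V - \#E + \#T = 1 ,
\]
% so the alternating sum of the global space dimensions collapses to the
% Euler characteristic, matching the single constant in the first slot of
% the discrete de Rham complex.
```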
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "4.2. Normal continuous finite elements for vector functions",
"text": "We continue to consider the case and the smoothness on edges are fixed by:\nThe constraints on the vertex smoothness are\nThe finite element spaces for scalar functions and remain unchanged.\nWe need to define the finite element space for with parameters . The vector function is not continuous on edges. But to be -conforming, the normal component should be continuous. A refined notation for the smoothness parameter would be \nwhere the tangential component is (discontinuous) and the normal component is (continuous);\nnotation is adopted in [28 ###reference_b28###]. To simplify notation, we still use the simplified form and understand that for space means the normal continuity.\nTake with as the space of shape functions. For , the DoFs are\nAlthough , we still use not as the interior moments so that we can have DoFs (28b ###reference_.2###)-(28c ###reference_.3###) on edges. Namely locally we use the vector Hermite-type element with parameter . When defining the global -conforming finite element space, the tangential component (28c ###reference_.3###) is considered as local, i.e., double valued on interior edges.\nWhen , there is no DoFs on vertices and DoFs are\nThe normal component is the full degree polynomial but the tangential component is corresponding to the edge bubble . The interior moments become . Locally we use vector Lagrange finite element. At each edge, we use (tangential-normal) coordinate and at a vertex we use the coordinate formed by the two normal direction of two edges containing that vertex and merge into (29a ###reference_.1###). Then the uni-solvence in one triangle follows from that of vector Lagrange elements.\nDefine the global -conforming finite element space\nfor , and\nwhere the tangential component (28c ###reference_.3###) and (29b ###reference_.2###) are considered as local and may be double-valued for each interior edge.\nAssume parameters satisfy\nLet . The finite element complex\nis exact.\nApparently (30 ###reference_###) is a complex, and\nThen we count the dimension. The dimension count in Lemma 4.1 ###reference_theorem1### is still valid except . As and , the identity still holds.\nThe rest of the proof is the same as that of Theorem 4.3 ###reference_theorem3###.\n\u220e\nFor , , we recover the standard finite element de Rham complex\nWe can choose and to get\nwhich has been constructed in [19 ###reference_b19###]."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "4.3. General cases with inequality constraint",
"text": "We consider more general cases with an inequality constraint on the smoothness parameters and :\nTo define the finite element spaces, we further require\nwhere the Iverson bracket if the statement inside the bracket is true and otherwise.\nThe finite element spaces for scalar functions and remain unchanged.\nNext we define a new finite element space for . Take as the space of shape functions.\nThe degrees of freedom are\nWe explain the change of DoFs. We add DoFs (32b ###reference_.2###), (32e ###reference_.5###), and (32f ###reference_.6###) on to determine . For interior moments, we use the bubble complex (26 ###reference_###) to split it into range of and its orthogonal complement. On edges, DoFs on introduce some linear dependence of normal derivatives of the tangential and normal components and thus need to remove some redundancy.\nMore precisely, for with ,\nThe second term will be determined by (32a ###reference_.1###) and (32d ###reference_.4###). The normal derivative of the normal component is built into (32e ###reference_.5###) but not which should be explicitly included in (32c ###reference_.3###). A linear combination of (32c ###reference_.3###), (32d ###reference_.4###), and (32e ###reference_.5###) will determine\nConsequently it returns to the smooth finite elements defined before.\nAssume satisfy (31 ###reference_###), and .\nThe DoFs (32a ###reference_.1###)-(32g ###reference_.7###) are uni-solvent for .\nThe condition ensures which can be verified by showing cf. Lemma 3.5 ###reference_theorem5###.\nThe number of DoFs (32b ###reference_.2###) and (32e ###reference_.5###)-(32f ###reference_.6###) on is\n\nwhich is constant with respect to . Hence the number of DoFs (32a ###reference_.1###)-(32g ###reference_.7###) is also constant with respect to . As a result the number of DoFs (32a ###reference_.1###)-(32g ###reference_.7###) equals to , which has been proved for case .\nTake and assume all the DoFs (32a ###reference_.1###)-(32g ###reference_.7###) vanish.\nThe vanishing DoF (32c ###reference_.3###) implies .\nBy the vanishing DoFs (32a ###reference_.1###)-(32b ###reference_.2###) and (32e ###reference_.5###)-(32f ###reference_.6###), we get .\nAnd it follows from the vanishing DoFs (32a ###reference_.1###) and (32c ###reference_.3###)-(32d ###reference_.4###) that . Therefore holds from the vanishing DoF (32g ###reference_.7###).\n\u220e\nDefine global -continuous finite element space\nWhen , we have\nNamely additional smoothness on is imposed. We use Figure 3 ###reference_### to illustrate the exactness of the finite element de Rham complex (33 ###reference_###), which is obtained by adding more constraints on .\n###figure_5### Let satisfying . Assume . The finite element complex\nis exact.\nIt is straightforward to verify that (33 ###reference_###) is a complex by showing and .\nIt is also obvious that\nWe have proved the exactness for . When counting the dimension, only need to check the difference.\nThe added vertex DoFs for and are equal, i.e.,\nSame argument can be applied to edge DoFs. Therefore the alternating column sums remain the same and the proof of Theorem 4.3 ###reference_theorem3### can be still applied.\n\u220e\nWe present two examples of the de Rham complex ending with the Lagrange element.\nConsider the case , and , which is also constructed as Stokes pair in Falk and Neilan [22 ###reference_b22###]. Now we can choose continuous pressure space without increasing the polynomial degree. 
The complex is\nThe velocity space is a reduced Hermite space with continuity of at vertices and edges. With such modification, this Stokes pair with continuous pressure element is point-wise divergence free comparing to the Taylor-Hood element.\nConsider the case and , and .\nThe complex is\nwhich is the rotation of the finite element de Rham complex in [30 ###reference_b30###, Section 5.2.1].\nThe space can be used to discretize fourth-order div or curl equations [23 ###reference_b23###, 30 ###reference_b30###].\nWe can also apply the pair and to mixed finite element methods for Poisson equation , in which the discrete is continuous.\nFor simplicity, hereafter we will omit the triangulation in the notation of global finite element spaces. For example, will be abbreviated as ."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "5. Beyond the de Rham Complex",
"text": "In this section, we shall construct more finite element complexes from copies of finite element de Rham complexes."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "5.1. Finite element curl\u2009div complexes",
"text": "Based on the finite element de Rham complex (33 ###reference_###), we can obtain the finite element discretization of the curl\u2009div complex [7 ###reference_b7###]\nwhere , and the operator is defined by for and .\nLet satisfying . Assume . The finite element complex\n\nis exact.\nBy complex (33 ###reference_###), clearly (34 ###reference_###) is a complex, and . We will focus on the exactness of complex (34 ###reference_###).\nThe condition implies , and implies .\nWe get from the exactness of complex (33 ###reference_###) that\nHence follows from when .\nFor , there exists constant such that . Then we have , i.e., .\nTherefore holds from the exactness of complex (33 ###reference_###).\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "5.2. Finite element elasticity and Hessian complexes",
"text": "We first present two examples. Denote by\nWe take two vector functions by row to form a matrix and each row belongs to .\nTo fit the space, we skip the constant space in the beginning and at the end in the sequence, and in the spaces. The first example has been presented in [19 ###reference_b19###] for :\nThis will lead to the elasticity complex\nWe then present another example with rotated differential operators and use to increase the smoothness of the last space. The finite element BGG diagram for\nwill lead to the finite element Hessian complex constructed in [15 ###reference_b15###]\nNote that complex (36 ###reference_###) is not a rotation of complex (35 ###reference_###) as complex (36 ###reference_###) ends at a continuous Lagrange element.\nWe now present the general case.\nLet and satisfying and let polynomial degree .\nThen we have the BGG diagram\nwhich leads to the finite element elasticity complex\nwhere .\nFirst we show that . For , by the exactness of the complex in the top line of (37 ###reference_###), there exists such that . Then we get from the anti-commutative property (7 ###reference_###) that .\nAgain condition ensures .\nWe can apply the BGG framework in [7 ###reference_b7###] to get the complex (38 ###reference_###) and its exactness. In two dimensions, we will provide a simple proof without invoking the machinery.\nClearly (38 ###reference_###) is a complex. We prove the exactness of complex (38 ###reference_###) in two steps.\nStep 1. Prove . For , by the bottom complex in (37 ###reference_###), there exists such that . Then it follows from (7 ###reference_###) that\nBy the exactness of the top de Rham complex, there exists such that . Thus .\nStep 2. Prove .\nAs is surjective, given a , we can find such that By the diagram (37 ###reference_###), we can find such that . Set . Then and , i.e. is symmetric. Therefore we find and .\n\u220e\nIn (38 ###reference_###), is defined as .\nNext we give the finite element description of space and thus can obtain locally supported basis. On each triangle, we take as the shape function space. By symmetrizing DoFs (32a ###reference_.1###)-(32g ###reference_.7###), we propose the following local DoFs for space\nThe DoFs (39a ###reference_.1###)-(39g ###reference_.7###) are uni-solvent for .\nThe number of DoFs (39b ###reference_.2###) and (39e ###reference_.5###)-(39f ###reference_.6###) is\n\nThen the number of DoFs (39a ###reference_.1###)-(39g ###reference_.7###) is\nby (13 ###reference_###),\nwhich equals to .\nTake , and assume all the DoFs (39a ###reference_.1###)-(39g ###reference_.7###) vanish. It follows from the integration by parts and (39c ###reference_.3###) that\nThanks to DoFs (39a ###reference_.1###)-(39b ###reference_.2###) and (39e ###reference_.5###)-(39f ###reference_.6###), we get .\nOn each edge ,\nThen we acquire from DoFs (39a ###reference_.1###)-(39e ###reference_.5###) that . Finally we get from the vanishing DoF (39g ###reference_.7###).\n\u220e\nNext we define the global finite element space and show it is .\nIt holds\nApparently . By comparing DoFs and direct computation, we can show and the desired result follows.\n\u220e"
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "5.3. Finite element divdiv complexes",
"text": "We first consider the case: the tensor finite element space is continuous.\nLet and . We introduce the space with constraint on .\nThe shape function space is with and DoFs are\nAssume .\nThe DoFs (40a ###reference_.1###)-(40j ###reference_.10###) are uni-solvent for .\nThe number of DoFs (40b ###reference_.2###), (40g ###reference_.7###) and (40i ###reference_.9###) is\nAnd\nthe number of DoFs (32a ###reference_.1###)-(32g ###reference_.7###) for minus the number of DoFs (40a ###reference_.1###), (40c ###reference_.3###)-(40f ###reference_.6###), (40h ###reference_.8###) and (40j ###reference_.10###) is\nby (13 ###reference_###), which equals to . Hence the number of DoFs (40a ###reference_.1###)-(40j ###reference_.10###) equals to .\nTake , and assume all the DoFs (40a ###reference_.1###)-(40j ###reference_.10###) vanish.\nLet .\nApplying the integration by parts, it follows from (40c ###reference_.3###) and (40e ###reference_.5###) that\nApplying Lemma 4.7 ###reference_theorem7###, i.e. the unisolvence of space , it follows from DoFs (40a ###reference_.1###)-(40b ###reference_.2###) and (40e ###reference_.5###)-(40i ###reference_.9###) that . Then with some . Thanks to Theorem 3.7 ###reference_theorem7###, we derive and from DoFs (40a ###reference_.1###), (40c ###reference_.3###)-(40d ###reference_.4###) and (40j ###reference_.10###).\n\u220e\nDefine global -conforming finite element space\nThe super-script in indicates the smoothness is more than -conforming. Indeed we have .\nLet and . Assume . The BGG diagram\nwhich leads to the finite element divdiv complex\nwhere .\nBy the anti-commutative property , we can conclude complex\n(42 ###reference_###) from the BGG framework in [7 ###reference_b7###].\nIn the following we give a self-contained proof without invoking the BGG framework.\nClearly (42 ###reference_###) is a complex. As , we have\nBy two complexes in diagram (41 ###reference_###), we have\nCombining the last two equations yields\nHence\nTherefore the exactness of complex (42 ###reference_###) follows from Lemma 2.1 ###reference_theorem1###.\n\u220e\nNext we give the finite element characterization of . We choose as the shape function space.\nBy symmetrizing DoFs (40a ###reference_.1###)-(40j ###reference_.10###), we propose the following local DoFs:\nUsing a similar proof as that in Lemma 5.5 ###reference_theorem5###, we can prove the unisolvence.\nLet and . Assume .\nThe DoFs (43a ###reference_.1###)-(43j ###reference_.10###) are uni-solvent for .\nLet and . Assume . It holds that\nand .\nApparently . It suffices to prove which can be verified by a direct computation and the Euler\u2019s formula.\n\u220e\nWe choose to get the divdiv complex constructed in [15 ###reference_b15###] for\n\nThe finite element divdiv complexes presented in [29 ###reference_b29###, 13 ###reference_b13###] with are not included in complex (42 ###reference_###) due to the mis-match of the smoothness. In (41 ###reference_###), is discontinuous for The operator is still injective. 
But it is unclear if consists of symmetric matrix functions with desirable normal continuity.\nThe continuous version of the divdiv complex is [29 ###reference_b29###]\nNow we consider the finite element discretization of the divdiv complex (44 ###reference_###) by using the BGG framework.\nFor the case with , and , we refine the BGG diagram (41 ###reference_###) to\nHere the space is the subspace of defined by\nThe diagram (45 ###reference_###) will lead to the finite element divdiv complex (46 ###reference_###) with the finite element space .\nNext we prove the exactness of the derived finite element divdiv complex directly rather than using the BGG framework. The space is still defined by DoFs (43 ###reference_###) and recall that means empty and thus (43d ###reference_.4###) and (43f ###reference_.6###) are not present.\nLet and .\nAssume .\nThe following finite element divdiv complex is exact\n\nIt is easy to check that (46 ###reference_###) is a complex. We will prove the exactness of complex (46 ###reference_###).\nBy divdiv complex (42 ###reference_###), we have . Noting that\nhence . On the other side,\nThanks to the DoFs (32a ###reference_.1###)-(32g ###reference_.7###) for and the Euler\u2019s formula,\nwhich together with Lemma 2.1 ###reference_theorem1### indicates the exactness of complex (46 ###reference_###).\n\u220e\nWhen and , we recover the finite element divdiv complex constructed in [29 ###reference_b29###] for\nAnother modification is to relax the smoothness to only. We will modify (43 ###reference_###) by replacing (43c ###reference_.3###)-(43f ###reference_.6###) with\nwhere is one of the trace operators of ; see [13 ###reference_b13###].\nDefine\nAs is local, the vector is not continuous across edges. But and are continuous. So the space but not in . It cannot be derived from the BGG diagram (45 ###reference_###) as the induced space should be in .\nThe following finite element divdiv complex is exact\n\nFor , we have [13 ###reference_b13###, Lemma 2.2]\nThen it is obvious that (48 ###reference_###) is a complex. We will show the exactness of complex (48 ###reference_###).\nNoting that , by the exactness of complex (46 ###reference_###), we have\nOn the other side,\nThanks to the DoFs (24a ###reference_.1###)-(24c ###reference_.3###) for and the Euler\u2019s formula,\nwhich together with Lemma 2.1 ###reference_theorem1### ends the proof.\n\u220e\nWhen and , we recover the finite element divdiv complex constructed in [13 ###reference_b13###] for\nThe first finite element divdiv complex in [11 ###reference_b11###]\nis based on the distributional divdiv complex\nand not covered in this paper."
},
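Since the displayed operators of this subsection did not survive extraction, the following block records the standard two-dimensional divdiv operator and the complex property it satisfies with sym curl (my reconstruction of textbook conventions, not necessarily the authors' exact normalization):

```latex
% For a symmetric matrix field tau = (tau_ij), the 2D divdiv operator is
\[
  \operatorname{div}\operatorname{div} \boldsymbol\tau
  = \partial_{xx}\tau_{11} + 2\,\partial_{xy}\tau_{12} + \partial_{yy}\tau_{22},
\]
% and for a vector field v = (v_1, v_2)^T, applying curl row-wise,
\[
  \operatorname{sym}\operatorname{curl} \boldsymbol v
  = \operatorname{sym}
    \begin{pmatrix} \partial_y v_1 & -\partial_x v_1 \\
                    \partial_y v_2 & -\partial_x v_2 \end{pmatrix},
  \qquad
  \operatorname{div}\operatorname{div}(\operatorname{sym}\operatorname{curl}\boldsymbol v) = 0 ,
\]
% which is the composition property underlying the divdiv complexes above.
```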
{
"section_id": "6",
"parent_section_id": null,
"section_name": "6. Conclusion and future work",
"text": "In recent years, there have been several advancements in the construction of finite element Hessian complexes, elasticity complexes, and divdiv complexes, as documented in [13 ###reference_b13###, 16 ###reference_b16###, 15 ###reference_b15###, 18 ###reference_b18###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 29 ###reference_b29###]. Our primary objective is to extend the BGG construction to finite element complexes, unifying these findings and producing more systematic results. In this work, we have achieved this goal in two dimensions. However, the extension to three dimensions presents several challenges.\nOne of the challenges is the existence of finite element de Rham complexes with varying degrees of smoothness in three dimensions, which we will discuss in a forthcoming work [14 ###reference_b14###]. Additionally, there is a mismatch in the continuity of Sobolev spaces , and . The main obstacle to generalizing BGG to the discrete case is the mismatch of tangential or normal continuity of or conforming finite element spaces, respectively. In [7 ###reference_b7###], these spaces are replaced by Sobolev spaces with matching indices . We will investigate further solutions in our future work. Moreover, edge-type finite elements in three dimensions are the most complex elements and require additional investigation.\nTo facilitate a clear and effective discussion, we will separate the two-dimensional and three-dimensional cases. Although the two-dimensional case is more straightforward and provides some insight into the three-dimensional case, treating them simultaneously in a simple and effective way is not possible due to the differences between the two cases. For instance, the proof of the div stability can be established by dimension count in 2D, but is much more technical in 3D."
|
| 148 |
+
}
|
| 149 |
+
],
|
| 150 |
+
"appendix": [],
|
| 151 |
+
"tables": {
|
| 152 |
+
"1": {
|
| 153 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Examples of finite element de Rham complexes (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S1.E3\" title=\"In 1. Introduction \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.24\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.1.1.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.2.2.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.3.3.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.4.4.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T1.4.4.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">Results</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.5.5.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.6.6.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.7.7.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.8.8.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S1.T1.8.8.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">standard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.9.9.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.10.10.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.11.11.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.12.12.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.12.12.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib30\" title=\"\">30</a>, Section 5.2.1]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.13.13.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.14.14.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.15.15.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.16.16.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.16.16.5\" 
style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib22\" title=\"\">22</a>, Section 3]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.20.20\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.17.17.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.18.18.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.19.19.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.20.20.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.20.20.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib22\" title=\"\">22</a>, Section 4]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.24.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.21.21.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.22.22.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.23.23.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.24.24.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S1.T1.24.24.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib19\" title=\"\">19</a>, Section 2.2]</cite></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 154 |
+
"capture": "Table 1. Examples of finite element de Rham complexes (3)."
|
| 155 |
+
},
|
| 156 |
+
"2": {
|
| 157 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S1.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Examples of finite element elasticity and finite element divdiv complexes.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T2.18\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T2.3.3.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">Type</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T2.1.1.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T2.2.2.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T2.3.3.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S1.T2.3.3.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">Results</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T2.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.6.6.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">Elasticity complex (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S1.E5\" title=\"In 1. Introduction \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.4.4.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.5.5.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T2.6.6.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S1.T2.6.6.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib19\" title=\"\">19</a>, Section 6]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.9.9.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">Hessian complex (rotation of (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S1.E5\" title=\"In 1. 
Introduction \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>))</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.7.7.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.8.8.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.9.9.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T2.9.9.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib15\" title=\"\">15</a>, Section 5.1]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.12.12.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">divdiv complex (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S1.E6\" title=\"In 1. Introduction \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">6</span></a>)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.10.10.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.11.11.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.12.12.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T2.12.12.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib15\" title=\"\">15</a>, Section 5.2]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.15.15.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">divdiv complex (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S5.E46\" title=\"In Theorem 5.10. \u2023 5.3. Finite element divdiv complexes \u2023 5. Beyond the de Rham Complex \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">46</span></a>)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.13.13.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.14.14.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T2.15.15.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T2.15.15.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib29\" title=\"\">29</a>, Section 2.3]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T2.18.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T2.18.18.4\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">divdiv complex (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#S5.E48\" title=\"In Theorem 5.12. \u2023 5.3. Finite element divdiv complexes \u2023 5. 
Beyond the de Rham Complex \u2023 Finite Element Complexes in Two Dimensions\"><span class=\"ltx_text ltx_ref_tag\">48</span></a>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T2.16.16.1\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T2.17.17.2\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T2.18.18.3\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S1.T2.18.18.5\" style=\"padding-top:1.75pt;padding-bottom:1.75pt;\">\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2206.00851v4#bib.bib13\" title=\"\">13</a>, Section 3.3]</cite></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 158 |
+
"capture": "Table 2. Examples of finite element elasticity and finite element divdiv complexes."
|
| 159 |
+
},
|
| 160 |
+
"3": {
|
| 161 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.Thmtheorem1.19\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.Thmtheorem1.19.19\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.Thmtheorem1.3.3.3\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.Thmtheorem1.3.3.3.4\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.Thmtheorem1.1.1.1.1\" style=\"padding-top:4pt;padding-bottom:4pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.Thmtheorem1.2.2.2.2\" style=\"padding-top:4pt;padding-bottom:4pt;\"></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.Thmtheorem1.3.3.3.3\" style=\"padding-top:4pt;padding-bottom:4pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.Thmtheorem1.7.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.Thmtheorem1.4.4.4.1\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.Thmtheorem1.5.5.5.2\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.Thmtheorem1.6.6.6.3\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.Thmtheorem1.7.7.7.4\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.Thmtheorem1.11.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.8.8.8.1\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.9.9.9.2\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.10.10.10.3\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.Thmtheorem1.11.11.11.4\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.Thmtheorem1.15.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.12.12.12.1\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.13.13.13.2\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.Thmtheorem1.14.14.14.3\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.Thmtheorem1.15.15.15.4\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.Thmtheorem1.19.19.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.Thmtheorem1.16.16.16.1\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.Thmtheorem1.17.17.17.2\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.Thmtheorem1.18.18.18.3\" style=\"padding-top:4pt;padding-bottom:4pt;\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.Thmtheorem1.19.19.19.4\" style=\"padding-top:4pt;padding-bottom:4pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.Thmtheorem1.19.19.19.4.1\"></span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 162 |
+
"capture": "Figure 3. Explanation of the smooth finite element de Rham complex with increased smoothness in pressure.\n"
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
"image_paths": {
|
| 166 |
+
"1(a)": {
|
| 167 |
+
"figure_path": "2206.00851v4_figure_1(a).png",
|
| 168 |
+
"caption": "Figure 1. Two embedding of the simplicial lattice \ud835\udd4b82superscriptsubscript\ud835\udd4b82\\mathbb{T}_{8}^{2}blackboard_T start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT in two dimensions.",
|
| 169 |
+
"url": "http://arxiv.org/html/2206.00851v4/x1.png"
|
| 170 |
+
},
|
| 171 |
+
"1(b)": {
|
| 172 |
+
"figure_path": "2206.00851v4_figure_1(b).png",
|
| 173 |
+
"caption": "Figure 1. Two embedding of the simplicial lattice \ud835\udd4b82superscriptsubscript\ud835\udd4b82\\mathbb{T}_{8}^{2}blackboard_T start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT in two dimensions.",
|
| 174 |
+
"url": "http://arxiv.org/html/2206.00851v4/x2.png"
|
| 175 |
+
},
|
| 176 |
+
"2(a)": {
|
| 177 |
+
"figure_path": "2206.00851v4_figure_2(a).png",
|
| 178 |
+
"caption": "(a) The geometric decomposition of a Hermite element: m=0,re=0,rv=1,k=8formulae-sequence\ud835\udc5a0formulae-sequencesuperscript\ud835\udc5f\ud835\udc520formulae-sequencesuperscript\ud835\udc5fv1\ud835\udc588m=0,r^{e}=0,r^{\\texttt{v}}=1,k=8italic_m = 0 , italic_r start_POSTSUPERSCRIPT italic_e end_POSTSUPERSCRIPT = 0 , italic_r start_POSTSUPERSCRIPT v end_POSTSUPERSCRIPT = 1 , italic_k = 8.\nFigure 2. Comparison of the geometric decompositions of a two-dimensional Hermite element and a C1superscript\ud835\udc361C^{1}italic_C start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT-conforming element.",
|
| 179 |
+
"url": "http://arxiv.org/html/2206.00851v4/x3.png"
|
| 180 |
+
},
|
| 181 |
+
"2(b)": {
|
| 182 |
+
"figure_path": "2206.00851v4_figure_2(b).png",
|
| 183 |
+
"caption": "(b) The geometric decomposition of a C1superscript\ud835\udc361C^{1}italic_C start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT element: m=1,re=1,rv=2,k=8formulae-sequence\ud835\udc5a1formulae-sequencesuperscript\ud835\udc5f\ud835\udc521formulae-sequencesuperscript\ud835\udc5fv2\ud835\udc588m=1,r^{e}=1,r^{\\texttt{v}}=2,k=8italic_m = 1 , italic_r start_POSTSUPERSCRIPT italic_e end_POSTSUPERSCRIPT = 1 , italic_r start_POSTSUPERSCRIPT v end_POSTSUPERSCRIPT = 2 , italic_k = 8.\nFigure 2. Comparison of the geometric decompositions of a two-dimensional Hermite element and a C1superscript\ud835\udc361C^{1}italic_C start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT-conforming element.",
|
| 184 |
+
"url": "http://arxiv.org/html/2206.00851v4/x4.png"
|
| 185 |
+
},
|
| 186 |
+
"3": {
|
| 187 |
+
"figure_path": "2206.00851v4_figure_3.png",
|
| 188 |
+
"caption": "Figure 3. Explanation of the smooth finite element de Rham complex with increased smoothness in pressure.",
|
| 189 |
+
"url": "http://arxiv.org/html/2206.00851v4/x5.png"
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
"validation": true,
|
| 193 |
+
"references": [
|
| 194 |
+
{
|
| 195 |
+
"1": {
|
| 196 |
+
"title": "The TUBA family of plate elements for the matrix displacement\nmethod.",
|
| 197 |
+
"author": "J. Argyris, I. Fried, and D. Scharpf.",
|
| 198 |
+
"venue": "Aero. J. Roy. Aero. Soc., 72:701\u2013709, 1968.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"2": {
|
| 204 |
+
"title": "Finite element exterior calculus: from Hodge theory to numerical\nstability.",
|
| 205 |
+
"author": "D. Arnold, R. Falk, and R. Winther.",
|
| 206 |
+
"venue": "Bull. Amer. Math. Soc. (N.S.), 47(2):281\u2013354, 2010.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"3": {
|
| 212 |
+
"title": "Finite element exterior calculus.",
|
| 213 |
+
"author": "D. N. Arnold.",
|
| 214 |
+
"venue": "Society for Industrial and Applied Mathematics (SIAM), Philadelphia,\nPA, 2018.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"4": {
|
| 220 |
+
"title": "Differential complexes and stability of finite element methods. II.\nThe elasticity complex.",
|
| 221 |
+
"author": "D. N. Arnold, R. S. Falk, and R. Winther.",
|
| 222 |
+
"venue": "In Compatible spatial discretizations, volume 142 of IMA\nVol. Math. Appl., pages 47\u201367. Springer, New York, 2006.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"5": {
|
| 228 |
+
"title": "Finite element exterior calculus, homological techniques, and\napplications.",
|
| 229 |
+
"author": "D. N. Arnold, R. S. Falk, and R. Winther.",
|
| 230 |
+
"venue": "Acta Numer., 15:1\u2013155, 2006.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"6": {
|
| 236 |
+
"title": "Geometric decompositions and local bases for spaces of finite element\ndifferential forms.",
|
| 237 |
+
"author": "D. N. Arnold, R. S. Falk, and R. Winther.",
|
| 238 |
+
"venue": "Comput. Methods Appl. Mech. Engrg., 198(21-26):1660\u20131672,\n2009.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"7": {
|
| 244 |
+
"title": "Complexes from complexes.",
|
| 245 |
+
"author": "D. N. Arnold and K. Hu.",
|
| 246 |
+
"venue": "Found. Comput. Math., 21(6):1739\u20131774, 2021.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"8": {
|
| 252 |
+
"title": "Mixed finite elements for elasticity.",
|
| 253 |
+
"author": "D. N. Arnold and R. Winther.",
|
| 254 |
+
"venue": "Numer. Math., 92(3):401\u2013419, 2002.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"9": {
|
| 260 |
+
"title": "Differential operators on the base affine space and a study of\n-modules.",
|
| 261 |
+
"author": "I. N. Bern\u0161te\u012dn, I. M. Gelfand, and S. I. Gelfand.",
|
| 262 |
+
"venue": "In Lie groups and their representations (Proc. Summer\nSchool, Bolyai J\u00e1nos Math. Soc., Budapest, 1971), pages\n21\u201364, 1975.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"10": {
|
| 268 |
+
"title": "Triangular elements in the finite element method.",
|
| 269 |
+
"author": "J. H. Bramble and M. Zl\u00e1mal.",
|
| 270 |
+
"venue": "Math. Comp., 24:809\u2013820, 1970.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"11": {
|
| 276 |
+
"title": "Multigrid methods for Hellan\u2013Herrmann\u2013Johnson mixed method of\nKirchhoff plate bending problems.",
|
| 277 |
+
"author": "L. Chen, J. Hu, and X. Huang.",
|
| 278 |
+
"venue": "Journal of Scientific Computing, 76(2):673\u2013696, 2018.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"12": {
|
| 284 |
+
"title": "Decoupling of mixed methods based on generalized Helmholtz\ndecompositions.",
|
| 285 |
+
"author": "L. Chen and X. Huang.",
|
| 286 |
+
"venue": "SIAM J. Numer. Anal., 56(5):2796\u20132825, 2018.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"13": {
|
| 292 |
+
"title": "Finite elements for divdiv-conforming symmetric tensors.",
|
| 293 |
+
"author": "L. Chen and X. Huang.",
|
| 294 |
+
"venue": "arXiv preprint arXiv:2005.01271, 2020.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"14": {
|
| 300 |
+
"title": "Finite element de Rham and Stokes complexes in three dimensions.",
|
| 301 |
+
"author": "L. Chen and X. Huang.",
|
| 302 |
+
"venue": "arXiv preprint arXiv:2206.09525, 2022.",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"15": {
|
| 308 |
+
"title": "A finite element elasticity complex in three dimensions.",
|
| 309 |
+
"author": "L. Chen and X. Huang.",
|
| 310 |
+
"venue": "Math. Comp., 91(337):2095\u20132127, 2022.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"16": {
|
| 316 |
+
"title": "Finite elements for conforming symmetric tensors\nin three dimensions.",
|
| 317 |
+
"author": "L. Chen and X. Huang.",
|
| 318 |
+
"venue": "Math. Comp., 91(335):1107\u20131142, 2022.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"17": {
|
| 324 |
+
"title": "A robust lower order mixed finite element method for a strain\ngradient elasticity model.",
|
| 325 |
+
"author": "M. Chen, J. Huang, and X. Huang.",
|
| 326 |
+
"venue": "SIAM J. Numer. Anal., arXiv:2210.09552, 2023.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"18": {
|
| 332 |
+
"title": "A discrete elasticity complex on three-dimensional Alfeld splits.",
|
| 333 |
+
"author": "S. H. Christiansen, J. Gopalakrishnan, J. Guzm\u00e1n, and K. Hu.",
|
| 334 |
+
"venue": "arXiv preprint arXiv:2009.07744, 2020.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"19": {
|
| 340 |
+
"title": "Nodal finite element de Rham complexes.",
|
| 341 |
+
"author": "S. H. Christiansen, J. Hu, and K. Hu.",
|
| 342 |
+
"venue": "Numer. Math., 139(2):411\u2013446, 2018.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"20": {
|
| 348 |
+
"title": "Finite element systems for vector bundles: Elasticity and curvature.",
|
| 349 |
+
"author": "S. H. Christiansen and K. Hu.",
|
| 350 |
+
"venue": "Found. Comput. Math., 2022.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"21": {
|
| 356 |
+
"title": "A complex from linear elasticity.",
|
| 357 |
+
"author": "M. Eastwood.",
|
| 358 |
+
"venue": "In The Proceedings of the 19th Winter School \u201cGeometry\nand Physics\u201d (Srn\u00ed, 1999), pages 23\u201329, 2000.",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"22": {
|
| 364 |
+
"title": "Stokes complexes and the construction of stable finite elements with\npointwise mass conservation.",
|
| 365 |
+
"author": "R. S. Falk and M. Neilan.",
|
| 366 |
+
"venue": "SIAM J. Numer. Anal., 51(2):1308\u20131326, 2013.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"23": {
|
| 372 |
+
"title": "Mixed schemes for fourth-order DIV equations.",
|
| 373 |
+
"author": "R. Fan, Y. Liu, and S. Zhang.",
|
| 374 |
+
"venue": "Comput. Methods Appl. Math., 19(2):341\u2013357, 2019.",
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"24": {
|
| 380 |
+
"title": "A family of nonconforming elements for the Brinkman problem.",
|
| 381 |
+
"author": "J. Guzm\u00e1n and M. Neilan.",
|
| 382 |
+
"venue": "IMA J. Numer. Anal., 32(4):1484\u20131508, 2012.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"25": {
|
| 388 |
+
"title": "Conforming discrete Gradgrad-complexes in three dimensions.",
|
| 389 |
+
"author": "J. Hu and Y. Liang.",
|
| 390 |
+
"venue": "Math. Comp., 90(330):1637\u20131662, 2021.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"26": {
|
| 396 |
+
"title": "Conforming finite element divdiv complexes and the application for\nthe linearized Einstein\u2013Bianchi system.",
|
| 397 |
+
"author": "J. Hu, Y. Liang, and R. Ma.",
|
| 398 |
+
"venue": "SIAM J. Numer. Anal., 60(3):1307\u20131330, 2022.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"27": {
|
| 404 |
+
"title": "New conforming finite element divdiv complexes in three dimensions.",
|
| 405 |
+
"author": "J. Hu, Y. Liang, R. Ma, and M. Zhang.",
|
| 406 |
+
"venue": "arXiv preprint arXiv:2204.07895, 2022.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"28": {
|
| 412 |
+
"title": "A construction of conforming finite element spaces in any\ndimension.",
|
| 413 |
+
"author": "J. Hu, T. Lin, and Q. Wu.",
|
| 414 |
+
"venue": "arXiv:2103.14924, 2021.",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"29": {
|
| 420 |
+
"title": "A family of mixed finite elements for the biharmonic equations on\ntriangular and tetrahedral grids.",
|
| 421 |
+
"author": "J. Hu, R. Ma, and M. Zhang.",
|
| 422 |
+
"venue": "Sci. China Math., 64(12):2793\u20132816, 2021.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"30": {
|
| 428 |
+
"title": "Simple curl-curl-conforming finite elements in two dimensions.",
|
| 429 |
+
"author": "K. Hu, Q. Zhang, and Z. Zhang.",
|
| 430 |
+
"venue": "SIAM J. Sci. Comput., 42(6):A3859\u2013A3877, 2020.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"31": {
|
| 436 |
+
"title": "Spline functions on triangulations, volume 110.",
|
| 437 |
+
"author": "M.-J. Lai and L. L. Schumaker.",
|
| 438 |
+
"venue": "Cambridge University Press, 2007.",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"32": {
|
| 444 |
+
"title": "Taylor-Hood like finite elements for nearly incompressible strain\ngradient elasticity problems.",
|
| 445 |
+
"author": "Y. Liao, P. Ming, and Y. Xu.",
|
| 446 |
+
"venue": "J. Sci. Comput., 95(1):Paper No. 4, 2023.",
|
| 447 |
+
"url": null
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"33": {
|
| 452 |
+
"title": "A robust finite element method for Darcy-Stokes flow.",
|
| 453 |
+
"author": "K. A. Mardal, X.-C. Tai, and R. Winther.",
|
| 454 |
+
"venue": "SIAM J. Numer. Anal., 40(5):1605\u20131631, 2002.",
|
| 455 |
+
"url": null
|
| 456 |
+
}
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"34": {
|
| 460 |
+
"title": "A nodal basis for piecewise polynomials of degree .",
|
| 461 |
+
"author": "J. Morgan and R. Scott.",
|
| 462 |
+
"venue": "Math. Comput., 29:736\u2013740, 1975.",
|
| 463 |
+
"url": null
|
| 464 |
+
}
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"35": {
|
| 468 |
+
"title": "Interpolation polynomials on the triangle.",
|
| 469 |
+
"author": "A. \u017den\u00ed\u0161ek.",
|
| 470 |
+
"venue": "Numer. Math., 15:283\u2013296, 1970.",
|
| 471 |
+
"url": null
|
| 472 |
+
}
|
| 473 |
+
}
|
| 474 |
+
],
|
| 475 |
+
"url": "http://arxiv.org/html/2206.00851v4"
|
| 476 |
+
}
|
20240721/2209.10517v13.json
ADDED
|
@@ -0,0 +1,555 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "On Model-Checking Probabilistic \ud835\udf14-Pushdown Systems, and \ud835\udf14-PCTL\u2217 Characterization of Weak Bisimulation",
|
| 3 |
+
"abstract": "In this paper, we obtain the following equally important new results:",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "As is well-known, logic is the originating and ongoing topic of theoretical computer science. Dating back to 1936, one of the main goals of Alan Turing in defining the Turing machine [50 ###reference_b50###] was to investigate the logic issue of the Entscheidungsproblem. In the modern day, logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory, modal logic, and category theory. More significantly, the theory of computation is mainly based on concepts defined by logicians such as Alonzo Church [12 ###reference_b12###, 13 ###reference_b13###] and mathematician Alan Turing [50 ###reference_b50###], and so on.\nOver the last four decades, within the area of logic in computer science, Model-checking [11 ###reference_b11###] has become an essential tool for formal verification, which is an interesting and important topic and particularly plays an important role in the verification of digital circuits (chips). With respect to the task of model-checking a designed system, one describes the system to be verified as a model of some logic, expresses the property to be verified as a formula in that logic, and then checks by using automated algorithms that the formula holds or not in that model; see e.g., [3 ###reference_b3###]. Specifically, it is an automatic method for guaranteeing that a formal model of a system satisfies a formula representing a desired property. Traditionally, model checking has been applied to finite-state systems and non-probabilistic programs. Furthermore, during the last two decades, researchers in computer science have paid much attention to model-checking of probabilistic infinite-state systems; see, e.g., [27 ###reference_b27###].\nTo the best of our knowledge, one of those probabilistic infinite-state systems is the probabilistic pushdown system, dubbed \u201cprobabilistic pushdown automata\u201d in [7 ###reference_b7###, 8 ###reference_b8###, 27 ###reference_b27###, 28 ###reference_b28###], the input alphabet of which contains only one symbol. In this paper, we name such a limited version of probabilistic pushdown automata \u201cprobabilistic pushdown system.\u201d Namely, probabilistic pushdown systems can be seen as a limited version of the more general notion of probabilistic pushdown automaton, whose input alphabet contains not only an input symbol but many, roughly. Their model-checking question, initiated in [27 ###reference_b27###], has attracted a lot of attention; see, e.g., [7 ###reference_b7###, 8 ###reference_b8###], where the model-checking of stateless probabilistic pushdown systems (pBPA) against PCTL\u2217 was studied, as well as the model-checking question of probabilistic pushdown systems (pPDS) against PCTL. Recently, we provided an answer in [39 ###reference_b39###] to the question of model-checking of stateless probabilistic pushdown systems (pBPA) against PCTL. To the best of our knowledge, this question was first proposed in [27 ###reference_b27###] and continuously kept open in [8 ###reference_b8###] till our recent work [39 ###reference_b39###].\nNow let us shift our focus to temporal logic. From [29 ###reference_b29###], we know that there are two possible points of view with regard to the underlying nature of time: one is that time is linear, i.e., at each moment there is only one possible future; the other is that time has a branching, i.e., at each moment, time may split into alternate courses representing different possible futures. 
The reader will see from the sequel that most conclusions in this paper are on the branching time properties. But the logic mentioned above to specify probabilistic and branching-time properties lacks the capability to describe the -properties. We note that a celebrated extension of PCTL that can express -regular properties, named -PCTL, was defined by Chatterjee, Sen, and Henzinger in [14 ###reference_b14###]. Besides, Chatterjee, Chmel\u00edk, and Tracol [15 ###reference_b15###] also considered partially observable Markov decision processes (POMDPs) with -regular conditions specified as parity objectives. Indeed, the logic of -PCTL extended in [14 ###reference_b14###] can describe not only -regular properties but also probabilistic -pushdown properties. Thus, another important goal of this paper is that we try to define the -extension of the probabilistic pushdown system, i.e., the probabilistic -pushdown systems. Once we have successfully defined the notion of probabilistic -pushdown systems, we can further study its important and interesting questions, such as model-checking against -PCTL, etc. It is worth mentioning that there is another interesting -extension of branching computational tree logic. For example, see [37 ###reference_b37###]. However, it seems that it is somewhat impossible to further give a probabilistic extension of the logic defined in [37 ###reference_b37###].\nBisimulation equivalence is undoubtedly a central one in formal verification among the various notions of behavioral equivalence in concurrency theory [40 ###reference_b40###, 23 ###reference_b23###, 22 ###reference_b22###, 4 ###reference_b4###], which are helpful to model-checking by reducing the number of states of systems. In history, bisimulation was first defined in the context of CCS [40 ###reference_b40###] and turned out to be a fundamental relation for its simplicity and the elegance of its axiomatization [19 ###reference_b19###]. Remarkably, the study of strong bisimulation in the purely probabilistic context was initiated in [38 ###reference_b38###], where an equivalence notion was developed. Furthermore, this theory has been extended to continuous state spaces and, in the discrete setting, to weak bisimulation [6 ###reference_b6###]. As is well-known, weak bisimulation is an important notion in probabilistic concurrency theory: two decades ago, Baier and Hermanns [6 ###reference_b6###] introduced a notion of weak bisimulation for fully probabilistic systems and presented a polynomial-time algorithm for deciding it. In the nonprobabilistic setting of the compositional verification of systems where abstraction from internal computation, weak bisimulations have shown to be fundamental. For example, the work of [22 ###reference_b22###] investigated weak bisimulation of probabilistic systems in the presence of nondeterminism, i.e., the probabilistic systems of labeled concurrent Markov chains [22 ###reference_b22###], and proved its celebrated result that weak bisimulation is sound and complete for probabilistic logic pCTL\u2217 (a logic defined in [20 ###reference_b20###])."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Main Results",
|
| 15 |
+
"text": "Now let us introduce our new main results. As our first main contribution to this paper, we extend the classical notion of probabilistic pushdown automata to probabilistic -pushdown automata. There are also many interesting questions that deserve to be studied. In particular, we study the model-checking question of stateless probabilistic -pushdown systems against -PCTL and obtain the following important and interesting result:\nThe model-checking of stateless probabilistic -pushdown system (-pBPA) against the logic -PCTL is generally undecidable.\nThe following corollary is a clear and immediate consequence of Theorem 1 ###reference_orem1###, since the logic -PCTL is a sublogic of -PCTL\u2217:\nThe model-checking of stateless probabilistic -pushdown system (-pBPA) against the logic -PCTL\u2217 is generally undecidable.\nFurther, the following corollary is deduced in Remark 4.3 ###reference_remark3###:\nThe model-checking of probabilistic -pushdown system (-pPDS) against the logic -PCTL\u2217 is generally undecidable.\nWe continue to study the probabilistic labelled transition systems induced by our definition of probabilistic -pushdown automata and define the notion of weak bisimulation on the model of probabilistic labelled transition systems. Motivated by the celebrated work of [22 ###reference_b22###, 24 ###reference_b24###, 23 ###reference_b23###], our next contribution to this paper is to study weak bisimulation in the setting of probabilistic -pushdown automata. The main contribution of this part of the aforementioned study is a logical (-PCTL\u2217) characterization of probabilistic weak bisimulation. To be specific, as our second contribution, we show the following important and interesting result:\nThe weak bisimulation is sound and complete for -PCTL\u2217.\nLastly, we stress that all of our above new results are equally important. Namely, the order of mention of the above results does not imply the importance of that result. However, the reader should note that the authors dare not and cannot say that the proof techniques used to prove the above conclusions are all our own innovations, because theoretical computer science, as a branch of applied mathematics, mostly applies, adapts, or generalizes some proof techniques from pure mathematics or applied mathematics itself to solve some important problems in theoretical computer science."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "1.2",
|
| 19 |
+
"parent_section_id": "1",
|
| 20 |
+
"section_name": "Related Work",
|
| 21 |
+
"text": "During the last two decades, researchers in computer science have paid much attention to model-checking of probabilistic infinite-state systems. The study of the model-checking question for the probabilistic pushdown systems first appeared in [27 ###reference_b27###]. To the best of our knowledge, but maybe not accurately, the article [27 ###reference_b27###] is the first paper on model-checking of probabilistic infinite-state systems. Since the paper [27 ###reference_b27###], there are papers on model-checking for probabilistic pushdown systems (pPDS) and stateless probabilistic pushdown systems (pPBA) against PCTL/PCTL\u2217 such as [8 ###reference_b8###], where the results of undecidability of model-checking for against PCTL and for against PCTL\u2217 are obtained. Recently, we provided an answer in [39 ###reference_b39###] to the question of model-checking stateless probabilistic pushdown systems against PCTL, and this problem was first raised in [27 ###reference_b27###].\nThe celebrated extension of PCTL that can express -regular properties, namely the -PCTL, was given by Chatterjee, Sen, and Henzinger in [14 ###reference_b14###] and is also an important logic to describe probabilistic -pushdown properties in this paper. The notion of probabilistic -pushdown automaton and probabilistic -pushdown systems appear for the first time in this paper. But our extension is based on the excellent work [16 ###reference_b16###, 21 ###reference_b21###].\nIn theoretical computer science, probabilistic bisimulation, see for example [1 ###reference_b1###], is an extension of the concept of bisimulation for fully probabilistic transition systems first described by Larsen and Skou [38 ###reference_b38###]. Our motivation to study -PCTL\u2217 characterization of weak bisimulation was first inspired by the celebrated work [22 ###reference_b22###] in which the soundness and completeness of weak bisimulation for a minor variant of the probabilistic logic pCTL\u2217 [20 ###reference_b20###] was shown, and by the excellent work [4 ###reference_b4###] where bisimulation spectrum with silent moves for Markov decision processes, and further by the seminal work [38 ###reference_b38###] in which a probabilistic modal logic (PML) characterization of probabilistic bisimulation was given, and [3 ###reference_b3###] where various logic equivalences for probabilistic bisimulation have been extensively studied."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "1.3",
|
| 25 |
+
"parent_section_id": "1",
|
| 26 |
+
"section_name": "Organization",
|
| 27 |
+
"text": "The rest of this paper is structured as follows: in the next section, i.e., Section 2 ###reference_###, some basic definitions will be reviewed and useful notation will be fixed. In Section 3 ###reference_### we introduce the probabilistic -pushdown automata for the first time and study its model-checking question against logic of -PCTL in Section 4 ###reference_###. In Section 5 ###reference_###, we introduce the probabilistic labelled transition systems induced by our probabilistic -pushdown automata and study weak bisimulation, in which the main result of Theorem 4 ###reference_orem4### is shown. The last section is for conclusions, in which some possible research questions are presented."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Preliminaries",
|
| 33 |
+
"text": "For the convenience of the reader, we make the paper self-contained, and most notation in probabilistic verification will follow the paper [8 ###reference_b8###]. For elementary probability theory, the reader is referred to [44 ###reference_b44###] by Shiryaev or [35 ###reference_b35###, 36 ###reference_b36###] by Lo\u00e8ve.\nLet and . For an , will denote the set of . Let be the set of all rational numbers. Let denote the cardinality of any finite set . Let and denote non-empty finite alphabets. Then is the set of all finite words (including the empty word ) over , and . For any word , represents its length, i.e., the number of symbols in it."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.1",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Markov Chains",
|
| 39 |
+
"text": "Let us introduce the Markov chains first. Roughly, Markov chains are probabilistic transition systems, which are accepted as the most popular operational model for the evaluation of the performance and dependability of information-processing systems. For more details, see e.g., [3 ###reference_b3###].\nA (discrete) Markov chain is a triple where is a finite or countably infinite set of states, is a transition relation such that for each there exists such that , and is a function from domain to range which to each transition assigns its probability such that for each .\nmeans where is the set of all transition relations whose current state is .\nA path in is a finite or infinite sequence of states of (or ) where such that for each . A run of is an infinite path. We denote the set of all runs in by , and to denote the set of all runs starting with a given finite path . If a run starts with a given finite path , then we denote this case as . Let be a run; then denotes the state of , and the run . In this way, it is clear that . Further, a state is from a state if there is a finite path starting in and ending at .\nFor each , is a probability space, where is the -field generated by all basic cylinders and is a finite path initiating from ,\nand is the unique probability measure such that\nwhere and ."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.2",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "Probabilistic Computational Tree Logic",
|
| 45 |
+
"text": "The logic PCTL was originally introduced in [32 ###reference_b32###], where the corresponding model-checking question has been focused mainly on finite-state Markov chains.\nLet be a fixed set of atomic propositions. Formally, the syntax of probabilistic computational tree logic PCTL is given by\nwhere and denote the state formula and path formula, respectively; is an atomic proposition. In the above, is drawn from\n111 The comparison relations \u201c\u201d and \u201c\u201d are sufficient enough for our discussion.,\nis a rational number with .\nLet be a Markov chain, an assignment, and the symbol true the abbreviation of always true. Then the semantics of PCTL, over , is given by the following rules:\nThe abbreviation \u201cs.t.\u201d means \u201csuch that.\u201d The logic PCTL or PCTL\u2217 can be interpreted over an Markov decision process (MDP) in the similar way that we just did with the Markov chain. But it is outside our topic here.\nThe logic PCTL\u2217 extends PCTL by deleting the requirement that any temporal operator must be preceded by a state formula, and its path formulas are generated by the following syntax:\nThe difference between PCTL and PCTL\u2217 is very clear: a well-defined PCTL formula is definitely a well-defined PCTL\u2217 formula. However, the inverse is not necessarily true. The semantics of PCTL\u2217 path formulas over are defined as follows:"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.3",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "Post Correspondence Problem",
|
| 51 |
+
"text": "The Post Correspondence Problem (PCP), originally introduced and shown to be undecidable by Post [42 ###reference_b42###], has been used to show that many problems arising from formal languages are undecidable.\nFormally, a PCP instance consists of a finite alphabet and a finite set of pairs of strings over , determining whether there is a word such that .\nThere are numerous variants of the PCP definition, but the modified PCP [8 ###reference_b8###] is the most convenient for our discussion in this paper. Since the word is of finite length, we can suppose that .\nIf we put \u2018\u2019 into the gap between two letters of or to form the or such that , then the modified PCP problem is to ask whether there exists such that the equation holds after erasing all \u2018\u2019 in and .\nEssentially, the modified PCP problem is equivalent to the original PCP problem. That we stuff the -pair strings and with \u2018\u2019 to make them the same length is useful in Section 4 ###reference_### to prove our main results.\nOther background information and notions will be given along the way in proving our main results stated in Section 1 ###reference_###."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "The -PCTL and Probabilistic -Pushdown Automata",
|
| 57 |
+
"text": "In this section, denotes a finite alphabet, and and denote the set of finite words and the set of -sequences (or -words) over , respectively. An -word over is written in the form\nwith\nLet . Notation for segments of -words are\nand\nFor more details about -words and -languages, the reader is referred to the excellent works [45 ###reference_b45###, 46 ###reference_b46###]."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.1",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "-PCTL",
|
| 63 |
+
"text": "Now let us introduce the -extension of PCTL defined in the celebrated work [14 ###reference_b14###]. As an obvious drawback, PCTL/PCTL\u2217 cannot express useful specifications such as liveness properties, namely, the infinitely repeated occurrence of an event. But the -PCTL/-PCTL\u2217 can, so the expressiveness of -PCTL/-PCTL\u2217 is much stronger than that of PCTL/PCTL\u2217.\nThe formal syntax and semantics of -PCTL logic are as follows.\nLet be a fixed set of atomic propositions. Formally, the syntax of -probabilistic computational tree logic -PCTL is defined by\nwhere and denote the state formulas and path formulas, respectively; and represents path formulas that depend on the set of states that appear infinitely often in a path (we call them infinitary path formulas); is an atomic proposition, , and is a rational number with .\nThe notion that a state (or a path ) satisfies a formula in a Markov chain is denoted by (or ) under some assignment , and is defined inductively as follows:"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "Probabilistic -Pushdown Automata",
|
| 69 |
+
"text": "Let be a finite stack alphabet and . If , then the head of , denoted by , is the symbol . If , then , where denotes the empty word.\nLet us introduce the definition of probabilistic -pushdown automata; for classical versions of -pushdown automata, we refer the reader to the classical work [16 ###reference_b16###, 21 ###reference_b21###]. Our notion of probabilistic -pushdown automata is a probabilistic extension from classical versions of -pushdown automata [16 ###reference_b16###, 21 ###reference_b21###].\nA probabilistic -pushdown automaton is an -tuple where\nis a finite set of states;\nis a finite input alphabet;\nis a finite stack alphabet;\nis a mapping from to finite subsets of ;\nis the initial state;\nis the start symbol;\nis the final state;\nis a function from to to which each rule\nin assigns its probability\ns.t. for each satisfying the following condition\nFurthermore, without loss of generality, we assume . The configurations of are elements in .\nThe transition rule states that when the machine is in state , and the input symbol is , and the top of the stack is , then it goes to the new state and uses the string of stack symbols to replace the stack symbol at the top of the stack; see e.g., p. 228 of [31 ###reference_b31###]. For example, the machine is in state , and the input symbol is , and the content of the stack is\nwhere is at the top of the stack, then applying the transition rule\nwill lead to the new configuration\nLet be a probabilistic -pushdown automaton, and let\nwhere , . An infinite sequence of configurations is called a complete run of on , starting in configuration , iff\n;\nfor each , there exists satisfying\nsuch that\nEvery such run induces a mapping from into ,\nwhere , the pair of state and head of stack string entered in the th step of the computation described by run . For , we define the projection of :\nNow define to be the set of states that occur infinitely often in , i.e.,\nThe run is called successful if\nFurthermore, we call an infinite sequence\na path such that for all , and denote the -word by , i.e.,\nLet Path denote the set of all infinite paths of with starting configuration . And the word is called accepted with probability at least if where , and\nGiven an input word , we define the scheduler such that . That is, in step , the scheduler chooses with probability the letter as the next action. Then, the operational behavior of reading the input word is formalized by the Markov chain . We fix the following notation for the acceptance probability of a word and a given probabilistic -pushdown automaton :\nBy [17 ###reference_b17###, 51 ###reference_b51###], the set of accepting paths for word is measurable.\nNow with the above notions, we are going to define the probabilistic -pushdown systems.\nA probabilistic -pushdown system (-pPDS) , whose configurations are elements , where is a finite stack alphabet, a finite set of rules fulfilling\nfor each , there is at least one rule of the form where . In the following, we write instead of ; we assume, w.l.o.g., that .\nis a function from to which to every rule in assigns its probability\ns.t. for each , it meets the condition that\nis the final states.\nan infinite sequence of configurations is called a complete run of , starting in configuration , iff\n;\nfor each , .\nEvery such run induces a mapping from into , ,\nwhere\nentered in the th step of the computation described by run . Now define\nThe run is called successful if\nFurther, we call an infinite sequence\na path. 
Let Path denote the set of all infinite paths of with starting configuration .\nThe stateless probabilistic -pushdown system (-pBPA for short) is a limited version of the probabilistic -pushdown system, which will be defined later. Before defining it, a question naturally arises from the difference between stateless probabilistic -pushdown systems and probabilistic -pushdown systems. Since in the stateless probabilistic -pushdown system, there is only a state in from which we can view that . Thus, we are unable to define the success of a run that is similar to Definition 3.3 ###reference_definition3###. So, we need to adjust a little, and we can specify to achieve the goal. We are ready to define -pBPA as follows:\nA stateless probabilistic -pushdown system (-pBPA) is a triple , whose configurations are elements , where is a finite stack alphabet, a finite set of rules satisfying\nfor each , there is at least one rule , where . In the following, we write instead of ; we assume, w.l.o.g., that .\nis a function from to which to every rule \nin assigns its probability \ns.t. for each , it meets the condition that .\nis the final symbol.\nan infinite sequence of configurations \nis called a complete run of , starting in configuration , iff\n;\nfor each , .\nEvery such run induces a mapping from into , , where , i.e., the head of configuration entered in the th step of the computation described by run . Now define\nThe run is called successful if\nFurther, we call an infinite sequence\na path. Let Path denote the set of all infinite paths of with starting configuration .\nWe have defined the head of a string above, but we did not define the head of a configuration . As shown in [28 ###reference_b28###] with respect to the probabilistic setting, if there are no effective valuation assumptions, undecidable properties can be easily encoded to pushdown configurations. Thus, throughout the paper, we consider the simple assignment as in [28 ###reference_b28###, 27 ###reference_b27###, 8 ###reference_b8###], whose definition is given as follows.\nThe head of a configuration is either or , where , depending on whether or , respectively. Further, we say that is a simple assignment if for each there is a subset of heads such that iff the head of is in , where denotes the reverse of , i.e.,\nGiven an -pPDS or -pBPA , all of its configurations and all of its transition rules induce an infinite-state Markov chain . The model-checking question for properties expressed by the -PCTL formula is defined as determining whether\nwhere is a hard -PCTL formula, i.e., is an -PCTL formula but not a PCTL formula. 222Note that is a simple assignment; see Definition 3.5 ###reference_definition5###."
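To make the rule format above concrete, the following is a minimal Python sketch (ours, not part of the paper; the states, symbols, and probabilities are made up for illustration) of one transition step of a probabilistic pushdown machine: a rule with left-hand side (p, a, Z) replaces the top stack symbol Z by a string gamma and moves to a new state, and the probabilities of all rules sharing a left-hand side must sum to 1.

```python
import random

# rules: (state, input_symbol, top_of_stack) -> list of ((new_state, gamma), prob)
# All concrete values here are hypothetical.
rules = {
    ("p", "a", "Z"): [(("q", "AZ"), 0.5), (("p", "Z"), 0.5)],
}

def check_distribution(rules):
    # For each left-hand side, the probabilities of its rules must sum to 1.
    for lhs, alternatives in rules.items():
        total = sum(prob for _, prob in alternatives)
        assert abs(total - 1.0) < 1e-9, f"probabilities for {lhs} sum to {total}"

def step(state, stack, symbol):
    # stack is a string read left to right; stack[0] is the top symbol.
    alternatives = rules[(state, symbol, stack[0])]
    r, acc = random.random(), 0.0
    for (new_state, gamma), prob in alternatives:
        acc += prob
        if r < acc:
            return new_state, gamma + stack[1:]   # gamma replaces the top symbol
    (new_state, gamma), _ = alternatives[-1]      # numerical fallback
    return new_state, gamma + stack[1:]

check_distribution(rules)
print(step("p", "Z", "a"))   # e.g. ('q', 'AZ'), chosen with probability 0.5
```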
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Undecidability of Model-Checking of -pBPA against -PCTL",
|
| 75 |
+
"text": "Our goal in this section is to establish a theorem with respect to model-checking stateless probabilistic -pushdown systems against -PCTL, the question of which we conjecture that it is undecidable. Clearly, the natural method is to encode the modified Post Correspondence Problem into a path formula of -PCTL. However, the quickest way to do so is to employ the conclusion of our work [39 ###reference_b39###] already obtained, although there exists some difficulty. In fact, the difficulty is how to adapt the ideas used in our work [39 ###reference_b39###] to construct a suitable -PCTL formula to confirm our conjecture.\n###figure_1### Let us observe the U operator in Figure 1 ###reference_### above: If we can construct a path formula that likes where encodes the modified PCP problem, then we are done.\nTo do so, let us fix , and the stack alphabet of a -pBPA is as follows:\nThe elements in serve as symbols of atomic propositions. We will detail how to build the desirable -pBPA .\nLike to [39 ###reference_b39###], our -pBPA works in two steps, the first of which is to guess a possible solution to a modified PCP instance by storing pairs of words in the stack, which is done by the following transition rules (the probabilities of which are uniformly distributed):\nEquivalently, we let the symbol serve as the initial stack symbol. It begins with pushing () into the stack with probability . Then, the symbol at the top of the stack is (we read the stack from left to right). The rules in (1 ###reference_###) state that is replaced with probability by . The process will be repeated until is stored at the top of the stack, indicating that the first pair of has been stored.\nThen, with the probability , the will go to push symbol or into the stack, depending on whether the guessing procedure is at the end or not. When the rule is applied, the goes to check whether the pairs of words stored in the stack are a solution of a modified PCP instance. It is clear that the above guess procedure will lead to a word corresponding to the sequence of the words pushed orderly into the stack. In addition, there are no other transition rules in the guessing step for except those illustrated by (1 ###reference_###). By this, we arrive at the following lemma:\nA configuration of the form of is reachable from if and only if where , and there is a word such that and . And the probability from to is .\nThe next step is for to verify a stored pair of words. The transition rules (the probabilities of them are uniformly distributed) are given as follows:\nOf course, this step is slightly different from the previous one given in [39 ###reference_b39###]. Namely, we replace the rule of\nby\nfor our purpose to construct an -PCTL formula describing this procedure.\nFurther, we need the following two path formulas\nfor conveniently constructing an -PCTL formula, since the rule of\nhas been replaced by\nWe define the following two state formulas:\nwhere can be any rational number in the set of .\nFurther construct the following path formula:\nwhich will be useful in the sequel.\nIt is not hard to prove that the formula is equivalent to the following -PCTL formula :\nNow, let us proceed to show Theorem 1 ###reference_orem1###. 
Similar to [39 ###reference_b39###], we define the functions , , , and and prove the following:\nLet and be two functions from to , given by\nFurther, let and be two functions from to , given by\nThen, for any ,\nif and only if\nThe proof is similar to [39 ###reference_b39###], so omitted.\nAlso let denote the word obtained by erasing all the \u2018\u2019 in . Likewise, means the word obtained by erasing all the \u2018\u2019 in . Then we show the following:\nLet be the pair of words pushed into the stack by , where , and , , the pair of words after erasing all in and . Then\nLet and denote and , respectively. Namely,\nWe will show by induction on (i.e., the length of ) that ; similar arguments apply for\nNote that by (2 ###reference_###), with probability , we have . Thus, to prove the lemma, we need only to show .\nWe give a proof by induction on . We should note that by Lemma 4.2 ###reference_lemma2###, .\nBase case: In the case of , this immediately follows from the definition, i.e.,\nInduction step: Suppose the induction hypothesis for is true, i.e.,\nNow we consider the case of , i.e., where .\nNote that and , we have the following cases:\nif , then by\nwe have\nif , then by\nwe obtain\nif , then by\nwe get\nFrom the above cases it immediately follows that\nThe similar arguments apply for .\nCombining Lemma 4.2 ###reference_lemma2### and Lemma 4.3 ###reference_lemma3###, we get the following:\nLet be the pair of words pushed into the stack by . Let , , be the pair of words after erasing all in and . Then \nif and only if\n\nWith Lemma 4.4 ###reference_lemma4### in hand, we can show the following:\nLet be the pair of words pushed into the stack by . Let , , be the pair of words after erasing all in and . Then,\nif and only if \nwhere is a rational constant.\nIt is obvious that when is pushed into the stack of , the stack\u2019s content is (read from left to right). Note that there is only one rule, , which is applicable. Thus, with probability , the content of the stack changes to .\nThe \u201cif\u201d part. Suppose that .\nThe probability of paths from that satisfy is then , and the probability of paths from that satisfy is . As a result, the probability of paths from satisfying is , while the probability of paths from satisfying is . Because and , we have the following:\nThus, by (5 ###reference_###) and Lemma 4.4 ###reference_lemma4###, we conclude that (4 ###reference_###) holds.\nThe \u201conly if\u201d part. Assume (4 ###reference_###) holds. Then, by Lemma 4.4 ###reference_lemma4### we have\nNamely, . This, together with shown above, further implies that . The lemma follows.\nWith the above lemmas, we proceed to prove the following:\nLet be a path of -pBPA , starting at , induced by , where is guessed by as a solution of the modified PCP instance. Then, we have\nif and only if is a solution of the modified PCP instance for any constant .\n(4 ###reference_### ) is true\nThus\nif and only if is a solution of the modified PCP instance.\nBut the formula\nis strictly a PCTL formula, not an -PCTL formula. To finish our proof of Theorem 1 ###reference_orem1###, we need to do a little additional work in the following subsection."
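The two-phase construction above (guess a candidate solution by pushing pairs of words onto the stack, then verify it) can be illustrated by a small Python sketch. The word lists U and V and the stopping probability 1/2 below are hypothetical, and the paper's actual pBPA manipulates the stack symbol by symbol rather than via Python strings; this is only an illustration of the guess-and-verify idea.

```python
import random

U = ["ab", "a"]     # hypothetical word lists of a modified PCP instance
V = ["a", "ba"]

def guess_candidate():
    # Guessing phase: repeatedly pick an index i and record the pair
    # (U[i], V[i]); with probability 1/2, stop and move to verification.
    left, right, indices = "", "", []
    while True:
        i = random.randrange(len(U))
        indices.append(i)
        left += U[i]
        right += V[i]
        if random.random() < 0.5:
            return indices, left, right

def verify(left, right):
    # Verification phase: the guessed index sequence is a PCP solution
    # exactly when both concatenations coincide.
    return left == right

idx, l, r = guess_candidate()
print(idx, l, r, verify(l, r))
```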
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Proof of Theorem 1",
|
| 81 |
+
"text": "The following lemma tries to apply an -PCTL path formula defined in Remark 4.1 ###reference_remark1### to prove Theorem 1 ###reference_orem1###:\nLet -pBPA and be defined as above. Let be a path of -pBPA , starting at , induced by , where is guessed by as a solution of the modified PCP instance. Then\nif and only if is a solution of the modified PCP instance, where the formula is defined in Remark 4.1 ###reference_remark1###.\nNote that , and for any positive integers , . Moreover, when is on the top of s stack, we can apply the transition rule infinitely often, which means\nThe \u201cif\u201d part. Suppose that is a solution of the modified PCP instance; then, by Lemma 4.6 ###reference_lemma6###,\nSo, replacing by in the following formula\nwe have that for any ,\nThus, by applying the transition rule infinitely often, we have\ni.e.,\nThe \u201conly if\u201d part. If (7 ###reference_###) is true, namely, there is a such that\nfor each , and\nObviously, for any , we have\nso we only can have that\ni.e.,\nwhich completes the proof.\nNow, we are in the right position to give the proof of Theorem 1 ###reference_orem1### naturally:\nBy Remark 4.2 ###reference_remark2###, we can replace by in Lemma 4.7 ###reference_lemma7### and its proof, i.e.,\nThis finishes the proof of Theorem 1 ###reference_orem1### with an -PCTL path formula.\nNote that the above proof of Theorem 1 ###reference_orem1### is based on an -PCTL path formula. We also can show it with an -PCTL state formula. To do so, we need to add an additional initial symbol to of , i.e., suppose with the transition rule of probability . Then, we modify the to as follows:\nThen, it is clear that\nif and only if is a solution of the modified PCP instance.\nNow, is an -PCTL state formula.\nNote again that in Eq. (8 ###reference_###), the value of can be any rational number that is in .\nNow Corollary 2 ###reference_orem2### is clear, since the logic of -PCTL is a sublogic of -PCTL\u2217. But to obtain Corollary 3 ###reference_orem3###, we should pick a state and replace the rule with in the construction of an -pPDS."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "-PCTL\u2217 Characterizing Weak Bisimulation",
|
| 87 |
+
"text": "In this section, we consider the equivalence relations induced by the logic -PCTL\u2217 and discuss their connection to weak bisimulation equivalence. Note that non-probabilistic cases and probabilistic cases, i.e., bisimulation vs. CTL\u2217 equivalence and probabilistic bisimulation vs. PCTL\u2217 equivalence, are systematically studied in the standard textbook [3 ###reference_b3###] by Baier and Katoen.\nBisimilarity is one of the most important relations for comparing the behavior of formal systems in concurrency theory [40 ###reference_b40###, 23 ###reference_b23###, 22 ###reference_b22###, 4 ###reference_b4###]. As per the point of view given in [4 ###reference_b4###] by Baier, D\u2019Argenio, and Hermanns, bisimulation relations are the prime vehicle to equate or distinguish processes according to the behavior they exhibit when interacting with other processes, taking the stepwise behavior of states in labelled transition systems as a reference.\nBecause of connections between modal logics and bisimulations, whenever a new bisimulation is proposed, the quest starts for the associated logic, such that two states or systems are bisimilar if and only if they satisfy the same modal logical formulas [52 ###reference_b52###]. Along this line of research, a great amount of work has appeared that characterizes various kinds of classical (or probabilistic) bisimulation by appropriate logics; for example, see e.g., [4 ###reference_b4###, 18 ###reference_b18###, 23 ###reference_b23###, 22 ###reference_b22###, 38 ###reference_b38###, 52 ###reference_b52###]. In this section, we study a logical characterization of weak bisimulation for probabilistic -pushdown automata, which has never been touched on by others.\nFor the convenience of the reader, we recall some basic notions that are needed in the sequel. In particular, the notions on weak transitions and weak bisimulation are mainly followed from [19 ###reference_b19###, 30 ###reference_b30###, 48 ###reference_b48###]. Let us first introduce these basic definitions as follows."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.1",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Definitions and Notation",
|
| 93 |
+
"text": "Let be a set whose powerset is . A discrete probability distribution over is a function\nsuch that its support is countable and . If is a singleton, then we call a Dirac distribution, and if a Dirac distribution has support , we commonly denote the distribution as . is the set of all probability distributions over . If and , then we often write for .\nLet be an equivalence relation (see [2 ###reference_b2###]) on ; then we write for the quotient space, namely, the set of -equivalence classes. The lifting of to an equivalence on is given by\nIt can be verified that is indeed an equivalence. Furthermore, if , then and can be decomposed to satisfy that , such that for all , where is a countable index set. See e.g., the work [26 ###reference_b26###] by Deng and Du.\nWe commonly use the fact that if and , then\nwhere is the equivalence class of . Further, if is a countable family of distributions with and for such that then .\nOur probabilistic models are probabilistic labelled transition systems induced by probabilistic -pushdown automata defined in Section 3 ###reference_###.\nLet be a probabilistic -pushdown automaton given by Definition 3.1 ###reference_definition1###. Let . Then the probabilistic labelled transition system (PLTS) induced by is a tuple where\n is a set of (countable) configurations, i.e., , is the set of (external) actions, and finite transition relation333 We only consider the case , so , which is finite.\nA transition , also denoted by , is said to leave from state , to be labelled by , and to lead to the distribution . We denote by the source state , by the action , and by the target distribution , also denoted by . Namely, for a , we sometimes write it as . We also say that enables the action , that the action is enabled from , and that is enabled from . We call a transition internal or external whenever or , respectively. Finally, we let be the set of transitions with label .\nAn execution fragment of a PLTS is a finite or infinite sequence of transitions:\nor\nstarting from a state , also denoted by , and, if the sequence is finite, ending with a state denoted by , such that for each , there exists a transition such that . The length of , denoted by , is the number of occurrences of actions in . If is infinite, then . We denote by the state and by the action , if and . Denote by the set of execution fragments of and by the set of finite execution fragments of . An execution fragment is a prefix of an execution fragment , denoted by , if the sequence is a prefix of the sequence . The trace of , denoted by , is the sub-sequence of external actions of ; we denote by the empty trace, and we extend to actions by defining if and if .\nA (randomized) scheduler for PLTS is a function\nsuch that for every execution fragment and each transition in the support of , then we have . Or equivalently, . So, there are transitions and real numbers such that and schedules with probability .\nA scheduler and a state induce a probability measure over execution fragments as follows. The basic measure events are the cones of finite execution fragments, where the cone of , denoted by , is the set . The probability measure of a cone is defined recursively as follows:\nAn execution fragment is called a -execution fragment if can be generated by following \u2019s decisions. 
For example, if is an infinite execution fragment, then is a -execution fragment if for each , \u2019s decision for prefix is a distribution where .\nFor convenience, in a similar way to [22 ###reference_b22###], we define computations of a PLTS as transition trees obtained by unfolding the PLTS from the root, resolving the nondeterministic choices by schedulers. A computation thus can be viewed as a purely probabilistic labelled Markov chain.\nA computation of a PLTS is an infinite subtree of the tree obtained by partially unfolding the PLTS. In a computation, every nondeterministic choice has been resolved by a scheduler . We call such a computation a -computation.\nIntuitively, an internal (combined) weak transition is formed by an arbitrarily long sequence of internal transitions, and an external weak transition is formed by an external transition preceded and followed by arbitrarily long sequences of internal transitions. To define the (internal) weak transition, we need to define first the (external) weak transition as follows:\nGiven a PLTS , we say that there is a weak combined transition from to labelled by ,444Note that . denoted by , if there exists a scheduler such that the following holds for the induced probabilistic execution fragment :\n;\nfor each , if then ;\nfor each state , .\nIn particular, every sequence of transitions has an associated weak sequence of labels Weak, obtained by removing the labels of -transitions.\nTransitions from states to distributions as above are one way to the definition of bisimulation, from which this paper follows.\nGiven a PLTS , an equivalence relation on is a weak bisimulation if, for each pair of states such that , if for some probability distribution , then there exists a probability distribution such that and .\nIn the sequel, we refer to the condition \u201cthere exists such that and \u201d as the step condition of the weak bisimulation.\nFinally, we present the following definition:\nLet , where on is a weak bisimulation. Let be a finite execution fragment from , and a finite execution fragment from , such that for all , and for . Then, we say that the finite execution fragments and are equivalent. A similar definition applies to two infinite execution fragments."
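The recursive cone measure above can be made concrete with a minimal Python sketch (an illustration under stated simplifications, not the paper's formalism): a transition is a triple (source, action, targets), where targets is a tuple of (state, probability) pairs, and the scheduler maps the sequence of visited states to a distribution over enabled transitions. All concrete names and values are ours.

```python
t1 = ("s", "tau", (("s1", 0.5), ("s2", 0.5)))
t2 = ("s1", "a", (("t", 1.0),))
transitions = [t1, t2]

def sigma(prefix):
    # A trivial scheduler: choose uniformly among transitions enabled
    # from the last state of the fragment.
    enabled = [tr for tr in transitions if tr[0] == prefix[-1]]
    return {tr: 1.0 / len(enabled) for tr in enabled}

def cone_probability(states, chosen):
    # Pr(C_alpha): multiply, step by step, the scheduler's probability of
    # choosing the transition and the target distribution's probability of
    # the next state, mirroring the recursive definition of the cone measure.
    prob, prefix = 1.0, [states[0]]
    for tr, nxt in zip(chosen, states[1:]):
        prob *= sigma(prefix).get(tr, 0.0) * dict(tr[2]).get(nxt, 0.0)
        prefix.append(nxt)
    return prob

print(cone_probability(["s", "s1", "t"], [t1, t2]))   # 1.0*0.5 * 1.0*1.0 = 0.5
```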
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "The Semantics of -PCTL\u2217 under a PLTS",
|
| 99 |
+
"text": "Just as the logic of PCTL\u2217 is an extension of the logic of PCTL, the logic of -PCTL\u2217 is an extension of the logic of -PCTL, whose syntax can be defined as follows.\nLet be a fixed set of atomic propositions. Formally, the syntax of -probabilistic computational tree logic -PCTL\u2217 is defined by\nwhere and denote the state formula and path formula, respectively; and represents path formulas that depend on the set of states that appear infinitely often in a path (we call them infinitary path formulas); is an atomic proposition, , and is a constant with .\nThe basic semantic relation is of the form for state formulas and for path formulas, where is an infinite execution fragment and is a state, is a state formula, and is a path formula. The state formula is true at a state if for all schedulers , the measure of the set of paths (i.e., execution fragments) that satisfy is in the relation to . More precisely, let be the measure induced on the set of paths starting from under all schedulers , then\nFor each , is an atomic proposition, and the path formula is true of an execution fragment whose first weak label is . Formally,\nLike CTL\u2217-equivalence given in [3 ###reference_b3###] (see Definition 7.17, [3 ###reference_b3###]), we can define the notion of -PCTL\u2217-equivalence in the following.\nLet be a PLTS induced by an -pushdown automaton, then states and in are -PCTL\u2217-equivalent, denoted , if for all -PCTL\u2217 state formula ,"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.3",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Soundness",
|
| 105 |
+
"text": "We follow the paradigm given in the work [22 ###reference_b22###] by Desharnais, Gupta, Jagadeesan, and Panangaden to prove the soundness, which is more intuitive.\nThe following lemma is a standard use of the co-inductive definition of weak bisimulation, and its proof can be done on similar lines to Lemma 5.2 ###reference_lemma2###.\nLet , where on is a weak bisimulation. Then for any execution fragment from , there is an execution fragment with equal trace: , from such that, for all , and .\nLet be weak bisimilar states. Let be a scheduler, and let be the induced -computation from . Then, there is a scheduler such that every finite execution fragment in is equivalent to a finite execution fragment in the induced by -computation from such that\nwhere (resp. ) is the cone of (resp. ).\nThe proof is a routine induction. has countably many transitions. Consider any ordering of these transitions such that a transition occurs after all the transitions leading up to it. We construct by mimicking transitions in the order prescribed by . Our induction hypothesis is that at the \u2019th stage, every finite execution fragment from in the subtree induced by the first transitions (as per ) is an equivalence of the finite execution fragment from in -computation from with the same probability.\nLet the \u2019st transition be a transition at . Let be the probability of the path from to in . Let be the set of leaves in such that\n\nThe finite execution fragment from to in is an equivalence of the finite execution fragment from to in (see Definition 5.5 ###reference_definition5###).\nBy the induction hypothesis, . There are two cases based on the kind of the st transition.\nThe st transition is a combined internal weak transition . Since , by Definition 5.4 ###reference_definition4###, this transition can be matched by a combined weak transition such that . So, there are states , such that and\nThe st transition is a combined external weak transition . By Definition 5.4 ###reference_definition4###, since , there is a combined external weak transition such that . So, there are states , such that and\nIn either case, let be the extension of by these matching transitions. So, the lemma follows.\nIf , where on is a weak bisimulation, then for all -PCTL\u2217 state formulas , if and only if .\nWe proceed by case analysis. Cases such as true, , , and are straightforward, thus we omit them here.\nThe only one left is the formula .\nCase : Suppose that . Every scheduler induces a computation from . For every execution fragment from , by Lemma 5.2 ###reference_lemma2###, there is an equivalent execution fragment from that attributes the same measure that satisfies . Hence, ."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.4",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "Completeness",
|
| 111 |
+
"text": "Although the notions of \u201cweak combined transition\u201d and \u201cweak bisimulation\u201d in this paper are different from [4 ###reference_b4###], the techniques used to show completeness of weak bisimulation can be adapted to fit our goal.\nThe completeness somewhat can be proved in a similar way to that of Theorem 10.67 in [3 ###reference_b3###] (see pp. 813\u2013816). To proceed to completeness, first note that the state formulas\nwhere , is a valid -PCTL\u2217 state formula. Also note that is shorthand for . Thus\nBut, if and only if there is a scheduler such that the computation induced by assigns probability to the states satisfying reachable on a weak transition.\nLet be the language containing all state formulas generated by syntax of -PCTL\u2217. Let be the equivalence relation on states induced by , i.e.,\nFurther, we define the language generated by the following syntax:\nLet be the equivalence relation on distributions induced by , i.e.,\nwhere the semantics of the relation\nis given by\nand . Namely,\nThen, the following properties of \u201ccharacteristic formulas\u201d are useful tools for us to establish that the equivalence relation is a weak bisimulation.\nFor each there exists with .\nif and only if .\nFor each there exists with .\nFor original proof of this lemma, we refer to [4 ###reference_b4###]. For self-contained, we modify them to look as follows:\nTo show item (a), we first observe that for all equivalence classes , with , there must be a state formula that distinguishes the states in from the states in . Because contains negation, we can assume that for all and for all . For , define the following (see also [3 ###reference_b3###], p. 814):\nWith , also is finite (see footnote 3 ###reference_te3### for is finite). So, . Moreover, iff . That is, , which proves statement (a).\nWe proceed to show item (b).\nThe \u201conly if\u201d part of (b): Assume that . We need to show . Suppose now that there exists such that . By (a), but , which is a contradiction.\nThe \u201cif\u201d part of (b): Assume that ; we need to show . Notice that every formula in can be written in CNF by , where each literal has the form or . Therefore, it suffices to prove that for all . But this is an immediate consequence of after observing that is a union of equivalence classes.\nNow, we are going to prove item (c).\nLet . By item (b), for each , there exists such that for all distributions . Hence, for .\nLet us consider the distribution formula\nThen, iff for all iff . Hence, .\nIf two states satisfy the same formulas of , i.e., , then and are bisimilar.\nClearly, is an equivalence relation. And, in fact, is a weak bisimulation. To see this, suppose that and consider first the case . Let be the characteristic formulas of the -equivalence class of , where denotes the equivalence class in related to the distribution .\nFrom the above arguments, we know that\nis shorthand for where is for some fixed rational. Then,\nif and only if there is a scheduler and a distribution such that under scheduler we have that with probability the following transition is made\nand the distribution satisfies\nBecause , we also have and hence there is a scheduler and distribution such that under we have that with probability the weak transition\nis made and the distribution satisfies\nTherefore, by Lemma 5.4 ###reference_lemma4###, we conclude that\nThus, this completes the proof.\nWith the above in hand, we are naturally at the right point to give the proof of Theorem 4 ###reference_orem4###:"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "5.5",
|
| 115 |
+
"parent_section_id": "5",
|
| 116 |
+
"section_name": "Proof of Theorem 4",
|
| 117 |
+
"text": "Clearly, Theorem 4 ###reference_orem4### follows from Theorem 5.3 ###reference_lemma3### and Theorem 5.5 ###reference_lemma5###.\nUnlike the case of probabilistic bisimulation for Markov chains, in which states and are probabilistic bisimulation, they fulfill the same PCTL formulae (also fulfill the same PCTL\u2217 formulae); see Theorem 10.67 in [3 ###reference_b3###]. In this paper, we are unable to manage to show a result of -PCTL logical characterization of weak bisimulation, i.e., our result only holds for -PCTL\u2217, since the formulas can not be constructed by the -PCTL syntax."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "Conclusions",
|
| 123 |
+
"text": "To summarize, we have defined the notion of probabilistic -pushdown automata for the first time in this paper and studied the model-checking question of it against -PCTL, showing that it is undecidable for model-checking -pBPA against -PCTL, which has some corollaries such as Corollary 2 ###reference_orem2### and Corollary 3 ###reference_orem3###.\nWe have presented the -PCTL\u2217 logical characterization of weak bisimulation for probabilistic -pushdown automata. As we know, the notion of weak bisimulation relation is so important and interesting in concurrency theory [40 ###reference_b40###, 23 ###reference_b23###, 22 ###reference_b22###, 3 ###reference_b3###], we showed in this paper that the weak bisimulation is sound and complete for -PCTL\u2217. Our models are probabilistic labelled transition systems induced by probabilistic -pushdown automata. On the other hand, we are unable to manage to show an outcome of -PCTL logical characterization of weak bisimulation, since the formulas can not be constructed by the -PCTL syntax.\nThere are too many interesting questions we did not touch on in this paper. For example, the following are possible directions for future study:\nThe readers interested in the theory of probabilistic -pushdown systems can try to relocate the problems of probability -automata investigated in [5 ###reference_b5###] to probabilistic -pushdown automata and further to obtain some interesting conclusions;\nWe also do not know whether the logic of -PCTL\u2217 is expressively equivalent to probabilistic -pushdown automaton, which deserves further study.\nThe readers interested in the theory of quantum -pushdown automata can try to relocate the problems of probability -automata investigated in [5 ###reference_b5###] to quantum -pushdown automata and further to obtain some interesting conclusions; Furthermore, the equivalence problem of quantum -pushdown automata, like that of quantum measure-many one-way quantum finite automata studied in [34 ###reference_b34###], is also very interesting and important.\nFor the weak bisimulation on probabilistic labelled transition system induced by probabilistic -pushdown automaton, one can study axiomatization for it; note that similar studies on other models have already been conducted; see, for example, [9 ###reference_b9###].\nLastly, all logics discussed in the paper, when compared with the logics presented in [10 ###reference_b10###, 41 ###reference_b41###], are unable to describe semantics of concurrent programs that share access to mutable data. Then natural questions arise: How to adapt the logics discussed in the paper to be able to describe properties of concurrent programs, and the model-checking question for the adapted logic (which is able to describe properties of concurrent programs that are able to handle race conditions) is also interesting."
|
| 124 |
+
}
|
| 125 |
+
],
|
| 126 |
+
"appendix": [],
|
| 127 |
+
"tables": {},
|
| 128 |
+
"image_paths": {
|
| 129 |
+
"1": {
|
| 130 |
+
"figure_path": "2209.10517v13_figure_1.png",
|
| 131 |
+
"caption": "Figure 1: Until operator",
|
| 132 |
+
"url": "http://arxiv.org/html/2209.10517v13/x1.png"
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
"validation": true,
|
| 136 |
+
"references": [
|
| 137 |
+
{
|
| 138 |
+
"1": {
|
| 139 |
+
"title": "Probabilistic bisimulation.",
|
| 140 |
+
"author": "Anonymous authors.",
|
| 141 |
+
"venue": "Avaliable at https://en.wikipedia.org/wiki/Probabilistic_bisimulation.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"2": {
|
| 147 |
+
"title": "Equivalence relation.",
|
| 148 |
+
"author": "Anonymous authors.",
|
| 149 |
+
"venue": "Avaliable at https://en.wikipedia.org/wiki/Equivalence_relation.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"3": {
|
| 155 |
+
"title": "Principles of Model Checking.",
|
| 156 |
+
"author": "C. Baier and J. P. Katoen.",
|
| 157 |
+
"venue": "MIT Press, 2008.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"4": {
|
| 163 |
+
"title": "On the probabilistic bisimulation spectrum with silent moves.",
|
| 164 |
+
"author": "C. Baier, Pedro R. D\u2019Argenio and Holger Hermanns.",
|
| 165 |
+
"venue": "Acta Informatica 57, 465\u2013512 (2020). https://doi.org/10.1007/s00236-020-00379-2.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"5": {
|
| 171 |
+
"title": "Probabilistic -Automata.",
|
| 172 |
+
"author": "C. Baier, M. Gr\u00f6sser and N. Bertrand.",
|
| 173 |
+
"venue": "Journal of the ACM 59, 1, Article 1 (February 2012), 52 pages. https://doi.org/10.1145/2108242.2108243.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"6": {
|
| 179 |
+
"title": "Weak bisimulation for fully probabilistic processes.",
|
| 180 |
+
"author": "C. Baier and H. Hermanns.",
|
| 181 |
+
"venue": "Proceedings of the 1997 International Conference on Computer Aided Verification, Lecture Notes in Computer Science, vol. 1254, Springer\u2013Verlag, 1997. https://doi.org/10.1007/3-540-63166-6_14.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"7": {
|
| 187 |
+
"title": "Verification of probabilistic recursive sequential programs, Ph.D. thesis.",
|
| 188 |
+
"author": "T. Br\u00e1zdil.",
|
| 189 |
+
"venue": "Masaryk University, Faculty of Informatics, 2007.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"8": {
|
| 195 |
+
"title": "Branching-time model-checking of probabilistic pushdown automata.",
|
| 196 |
+
"author": "T. Br\u00e1zdil, V. Bro\u017eek, V. Forejt and A. Ku\u010dera.",
|
| 197 |
+
"venue": "Journal of Computer and System Sciences 80 (2014) 139 \u2013 156. https://doi.org/10.1016/j.jcss.2013.07.001.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"9": {
|
| 203 |
+
"title": "Axiomatizations for Probabilistic Bisimulation.",
|
| 204 |
+
"author": "E. Bandini and R. Segala.",
|
| 205 |
+
"venue": "In: Orejas, F., Spirakis, P.G., van Leeuwen, J. (eds) Automata, Languages and Programming. ICALP 2001, LNCS, vol 2076, pp. 370\u2013381, 2001. https://doi.org/10.1007/3-540-48224-5_31.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"10": {
|
| 211 |
+
"title": "A semantics for concurrent separation logic.",
|
| 212 |
+
"author": "Stephen Brookes.",
|
| 213 |
+
"venue": "Theoretical Computer Science 375 (2007) 227\u2013270. https://doi.org/10.1016/j.tcs.2006.12.034.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"11": {
|
| 219 |
+
"title": "Model Checking.",
|
| 220 |
+
"author": "E. M. Clarke, O. Grumberg and D. A. Peled.",
|
| 221 |
+
"venue": "MIT Press, 1999.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"12": {
|
| 227 |
+
"title": "An unsolvable problem of elementary number theory.",
|
| 228 |
+
"author": "A. Church.",
|
| 229 |
+
"venue": "American journal of mathematics, vol. 58 (1936), pp. 345 \u2013 363.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"13": {
|
| 235 |
+
"title": "A note on the Entscheidungsproblem.",
|
| 236 |
+
"author": "A. Church.",
|
| 237 |
+
"venue": "The Journal of Symbolic Logic, Vol. 1, No. 1. (Mar., 1936), pp. 40 \u2013 41.",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"14": {
|
| 243 |
+
"title": "Model-Checking -Regular Properties of Interval Markov Chains.",
|
| 244 |
+
"author": "K. Chatterjee, K. Sen and Thomas A. Henzinger.",
|
| 245 |
+
"venue": "FOSSACS 2008, LNCS 4962, pp. 302\u2013317, 2008. https://doi.org/10.1007/978-3-540-78499-9_22.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"15": {
|
| 251 |
+
"title": "What is decidable about partially observable Markov decision processes with -regular objectives.",
|
| 252 |
+
"author": "K. Chatterjee, M. Chmel\u00edk and M. Tracol.",
|
| 253 |
+
"venue": "Journal of Computer and System Sciences 82 (2016) 878\u2013911. https://doi.org/10.1016/j.jcss.2016.02.009.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"16": {
|
| 259 |
+
"title": "Theory of -Languages I: Characterizations of -Context-Free Languages.",
|
| 260 |
+
"author": "Rina S. Cohen and Arie Y. Gold.",
|
| 261 |
+
"venue": "Journal of Computer and System Sciences 15, 169\u2013184 (1977). https://doi.org/10.1016/S0022-0000(77)80004-4.",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"17": {
|
| 267 |
+
"title": "The complexity of probabilistic verification.",
|
| 268 |
+
"author": "C. Courcoubetis and M. Yannakakis.",
|
| 269 |
+
"venue": "Journal of the ACM, Vol. 42, No. 4, July 1995, pp. 857\u2013907. https://doi.org/10.1145/210332.210339.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"18": {
|
| 275 |
+
"title": "Logical characterizations of behavioral relations on transition systems of probability distribution.",
|
| 276 |
+
"author": "S. Crafa and F. Ranzato.",
|
| 277 |
+
"venue": "ACM Transactions on Computational Logic, Volume 16, Issue 1, Article No.: 2, Pages 1\u201324. https://doi.org/10.1145/2641566.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"19": {
|
| 283 |
+
"title": "Decision algorithms for probabilistic bisimulation.",
|
| 284 |
+
"author": "S. Cattani and R. Segals.",
|
| 285 |
+
"venue": "In: L. Brim, M. Kretinsky, A. Kucera, P. Jancar (eds) CONCUR 2002, Lecture Notes in Computer Science, Vol. 2421, Springer, Berlin, Heidelberg, 2002, pp. 371\u2013385. https://doi.org/10.1007/3-540-45694-5_25.",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"20": {
|
| 291 |
+
"title": "Formal Verification of Probabilistic Systems.",
|
| 292 |
+
"author": "L. de Alfaro.",
|
| 293 |
+
"venue": "Ph.D. Thesis, Technical Report STAN\u2013CS\u2013TR\u201398\u20131601, Stanford University, 1997.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"21": {
|
| 299 |
+
"title": "Logic for -pushdown automata.",
|
| 300 |
+
"author": "M. Droste, S. Dziadek and W. Kuich.",
|
| 301 |
+
"venue": "Information and Computation 282 (2022) 104659. https://doi.org/10.1016/j.ic.2020.104659.",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"22": {
|
| 307 |
+
"title": "Weak bisimulation is sound and complete for pCTL\u2217.",
|
| 308 |
+
"author": "J. Desharnais, V. Gupta, R. Jagadeesan and P. Panangaden.",
|
| 309 |
+
"venue": "Information and Computation 208 (2010) 203 \u2013 219. https://doi.org/10.1016/j.ic.2009.11.002.",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"23": {
|
| 315 |
+
"title": "Bisimulation for Labelled Markov Processes.",
|
| 316 |
+
"author": "J. Desharnais, A. Edalat and P. Panangaden.",
|
| 317 |
+
"venue": "Information and Computation 179 (2002) 163\u2013193. https://doi.org/10.1006/inco.2001.2962.",
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"24": {
|
| 323 |
+
"title": "A Logical Characterization of Bisimulation for Labeled Markov Processes.",
|
| 324 |
+
"author": "J. Desharnais, A. Edalat and P. Panangaden.",
|
| 325 |
+
"venue": "In: Proceedings of th Annual IEEE Symposium on Logic in Computer Science, 1998. https://doi.org/10.1109/LICS.1998.705681.",
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"25": {
|
| 331 |
+
"title": "Private communication.",
|
| 332 |
+
"author": "Jos\u00e9e Desharnais.",
|
| 333 |
+
"venue": "July 2023.",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"26": {
|
| 339 |
+
"title": "A Local Algorithm for Checking Probabilistic Bisimilarity.",
|
| 340 |
+
"author": "Yuxin Deng and Wenjie Du.",
|
| 341 |
+
"venue": "In: Proceedings of the th International Conference on Frontier of Computer Science and Technology, IEEE Computer Society, 2009, pp. 401\u2013407. https://doi.org/10.1109/FCST.2009.37.",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"27": {
|
| 347 |
+
"title": "Model-checking probabilistic pushdown automata.",
|
| 348 |
+
"author": "J. Esparza, A. Ku\u010dera and R. Mayr,",
|
| 349 |
+
"venue": "Logical Methods in Computer Science, Vol. 2 (1:2) 2006, pp. 1 \u2013 31. https://doi.org/10.2168/LMCS-2(1:2)2006.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"28": {
|
| 355 |
+
"title": "Model checking LTL with regular valuations for pushdown systems.",
|
| 356 |
+
"author": "J. Esparza, A. Ku\u010dera and S. Schwoon,",
|
| 357 |
+
"venue": "Information and Computation 186, 2003, pp. 355 \u2013 376. https://doi.org/10.1016/S0890-5401(03)00139-1.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"29": {
|
| 363 |
+
"title": "\u201cSometimes\u201d and \u201cNot Never\u201d Revisited: On Branching versus Linear Time Temporal Logic.",
|
| 364 |
+
"author": "E. Allen Emerson and Joseph Y. Halpern.",
|
| 365 |
+
"venue": "Journal of the ACM, Vol. 33, No. 1, January 1986, pp. 151\u2013178. https://doi.org/10.1145/4904.4999.",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"30": {
|
| 371 |
+
"title": "Deciding probabilistic automata weak bisimulation: theory and practice.",
|
| 372 |
+
"author": "Luis Maria Ferrer Fioriti, V. Hashemi, H. Hermanns and A. Turrini.",
|
| 373 |
+
"venue": "Formal Aspects of Computing (2016) 28: 109\u2013143. https://doi.org/10.1007/s00165-016-0356-4.",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"31": {
|
| 379 |
+
"title": "Introduction to Automata Theory, Languages, and Computation.",
|
| 380 |
+
"author": "J. E. Hopcroft, R. Motwani and J. D. Ullman.",
|
| 381 |
+
"venue": "3rd ed., Addison\u2013Wesley, 2007.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"32": {
|
| 387 |
+
"title": "A logic for reasoning about time and reliability.",
|
| 388 |
+
"author": "H. Hansson and B. Jonsson.",
|
| 389 |
+
"venue": "Formal Aspects of Computing 6 (1994) 512 \u2013 535. https://doi.org/10.1007/BF01211866.",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"33": {
|
| 395 |
+
"title": "A new polynomial-time algorithm for linear programming.",
|
| 396 |
+
"author": "N. Karmarkar.",
|
| 397 |
+
"venue": "Combinatorica 4 (4) (1984) 273\u2013395. https://doi.org/10.1007/BF02579150.",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"34": {
|
| 403 |
+
"title": "Another approach to the equivalence of measure-many one-way quantum finite automata and its application.",
|
| 404 |
+
"author": "Tianrong Lin.",
|
| 405 |
+
"venue": "Journal of Computer and System Sciences 78 (2012) 807\u2013821. https://doi.org/10.1016/j.jcss.2012.01.004.",
|
| 406 |
+
"url": null
|
| 407 |
+
}
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"35": {
|
| 411 |
+
"title": "Probability Theory I (th edition).",
|
| 412 |
+
"author": "M. Lo\u00e8ve.",
|
| 413 |
+
"venue": "Spring-Verlag, New York, 1978.",
|
| 414 |
+
"url": null
|
| 415 |
+
}
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"36": {
|
| 419 |
+
"title": "Probability Theory II (th edition).",
|
| 420 |
+
"author": "M. Lo\u00e8ve.",
|
| 421 |
+
"venue": "Spring-Verlag, New York, 1978.",
|
| 422 |
+
"url": null
|
| 423 |
+
}
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"37": {
|
| 427 |
+
"title": "Branching-time logics with path relativisation.",
|
| 428 |
+
"author": "M. Latte and M. Lange.",
|
| 429 |
+
"venue": "Journal of Computer and System Sciences 80 (2014) 375\u2013389. https://doi.org/10.1016/j.jcss.2013.05.005.",
|
| 430 |
+
"url": null
|
| 431 |
+
}
|
| 432 |
+
},
|
| 433 |
+
{
|
| 434 |
+
"38": {
|
| 435 |
+
"title": "Bisimulation through probabilistic testing.",
|
| 436 |
+
"author": "K. G. Larsen and A. Skou.",
|
| 437 |
+
"venue": "Information and Computation 94 (1991) 1\u201328. https://doi.org/10.1016/0890-5401(91)90030-6.",
|
| 438 |
+
"url": null
|
| 439 |
+
}
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"39": {
|
| 443 |
+
"title": "Model-Checking PCTL Properties of Stateless Probabilistic Pushdown Systems.",
|
| 444 |
+
"author": "D. Lin and T. Lin.",
|
| 445 |
+
"venue": "arXiv: 1405.4806, 2024. https://doi.org/10.48550/arXiv.1405.4806.",
|
| 446 |
+
"url": null
|
| 447 |
+
}
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"40": {
|
| 451 |
+
"title": "Communication and Concurrency.",
|
| 452 |
+
"author": "R. Milner.",
|
| 453 |
+
"venue": "Prentice\u2013Hall International, Englewood Cliffs, 1989.",
|
| 454 |
+
"url": null
|
| 455 |
+
}
|
| 456 |
+
},
|
| 457 |
+
{
|
| 458 |
+
"41": {
|
| 459 |
+
"title": "Resources, concurrency, and local reasoning.",
|
| 460 |
+
"author": "Peter W. O\u2019Hearn.",
|
| 461 |
+
"venue": "Theoretical Computer Science 375 (2007) 271\u2013307. https://doi.org/10.1016/j.tcs.2006.12.035.",
|
| 462 |
+
"url": null
|
| 463 |
+
}
|
| 464 |
+
},
|
| 465 |
+
{
|
| 466 |
+
"42": {
|
| 467 |
+
"title": "A variant of a recursively unsolvable problem.",
|
| 468 |
+
"author": "E. L. Post.",
|
| 469 |
+
"venue": "Bulletin of the American Mathematical Society 52, 1946, pp. 264 \u2013 268.",
|
| 470 |
+
"url": null
|
| 471 |
+
}
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"43": {
|
| 475 |
+
"title": "Private communication.",
|
| 476 |
+
"author": "Prakash Panangaden.",
|
| 477 |
+
"venue": "June 2023.",
|
| 478 |
+
"url": null
|
| 479 |
+
}
|
| 480 |
+
},
|
| 481 |
+
{
|
| 482 |
+
"44": {
|
| 483 |
+
"title": "Probability,",
|
| 484 |
+
"author": "A. N. Shiryaev.",
|
| 485 |
+
"venue": "( Edition). Springer-Verlag, New York, 1995.",
|
| 486 |
+
"url": null
|
| 487 |
+
}
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"45": {
|
| 491 |
+
"title": "Handbook of Formal Languages.",
|
| 492 |
+
"author": "L. Staiger.",
|
| 493 |
+
"venue": "vol. 3: Beyond Words, Chapter -Languages, Springer, 1997. pp. 339\u2013387.",
|
| 494 |
+
"url": null
|
| 495 |
+
}
|
| 496 |
+
},
|
| 497 |
+
{
|
| 498 |
+
"46": {
|
| 499 |
+
"title": "Automata on Infinite Objects.",
|
| 500 |
+
"author": "W. Thomas.",
|
| 501 |
+
"venue": "In: J. van Leeuwen, ed., Handbook of Theoretical Computer Science, Vol. B (Elsevier, 1990) 133\u2013191.",
|
| 502 |
+
"url": null
|
| 503 |
+
}
|
| 504 |
+
},
|
| 505 |
+
{
|
| 506 |
+
"47": {
|
| 507 |
+
"title": "Deciding Probabilistic Automata Weak Bisimulation in Polynomial Time.",
|
| 508 |
+
"author": "Holger Hermanns, Andrea Turrini.",
|
| 509 |
+
"venue": "arXiv:1205.0376, 2012. https://doi.org/10.48550/arXiv.1205.0376.",
|
| 510 |
+
"url": null
|
| 511 |
+
}
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"48": {
|
| 515 |
+
"title": "Polynomial time decision algorithms for probabilistic automata.",
|
| 516 |
+
"author": "A. Turrini and H. Hermanns.",
|
| 517 |
+
"venue": "Information and Computation 244 (2015) 134\u2013171. https://doi.org/10.1016/j.ic.2015.07.004.",
|
| 518 |
+
"url": null
|
| 519 |
+
}
|
| 520 |
+
},
|
| 521 |
+
{
|
| 522 |
+
"49": {
|
| 523 |
+
"title": "Private communication.",
|
| 524 |
+
"author": "Andrea Turrini.",
|
| 525 |
+
"venue": "June 2025.",
|
| 526 |
+
"url": null
|
| 527 |
+
}
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"50": {
|
| 531 |
+
"title": "On computable numbers with an application to the entscheidnungsproblem.",
|
| 532 |
+
"author": "Alan M. Turing.",
|
| 533 |
+
"venue": "Proceedings of the London Mathematical Society, Volume s2-42, Issue 1, 1937, Pages 230 \u2013 265. Reprint available at https://doi.org/10.1016/0066-4138(60)90045-8.",
|
| 534 |
+
"url": null
|
| 535 |
+
}
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"51": {
|
| 539 |
+
"title": "Automatic verification of probabilistic concurrent finite-state programs.",
|
| 540 |
+
"author": "M. Y. Vardi.",
|
| 541 |
+
"venue": "In: Proceedings of the th IEEE Symposium on Foundations of Computer Science, 1985, pp. 327\u2013338. https://doi.org/10.1109/SFCS.1985.12.",
|
| 542 |
+
"url": null
|
| 543 |
+
}
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"52": {
|
| 547 |
+
"title": "Algorithmic and logical characterizations of bisimulations for non-deterministic fuzzy transition systems.",
|
| 548 |
+
"author": "H. Wu, Y. Chen, T. Bu and Y. Deng.",
|
| 549 |
+
"venue": "Fuzzy Sets and Systems, 333 (2018) 106\u2013123. https://doi.org/10.1016/j.fss.2017.02.008.",
|
| 550 |
+
"url": null
|
| 551 |
+
}
|
| 552 |
+
}
|
| 553 |
+
],
|
| 554 |
+
"url": "http://arxiv.org/html/2209.10517v13"
|
| 555 |
+
}
|
20240721/2210.12777v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2212.00250v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2212.04687v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2301.11290v3.json
ADDED
|
@@ -0,0 +1,301 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Graph Encoder Ensemble for Simultaneous Vertex Embedding and Community Detection",
|
| 3 |
+
"abstract": "In this paper, we introduce a novel and computationally efficient method for vertex embedding, community detection, and community size determination. Our approach leverages a normalized one-hot graph encoder and a rank-based cluster size measure. Through extensive simulations, we demonstrate the excellent numerical performance of our proposed graph encoder ensemble algorithm.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "1. Introduction",
|
| 9 |
+
"text": "Graph data represents pairwise relationships between vertices through a collection of vertices and edges. Typically, a graph (or network) is represented by an adjacency matrix of size , where denotes the edge weight between the th and th vertices. Alternatively, the graph can be stored in an edgelist of size , with the first two columns indicating the vertex indices of each edge and the last column representing the edge weight.\nCommunity detection, also known as vertex clustering or graph partitioning, is a fundamental problem in graph analysis (Girvan and Newman, 2002 ###reference_b9###; Newman, 2004 ###reference_b14###; Fortunato, 2010 ###reference_b7###; Karrer and Newman, 2011 ###reference_b11###). The primary objective is to identify natural groups of vertices where intra-group connections are stronger than inter-group connections. Over the years, various approaches have been proposed, including modularity-based methods (Blondel et al., 2008 ###reference_b3###; Traag\net al., 2019 ###reference_b23###), spectral-based methods (Rohe\net al., 2011 ###reference_b16###; Sussman\net al., 2012 ###reference_b22###), and likelihood-based techniques (Gao\net al., 2018 ###reference_b8###; Abbe, 2018 ###reference_b2###), among others.\nSpectral-based and likelihood-based methods are extensively studied in the statistics community, but they tend to be computationally slow for large graphs. On the other hand, modularity-based methods are faster and widely used in practice, but they lack theoretical investigations and only provide community labels without vertex embedding. Moreover, determining the appropriate community size poses a challenge for any method and is often addressed in an ad-hoc manner or assumed to be known. Therefore, a desirable approach is to develop a method that can achieve community detection, vertex representation, and community size determination under a unified framework.\nIn this paper, we propose a graph encoder ensemble algorithm that simultaneously fulfills all these objectives. Our algorithm leverages a normalized one-hot graph encoder (Shen\net al., 2023c ###reference_b20###), ensemble learning (Maclin and Opitz, 1999 ###reference_b13###; Breiman, 2001 ###reference_b4###), k-means clustering (Lloyd, 1982 ###reference_b12###; Forgy, 1965 ###reference_b6###), and a novel rank-based cluster size measure called the minimal rank index. The proposed algorithm exhibits linear running time and demonstrates excellent numerical performance. The code for the algorithm is available on GitHub111https://github.com/cshen6/GraphEmd ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "2. Methods",
|
| 15 |
+
"text": "We begin by introducing the one-hot graph encoder embedding from (Shen\net al., 2023c ###reference_b20###), known for its computational efficiency and theoretical guarantees under random graph models. This embedding forms the foundation of our proposed ensemble method, outlined in Algorithm 1 ###reference_###. The ensemble algorithm incorporates crucial enhancements, including normalization, the minimal rank index, and ensemble embedding, which are elaborated in the subsequent subsections."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "2.1. Prerequisite",
|
| 21 |
+
"text": "Given the graph adjacency matrix and a label vector , we define as the number of observations per class, where\nfor . We construct the one-hot encoding matrix on , then normalize it by the number of observations per-class. Specifically, for each vertex , we set\nif and only if , and otherwise. The graph encoder embedding is then obtained by performing a simple matrix multiplication:\nEach row represents a -dimensional Euclidean representation of vertex . The computational advantage of the graph encoder embedding lies in the matrix multiplications, which can be efficiently implemented by iterating over the edge list only once, without the need for the adjacency matrix (Shen\net al., 2023c ###reference_b20###). In Algorithm 1 ###reference_###, we denote the above steps as"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "2.2. Main Algorithm",
|
| 27 |
+
"text": "The proposed ensemble method is described in detail in Algorithm 1 ###reference_###. It can be applied to binary or weighted graphs, as well as directed or undirected graphs. Throughout this paper, we set the number of random replicates , the maximum number of iterations , and the clustering range is determined based on the specific experiment.\nIn the pseudo-code, the normalization step is represented by , which normalizes each vertex representation to have unit norm (see Section 2.3 ###reference_### for more details). Additionally, given an embedding and a label vector , the minimal rank index is denoted as , which measures the quality of clustering with a lower value indicating better clustering (details in Section 2.4 ###reference_###). The k-means clustering step is denoted as , and the adjusted Rand index is denoted as , which measures the similarity between two label vectors of the same size. The ARI is a popular matching metric that ranges from to , with a larger positive value indicating better match quality and a value of representing a perfect match (Rand, 1971 ###reference_b15###)."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "2.3. Why Normalization",
"text": "The normalization step in Algorithm 1 ###reference_### scales each vertex embedding to unit norm. Specifically, for each vertex ,\nif . The normalization step plays a crucial role in achieving improved clustering results, as demonstrated in Figure 1 ###reference_### using a sparse random graph model with two communities. The normalized embedding is represented on a unit sphere, effectively capturing the connectivity information while mitigating the influence of vertex degrees. In contrast, the un-normalized embedding is significantly affected by the original vertex degrees, resulting in vertices from the same community being widely dispersed. This distinction bears resemblance to the two-truth phenomenon observed in graph adjacency and graph Laplacian, where the Laplacian spectral embedding (LSE) can be seen as a degree-normalized version of the adjacency spectral embedding (ASE). The LSE typically performs better on sparse graphs. Further numerical evaluations on the normalization effect can be found in Section 3.2 ###reference_### and Table 1 ###reference_###.\n###figure_1###"
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "2.4. The Minimal Rank Index",
"text": "We introduce a new rank-based measure called the minimal rank index (MRI) to assess the quality of clustering. This measure plays a crucial role in Algorithm 1 ###reference_### as it enables the comparison of multiple embeddings generated from different initializations and community sizes.\nGiven the cluster index of vertex , the Euclidean distance function , and the mean of the th cluster denoted as\nthe minimal rank index is computed as:\nThe MRI measures how often the vertex embedding is not closest to its corresponding cluster mean. A smaller MRI value indicates better clustering quality, with MRI equal to indicating that every vertex is closest to its cluster mean. In the context of k-means clustering, MRI is non-zero when the k-means algorithm fails to converge.\nIn comparison to common cluster size measures such as Silhouette Score, Davies-Bouldin index, Variance Ratio Criterion, and Gap criterion (Rousseeuw, 1987 ###reference_b17###; Davies and\nBouldin, 1989 ###reference_b5###), MRI is rank-based rather than based on actual distances. These other measures compute ratios of within-cluster distances to between-cluster distances. If any of these measures were used in Algorithm 1 ###reference_### instead of MRI, the choice of cluster size would be biased towards the smallest possible value. This is due to the incremental nature of graph encoder embedding in Algorithm 1 ###reference_###, where the embedding dimension is equal to the community size . Consequently, within-cluster distances become smaller for smaller values of , resulting in a bias towards the smallest when using actual distance."
},
{
"section_id": "2.5",
"parent_section_id": "2",
"section_name": "2.5. Ensemble Embedding and Cluster Size Determination",
"text": "Ensemble learning is utilized in Algorithm 1 ###reference_### to improve learning performance and reduce variance by employing multiple models. The approach can be summarized as follows: for each value of in the cluster range, we generate a set of vertex embeddings and community labels using random label initialization. The model with the smallest MRI is selected as the best model. In cases where multiple models have the same smallest MRI, the average embedding is used.\nAdditionally, among all possible choices of cluster size , the best embedding with the smallest MRI is selected. If there are multiple embeddings with the same smallest MRI, the one with the largest is chosen. For instance, if the MRI values are for , the graph encoder ensemble would select ."
},
{
"section_id": "2.6",
"parent_section_id": "2",
"section_name": "2.6. Computational Complexity Analysis",
"text": "Algorithm 1 ###reference_### comprises several steps, including one-hot graph encoder embedding, k-means clustering, MRI computation, and ensembles. Let be the number of vertices and be the number of edges. At any fixed , the one-hot graph encoder embedding takes , k-means takes , and the MRI computation takes . Therefore, the overall time complexity of Algorithm 1 ###reference_### is , which is linear with respect to the number of vertices and edges. The storage requirement is also . In practical terms, the graph encoder ensemble algorithm exhibits remarkable efficiency and scalability. Testing on simulated graphs with default parameters and , it takes less than 3 minutes to process 1 million edges and less than 20 minutes for 10 million edges."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Results",
"text": "In this section, we conduct extensive numerical experiments to demonstrate the advantages of the graph encoder ensemble, as well as the individual benefits of normalization, ensemble, and MRI. We compare these approaches against benchmarks including the algorithm without normalization, without ensemble, with MRI replaced, and using adjacency/Laplacian spectral embedding. The performance is evaluated using the adjusted Rand index (ARI), which measures the degree of agreement between the estimated communities and the ground-truth labels."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "3.1. Simulation Set-up",
"text": "The stochastic block model (SBM) is a widely used random graph model for studying community structure (Holland\net al., 1983 ###reference_b10###; Snijders and\nNowicki, 1997 ###reference_b21###). Each vertex is associated with a class label . The class label may be fixed a-priori, or generated by a categorical distribution with prior probability . Then a block probability matrix specifies the edge probability between a vertex from class and a vertex from class . For any ,\nThe degree-corrected stochastic block model (DC-SBM) (Zhao\net al., 2012 ###reference_b24###) is a generalization of SBM to better model the sparsity of real graphs. Everything else being the same as SBM, each vertex has an additional degree parameter , and the adjacency matrix is generated by\nIn our simulations, we consider three DC-SBM models with increasing community sizes. In all models, the degrees are generated randomly by .\nSimulation 1: , , equally likely, and the block probability matrix is\nSimulation 2: , , with prior probability , and the block probability matrix is\nSimulation 3: , , with equally likely prior probability, and the block probability matrix satisfies and for all and ."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "3.2. Normalization Comparison",
"text": "Table 1 ###reference_### provides clear evidence of the superior clustering performance achieved by the normalized algorithm compared to the un-normalized algorithm. To isolate the impact of normalization, we set and assume the cluster size is known. The observed improvement aligns with the phenomenon observed between adjacency spectral embedding (ASE) and Laplacian spectral embedding (LSE), where LSE, being a normalized version of ASE, consistently outperforms ASE."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "3.3. Ensemble Comparison",
"text": "In this simulation, we assume a known cluster size and conduct Monte Carlo replicates to compare the performance of the ensemble algorithm () with the no-ensemble version (). The results in Table 2 ###reference_### clearly demonstrate the superiority of the ensemble algorithm: it achieves higher mean ARI and significantly reduces the variance compared to the no-ensemble version. Based on our empirical observations, the default choice of yields satisfactory results across our experiments. Additionally, if the graph size is sufficiently large and the community structure is well-separated, using a smaller value of or even is sufficient. This is evident in simulation 1 of Table 2 ###reference_###."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "3.4. Cluster Size Estimation",
"text": "In this analysis, we explore the performance of the algorithm in estimating the community size. Instead of using the ground-truth size, we consider a range of potential sizes from to , and the results are presented in Figure 2 ###reference_###.\nThese findings provide insights into the performance of the algorithm in accurately estimating the community size and highlight the importance of the MRI measure in achieving accurate size determination.\n###figure_2###"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Conclusion",
"text": "This paper introduces the graph encoder ensemble, which achieves graph embedding, community detection, and community size determination in a unified framework. Its main advantages include ease of implementation, computational efficiency, and excellent performance in community detection and community size selection. Several potential future directions include exploring mathematical proofs for asymptotic clustering optimality, investigating theoretical properties of MRI, and extending the method to dynamic and multi-modal graphs (Shen\net al., 2023b ###reference_b19###; Shen et al., 2023a ###reference_b18###)."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.12.13.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" colspan=\"5\" id=\"S3.T1.12.13.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ARI</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.14.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.12.14.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE no norm</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ASE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LSE</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.8.8.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T1.12.12.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.3\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.12.12.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>This table demonstrates the advantage of normalization in the graph encoder ensemble. The \u201dGEE\u201d column refers to the graph encoder ensemble using Algorithm\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2301.11290v3#alg1\" title=\"Algorithm 1 \u2023 2.2. Main Algorithm \u2023 2. Methods \u2023 Graph Encoder Ensemble for Simultaneous Vertex Embedding and Community Detection\"><span class=\"ltx_text ltx_ref_tag\">1</span></a>, while \u201dGEE no norm\u201d indicates that normalization is not applied. The reported results are averages obtained from Monte Carlo replicates.</figcaption>\n</figure>",
"capture": "Table 1. This table demonstrates the advantage of normalization in the graph encoder ensemble. The \u201dGEE\u201d column refers to the graph encoder ensemble using Algorithm\u00a01, while \u201dGEE no norm\u201d indicates that normalization is not applied. The reported results are averages obtained from Monte Carlo replicates."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" colspan=\"3\" id=\"S3.T2.7.8.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Average ARI + std</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.2.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.3.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.5.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.4.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.5.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T2.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.6.6.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.7.7.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>This table assesses the advantage of the ensemble approach in the graph encoder ensemble. The reported results include the mean and standard deviation of the Adjusted Rand Index (ARI) obtained from Monte Carlo replicates.</figcaption>\n</figure>",
"capture": "Table 2. This table assesses the advantage of the ensemble approach in the graph encoder ensemble. The reported results include the mean and standard deviation of the Adjusted Rand Index (ARI) obtained from Monte Carlo replicates."
}
},
"image_paths": {
"1": {
"figure_path": "2301.11290v3_figure_1.png",
"caption": "Figure 1. This figure visually demonstrates the effect of normalization. The left panel displays the adjacency heatmap of a simulated sparse graph using simulation 1 in Section 3.1. The center panel shows the resulting embedding without the normalization step, while the right panel displays the resulting embedding with normalization. The blue and red dots represent the true community labels of each vertex.",
"url": "http://arxiv.org/html/2301.11290v3/x1.png"
},
"2": {
"figure_path": "2301.11290v3_figure_2.png",
"caption": "Figure 2. This figure presents the results of cluster size estimation using the graph encoder ensemble. The estimation accuracy and the performance of different size measures are evaluated for various simulations and graph sizes. For each simulation and each graph size, we independently generate 100100100100 graphs, and run the ensemble algorithm to estimate the community size. The left panel of the figure illustrates the estimation accuracy as the graph size increases. The estimation accuracy represents the proportion of cases where the algorithm correctly chooses the community size. As the graph size increases, the estimation accuracy gradually improves, reaching a perfect estimation accuracy of 1111 for all simulations. The center panel focuses on simulation 3 at n=5000\ud835\udc5b5000n=5000italic_n = 5000. The MRI calculates K^=5^\ud835\udc3e5\\hat{K}=5over^ start_ARG italic_K end_ARG = 5 as the estimated community size, which matches the ground-truth size. In the right panel, the average Silhouette Score is computed as an alternative size measure, which is biased towards smaller community sizes and chooses K^S\u2062S=2subscript^\ud835\udc3e\ud835\udc46\ud835\udc462\\hat{K}_{SS}=2over^ start_ARG italic_K end_ARG start_POSTSUBSCRIPT italic_S italic_S end_POSTSUBSCRIPT = 2, resulting in a different estimation compared to the ground-truth size.",
"url": "http://arxiv.org/html/2301.11290v3/x2.png"
}
},
"validation": true,
"references": [
{"1": {"title": "Community Detection and Stochastic Block Models: Recent Developments.", "author": "Emmanuel Abbe. 2018.", "venue": "Journal of Machine Learning Research 18, 177 (2018), 1\u201386.", "url": null}},
{"2": {"title": "Fast unfolding of communities in large networks.", "author": "V. D. Blondel, J. L. Guillaume, R. Lambiotte, and E. Lefebvre. 2008.", "venue": "Journal of Statistical Mechanics: Theory and Experiment 10008 (2008), 6.", "url": null}},
{"3": {"title": "Random Forests.", "author": "L. Breiman. 2001.", "venue": "Machine Learning 45, 1 (October 2001), 5\u201332.", "url": null}},
{"4": {"title": "A Cluster Separation Measure.", "author": "David L. Davies and Donald W. Bouldin. 1989.", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence 1, 2 (1989), 224\u2013227.", "url": null}},
{"5": {"title": "Cluster analysis of multivariate data: efficiency versus interpretability of classifications.", "author": "Edward W. Forgy. 1965.", "venue": "Biometrics 21, 3 (1965), 768\u2013769.", "url": null}},
{"6": {"title": "Community detection in graphs.", "author": "Santo Fortunato. 2010.", "venue": "Physics Reports 486, 3\u20135 (2010), 75\u2013174.", "url": null}},
{"7": {"title": "Community detection in degree-corrected block models.", "author": "Chao Gao, Zongming Ma, Anderson Y. Zhang, and Harrison H. Zhou. 2018.", "venue": "Annals of Statistics 46, 5 (2018), 2153\u20132185.", "url": null}},
{"8": {"title": "Community Structure in Social and Biological Networks.", "author": "M. Girvan and M. E. J. Newman. 2002.", "venue": "Proceedings of National Academy of Science 99, 12 (2002), 7821\u20137826.", "url": null}},
{"9": {"title": "Stochastic Blockmodels: First Steps.", "author": "P. Holland, K. Laskey, and S. Leinhardt. 1983.", "venue": "Social Networks 5, 2 (1983), 109\u2013137.", "url": null}},
{"10": {"title": "Stochastic blockmodels and community structure in networks.", "author": "B. Karrer and M. E. J. Newman. 2011.", "venue": "Physical Review E 83 (2011), 016107.", "url": null}},
{"11": {"title": "Least squares quantization in PCM.", "author": "Stuart P. Lloyd. 1982.", "venue": "IEEE Transactions on Information Theory 28, 2 (1982), 129\u2013137.", "url": null}},
{"12": {"title": "Popular Ensemble Methods: An Empirical Study.", "author": "R. Maclin and D. Opitz. 1999.", "venue": "Journal Of Artificial Intelligence Research 11 (1999), 169\u2013198.", "url": null}},
{"13": {"title": "Detecting community structure in networks.", "author": "M. E. J. Newman. 2004.", "venue": "European Physical Journal B 38, 2 (2004), 321\u2013330.", "url": null}},
{"14": {"title": "Objective criteria for the evaluation of clustering methods.", "author": "W. M. Rand. 1971.", "venue": "J. Amer. Statist. Assoc. 66, 336 (1971), 846\u2013850.", "url": null}},
{"15": {"title": "Spectral Clustering and the High-Dimensional Stochastic Blockmodel.", "author": "K. Rohe, S. Chatterjee, and B. Yu. 2011.", "venue": "Annals of Statistics 39, 4 (2011), 1878\u20131915.", "url": null}},
{"16": {"title": "Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis.", "author": "Peter J. Rousseeuw. 1987.", "venue": "Computational and Applied Mathematics 20 (1987), 53\u201365.", "url": null}},
{"17": {"title": "Discovering Communication Pattern Shifts in Large-Scale Labeled Networks using Encoder Embedding and Vertex Dynamics.", "author": "C. Shen, J. Larson, H. Trinh, X. Qin, Y. Park, and C. E. Priebe. 2023a.", "venue": "https://arxiv.org/abs/2305.02381 (2023).", "url": null}},
{"18": {"title": "Synergistic Graph Fusion via Encoder Embedding.", "author": "C. Shen, C. E. Priebe, J. Larson, and H. Trinh. 2023b.", "venue": "https://arxiv.org/abs/2303.18051 (2023).", "url": null}},
{"19": {"title": "One-Hot Graph Encoder Embedding.", "author": "C. Shen, Q. Wang, and C. E. Priebe. 2023c.", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence 45, 6 (2023), 7933\u20137938.", "url": null}},
{"20": {"title": "Estimation and Prediction for Stochastic Blockmodels for Graphs with Latent Block Structure.", "author": "T. Snijders and K. Nowicki. 1997.", "venue": "Journal of Classification 14, 1 (1997), 75\u2013100.", "url": null}},
{"21": {"title": "A Consistent Adjacency Spectral Embedding for Stochastic Blockmodel Graphs.", "author": "D. Sussman, M. Tang, D. Fishkind, and C. Priebe. 2012.", "venue": "J. Amer. Statist. Assoc. 107, 499 (2012), 1119\u20131128.", "url": null}},
{"22": {"title": "From Louvain to Leiden: guaranteeing well-connected communities.", "author": "V. A. Traag, L. Waltman, and N. J. van Eck. 2019.", "venue": "Scientific Reports 9 (2019), 5233.", "url": null}},
{"23": {"title": "Consistency of Community Detection in Networks under Degree-Corrected Stochastic Block Models.", "author": "Y. Zhao, E. Levina, and J. Zhu. 2012.", "venue": "Annals of Statistics 40, 4 (2012), 2266\u20132292.", "url": null}}
],
"url": "http://arxiv.org/html/2301.11290v3"
}
20240721/2301.12195v3.json
ADDED
{
"title": "BAFFLE: A Baseline of Backpropagation-Free Federated Learning",
"abstract": "Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data. FL is a promising framework with practical applications, but its standard training paradigm requires the clients to backpropagate through the model to compute gradients. Since these clients are typically edge devices and not fully trusted, executing backpropagation on them incurs computational and storage overhead as well as white-box vulnerability. In light of this, we develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients. BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server. Empirically we use BAFFLE to train deep models from scratch or to finetune pretrained models, achieving acceptable results.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Federated learning (FL) allows decentralized clients to collaboratively train a server model [62 ###reference_b62###]. In each training round, the selected clients compute model gradients or updates on their local private datasets, without explicitly exchanging sample points to the server. While FL describes a promising blueprint and has several applications [98 ###reference_b98###, 32 ###reference_b32###, 51 ###reference_b51###], the mainstream training paradigm of FL is still gradient-based that requires the clients to locally execute backpropagation, which leads to two practical limitations:\n(i) Overhead for edge devices. The clients in FL are usually edge devices, such as mobile phones and IoT sensors, whose hardware is primarily optimized for inference-only purposes [79 ###reference_b79###, 88 ###reference_b88###], rather than for backpropagation. Due to the limited resources, computationally affordable models running on edge devices are typically quantized and pruned [93 ###reference_b93###], making exact backpropagation difficult. In addition, standard implementations of backpropagation rely on either forward-mode or reverse-mode auto-differentiation in contemporary machine learning packages [14 ###reference_b14###, 73 ###reference_b73###], which increases storage requirements.\n(ii) White-box vulnerability. To facilitate gradient computing, the server regularly distributes its model status to the clients, but this white-box exposure of the model renders the server vulnerable to, e.g., poisoning or inversion attacks from malicious clients [80 ###reference_b80###, 97 ###reference_b97###, 103 ###reference_b103###, 25 ###reference_b25###]. With that, recent attempts are made to exploit trusted execution environments (TEEs) in FL, which can isolate the model status within a black-box secure area and significantly reduce the success rate of malicious evasion [19 ###reference_b19###, 64 ###reference_b64###, 65 ###reference_b65###]. However, TEEs are highly memory-constrained [87 ###reference_b87###], while backpropagation is memory-consuming to restore intermediate states.\nWhile numerous solutions have been proposed to alleviate these limitations (related work discussed in Section 5 ###reference_###), we raise an essential question: how to perform backpropagation-free FL? Inspired by the literature on zero-order optimization [82 ###reference_b82###], we intend to substitute backpropagation with multiple forward or inference processes to estimate the gradients. Technically speaking, we propose the framework of BAckpropagation-Free Federated LEarning (BAFFLE). As illustrated in Figure 1 ###reference_###, BAFFLE consists of three conceptual steps: (1) each client locally perturbs the model parameters times as (the server sends the random seed to clients for generating ); (2) each client executes forward processes on the perturbed models using its private dataset and obtains loss differences ; (3) the server aggregates loss differences to estimate gradients.\n###figure_1### BAFFLE\u2019s defining characteristic is that it only utilizes forward propagation, which is memory-efficient and does not require auto-differentiation. It is well-adapted to model quantization and pruning as well as inference-only hardware optimization on edge devices. Compared to backpropagation, the computation graph of BAFFLE is more easily optimized, such as by slicing it into per-layer calculation [44 ###reference_b44###]. 
Since each loss difference is a scalar, BAFFLE can easily accommodate the uploading bandwidth of clients by adjusting the value of as opposed to using, e.g., gradient compression [84 ###reference_b84###]. BAFFLE is also compatible with recent advances in inference approaches for TEE [85 ###reference_b85###, 87 ###reference_b87###], providing an efficient solution for combining TEE into FL and preventing white-box evasion.\nWe adapt secure aggregation [10 ###reference_b10###] to zero-order optimization and investigate ways to improve gradient estimation in BAFFLE. Empirically, BAFFLE is used to train models from scratch on MNIST [49 ###reference_b49###] and CIFAR-10/100 [48 ###reference_b48###], and transfer ImageNet-pretrained models to OfficeHome [89 ###reference_b89###]. Compared to conventional FL, it achieves suboptimal but acceptable performance. These results shed light on the potential of BAFFLE and general backpropagation-free methods in FL."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminaries",
"text": "Finite difference. Gradient-based optimization techniques (either first-order or higher-order) are the most frequently used tools to train deep networks [27 ###reference_b27###]. Nevertheless, recent progress demonstrates promising applications of zero-order optimization methods for training, particularly when exact derivatives cannot be obtained [23 ###reference_b23###, 69 ###reference_b69###, 55 ###reference_b55###] or backward processes are computationally prohibitive [70 ###reference_b70###, 34 ###reference_b34###]. Zero-order approaches require only multiple forward processes that may be executed in parallel. Along this routine, finite difference stems from the definition of derivatives and can be generalized to higher-order and multivariate cases by Taylor\u2019s expansion. For any differentiable loss and a small perturbation , finite difference employs the forward difference scheme\nwhere is a scaled directional derivative along . Furthermore, we can use the central difference scheme to obtain higher-order residuals as\nFederated learning. Suppose we have clients, and the -th client\u2019s private dataset is defined as with input-label pairs. Let represent the loss function for the dataset , where denotes the server model\u2019s global parameters. The training objective of FL is to find that minimize the total loss function as\nIn the conventional FL framework, clients compute gradients or model updates locally through backpropagation and then upload them to the server. Federated average [62 ###reference_b62###] performs global aggregation using , where is the local update obtained via executing multiple times and is learning rate.\nZeroth-order FL. Similar to our work, DLZO [54 ###reference_b54###] and FedZO [22 ###reference_b22###] present zeroth-order optimization methods for FL independently in batch-level and epoch-level communications. However, they concentrate primarily on basic linear models with softmax regression problems and ignore deep models. Besides, they also do not account for server security aggregation in conjunction with zero-order optimization. In comparison, BAFFLE enables security aggregation, can train deep models such as WideResNet from scratch and achieves reasonable results, e.g. 95.17% accuracy on MNIST with 20 communication rounds versus 83.58% for FedZO with 1,000 rounds."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Backpropagation-Free Federated Learning",
"text": "In this section, we introduce zero-order optimization into FL and develop BAFFLE, a backpropagation-free federated learning framework that uses multiple forward processes in place of backpropagation. An initial attempt is to apply finite difference as the gradient estimator. To estimate the full gradients, we need to perturb each parameter once to approximate the partial derivative , causing the forward computations to grow with (recall that ) and making it difficult to scale to large models. In light of this, we resort to Stein\u2019s identity [82 ###reference_b82###] to obtain an unbiased estimation of gradients from loss differences calculated on various perturbations. As depicted in Figure 1 ###reference_###, BAFFLE clients need only download random seeds and global parameters update, generate perturbations locally, execute multiple forward propagations and upload loss differences back to the server. Furthermore, we also present convergence analyses of BAFFLE,\nproviding\nguidelines for model design and acceleration of training."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Unbiased Gradient Estimation with Stein\u2019s Identity",
"text": "Previous work on sign-based optimization [66 ###reference_b66###] demonstrates that deep networks can be effectively trained if the majority of gradients have proper signs. Thus, we propose performing forward propagation multiple times on perturbed parameters, in order to obtain a stochastic estimation of gradients without backpropagation. Specifically, assuming that the loss function is continuously differentiable w.r.t. given any dataset , which is true (almost everywhere) for deep networks using non-linear activation functions, we define a smoothed loss function as:\nwhere the perturbation follows a Gaussian distribution with mean and covariance . Stein [82 ###reference_b82###] proves the Stein\u2019s identity\n(proof recapped in Appendix A):\nwhere is the loss difference. Note that computing a loss difference only requires the execution of two forwards and without backpropagation. It is trivial that is continuously differentiable for any and converges uniformly as ; hence, it follows that . Therefore, we can obtain a stochastic estimation of gradients using Monte Carlo by 1) selecting a small value of ; 2) randomly sampling perturbations from as ; and 3) utilizing the Stein\u2019s identity in Eq. (5 ###reference_###) to calculate"
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Operating Flow of BAFFLE",
"text": "Based on the forward-only gradient estimator derived in Eq. (6 ###reference_###), we outline the basic operating flow of our BAFFLE system (Algorithm 1 ###reference_###) as follows:\nModel initialization. (Lines 34, done by server) The server initializes the model parameters to and optionally encodes the computing paradigm of loss differences into the TEE module\n(see Appendix B for more information on TEE);\nDownloading paradigms. (Lines 67, server all clients) In round , the server distributes the most recent model parameters (or the model update ) and the computing paradigm to all the clients. In addition, in BAFFLE, the server sends a random seed (rather than directly sending the perturbations to reduce communication burden);\nLocal computation. (Lines 1112, done by clients) Each client generates perturbations locally from using random seed , and executes the computing paradigm to obtain loss differences. is chosen adaptively based on clients\u2019 computation capability;\nUploading loss differences. (Line 13, all clients server) Each client uploads noisy outputs to the server, where each output is a floating-point number and the noise is negotiated by all clients to be zero-sum. The total uploaded Bytes is ;\nSecure aggregation. (Lines 1516, done by server) In order to prevent the server from recovering the exact loss differences and causing privacy leakage [25 ###reference_b25###], we adopt the secure aggregation method [10 ###reference_b10###] that was originally proposed for conventional FL and apply it to BAFFLE. Specifically, all clients negotiate a group of noises satisfying . Then we can reorganize our gradient estimator as\nSince are zero-sum, there is and Eq. (7 ###reference_###) holds. Therefore, the server can correctly aggregate and protect client privacy against recovering .\nRemark on communication cost. After getting the gradient estimation , the server updates the parameters to using techniques such as gradient descent with learning rate . Similar to the discussion in McMahan et al. [62 ###reference_b62###], the BAFFLE form presented in Algorithm 1 ###reference_### corresponds to the batch-level communication (also named FedSGD) where Lines 1112 execute once for each round . In batch-level settings, we reduce the uploaded Bytes from to . We can generalize BAFFLE to an analog of epoch-level communication (also named FedAvg), in which each client updates its local parameters multiple steps using the gradient estimator derived from via Eq. (6 ###reference_###), and upload model updates to the server after several local epochs. In epoch-level settings, the uploaded Bytes are the same as FedAvg. In experiments, we analyze both batch-level and epoch-level settings for BAFFLE and report the results."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Convergence Analyses",
"text": "Now we analyze the convergence rate of our gradient estimation method. For continuously differentiable loss functions, we have , so we choose a relatively small value for . The convergence guarantee can be derived as follows:\n(Proof in Appendix A)\nSuppose is a small value and the central\ndifference scheme in Eq. (2 ###reference_###) holds. For perturbations , the empirical covariance matrix is and mean is . Then for any , the relation between and the true gradient can be written as\nwhere\nTaking the expectation on both sides of Eq. (8 ###reference_###),\nwe obtain , which degrades to Stein\u2019s identity. To determine the convergence rate w.r.t. the value of , we have:\n(Adamczak et al. [3 ###reference_b3###])\nWith overwhelming probability, the empirical covariance matrix satisfies the inequality , where denotes the 2-norm for matrix and is an absolute positive constant.\nNote that in the finetuning setting, represents the number of trainable parameters, excluding frozen parameters. As concluded, provides an unbiased estimation for the true gradients with convergence rate of . Empirically, is used as a noisy gradient to train models, the generalization of which has been analyzed in previous work [105 ###reference_b105###, 50 ###reference_b50###]."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experiments",
"text": "###figure_2### We evaluate BAFFLE on 4 benchmark datasets: MNIST [49 ###reference_b49###], CIFAR-10/100 [48 ###reference_b48###] and OfficeHome [89 ###reference_b89###]. We consider three models: 1) LeNet [49 ###reference_b49###] with two convolutional layers as the shallow model ( parameters); 2) WideResNet [100 ###reference_b100###] with and (WRN-10-2) as the light weight deep model ( parameters) and 3) MobileNet [38 ###reference_b38###] as the deep neural networks ( parameters) that works on ImageNet.\nParticipation and communication settings. To perform a comprehensive evaluation of BAFFLE, we simulate three popular FL scenarios [17 ###reference_b17###] with the FedLab [101 ###reference_b101###] participations: iid participations, label non-iid participations and feature non-iid participations. For iid participations, we set the client number and use uniform distribution to build local datasets. Then we evaluate our BAFFLE on MNIST and CIFAR-10/100 under both batch-level (FedSGD) and epoch-level (FedAvg) communication settings. For label non-iid participations, we set client number , use Dirichlet distribution with to build clients. For feature non-iid participations, we build clients from the prevailing domain adaptation dataset OfficeHome, which contains 65 categories from 4 different domains, i.e. Art, Clipart, Product and Real-world. We set the total client number to and generate clients from each domain. As results, we report Top-1 accuracy for MNIST, CIFAR-10 and OfficeHome and Top-5 accuracy for OfficeHome and CIFAR-100.\nHyperparameters. Following the settings in Section 2 ###reference_###, we use FedAVG to aggregate gradients from multiple clients and use SGD-based optimizer to update global parameters. Specifically, we use Adam [46 ###reference_b46###] to train a random initialized model with , learning rate and epochs for MNIST and CIFAR-10/100. For OfficeHome, we adapt the transfer learning [40 ###reference_b40###] by loading the ImageNet-pretrained model and finetuning the final layers with Adam, but setting learning rate and epochs . In BAFFLE, the perturbation scale and number are the most important hyperparameters. As shown in Theorem 3.1 ###reference_theorem1###, with less noise and more samples, the BAFFLE will obtain more accurate gradients, leading to improved performance. However, there exists a trade-off between accuracy and computational efficiency: an extremely small will cause the underflow problem [27 ###reference_b27###] and a large will increase computational cost. In practice, we empirically set because it is the smallest value that does not cause numerical problems in all experiments, and works well on edge devices with half-precision floating-point numbers. We also evaluate the impact of across a broad range from to ."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Four Guidelines for BAFFLE",
"text": "For a general family of continuously differentiable models, we analyze their convergence rate of BAFFLE in Section 3.3 ###reference_###. Since deep networks are usually stacked with multiple linear layers and non-linear activation, this layer linearity can be utilized to improve the accuracy-efficiency trade-off. Combining the linearity property and the unique conditions in edge devices (e.g., small data size and half-precision format), we present four guidelines for model design and training that can increase accuracy without introducing extra computation\n(Appendix C shows the details of linearity analysis):\nUsing twice forward difference (twice-FD) scheme rather than central scheme. Combining difference scheme Eq. (1 ###reference_###) and Eq. (2 ###reference_###), we find that by executing twice as many forward inferences (i.e.), the central scheme achieves lower residuals than twice-FD, despite the fact that twice-FD can benefit from additional sample times. With the same forward times (e.g., ), determining which scheme performs better is a practical issue. As shown in\nAppendix C,\nwe find that twice-FD performs better in all experiments, in part because the linearity reduces the benefit from second-order residuals.\nUsing Hardswish in BAFFLE. ReLU is effective when the middle features ( denotes the feature mapping) have the same sign before and after perturbations, i.e. . Since ReLU is not differentiable at zero, the value jump occurs when the sign of features changes after perturbations, i.e. . We use Hardswish [37 ###reference_b37###] to overcome this problem as it is continuously differentiable at zero and easy to implement on edge devices.\nUsing exponential moving average (EMA) to reduce oscillations. As shown in Theorem 3.1 ###reference_theorem1###, there exists an zero-mean white-noise between the true gradient and our estimation. To smooth out the oscillations caused by white noise, we apply EMA strategies from BYOL [28 ###reference_b28###] to the global parameters, with a smoothing coefficient of .\nUsing GroupNorm as opposed to BatchNorm. On edge devices, the dataset size is typically small, which leads to inaccurate batch statistics estimation and degrades performance when using BatchNorm. Thus we employ GroupNorm [96 ###reference_b96###] to solve this issue.\n###figure_3###"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Performance on IID Clients",
"text": "Following the settings in Section 4.1 ###reference_###, we evaluate the performance of BAFFLE in the iid scenarios. We reproduce all experiments on the BP-based FL systems with the same settings and use them as the baselines. We refer to the baseline results as exact gradients and report the training process of BAFFLE in Figure 2 ###reference_###. The value of (e.g., for LeNet and for WRN-10-2) is much less than the dimensions of parameter space (e.g., for LeNet and for WRN-10-2). Since the convergence rate to the exact gradient is , the marginal benefit of increasing decreases. For instance, increasing from to on CIFAR-10 with WRN-10-2 barely improves accuracy by . Given that the convergence rate of Gaussian perturbations is , the sampling efficiency may be improved by choosing an alternative distribution for perturbations.\nAblation studies. As depicted in Figure 3 ###reference_###, we conduct ablation studies for BAFFLE to evaluate the aforementioned guidelines. In general, twice-FD, Hardswish and EMA can all improve the accuracy. For two difference schemes, we compare the twice-FD to central scheme with the same computation cost and show that the former outperforms the later, demonstrating that linearity reduces the gain from second-order residuals. As to activation functions, Hardswish is superior to ReLU and SELU because it is differentiable at zero and vanishes to zero in the negative part. Moreover, EMA enhances the performance of training strategies by reducing the effect of white noise.\nCommunication efficiency. Compared to the batch-level communication settings (FedSGD) in a BP-based FL system, BAFFLE requires each client to upload a -dimensional vector to the server and downloads the updated global parameters in each communication round. Since is significantly less than the parameter amounts (e.g., versus million), BAFFLE reduces data transfer by approximately half. To reduce communication costs, the prevalent FL system requires each client to perform model optimization on the local training dataset and upload the model updates to the server after a specified number of local epochs. BAFFLE can also perform epoch-level communications by employing an additional memory to store the perturbation in each forward and estimate the local gradient using Eq. (6 ###reference_###). Then each client optimizes the local model with SGD and uploads local updates after several epochs. As shown in Table 1 ###reference_###, we evaluate the performance of BAFFLE under one-epoch communication settings. As epoch-level communication is more prevalent in the real-world FL, all the following experiments will be conducted in this context. In brief, BAFFLE uploads the same Bytes as BP-based FL in epoch-level communication while the total communication rounds are much less than FedZO [22 ###reference_b22###], e.g. 20 versus 1000 on MNIST."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Performance on Non-IID Clients",
"text": "Following Section 4.1 ###reference_###, we evaluate the performance of BAFFLE in both label non-iid and feature non-iid scenarios. For label non-iid scenarios, we use the CIFAR-10/100 datasets and employ Dirichlet distribution to ensure that each client has a unique label distribution. We evaluate the performance of BAFFLE with 100 clients and various K values. As seen in Table 2 ###reference_###, the model suffers a significant drop in accuracy (e.g., in CIFAR-10 and in CIFAR-100) due to the label non-iid effect.\nFor feature non-iid scenarios, we construct clients using the OfficeHome dataset and use MobileNet as the deep model. As seen in Table 3 ###reference_###, we use the transfer learning strategy to train MobileNet, i.e., we load the parameters pretrained on ImageNet, freeze the backbone parameters, and retrain the classification layers. The accuracy decrease is approximately ."
},
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "Computation Efficiency, Memory and Robustness",
"text": "BAFFLE uses times forward passes instead of backward. Since the backward pass is about as expensive as two normal forward passes [35 ###reference_b35###] and five single-precision accelerated forward passes [67 ###reference_b67###], BAFFLE results in approximately times the computation expense of BP-based FL. Although BAFFLE results in times extra computation cost, we show the cost can be reduced with proper training strategies, e.g., the transfer learning in Table 3 ###reference_### can reduce to on the MobileNet and the sized OfficeHome dataset.\nMoreover, BAFFLE can reduce huge memory cost on edge devices with the efficiency in static memory and dynamic memory. The auto-differential framework is used to run BP on deep networks, which requires extra static memory (e.g., 200MB for Caffe [41 ###reference_b41###] and 1GB for Pytorch [72 ###reference_b72###]) and imposes a considerable burden on edge devices such as IoT sensors. Due to the necessity of restoring intermediate states, BP also requires enormous amounts of dynamic memory ( 5GB for MobileNet [24 ###reference_b24###]). Since BAFFLE only requires inference, we can slice the computation graph and execute the forwards per layer [44 ###reference_b44###]. As shown in Table 4 ###reference_###, BAFFLE reduces the memory cost to 5%10% by executing layer-by-layer inference. By applying kernel-wise computations, we can further reduce the memory cost to approximately 1% (e.g., 64MB for MobileNet [87 ###reference_b87###]), which is suitable for scenarios with extremely limited storage resources, such as TEE.\nRecent works exploit TEE to protect models from white-box attacks by preventing model exposure [44 ###reference_b44###]. However, due to the security guarantee, the usable memory of TEE is usually small [87 ###reference_b87###] (e.g., 90MB on Intel SGX for Skylake CPU [61 ###reference_b61###]), which is typically far less than what a backpropagation-based FL system requires. In contrast, BAFFLE can execute in TEE due to its little memory cost (more details are in\nAppendix B).\nMembership inference attacks and model inversion attacks need to repeatedly perform model inference\nand obtain confidence values or classification scores [80 ###reference_b80###, 103 ###reference_b103###]. Given that BAFFLE provides stochastic loss differences associated with the random perturbation , the off-the-shelf inference attacks may not perform on BAFFLE directly (while adaptively designed attacking strategies are possible to evade BAFFLE). Motivated by differential privacy [1 ###reference_b1###], we further design heuristic experiments to study the information leakage from \n(details in Appendix D).\nAs shown in\nFigure 5,\nthe between real data and random noise is hard to distinguish, indicating it is difficult for attackers to obtain useful information from BAFFLE\u2019s outputs."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Related Work",
"text": "Along the research routine of FL, many efforts have been devoted to, e.g., dealing with non-IID distributions [104 ###reference_b104###, 76 ###reference_b76###, 21 ###reference_b21###, 92 ###reference_b92###, 53 ###reference_b53###], multi-task learning [81 ###reference_b81###, 60 ###reference_b60###], and preserving privacy of clients [11 ###reference_b11###, 12 ###reference_b12###, 63 ###reference_b63###, 86 ###reference_b86###, 31 ###reference_b31###, 58 ###reference_b58###, 26 ###reference_b26###, 56 ###reference_b56###]. Below we introduce the work on efficiency and vulnerability in FL following the survey of Kairouz et al. [42 ###reference_b42###], which is more related to this paper.\nEfficiency in FL. It is widely understood that the communication and computational efficiency is a primary bottleneck for deploying FL in practice [94 ###reference_b94###, 74 ###reference_b74###, 18 ###reference_b18###, 6 ###reference_b6###, 91 ###reference_b91###]. Specifically, communicating between the server and clients could be potentially expensive and unreliable. The seminal work of Kone\u010dn\u1ef3 et al. [47 ###reference_b47###] introduces sparsification and quantization to reduce the communication cost, where several theoretical works investigate the optimal trade-off between the communication cost and model accuracy [102 ###reference_b102###, 15 ###reference_b15###, 30 ###reference_b30###, 2 ###reference_b2###, 7 ###reference_b7###]. Since practical clients usually have slower upload than download bandwidth, much research interest focuses on gradient compression [84 ###reference_b84###, 4 ###reference_b4###, 36 ###reference_b36###, 8 ###reference_b8###]. On the other hand, different methods have been proposed to reduce the computational burden of local clients [16 ###reference_b16###, 29 ###reference_b29###, 33 ###reference_b33###], since these clients are usually edge devices with limited resources. Training paradigms exploiting tensor factorization in FL can also achieve promising performance [45 ###reference_b45###, 59 ###reference_b59###].\nVulnerability in FL. The characteristic of decentralization in FL is beneficial to protecting data privacy of clients, but in the meanwhile, providing white-box accessibility of model status leaves flexibility for malicious clients to perform poisoning/backdoor attacks [9 ###reference_b9###, 5 ###reference_b5###, 90 ###reference_b90###, 97 ###reference_b97###, 71 ###reference_b71###], model/gradient inversion attacks [103 ###reference_b103###, 25 ###reference_b25###, 39 ###reference_b39###], and membership inference attacks [80 ###reference_b80###, 68 ###reference_b68###, 57 ###reference_b57###]. To alleviate the vulnerability in FL, several defense strategies have been proposed via selecting reliable clients [43 ###reference_b43###], data augmentation [13 ###reference_b13###], update clipping [83 ###reference_b83###], robust training [52 ###reference_b52###], model perturbation [99 ###reference_b99###], detection methods [78 ###reference_b78###, 20 ###reference_b20###], and methods based on differential privacy [95 ###reference_b95###]."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusion and Discussion",
"text": "Backpropagation is the gold standard for training deep networks, and it is also utilized by traditional FL systems. However, backpropagation is unsuited for edge devices due to their limited resources and possible lack of reliability. Using zero-order optimization techniques, we explore the possibility of BAFFLE in this paper. We need to specify that there are scenarios in which clients are fully trusted and have sufficient computing and storage resources. In these situations, traditional FL with backpropagation is preferred.\nWhile our preliminary studies on BAFFLE have generated encouraging results, there are still a number of tough topics to investigate: (i) Compared to the models trained using exact gradients, the accuracy of models trained using BAFFLE is inferior. One reason is that we select small values of (e.g., ) relative to the number of model parameters (e.g., ); another reason is that gradient descent is designed for exact gradients, whereas our noisy gradient estimation may require advanced learning algorithms. (ii) The empirical variance of zero-order gradient estimators affects training convergence in BAFFLE. It is crucial to research variance reduction approaches, such as control variates and non-Gaussian sampling distributions. (iii) Stein\u2019s identity is proposed for loss functions with Gaussian noises imposed on model parameters. Intuitively, this smoothness is related to differential privacy in FL, but determining their relationship requires theoretical derivations."
}
],
"appendix": [
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.A Proofs",
"text": "We recap the proof of Stein\u2019s identity following He et al. [34 ###reference_b34###], where\nBy symmetry, we change to and obtain\nand further we prove that\n\u220e\nWe rewrite the format of as follows:\nThen we prove . Suppose , then we have . Since , we have and . So with high probability, . Substituting it into Eq. (11 ###reference_.E11###), we have with high probability,\nwhere we regard as a constant for a given model architecture. Finally, we prove and . It is trivial that since . For , we can observe by examining each of its entries\nwhere we have used subscripts and to denote the usual indexing of matrices and vectors. Specifically, for diagonal entries (i.e., ), we observe distributes as , which means and ; for non-diagonal entries (i.e., ), we have , due to the independence between different dimensions in .\n\u220e\n###figure_4###"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.B Trusted execution environment",
"text": "A trusted execution environment (TEE) [75 ###reference_b75###] is regarded as the ultimate solution for defending against all white-box attacks by preventing any model exposure. TEE protects both data and model security with three components: physical secure storage to ensure the confidentiality, integrity, and tamper-resistance of stored data; a root of trust to load trusted code; and a separate kernel to execute code in an isolated environment, as illustrated in Figure 4 ###reference_.F4###. Using TEE, the FL system is able to train deep models without revealing model specifics. However, due to the security guarantee, the usable memory of TEE is typically small [87 ###reference_b87###] (e.g., 90MB on Intel SGX for Skylake CPU [61 ###reference_b61###]), which is considerably less than what deep models require for backpropagation (e.g., 5GB for VGG-16 [24 ###reference_b24###]).\n###figure_5###"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.C Convergence analyses of deep linear networks in BAFFLE",
"text": "We analyze the convergence of BAFFLE in Section 3 using a general technique applicable to any continuously differentiable models corresponding to the loss function . Since deep networks are the most prevalent models in FL, which has strong linearity, it is simpler to investigate the convergence of deep linear networks [77 ###reference_b77###].\nConsider a two-layer deep linear network in a classification task with categories. We denote the model parameters as , where in the first layer , in the second layer consists of vectors related to the categories as and . For the input data with label , we train the deep linear network by maximizing the classification score on the -th class. Since there is no non-linear activation in deep linear networks, the forward inference can be represented as , and the loss is . It is easy to show that and . We sample from noise generator , where and . Let , we discover that the BAFFLE estimation in Eq. (6) follows the same pattern for both forward (2) and central schemes (3):\nThis equivalent form in deep linear networks illustrates that the residual benefit from the central scheme is reduced by the linearity, hence the performance of the two finite difference schemes described above is same in deep linear networks. We refer to this characteristic as FD scheme independence. We also find the property of independence, that is, the choice of does not effect the results of finite difference, due to the fact that and follow the standard normal distribution.\nBased on the findings from Eq. (13 ###reference_.E13###), we propose the following useful guideline that improves accuracy under the same computation cost: Using twice forward difference (twice-FD) scheme rather than central scheme. Combining the forward scheme Eq. (2) and central scheme Eq. (3), we find that the central scheme produces smaller residuals than the forward scheme by executing twice as many forward inferences, i.e. . With the same forward inference times (e.g., 2), one practical difficulty is to identify which scheme performs better. We find that the forward scheme performs better in all experiments, in part because the linearity reduces the benefit from second-order residuals, as demonstrated by Eq. (13 ###reference_.E13###)."
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.D Robustness to inference attacks",
"text": "To explore the information leakage from outputs , we design heuristic experiments. Regular attacks such as membership inference attacks and model inversion attacks cannot directly target BAFFLE since they must repeatedly do model inference and get confidence values or classification scores. To analyze the possibility of information leaking, we employ the concept of differential privacy [1 ###reference_b1###] and compare the BAFFLE\u2019s outputs from private data to random noise. If we cannot discriminate between private data and random noise merely from the BAFFLE\u2019s outputs, we can assert that the outputs do not contain private information. In details, we utilize the validation dataset as the private data and generate random input pairs from Gaussian and Laplacian noise as . Then we apply BAFFLE to both private data and random noise and compare the distributions of their respective outputs . As shown in Figure 5 ###reference_.F5###, it is difficult to distinguish the BAFFLE\u2019s outputs between private data and random noise, showing that it is difficult for attackers to acquire meaningful information rather than random noise from the BAFFLE\u2019s outputs."
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.17.7.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.12.6\" style=\"font-size:90%;\">The classification accuracy (%) of BAFFLE in <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.12.6.1\">iid scenarios</span> () and epoch-level communication settings with different values ( annotations mean using for MNIST and for CIFAR-10/100). In this configuration, each client updates its local model based on BAFFLE estimated gradients and uploads model updates to the server after an entire epoch on the local dataset.\nThe four guidelines work well under epoch-level settings with total communication rounds for MNIST and CIFAR-10/100.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.14\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.14.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T1.14.3.1.1\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.3.1.1.1\">Settings</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S3.T1.14.3.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.3.1.2.1\">LeNet</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S3.T1.14.3.1.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.3.1.3.1\">WRN-10-2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.4.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">MNIST</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.4.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">CIFAR-10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.4.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">CIFAR-100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.4.2.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">MNIST</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.4.2.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">CIFAR-10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.4.2.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">CIFAR-100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.13.1.1\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S3.T1.13.1.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.13.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">100/200</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.13.1.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">87.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.13.1.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">48.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.13.1.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">41.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.13.1.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">88.35</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.13.1.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">52.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.13.1.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">46.61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.5.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">200/500</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.5.3.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">89.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.5.3.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">51.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.5.3.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">45.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.5.3.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">89.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.5.3.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">55.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.5.3.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">51.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.6.4.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">500/1000</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.6.4.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.2.1\">92.18</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.6.4.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.3.1\">53.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.6.4.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.4.1\">48.72</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.6.4.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.5.1\">95.17</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.6.4.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.6.1\">58.63</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.6.4.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.14.6.4.7.1\">53.15</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.14.2.1\" rowspan=\"4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S3.T1.14.2.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.14.2.1.1.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.14.2.1.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T1.14.2.1.1.1.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Ablation</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.14.2.1.1.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T1.14.2.1.1.1.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Study</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.14.2.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T1.14.2.1.1.1.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">()</span></span>\n</span></span></th>\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.14.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">w/o EMA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.14.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">85.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.14.2.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">47.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.14.2.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">36.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.14.2.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">85.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.14.2.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">50.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.14.2.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">45.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.7.5.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">ReLU</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.7.5.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">81.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.7.5.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">44.99</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.7.5.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">39.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.7.5.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">79.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.7.5.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">49.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.7.5.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">44.44</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.8.6.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">SELU</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.8.6.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">86.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.8.6.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">48.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.8.6.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">37.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.8.6.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">76.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.8.6.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">43.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.8.6.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">41.79</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.9.7.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Central</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.9.7.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">76.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.9.7.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">45.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.9.7.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">36.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.9.7.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">77.45</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.9.7.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">42.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.9.7.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">39.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.10.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.14.10.8.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">BP Baselines</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.10.8.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">94.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.10.8.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">58.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.14.10.8.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">54.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.10.8.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">97.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.10.8.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">62.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.10.8.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">60.08</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: The classification accuracy (%) of BAFFLE in iid scenarios () and epoch-level communication settings with different values ( annotations mean using for MNIST and for CIFAR-10/100). In this configuration, each client updates its local model based on BAFFLE estimated gradients and uploads model updates to the server after an entire epoch on the local dataset.\nThe four guidelines work well under epoch-level settings with total communication rounds for MNIST and CIFAR-10/100.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T2.22.4.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S3.T2.6.3\" style=\"font-size:90%;\">The accuracy (%) of BAFFLE in <span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.6.3.1\">label non-iid scenarios</span> () and epoch-level settings with total communication rounds 40 and different values. We employ Dirichlet dist. with to ensure each client has a unique label distribution.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.19\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.19.14.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T2.19.14.1.1\" rowspan=\"2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.19.14.1.1.1\">Settings</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T2.19.14.1.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.19.14.1.2.1\">LeNet</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.19.14.1.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.19.14.1.3.1\">WRN-10-2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.8.2.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">\u2004CIFAR-10 CIFAR-100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.10.4.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">\u2004CIFAR-10 CIFAR-100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.13.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.11.5.1\" rowspan=\"3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text\" id=\"S3.T2.11.5.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.13.7.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">200</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.12.6.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">35.21 28.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.13.7.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">39.53 30.44</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.15.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.15.9.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">500</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.14.8.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">38.14 30.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.15.9.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">41.69 32.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.17.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.17.11.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">1000</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.16.10.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.16.10.1.1\">39.71</span> <span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.16.10.1.2\">33.35</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T2.17.11.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.17.11.2.1\">43.42</span> <span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.17.11.2.2\">34.08</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.19.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T2.19.13.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">BP Baselines</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.18.12.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">44.41 38.43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T2.19.13.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">51.18 40.85</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: The accuracy (%) of BAFFLE in label non-iid scenarios () and epoch-level settings with total communication rounds 40 and different values. We employ Dirichlet dist. with to ensure each client has a unique label distribution."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.38.3.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.4.2\" style=\"font-size:90%;\">The Top-1Top-5 accuracy (%) of BAFFLE on OfficeHome with <span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.2.1\">feature non-iid participations</span> () and epoch-level settings with 40 comm. rounds. We use the pretrained MobileNet, freeze the backbone and finetune the FC layers.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.35\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.35.32.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T3.35.32.1.1\" rowspan=\"2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.35.32.1.1.1\">Settings</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S4.T3.35.32.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.35.32.1.2.1\">Domains</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.35.32.1.3\" rowspan=\"2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.35.32.1.3.1\">Avg.</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.35.33.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T3.35.33.2.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">Art</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T3.35.33.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">Clipart</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T3.35.33.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">Product</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.35.33.2.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">Real World</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.10.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.5.1.1\" rowspan=\"5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.5.1.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.10.6.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">20</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.7.3.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.8.4.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.9.5.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.15.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.15.11.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">50</th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T3.11.7.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.12.8.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.13.9.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.14.10.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.15.11.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.20.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.20.16.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">100</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.16.12.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.17.13.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.14.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.19.15.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.20.16.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.25.21\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.25.21.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">200</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.21.17.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.22.18.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.23.19.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.24.20.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.25.21.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.30.26\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.30.26.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">500</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.26.22.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.27.23.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.28.24.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.29.25.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.30.26.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.35.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T3.35.31.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">BP Baselines</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.31.27.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.32.28.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.33.29.3\" 
style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.34.30.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.35.31.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 3: The Top-1Top-5 accuracy (%) of BAFFLE on OfficeHome with feature non-iid participations () and epoch-level settings with 40 comm. rounds. We use the pretrained MobileNet, freeze the backbone and finetune the FC layers."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.12.2.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S4.T4.2.1\" style=\"font-size:90%;\">The GPU memory cost (MB) of vanilla BP and BAFFLE, respectively. \u2018<span class=\"ltx_text\" id=\"S4.T4.2.1.1\">min</span><span class=\"ltx_text\" id=\"S4.T4.2.1.2\">max</span>\u2019 denotes the minimum and maximum dynamic memory for BAFFLE. We also report the ratio (%) of vanilla BP to BAFFLE\u2019s max memory cost.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.8.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T4.8.7.1.1\" rowspan=\"2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.7.1.1.1\">Backbone</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T4.8.7.1.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.7.1.2.1\">CIFAR-10/100</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S4.T4.8.7.1.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.7.1.3.1\">OfficeHome/ImageNet</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.8.8.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.8.8.2.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">BP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.8.8.2.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">BAFFLE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T4.8.8.2.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">Ratio</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.8.8.2.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">BP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.8.8.2.5\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">BAFFLE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.8.8.2.6\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">Ratio</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.4.2.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">LeNet</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.2.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">1680</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">67174</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.4.2.5\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.5.1\">10.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.2.6\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">2527</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.2.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">86201</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T4.4.2.7\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.7.1\">7.95</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.6.4.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">WRN-10-2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.4.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">1878</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.5.3.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">75196</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.4.5\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.4.5.1\">10.43</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.4.6\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">3425</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.4.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">94251</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.4.7\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.4.7.1\">7.32</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T4.8.6.3\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">MobileNet</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.8.6.4\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">2041</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.7.5.1\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">102217</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.8.6.5\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.6.5.1\">10.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.8.6.6\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">5271</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.8.6.2\" style=\"padding-left:10.0pt;padding-right:10.0pt;\">121289</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.8.6.7\" style=\"padding-left:10.0pt;padding-right:10.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.6.7.1\">5.48</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 4: The GPU memory cost (MB) of vanilla BP and BAFFLE, respectively. \u2018minmax\u2019 denotes the minimum and maximum dynamic memory for BAFFLE. We also report the ratio (%) of vanilla BP to BAFFLE\u2019s max memory cost.\n"
}
},
"image_paths": {
"1": {
"figure_path": "2301.12195v3_figure_1.png",
"caption": "Figure 1: A sketch map of BAFFLE. In addition to the global parameters update \u0394\u2062\ud835\udc16\u0394\ud835\udc16\\Delta{\\mathbf{W}}roman_\u0394 bold_W, each client downloads random seeds to locally generate perturbations \u00b1\ud835\udf391:Kplus-or-minussubscript\ud835\udf39:1\ud835\udc3e\\pm\\bm{\\delta}_{1:K}\u00b1 bold_italic_\u03b4 start_POSTSUBSCRIPT 1 : italic_K end_POSTSUBSCRIPT and perform 2\u2062K2\ud835\udc3e2K2 italic_K times of forward propagation (i.e., inference) to compute loss differences. The server can recover these perturbations using the same random seeds and obtain \u0394\u2062\u2112\u2062(\ud835\udc16,\ud835\udf39k)\u0394\u2112\ud835\udc16subscript\ud835\udf39\ud835\udc58\\Delta\\mathcal{L}({\\mathbf{W}},\\bm{\\delta}_{k})roman_\u0394 caligraphic_L ( bold_W , bold_italic_\u03b4 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) by secure aggregation. Each loss difference \u0394\u2062\u2112\u2062(\ud835\udc16,\ud835\udf39k;\ud835\udd3bc)\u0394\u2112\ud835\udc16subscript\ud835\udf39\ud835\udc58subscript\ud835\udd3b\ud835\udc50\\Delta\\mathcal{L}({\\mathbf{W}},\\bm{\\delta}_{k};\\mathbb{D}_{c})roman_\u0394 caligraphic_L ( bold_W , bold_italic_\u03b4 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ; blackboard_D start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT ) is a floating-point number, so K\ud835\udc3eKitalic_K can be easily adjusted to fit the uploading bandwidth.",
"url": "http://arxiv.org/html/2301.12195v3/x1.png"
},
"2": {
"figure_path": "2301.12195v3_figure_2.png",
"caption": "Figure 2: The classification accuracy (%) of BAFFLE in iid scenarios (C=10\ud835\udc3610C=10italic_C = 10) and batch-level communication settings with various K\ud835\udc3eKitalic_K values. We treat the models trained by exact gradients on conventional FL systems as the backpropagation (BP) baselines. On different datasets and architectures, our BAFFLE achieves comparable performance to the exact gradient results with a reasonable K\ud835\udc3eKitalic_K.",
"url": "http://arxiv.org/html/2301.12195v3/x2.png"
},
"3": {
"figure_path": "2301.12195v3_figure_3.png",
"caption": "Figure 3: The ablation study of BAFFLE guidelines, with K=100\ud835\udc3e100K=100italic_K = 100 on MNIST and K=500\ud835\udc3e500K=500italic_K = 500 on CIFAR-10. As seen, twice-FD, Hardswish, and EMA all improve performance without extra computation. EMA reduces oscillations by lessening Gaussian noise.",
"url": "http://arxiv.org/html/2301.12195v3/x3.png"
},
"4": {
"figure_path": "2301.12195v3_figure_4.png",
"caption": "Figure 4: A sketch map to run BAFFLE in one trusted execution environment. The pipeline contains three steps: (1) Load the data and model into the security storage. (2) Load the code of BAFFLE into the root of trust. (3) Run the BAFFLE program in a separation kernel.",
"url": "http://arxiv.org/html/2301.12195v3/x4.png"
},
"5": {
"figure_path": "2301.12195v3_figure_5.png",
"caption": "Figure 5: The robustness of BAFFLE to inference attacks. For real data, we randomly sample some input-label pairs from the validation dataset. For random noise, we generate input-label pairs from standard normal distribution. We sample 500500500500 perturbations \ud835\udf39\ud835\udf39\\bm{\\delta}bold_italic_\u03b4 from \ud835\udca9\u2062(0,\u03c32\u2062\ud835\udc08)\ud835\udca90superscript\ud835\udf0e2\ud835\udc08\\mathcal{N}(0,\\sigma^{2}{\\mathbf{I}})caligraphic_N ( 0 , italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT bold_I ), collect the values of \u0394\u2062\u2112\u2062(\ud835\udc16,\ud835\udf39;\ud835\udd3b)\u0394\u2112\ud835\udc16\ud835\udf39\ud835\udd3b\\Delta\\mathcal{L}({\\mathbf{W}},\\bm{\\delta};{\\mathbb{D}})roman_\u0394 caligraphic_L ( bold_W , bold_italic_\u03b4 ; blackboard_D ) for real data and random noise separately, and compare their distributions.",
"url": "http://arxiv.org/html/2301.12195v3/x5.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2301.12195v3"
}
20240721/2302.12246v5.json
ADDED
The diff for this file is too large to render.
See raw diff

20240721/2303.10460v2.json
ADDED
The diff for this file is too large to render.
See raw diff

20240721/2303.11884v2.json
ADDED
@@ -0,0 +1,170 @@
{
"title": "Better Understanding Differences in Attribution Methods via Systematic Evaluations",
"abstract": "Deep neural networks are very successful on many vision tasks, but\nhard to interpret due to their black box nature. To overcome this, various\npost-hoc attribution methods have been proposed to identify image regions most influential to the models\u2019 decisions.\nEvaluating such methods is challenging since no ground truth attributions exist.\nWe thus propose three novel evaluation schemes\nto more reliably measure the faithfulness of those methods, to make comparisons between them more fair, and to make visual inspection more systematic.\nTo address faithfulness, we propose a novel evaluation setting (DiFull) in which we carefully control which parts of the input can influence the output in order to distinguish possible from impossible attributions.\nTo address fairness, we note that different methods are applied at different layers, which skews any comparison,\nand so evaluate all methods on the same layers (ML-Att) and discuss how this impacts their performance on quantitative metrics.\nFor more systematic visualizations, we propose a scheme (AggAtt) to qualitatively evaluate the methods on complete datasets.\nWe use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.\nFinally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods,\nand discuss its applicability.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "###figure_1### Deep neural networks (DNNs) are highly successful on many computer vision tasks.\nHowever, their black box nature makes it hard to interpret and thus trust their decisions.\nTo shed light on the models\u2019 decision-making process, several methods have been proposed that aim to attribute importance values to individual input features (see Sec. 2 ###reference_###).\nHowever, given the lack of ground truth importance values, it has proven difficult to compare and evaluate these attribution methods in a holistic and systematic manner.\nIn this work that extends [1 ###reference_b1###], we take a three-pronged approach towards addressing this issue. In particular, we focus on three important components for such evaluations: reliably measuring the methods\u2019 model-faithfulness, ensuring a fair comparison between methods, and providing a framework that allows for systematic visual inspections of their attributions.\nFirst, we propose an evaluation scheme (DiFull), which allows distinguishing possible from impossible importance attributions. This effectively provides ground truth annotations for whether or not an input feature can possibly have influenced the model output. As such, it can highlight distinct failure modes of attribution methods (Fig. 1 ###reference_###, left).\nSecond, a fair evaluation requires attribution methods to be compared on equal footing. However, we observe that different methods explain DNNs to different depths (e.g., full network or classification head only).\nThus, some methods in fact solve a much easier problem (i.e., explain a much shallower network). To even the playing field, we propose a multi-layer evaluation scheme for attributions (ML-Att) and\nthoroughly evaluate\ncommonly used methods across multiple layers and models (Fig. 1 ###reference_###, left).\nWhen compared on the same level, we find that performance differences between some methods essentially vanish.\nThird, relying on individual examples for a qualitative comparison is prone to skew the comparison and cannot fully represent the evaluated attribution methods. To overcome this, we propose a qualitative evaluation scheme for which we aggregate attribution maps (AggAtt) across many input samples. This allows us to observe trends in the performance of attribution methods across complete datasets, in addition to looking at individual examples (Fig. 1 ###reference_###, right).\nContributions.\n(1) We propose a novel evaluation setting, DiFull,\nin which we control which regions cannot possibly influence a model\u2019s output, which allows us to highlight definite failure modes of attribution methods.\n(2)\nWe argue that methods can only be compared fairly when evaluated on the same layer. To do this, we introduce ML-Att and evaluate all attribution methods at multiple layers.\nWe show that, when compared fairly, apparent performance differences between some methods effectively vanish.\n(3) We propose a novel aggregation method, AggAtt, to qualitatively evaluate attribution methods across all images in a dataset. This allows to qualitatively assess a method\u2019s performance across many samples (Fig. 
1 ###reference_###, right), which complements the evaluation on individual samples.\n(4) We propose a post-processing smoothing step that significantly improves localization performance on some attribution methods.\nWe observe significant differences when evaluating these smoothed attributions on different architectures, which highlights how architectural design choices can influence an attribution method\u2019s applicability.\nIn this extended version of [1 ###reference_b1###], we additionally provide the following:\n(1) We evaluate on a wider variety of network architectures, in particular deeper networks with higher classification accuracies, including VGG19 [2 ###reference_b2###], ResNet152 [3 ###reference_b3###], ResNeXt [4 ###reference_b4###], Wide ResNet [5 ###reference_b5###], and GoogLeNet [6 ###reference_b6###]. We show that the results and trends discussed in [1 ###reference_b1###] generalize well to diverse CNN architectures.\n(2) We evaluate our settings on multiple configurations of the layer-wise relevance propagation (LRP) [7 ###reference_b7###] family of attribution methods, that modify the gradient flow during backpropagation to identify regions in the image important to the model. We show that while LRP can outperform all other methods, achieving good localization requires carefully choosing propagation rules and their parameters, and is sensitive to the model formulation and architecture.\n(3) We show that the trends in performance of attribution methods at multiple layers (ML-Att), which was visualized at a subset of layers (input, middle, and final) in [1 ###reference_b1###], generalizes across layers and architectures for each method.\nOur code is available at https://github.com/sukrutrao/Attribution-Evaluation ###reference_valuation###."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Related Work",
"text": "Post-hoc attribution methods\nbroadly use one of three main mechanisms.\nBackpropagation-based methods [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 7 ###reference_b7###, 14 ###reference_b14###] typically rely on the gradients with respect to the input [8 ###reference_b8###, 10 ###reference_b10###, 11 ###reference_b11###, 9 ###reference_b9###] or with respect to intermediate layers[15 ###reference_b15###, 13 ###reference_b13###].\nActivation-based methods [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###] weigh activation maps to assign importance, typically of the final convolutional layer.\nThe activations may be weighted by their gradients[16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 21 ###reference_b21###] or by estimating their importance to the classification score[19 ###reference_b19###, 20 ###reference_b20###].\nPerturbation-based methods [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###] treat the network as a black-box and assign importance by observing the change in output on perturbing the input. This is done by occluding parts of the image [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###] or\noptimizing for a mask that maximizes/minimizes class confidence[25 ###reference_b25###, 26 ###reference_b26###].\nIn this work, we evaluate on a diverse set of attribution methods spanning all three categories.\nEvaluation Metrics:\nSeveral metrics have been proposed to evaluate attribution methods, and can broadly be categorised into Sanity checks,\nlocalization-, and perturbation-based metrics.\nSanity checks [27 ###reference_b27###, 15 ###reference_b15###, 28 ###reference_b28###] test for basic properties attributions must satisfy (e.g., explanations should depend on the model parameters).\nLocalization-based metrics evaluate how well attributions localize class discriminative features of the input.\nTypically, this is done by measuring how well attributions coincide with object bounding boxes or image grid cells (see below) [13 ###reference_b13###, 29 ###reference_b29###, 30 ###reference_b30###, 25 ###reference_b25###, 31 ###reference_b31###].\nPerturbation-based metrics measure model behaviour under input perturbation guided by attributions.\nExamples include removing the most[32 ###reference_b32###] or least[12 ###reference_b12###] salient pixels, or using the attributions to scale input features and measuring changes in confidence[18 ###reference_b18###].\nOur work combines aspects from localization metrics and sanity checks to evaluate the model-faithfulness of an attribution method.\nLocalization on Grids:\nRelying on object bounding boxes for localization assumes that the model only relies on information within those bounding boxes.\nHowever, neural networks are known to also rely on context information for their decisions, cf. 
[33 ###reference_b33###].\nTherefore, recent work [31 ###reference_b31###, 34 ###reference_b34###, 35 ###reference_b35###] proposes creating a grid of inputs from distinct classes and measuring localization to the entire grid cell, which allows evaluation on datasets where bounding boxes are not available.\nHowever, this does not guarantee that the model only uses information from within the grid cell, and may fail for similar looking features (Fig. 3 ###reference_###, right). In our work, we propose a metric that controls the flow of information and guarantees that grid cells are classified independently."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Evaluating Attribution Methods",
"text": "###figure_2### ###figure_3### ###figure_4### \nWe present our evaluation settings for better understanding the strengths and shortcomings of attribution methods. Similar to the Grid Pointing Game (GridPG)[31 ###reference_b31###], these metrics evaluate attribution methods on image grids with multiple classes. In particular, we propose a novel quantitative metric, DiFull, and an extension to it, DiPart (3.1 ###reference_###), as stricter tests of model faithfulness than GridPG. Further, we present a qualitative metric, AggAtt (3.2 ###reference_###) and an evaluation setting that compares methods at identical layers, ML-Att (3.3 ###reference_###)."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Quantitative Evaluation: Disconnecting Inputs",
"text": "In the following, we introduce the quantitative metrics that we use to compare attribution methods. For this, we first describe GridPG and the grid dataset construction it uses[31 ###reference_b31###]. We then devise a novel setting, in which we carefully control which features can influence the model output.\nBy construction, this provides ground truth annotations for image regions that can or cannot possibly have influenced the model output. While GridPG evaluates how well the methods localize class discriminative features, our metrics complement it by evaluating their model-faithfulness."
},
{
"section_id": "3.1.1",
"parent_section_id": "3.1",
"section_name": "3.1.1 Grid Data and GridPG",
"text": "For GridPG [31 ###reference_b31###], the attribution methods are evaluated on a synthetic grid of images in which each class may occur at most once. In particular, for each of the occurring classes, GridPG measures the fraction of positive attribution assigned to the respective grid cell versus the overall amount of positive attribution. Specifically, let refer to the positive attribution given to the pixel. The localization score for the subimage is given by:\nAn \u2018optimal\u2019 attribution map would thus yield , while uniformly distributing attributions would yield .\nBy only using confidently classified images from distinct classes, GridPG aims to ensure that the model does not find \u2018positive evidence\u2019 for any of the occurring classes in the grid cells of other classes.\nHowever, specifically for class-combinations that share low-level features, this assumption might not hold, see Fig. 3 ###reference_### (right): despite the two dogs (upper left and lower right) being classified correctly as single images, the output for the logit of the dog in the upper left is influenced by the features of the dog in the lower right in the grid image.\nSince all images in the grid can indeed influence the model output in GridPG 111As shown in Fig. 2(a) ###reference_sf1###, the convolutional layers of the model under consideration process the entire grid to obtain feature maps, which are then classified point-wise. Finally, a single output per class is obtained by globally pooling all point-wise classification scores. As such, the class logits can, of course, be influenced by all images in the grid., it is unclear whether such an attribution is in fact not model-faithful."
},
{
"section_id": "3.1.2",
"parent_section_id": "3.1",
"section_name": "3.1.2 Proposed Metric: DiFull",
"text": "As discussed, the assumption in GridPG that no feature outside the subimage of a given class should positively influence the respective class logit might not hold. Hence, we propose to fully disconnect (DiFull) the individual subimages from the model outputs for other classes.\nFor this, we introduce two modifications. First, after removing the GAP operation, we use classification heads, one for each subimage, and only locally pool those outputs that have their receptive field center above the same subimage. Second, we ensure that their receptive field does not overlap with other subimages by zeroing out the respective connections.\nIn particular, we implement DiFull by passing the subimages separately through the CNN backbone of the model under consideration222\nThis is equivalent to setting the respective weights of a convolutional kernel to zero every time it overlaps with another subimage., see Fig. 2(b) ###reference_sf2###. Then, we apply the classification head separately to the feature maps of each subimage. As we discuss in the supplement, DiFull has similar computational requirements as GridPG.\nAs a result, we can guarantee that no feature outside the subimage of a given class can possibly have influenced the respective class logit\u2014they are indeed fully disconnected.\nNote that this setting differs from pixel removal metrics (e.g. [32 ###reference_b32###, 12 ###reference_b12###]), where \u2018removing\u2019 a patch of pixels at the input and replacing it with a baseline (e.g. zero) values may still result in the patch influencing the network\u2019s decision, for example, based on the shape and the location of the patch. In contrast, we effectively make the weights between the CNN backbone and the classification heads for other grid cells zero, which ensures no influence from pixels in those grid cells to the output."
},
{
"section_id": "3.1.3",
"parent_section_id": "3.1",
"section_name": "3.1.3 Natural Extension: DiPart",
"text": "At one end, GridPG allows any subimage to influence the output for any other class, while at the other, DiFull completely disconnects the subimages. In contrast to GridPG, DiFull might be seen as a constructed setting not seen in typical networks.\nAs a more natural setting, we therefore propose DiPart, for which we only partially disconnect the subimages from the outputs for other classes, see Fig. 2(c) ###reference_sf3###. Specifically, we do not zero out all connections (Sec. 3.1.2 ###reference_.SSS2###), but instead only apply the local pooling operation from DiFull and thus obtain local classification heads for each subimage (as in DiFull).\nHowever, in this setting, the classification head for a specific subimage can be influenced by features in other subimages that lie within the head\u2019s receptive field. For models with a small receptive field, this yields very similar results as DiFull (Sec. 5 ###reference_### and Supplement)."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Qualitative Evaluation: AggAtt",
"text": "In addition to quantitative metrics, attribution methods are often compared qualitatively on individual examples for a visual assessment. However, this is sensitive to the choice of examples and does not provide a holistic view of the method\u2019s performance.\nBy constructing standardized grids, in which \u2018good\u2019 and \u2018bad\u2019 (GridPG) or possible and impossible (DiFull) attributions are always located in the same regions, we can instead construct aggregate attribution maps.\nThus, we propose a new qualitative evaluation scheme, AggAtt, for which we generate a set of aggregate maps for each method that progressively show the performance of the methods from the best to the worst localized attributions.\nFor this, we first select a grid location and then sort all corresponding attribution maps in descending order of the localization score, see Eq. 1 ###reference_###. Then, we bin the maps into percentile ranges and, finally, obtain an aggregate map per bin by averaging all maps within a single bin. In our experiments, we observed that attribution methods typically performed consistently over a wide range of inputs, but showed significant deviations in the tails of the distributions (best and worst case examples). Thus,\nto obtain a succinct visualization that highlights both distinct failure cases as well as the best possible results, we use bins of unequal sizes. Specifically, we use smaller bins for the top and bottom percentiles. For an example of AggAtt, see Fig. 1 ###reference_###.\nAs a result, AggAtt allows for a systematic qualitative evaluation and provides a holistic view of the performance of attribution methods across many samples."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Attributions Across Network Layers: ML-Att",
"text": "Attribution methods often vary significantly in the degree to which they explain a model. Activation-based attribution methods like Grad-CAM [17 ###reference_b17###], e.g., are typically applied on the last spatial layer, and thus only explain a fraction of the full network.\nThis is a significantly easier task as compared to explaining the entire network, as is done by typical backpropagation-based methods. Activations from deeper layers of the network would also be expected to localize better, since they would represent the detection of higher level features by the network (Fig. 1 ###reference_###, left). Therefore, there is a potential trade-off between the extent to which the network is explained and how well localized the attribution explanations are, which in turn would likely determine how useful the attributions are to end users.\nFor a fair comparison between methods, and to further examine this trade-off, we thus propose a multi-layer evaluation scheme for attributions (ML-Att). Specifically, we evaluate methods at various network layers and compare their performance on the same layers. For this, we evaluate all methods at the input, an intermediate, and the final spatial layer of multiple network architectures, see Sec. 4 ###reference_### for details.\nImportantly, we find that apparent differences found between some attribution methods vanish when compared fairly, i.e., on the same layer (Sec. 5.1 ###reference_###).\nLastly, we note that most attribution methods have been designed to assign importance values to input features of the model, not intermediate network activations. The generalisation to intermediate layers, however, is straightforward. For this, we simply divide the full model into two virtual parts: . Specifically, we treat as a pre-processing step and use the attribution methods to explain the outputs of with respect to the inputs . Note that in its standard use case, in Grad-CAM is given by all convolutional layers of the model, whereas for most gradient-based methods is the identity."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experimental Setup",
"text": "###figure_5### Dataset and Architectures:\nWe run our experiments on VGG19[2 ###reference_b2###] and Resnet152[3 ###reference_b3###] trained on Imagenet[36 ###reference_b36###]; similar results on other architectures and on CIFAR10[37 ###reference_b37###] can be found in the supplement.\nFor each model, we separately select images from the validation set that were classified with a confidence score of at least 0.99. By only using highly confidently classified images[31 ###reference_b31###, 35 ###reference_b35###], we ensure that the features within each grid cell constitute positive evidence of its class for the model, and features outside it contain low positive evidence since they get confidently classified to a different class.\nEvaluation on GridPG, DiFull, and DiPart:\nWe evaluate on grids constructed by randomly sampling images from the set of confidently classified images (see above). Specifically, we generate 2000 attributions per method for each of GridPG, DiFull, and DiPart. For GridPG, we use images from distinct classes, while for DiFull and DiPart we use distinct classes except in the bottom right corner, where we use the same class as the top left. By repeating the same class twice, we can test whether an attribution method simply highlights class-related features, irrespective of them being used by the model.\nSince subimages are disconnected from the classification heads of other locations in DiFull and DiPart, the use of repeating classes does not change which regions should be attributed (Sec. 3.1.2 ###reference_.SSS2###).\nEvaluation at Intermediate Layers:\nWe evaluate each method at the input (image), middle333We show a single intermediate layer to visualize trends from the input to the final layer; for results on all layers, see supplement.\n (Conv9 for VGG19, Conv3_x for Resnet152), and final spatial layer (Conv16 for VGG19, Conv5_x for Resnet152) of each network, see Sec. 3.3 ###reference_###. Evaluating beyond the input layer leads to lower dimensional attribution maps, given by the dimensions of the activation maps at those layers.\nThus, as is common practice [17 ###reference_b17###], we upsample those maps to the dimensions of the image () using bilinear interpolation.\nQualitative Evaluation on AggAtt:\nAs discussed, for AggAtt we use bins of unequal sizes (Sec. 3.2 ###reference_###). In particular, we bin the attribution maps into the following percentile ranges: 0\u20132%, 2\u20135%, 5\u201350%, 50\u201395%, 95\u201398%, and 98\u2013100%; cf. Fig. 1 ###reference_###. Further, in our experiments we evaluate the attributions for classes at the top-left grid location.\nAttribution Methods:\nWe evaluate a diverse set of attribution methods, for an overview see Sec. 2 ###reference_###. As discussed in Sec. 3.3 ###reference_###, to apply those methods to intermediate network layers, we divide the full model into two virtual parts and and treat the output of as the input to to obtain importance attributions for those \u2018pre-processed\u2019 inputs. 
In particular, we evaluate the following methods.\nFrom the set of backpropagation-based methods, we evaluate on Guided Backpropagation [9 ###reference_b9###], Gradient [8 ###reference_b8###], IntGrad [11 ###reference_b11###], IxG [10 ###reference_b10###], and LRP [7 ###reference_b7###].\nFrom the set of activation-based methods, we evaluate on Grad-CAM [17 ###reference_b17###], Grad-CAM++ [18 ###reference_b18###], Ablation-CAM [19 ###reference_b19###], Score-CAM [20 ###reference_b20###], and Layer-CAM [21 ###reference_b21###].\nNote that in our framework, these methods can be regarded as using the classification head only (except [21 ###reference_b21###]) for , see Sec. 3.3 ###reference_###. To evaluate them at earlier layers, we simply expand accordingly to include more network layers.\nFrom the set of perturbation-based methods, we evaluate Occlusion [24 ###reference_b24###] and RISE [23 ###reference_b23###]. These are typically evaluated on the input layer, and measure output changes when perturbing (occluding) the input (Fig. 3 ###reference_###, left).\nNote that Occlusion involves sliding an occlusion kernel of size with stride over the input.\nWe use for the input, and at the middle and final layers to account for the lower dimensionality of the feature maps.\nFor RISE, we use random masks, generated separately for evaluations at different network layers.\nFor LRP, following [35 ###reference_b35###, 38 ###reference_b38###],\nwe primarily use a configuration that applies the -rule with for the fully connected layers in the network, the -rule for the convolutional layers except the first convolutional layer, and the -rule for the first convolutional layer. We discuss the performance across other configurations, including the composite configuration proposed by [14 ###reference_b14###], in Sec. 5.5 ###reference_###. Note that since certain LRP rules, such as the -rule, are not implementation invariant ([11 ###reference_b11###]), relevance may be distributed differently for functionally equivalent models. In particular, relevance propagation through batch normalization layers can be handled in multiple ways, such as by replacing them with convolutions or by merging them with adjacent linear layers. In our experiments, as in\n[14 ###reference_b14###],\nbatch normalization layers are handled by merging them with adjacent convolutional or fully connected layers. We further discuss some ramifications of the lack of implementation invariance to attribution localization in Sec. 5.5 ###reference_### and the supplement."
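A simple sketch of the sliding-window Occlusion attribution mentioned above, with an all-zero baseline patch. The default kernel size and stride are assumptions for illustration; the exact per-layer values used in the experiments are not reproduced here:

```python
import torch

@torch.no_grad()
def occlusion_attribution(model, x, target, k=16, stride=8):
    # x: (1, C, H, W); the attribution of a patch is the drop in the target
    # logit when that patch is replaced by a zero baseline.
    base = model(x)[0, target]
    attr = torch.zeros(x.shape[2], x.shape[3])
    for i in range(0, x.shape[2] - k + 1, stride):
        for j in range(0, x.shape[3] - k + 1, stride):
            occluded = x.clone()
            occluded[:, :, i:i + k, j:j + k] = 0.0
            attr[i:i + k, j:j + k] += base - model(occluded)[0, target]
    return attr  # overlapping windows accumulate; normalize by counts if desired
```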
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experimental Results and Discussion",
"text": "###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### In this section, we first present the quantitative results for all attribution methods on GridPG, DiPart, and DiFull and compare their performance at multiple layers (5.1 ###reference_###).\nFurther, we present a simple smoothing mechanism that provides highly performant attributions on all three settings, and discuss architectural considerations that impact its effectiveness (5.3 ###reference_###). Finally, we present qualitative results using AggAtt, and show its use in highlighting strengths and deficiencies of attribution methods (5.4 ###reference_###)."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Evaluation on GridPG, DiFull, and DiPart",
"text": "We perform ML-Att evaluation using the the input (Inp), and the activations at a middle layer (Mid) and final convolutional layer (Fin) before the classification head\n(x-ticks in Fig. 4 ###reference_###) for all three quantitative evaluation settings (GridPG, DiFull, DiPart, minor columns in Fig. 4 ###reference_###) discussed in Sec. 3 ###reference_###. In the following, we discuss the methods\u2019 results, grouped by their \u2018method family\u2019: backpropagation-based, activation-based, and perturbation-based methods (major columns in Fig. 4 ###reference_###).\nBackpropagation-based methods:\nWe observe that all methods except LRP perform poorly at the initial layer on GridPG (Fig. 4 ###reference_###, left).\nSpecifically, we observe that they yield noisy attributions that do not seem to reflect the grid structure of the images; i.e., positive attributions are nearly as likely to be found outside of a subimage for a specific class as they are to be found inside.\nHowever, they improve on later layers. At the final layer, IntGrad and IxG show very good localization (comparable to Grad-CAM), which suggests that the methods may have similar explanatory power when compared on an equal footing. We note that IxG at the final layer has been previously proposed under the name DetGrad-CAM [39 ###reference_b39###].\nLRP, on the other hand, performs strongly at all three layers.\nWe believe that this is likely because the rule used in the convolutional layers propagates relevance backwards in a manner that favours activations that contribute positively to the final output. As the localization metric only considers positive attributions, such a propagation scheme would result in a high localization score. Note that this only evaluates a single LRP configuration, as we discuss in Sec. 5.5 ###reference_###, we find that the performance can significantly vary based on the propagation rules used.\nOn DiFull, all methods show near-perfect localization across layers (Fig. 8 ###reference_###). No attribution is given to disconnected subimages since the gradients with respect to them are zero (after all, they are fully disconnected);\ndegradations for other layers can be attributed to the applied upsampling. However, the lack of implementation invariance [11 ###reference_b11###] in LRP implies that relevance could be made to effectively propagate through disconnected regions by constructing an appropriate functionally equivalent model, as we discuss in Sec. 5.5 ###reference_### and the supplement.\nSimilar results are seen in DiPart, but with decreasing localization when moving backwards from the classifier, which can be attributed to the fact that the receptive field can overlap with other subimages in this setting. Overall, we find that similar performance is obtained on DiFull and DiPart across all methods.\nActivation-based methods:\nWe see that all methods with the exception of Layer-CAM improve in localization performance from input to final layer on all three settings. Since attributions are computed using a scalar weighted sum of attribution maps, this improvement could be explained by improved localization of activations from later layers. In particular, localization is very poor at early layers, which is a well-known limitation of Grad-CAM [21 ###reference_b21###]. The weighting scheme also causes final layer attributions for all methods except Layer-CAM to perform worse on DiFull than on GridPG, since these methods attribute importance to both instances of the repeated class (Fig. 
8 ###reference_###). This issue is absent in Layer-CAM as it does not apply a pooling operation.\nPerturbation-based methods:\nWe observe (Fig. 4 ###reference_###, right) Occlusion to perform well across layers on DiFull, since occluding disconnected subimages cannot affect the model outputs and are thus not attributed importance.\nHowever, the localization drops slightly for later layers. This is due to the fact that the relative size (w.r.t. activation map) of the overlap regions between occlusion kernels and adjacent subimages increases.\nThis highlights the sensitivity of performance to the choice of hyperparameters, and the tradeoff between computational cost and performance.\nOn GridPG, Occlusion performance improves with layers.\nOn the other hand, RISE performs poorly across all settings and layers. Since it uses random masks, pixels outside a target grid cell that share a mask with pixels within get attributed equally. So while attributions tend to concentrate more in the target grid cell, the performance can be inconsistent (Fig. 8 ###reference_###)."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Localization across network depths",
"text": "In this section, we evaluate the trends in localization performance across the full range of network depths for the seven models we evaluate on (VGG19, VGG11 [2 ###reference_b2###], Resnet152, Resnet18 [3 ###reference_b3###], ResNeXt [4 ###reference_b4###], Wide ResNet [5 ###reference_b5###], GoogLeNet [6 ###reference_b6###]). Our quantitative evaluation using our proposed ML-Att scheme so far (Fig. 4 ###reference_###) focused on three representative network depths \u2013 at the input, a middle layer, and the final layer of each model. We found that several methods (e.g. IxG, IntGrad, Grad-CAM, LRP) localize well at the final layer. Here, we evaluate whether the performance on these three layers is representative of the general trend across all layers, and whether the trends for each attribution methods generalize across diverse network architectures.\nThe quantitative results for a subset of attribution methods can be found in Fig. 9 ###reference_###; for the remaining methods, see supplement.\nWe pick four methods, two backpropagation-based (IntGrad, IxG) and two activation-based (Grad-CAM, Ablation-CAM), whose performance increases most prominently from the input to the final layer in Fig. 4 ###reference_###. In addition, we show results on LRP, the best performing method overall. Full results on all methods can be found in the supplement. For each attribution method, we plot the mean localization score on each model across all network depths. The x-axis shows the fraction of the model depth, where 0 refers to the input layer and 1 refers to the final convolutional layer, and the y-axis shows the localization score. Each line plots the mean localization score across all possible depths for a single model.\nWe find that the trends in performance at the chosen three layers in Fig. 4 ###reference_### generalize to all layers, with the localization performance improving at deeper layers for all the chosen methods (except LRP). Furthermore, we find that these trends also generalize across network architectures, and demonstrates the utility of ML-Att in finding similar performance across diverse attribution methods when compared fairly at identical depths. We find that the performance of IntGrad and IxG steadily improves from the input to the final layer, while that of Grad-CAM and Ablation-CAM is poor except near the final layer. LRP, on the other hand, scores highly throughout the network.\n###figure_11###"
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Smoothing Attributions",
"text": "From Sec. 5.1 ###reference_###, we see that Grad-CAM localizes well at the final layer in GridPG, but performs poorly on all the other settings as a consequence of global pooling of gradients (for DiFull) and poor localization of early layer features (for GridPG early layers).\nSince IxG, in contrast, does not use a pooling operation, it performs well on DiFull at all layers and on GridPG at the final layer.\nHowever, it performs poorly at the input and middle layers on GridPG due to the noisiness of gradients; IntGrad shows similar results.\nDevising an approach to eliminate this noise would provide an attribution method that performs well across settings and layers.\nPrevious approaches to reduce noise include averaging attribution maps over many perturbed samples (SmoothGrad[40 ###reference_b40###], see supplement for a comparison) or adding a gradient penalty during training[41 ###reference_b41###]. However, SmoothGrad is computationally expensive as it requires several passes on the network to obtain attributions, and is sensitive to the chosen perturbations.\nSimilarly, adding a penalty term during training requires retraining the network.\nHere, we propose to simply apply a Gaussian smoothing kernel on existing IntGrad and IxG attributions. We evaluate on DiFull and GridPG using several kernel sizes, using standard deviation for kernels of size . We refer to the smooth versions as S-IntGrad and S-IxG respectively.\nOn VGG19 (Fig. 5 ###reference_###, top), we find that S-IntGrad and S-IxG localize significantly better than IntGrad and IxG, and the performance improves with increasing kernel size. In detail, S-IntGrad on the input layer with outperforms Grad-CAM on the final layer, despite explaining the full network. While performance on DiFull drops slightly as smoothing leaks attributions across grid boundaries, both S-IntGrad and S-IxG localize well across settings and layers. However, on Resnet18 (Fig. 5 ###reference_###, bottom), while S-IntGrad improves similarly, S-IxG does not, which we discuss next.\nImpact of Network Architecture:\nA key difference between the VGG19 and Resnet152 architectures used in our experiments is that VGG19 does not have batch normalization (BatchNorm) layers.\nWe note that batch norm effectively randomizes the sign of the input vectors to the subsequent layer, by centering those inputs around the origin (cf.[42 ###reference_b42###, 41 ###reference_b41###]). Since the sign of the input determines whether a contribution (weighted input) is positive or negative, a BatchNorm layer will randomize the sign of the contribution and the \u2018valence\u2019 of the contributions will be encoded in the BatchNorm biases.\nTo test our hypothesis, we evaluate S-IxG on a VGG19 with BatchNorm layers (Fig. 5 ###reference_###, middle), and observe results similar to Resnet152: i.e., we observe no systematic improvement by increasing the kernel size of the Gaussian smoothing operation. This shows that the architectural choices of a model can have a significant impact on the performance of attribution methods."
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "Qualitative Evaluation using AggAtt",
"text": "In this section, we present qualitative results using AggAtt for select attributions evaluated on GridPG and DiFull and multiple layers.\nFirst, to investigate the qualitative impact of smoothing, we use AggAtt to compare IxG, S-IxG, and Grad-CAM attributions on GridPG on multiple layers.\nWe employ AggAtt on DiFull to highlight specific characteristics and failure cases of some attribution methods.\nAggAtt on GridPG:\nWe show AggAtt results for IxG, S-IxG, Grad-CAM, and LRP at three layers on GridPG using VGG19 on the images at the top-left corner (Fig. 6 ###reference_###).\nFor each method, a set of three rows corresponds to the attributions at input, middle, and final layers. For S-IxG, we set to , , and respectively.\nWe further show individual samples (median bin) of the first and last bins per method.\nWe observe that the aggregate visualizations are consistent with the quantitative results (Figs. 4 ###reference_###, LABEL: and 5 ###reference_###) and the individual examples shown for each bin.\nThe performance improves for IxG and Grad-CAM from input to final layer, while S-IxG localizes well across three layers. Attributions from LRP are generally visually pleasing and localize well across layers. Finally, the last two columns show that all the attribution methods perform \u2018poorly\u2019 for some inputs; e.g., we find that IxG and Grad-CAM on the final layer attribute importance to other subimages if they exhibit features that are consistent with the class in the top-left subimage.\nWhile the attributions might be conceived as incorrect, we find that many \u2018failure cases\u2019 on GridPG highlight features that the underlying model might in fact use, even if they are in another subimage.\nGiven the lack of ground truth, it is difficult to assess whether these attributions faithfully reflect model behaviour or deficiencies of the attribution methods.\nDespite explaining significantly more layers, S-IntGrad and S-IxG at the input layer not only match Grad-CAM at the final layer quantitatively (Fig. 5 ###reference_###) and qualitatively (Fig. 6 ###reference_###), but are also highly consistent with it for individual explanations. Specifically, the Spearman rank correlation between the localization scores of Grad-CAM (final layer) and S-IntGrad (input layer) increases significantly as compared to IntGrad (input layer) (e.g., on VGG19), implying that their attributions for any input tend to lie in the same AggAtt bins (see supplement).\nTo further understand the effect of smoothing, we visualize S-IxG with varying kernel sizes while including negative attributions (Fig. 7 ###reference_###). The top row shows aggregate attributions across the dataset, while the middle and bottom rows show an example under the GridPG and standard localization settings respectively. We observe that while IxG attributions appear noisy (column 2), smoothing causes positive and negative attributions to cleanly separate out, with the positive attributions concentrating around the object. For instance, in the second row, IxG attributions concentrate around both the dog and the wolf, but S-IxG with correctly attributes only the dog positively. This could indicate a limited effective receptive field (RF) [43 ###reference_b43###] of the models. Specifically, note that for piece-wise linear models, summing the contributions (given by IxG) over all input dimensions within the RF exactly yields the output logit (disregarding biases). 
Models with a small RF would thus be well summarised by S-IxG for an adequately sized kernel; we elaborate on this in the supplement.\nAggAtt on DiFull:\nWe visually evaluate attributions on DiFull for one method per method family, i.e., from backpropagation-based (IxG, input layer), activation-based (Grad-CAM, final layer), and perturbation-based (RISE, input layer) methods at their standard layers (Fig. 8 ###reference_###). The top row corroborates the near-perfect localization shown by the backpropagation-based methods on DiFull. The middle row shows that Grad-CAM attributions concentrate at the top-left and bottom-right corners, which contain images of the same class, since global pooling of gradients makes it unable to distinguish between the two even though only the top-left instance (here) influences classification. Finally, for RISE, we observe that while attributions localize well for around half the images, the use of random masks results in noisy attributions for the bottom half."
},
{
"section_id": "5.5",
"parent_section_id": "5",
"section_name": "Evaluation using various LRP Configurations",
"text": "###figure_12### From the previous sections, we saw that LRP using the configuration by [35 ###reference_b35###] outperformed all other attribution methods at all layers. More generally, LRP [7 ###reference_b7###] is a paradigm that encompasses a family of attribution methods that modify the gradients during backpropagation. The mechanism of relevance propagation is specified by a set of propagation rules used across the network. Rules are selected for each layer usually based on the type of layer and its position in the network, and a mapping of layers to rules constitutes a unique LRP configuration. Some of the existing backpropagation-based methods that were proposed independently, such as IxG [10 ###reference_b10###] and Excitation Backprop [13 ###reference_b13###], can be viewed as specific configurations of LRP [14 ###reference_b14###].\nIn this section, we study the impact of the choice of rules and their hyperparameters in attribution performance of LRP.\nSpecifically, following prior work [14 ###reference_b14###], we consider a composite configuration (hereafter referred to as LRP-Composite), that applies the -rule on fully connected layers, the -rule on convolutional layers except the first layer, and the -rule on the first convolutional layer. In contrast to the -rule that weighs positive and negative contributions equally when propagating relevance, the -rule uses a hyperparameter that increases the weight given to positive contributions. As , relevance is propagated only based on positive contributions, and the configuration is identical to the one used in [35 ###reference_b35###] and the previous sections (hereafter referred to as LRP-Focus). In our experiments, we investigate the impact of on performance of LRP, and evaluate LRP-Composite using values of in . corresponds to using the -rule where no additional weight is given to positive contributions, and is the value that is commonly used (e.g. [14 ###reference_b14###]). We also evaluate the setting when , i.e. using LRP-Focus. Quantitative results for both models on GridPG can be found in Fig. 10 ###reference_###.\nWe find that the performance is highly sensitive to the choice of . Low values of (up to 0.01) localize poorly, particularly at the input layer. For higher values of , including LRP-Focus where , the localization performance is high across layers for both models on GridPG.\nWe attribute this to the following: if only positive contributions are considered at intermediate layers, the sign of the attributions to the last layers will be maintained throughout the backpropagation process. In particular, the distribution of positive and negative attributions at the input layer will be largely dependent on the attributions at the final layer. Hence, since the -rule performs well at the final layer (similar to IxG and IntGrad), maintaining the sign of the attributions will lead to good results at the input layer, which the -rule achieves by suppressing negative contributions. We believe that understanding how to better integrate the negative contributions in the backward pass to reflect all model computations is thus an interesting direction to explore in future work.\nLack of Implementation Invariance:\nAs discussed in [11 ###reference_b11###], LRP in general is not implementation invariant, i.e., functionally equivalent models could be assigned highly dissimilar attribution maps for the same input. In particular, this also holds for the -rule, which is used in the best-performing LRP-Focus configuration. 
This leads to the possibility of controlling which pixels get attributed by appropriately formulating an equivalent model. Importantly, as we show in the supplement, this can also lead to pixels that have no influence on the output logit to get high attributions. This shows that while LRP can be highly performant, one must carefully consider the parameters used and the properties of the setting before using it in practice."
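A small NumPy sketch of the gamma-rule for a single linear layer, to make the role of the hyperparameter concrete; shapes and names are assumptions for illustration:

```python
import numpy as np

def lrp_gamma_linear(a, W, rel_out, gamma=0.25, eps=1e-9):
    """Gamma-rule relevance propagation through one linear layer z = a @ W.

    a:       layer input activations, shape (d_in,)
    W:       weight matrix, shape (d_in, d_out)
    rel_out: relevance of the layer outputs, shape (d_out,)
    """
    Wg = W + gamma * np.clip(W, 0.0, None)  # up-weight positive contributions
    z = a @ Wg + eps                        # stabilized denominators per output
    s = rel_out / z
    return a * (s @ Wg.T)                   # relevance redistributed to the input
```

As gamma grows, negative contributions are increasingly suppressed; in the limit only positive contributions propagate relevance, matching the behaviour of the LRP-Focus configuration described above.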
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Discussion and Conclusion",
"text": "In this section, we summarize our results, and discuss high-level recommendations. First, we proposed a novel quantitative evaluation setting, DiFull, to disentangle the behaviour of the model from that of the attribution method. This allowed us to evaluate for model-faithfulness by partitioning inputs into regions that could and could not influence the model\u2019s decision. Using this, we showed that (Fig. 4 ###reference_###) some popularly used attribution methods, such as Grad-CAM, can provide model-unfaithful attributions. On the other hand, while noisy, backpropagation-based methods like IntGrad and IxG localize perfectly under this setting. We note, however, that our setting cannot evaluate the correctness of attributions within the target grid cells, and as such a high localization performance on DiFull is a necessary condition for a good attribution method, but not a sufficient condition. In other words, DiFull can be viewed as a coarse sanity check that should be passed by any model-faithful attribution method, but our results show that several do not do so. This could be of practical importance in use cases where models learn to focus on a fixed local region in an image to reach their decisions.\nSecond, we observed that different attribution methods are typically evaluated at different depths, which leads to them being compared unfairly. To address this, we proposed a multi-layer evaluation scheme, ML-Att, through which we compared each attribution method at identical model depths (Figs. 4 ###reference_### and 9 ###reference_###). We found that surprisingly, a diverse set of methods perform very similarly and localize well, particularly at the final layer. This includes backpropagation-based methods like IxG and IntGrad, which have often been criticized for providing highly noisy and hard to interpret attributions. Combined with their perfect localization on DiFull, this shows that IxG and IntGrad at the final layer can be used as an alternative to Grad-CAM, when coarse localization is desired. Quantitative (Figs. 4 ###reference_### and 9 ###reference_###) and qualitative (Figs. 6 ###reference_### and 8 ###reference_###) results at intermediate layers also point to the existence of a trade-off between faithfulness and coarseness of attributions, particularly for methods like IxG and IntGrad. While attributions computed closer to the input explain a larger fraction of the network and provides more fine-grained attributions, such attributions often localize poorly and are not very helpful to end users. On the other hand, attributions computed closer to the final layer explain only a small part of the network, but are coarser, localize better and highlight the object features more clearly. As a result, the choice of layer to compute attributions would depend on the user\u2019s preference in the presence of this trade-off.\nThird, we proposed an aggregate attribution evaluation scheme, AggAtt, to holistically visualize the performance of an attribution method. Unlike evaluation on a small subset of examples, this shows the full range of localizations across the dataset and eliminates any inadvertent biases from the choice of examples. Furthermore, it allows one to easily visualize the performance at the best and worst localized examples, and could help identify cases when an attribution method unexpectedly fails.\nFourth, we showed that a simple post-hoc Gaussian smoothing step can significantly improve localization (Figs. 
5 ###reference_### and 7 ###reference_###) for some attribution methods (IntGrad, IxG). Unlike commonly used smoothing techniques like SmoothGrad, this requires no additional passes through the network and no selection of hyperparameters. As we show in the supplement, it also results in better localized attributions. This shows that while originally noisy, obtaining a local summary of attribution maps from these methods could provide maps that are useful for humans in practice. However, we find that the effectiveness of smoothing is influenced by the network architecture, in particular the presence of batch normalization layers, which suggests that architectural considerations must be taken into account when using attribution methods.\nFinally, we find that certain configurations of layer-wise relevance propagation (LRP) consistently perform the best quantitatively and qualitatively across network depths. However, by interpolating between different LRP configurations (see Sec. 5.5 ###reference_###), we find that this is likely due to the fact that the well-performing LRP-configurations maintain the sign of the attributions to the final layer in the backpropagation process. As such, some aspects of the model computations are not reflected in the final attribution maps (negative contributions at intermediate layers are neglected) and the final attributions are largely dependent on the localisation performance at the final layer. How to better reflect those negative contributions in the backpropagation process is thus an interesting direction for future work.\nWhile we focus on CNNs in our work, performing a comprehensive evaluation for attribution methods on the recently proposed state-of-the-art image classification architectures such as vision transformers (ViTs) [44 ###reference_b44###] is another interesting direction for future work.\nOverall, we find that fair comparisons, holistic evaluations (DiFull, GridPG, AggAtt, ML-Att), and careful disentanglement of model behaviour from the explanations provide better insights in the performance of attribution methods."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2303.11884v2_figure_1.png",
"caption": "Fig. 1: \nLeft: Illustration of DiFull and ML-Att. In DiFull, we evaluate models on image grids (col. 1). Crucially, we employ separate classification heads for each subimage that cannot possibly be influenced by other subimages; this yields \u2018ground truths\u2019 for possible and impossible attributions (col. 2). For ML-Att, we evaluate methods at different network layers; and show attributions for the example grid image using Grad-CAM and IntGrad. Further, we show results after smoothing IntGrad (S-IntGrad), which we find to perform well (Sec. 5.3). Grad-CAM, for instance, incorrectly attributes the bottom-right butterfly which lies in the \u2018impossible\u2019 partition for attributions.\nRight: Visualisation of our AggAtt evaluation. By sorting attributions into percentile ranges w.r.t. their performance and aggregating them over many samples, we obtain a holistic view of a methods\u2019 performance. AggAtt can thus reflect both best and worst case behaviour of an attribution method.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/teaser.png"
},
"2(a)": {
"figure_path": "2303.11884v2_figure_2(a).png",
"caption": "(b) DiFull\nFig. 2: \nOur three evaluation settings. In GridPG, the classification scores are influenced by the entire input. In DiFull, on the other hand, we explicitly control which inputs can influence the classification score. For this, we pass each subimage separately through the spatial layers, and then construct individual classification heads for each of the subimages. DiPart serves as a more natural setting to DiFull, that still provides partial control over information. We show a 1\u00d72121\\times 21 \u00d7 2 grid for readability, but the experiments use 2\u00d72222\\times 22 \u00d7 2 grids.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/arch_difull.png"
},
"2(b)": {
"figure_path": "2303.11884v2_figure_2(b).png",
"caption": "(c) DiPart\nFig. 2: \nOur three evaluation settings. In GridPG, the classification scores are influenced by the entire input. In DiFull, on the other hand, we explicitly control which inputs can influence the classification score. For this, we pass each subimage separately through the spatial layers, and then construct individual classification heads for each of the subimages. DiPart serves as a more natural setting to DiFull, that still provides partial control over information. We show a 1\u00d72121\\times 21 \u00d7 2 grid for readability, but the experiments use 2\u00d72222\\times 22 \u00d7 2 grids.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/arch_dipart.png"
},
"3": {
"figure_path": "2303.11884v2_figure_3.png",
"caption": "Fig. 3: Left: Example Attributions on the Standard, GridPG, and DiFull Settings. We show attributions for all methods on their typically evaluated layers, i.e. input for backpropagation-based and perturbation-based, and final layer for activation-based methods. Blue boxes denote the object bounding box (Standard) or the grid cell (GridPG, DiFull) respectively. For DiFull, we use images of the same class at the top-left and bottom-right corners as in our experiments. Right: Occlusion attributions for an example evaluated on GridPG, DiFull, and DiPart. The top-left and bottom-right corners contain two different species of dogs, which share similar low-level features, causing both to be attributed in GridPG. In contrast, our disconnected construction in DiFull and DiPart ensures that the bottom-right subimage does not influence the classification of the top-left, and thus should not be attributed by any attribution methods, even though some do erroneously.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/examples_grid.png"
},
"4": {
"figure_path": "2303.11884v2_figure_4.png",
"caption": "Fig. 4: Quantitative Results on VGG19 and Resnet152. For each metric, we evaluate all attribution methods with respect to the input image (Inp), a middle (Mid), and the final (Fin) spatial layer. Boxes of the same colour correspond to the same attribution method, and each group of three boxes shows, from left to right, the results at the input (Inp), middle (Mid), and final (Fin) spatial layers respectively. We observe the performance to improve from Inp to Fin on most settings. See also Fig. 9.\nWe find similar results on DiFull and DiPart across methods.\nThe symbol * denotes boxes that collapse to a single value, for better readability.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/all_boxplot_vgg19resnet152.png"
},
"5": {
"figure_path": "2303.11884v2_figure_5.png",
"caption": "Fig. 5: Smoothing the attributions for IntGrad and IxG significantly improves their performance at the input image and middle layer. For reference, we show Grad-CAM on the final spatial layer.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/boxplot_smooth.png"
},
"6": {
"figure_path": "2303.11884v2_figure_6.png",
"caption": "Fig. 6: Qualitative Results for VGG19 on GridPG evaluated at the top-left corner. Centre: Aggregate attributions sorted and binned in descending order of localization. Each column corresponds to a bin, and set of three rows corresponds to a method. For each method, the three rows from top to bottom show the aggregate attributions at the input, middle, and final spatial layers. Left: Examples from the first bin, which corresponds to the best set of attributions. Right: Similarly, we show examples from the last bin, which corresponds to the worst set of attributions. For smooth IxG, we use K=129\ud835\udc3e129K=129italic_K = 129 for the input layer, K=17\ud835\udc3e17K=17italic_K = 17 at the middle layer, and K=9\ud835\udc3e9K=9italic_K = 9 at the final layer. All examples shown correspond to images whose attributions lie at the median position in their respective bins.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/vis_gridpg-Journal.png"
},
"7": {
"figure_path": "2303.11884v2_figure_7.png",
"caption": "Fig. 7: Qualitative Visualization of smoothing IxG attribution maps for various kernel sizes, including both positive and negative attributions. Top: Aggregate attribution maps for VGG19 on GridPG at the top-left corner across the dataset. We see that positive attributions (green) aggregate to the top-left grid cell and negative attributions (red) aggregate outside when smoothing with large kernel sizes. Middle and Bottom: Examples of smoothing on a single grid and non-grid image.\nPositive attributions concentrate inside the bounding box when smoothed with large kernels.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/sixg_negative_examples.png"
},
"8": {
"figure_path": "2303.11884v2_figure_8.png",
"caption": "Fig. 8: Qualitative Results for VGG19 on DiFull evaluated at the top-left corner. Centre: Aggregate attributions sorted and binned in descending order of localization. Each column corresponds to a bin and each row corresponds to a method applied at its standard layer. Left: Examples from the first bin, which corresponds to the best set of attributions. Right: Examples from the last bin, which corresponds to the worst set of attributions. All examples shown correspond to images whose attributions lie at the median position in their bins.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/vis_difull.png"
},
"9": {
"figure_path": "2303.11884v2_figure_9.png",
"caption": "Fig. 9: Mean localization performance layer-wise across seven models of a selected subset of attribution methods. For each method and network, we plot the mean localization score at at several depths. The x-axis shows the fraction of the total network depth (0 - input, 1 - final layer). As discussed in Sec. 3.3 and Fig. 4, the localization performance tends to improve towards the final layer.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/all_lineplot_grid_selected.png"
},
"10": {
"figure_path": "2303.11884v2_figure_10.png",
"caption": "Fig. 10: Quantitative Results for various LRP configurations on VGG19 and Resnet152. For each metric, we evaluate all attribution methods with respect to the input image (Inp), a middle (Mid), and the final (Fin) spatial layer.",
"url": "http://arxiv.org/html/2303.11884v2/extracted/5745725/figures/all_boxplot_zlrp_gridpg_gamma.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2303.11884v2"
}
20240721/2304.06372v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2305.07408v3.json
ADDED
@@ -0,0 +1,194 @@
{
"title": "Distributed Gradient Descent for Functional Learning",
"abstract": "In recent years, different types of distributed and parallel learning schemes have received increasing attention for their strong advantages in handling large-scale data information. In the information era, to face the big data challenges that stem from functional data analysis very recently, we propose a novel distributed gradient descent functional learning (DGDFL) algorithm to tackle functional data across numerous local machines (processors) in the framework of reproducing kernel Hilbert space. Based on integral operator approaches, we provide the first theoretical understanding of the DGDFL algorithm in many different aspects of the literature. On the way of understanding DGDFL, firstly, a data-based gradient descent functional learning (GDFL) algorithm associated with a single-machine model is proposed and comprehensively studied. Under mild conditions, confidence-based optimal learning rates of DGDFL are obtained without the saturation boundary on the regularity index suffered in previous works in functional regression. We further provide a semi-supervised DGDFL approach to weaken the restriction on the maximal number of local machines to ensure optimal rates. To our best knowledge, the DGDFL provides the first divide-and-conquer iterative training approach to functional learning based on data samples of intrinsically infinite-dimensional random functions (functional covariates) and enriches the methodologies for functional data analysis.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Introduced by Ramsay in 1980s [31 ###reference_b31###], [32 ###reference_b32###], functional data analysis (FDA) has been intensively studied in recent years. Over the past three decades, the great success of FDA has been witnessed in a variety of fields including machine learning, image science, economics, medicine and electronic commerce [44 ###reference_b44###]. Different from conventional data analysis, FDA focuses on data that are intrinsically infinite-dimensional and often appear as random functions or time series. The high- or infinite-dimensional structure of functional data is a rich source of information and brings many opportunities for future studies in the information era. To reveal the functional nature, one of the most popularly studied frameworks is the functional linear model. In this paper, we consider the functional linear model\nwhere is a scalar response variable, is a square integrable functional predictor defined on a compact domain for some positive integer , is the slope function, is the intercept, is the random noise independent of with .\nFor the sake of simplicity, we assume and .\nOur goal is to recover the target functional given by\nby constructing an estimator based on a training sample set consisting of independent copies of .\nFor a prediction , the risk is defined as\nwhere is independent of and the training data, denotes the expectation taken over and only. For any prediction rule constructed from the training data set , its prediction accuracy can be measured by the excess risk\nwhere denotes the expectation with respect to .\nRecently, there is a growing literature circling the functional linear model (1 ###reference_###) [33 ###reference_b33###], [3 ###reference_b3###], [14 ###reference_b14###], [54 ###reference_b54###], [4 ###reference_b4###], [40 ###reference_b40###], [9 ###reference_b9###], [41 ###reference_b41###], [10 ###reference_b10###], [6 ###reference_b6###], [24 ###reference_b24###], [26 ###reference_b26###]. An earlier popular technique for handling such models is the functional principal component analysis (FPCA) which performs the estimation of by a linear combination of the eigenfunctions of the covariance function of the random function [3 ###reference_b3###], [14 ###reference_b14###]. In the past decade, introduced by Cai and Yuan [54 ###reference_b54###], [4 ###reference_b4###], an approach called the reproducing kernel approach to functional linear model has grown up quickly. The method introduces the RKHS framework in the functional linear model and focuses on establishing estimation of the slope function which lies in a reproducing kernel Hilbert space (RKHS), for details of RKHS, we refer to references e.g.[49 ###reference_b49###],[36 ###reference_b36###], [50 ###reference_b50###], [54 ###reference_b54###],[7 ###reference_b7###], [4 ###reference_b4###],[13 ###reference_b13###],[1 ###reference_b1###]. A well-known strategy to implement the RKHS approach is to consider the Tikhonov regularization scheme (see e.g. [56 ###reference_b56###], [2 ###reference_b2###]) over an RKHS induced by a Mercer kernel (continuous, symmetric, positive semi-definite function on ). To be more precise, given a training sample of independent copies of , one can utilize the estimator \ngenerated by the regularized least squares (RLS) scheme given by\nto realize the approximation of . 
There have been wide\nstudies on the convergence analysis of generated from the RLS scheme (2 ###reference_###) [54 ###reference_b54###], [4 ###reference_b4###], [40 ###reference_b40###], [41 ###reference_b41###].\nOur present work aims to establish a new distributed gradient descent functional learning algorithm (DGDFL) to solve the functional linear model (1 ###reference_###) and systematically carry out convergence analysis of the algorithm. The motivation of proposing DGDFL is to face massive data challenges which appear everywhere in modern society. In a single-machine model, when the data scale of random functions that the machine needs to handle is extremely large, it would be quite difficult to reduce the computational time, burden and single-machine memory requirements. Moreover, single-machine models are not convenient for preserving privacy. To address the above issues, in this paper,\ninspired by a divide and conquer approach [55 ###reference_b55###], we propose DGDFL for handling functional data. Distributed learning is a very hot topic and a preferable approach to conquer massive data information challenges. The theoretical foundation of divide-and-conquer learning has been established in the framework of learning theory in recent work [55 ###reference_b55###], [20 ###reference_b20###], [21 ###reference_b21###], [57 ###reference_b57###], [11 ###reference_b11###], [15 ###reference_b15###], [37 ###reference_b37###]. There is also another route for designing distributed learning algorithms, often referred to as the decentralized distributed learning algorithm (e.g. [18 ###reference_b18###], [34 ###reference_b34###], [53 ###reference_b53###], [45 ###reference_b45###], [16 ###reference_b16###]). However, in the literature of functional data analysis for handling datasets consisting of random functions, theoretical understanding of divide-and-conquer learning has not started until the very recent papers [41 ###reference_b41###], [24 ###reference_b24###] where the authors mainly focus on convergence analysis of the estimator from Tikhonov RLS schemes. It can be witnessed that a divide-and-conquer iterative training approach for the computational realization of recovering is still lacking in the functional linear model. Moreover, theoretical results on the convergence of such algorithms have not been established yet. To address the issue, we would introduce our divide-and-conquer iterative algorithm DGDFL and investigate its convergence ability in different aspects.\nTo realize the goal of recovering the functional , we first propose a functional-data based gradient descent functional learning (GDFL) algorithm that starts with and is iteratively given by\nwhere is the stepsize, is a Mercer kernel. The corresponding functional estimator for is defined by\n.\nBased on a divide-and-conquer approach, our distributed gradient descent functional learning (DGDFL) algorithm starts with partitioning the data set into disjoint sub-datasets with corresponding disjoint union . Then we assign the information of corresponding data set to one local machine (processor) to produce a local estimator via the algorithm (3 ###reference_###). These local estimators\nare communicated to a central processor, the central processor synthesizes a global estimator \nby taking the following weighted average\nThen the corresponding divide-and-conquer based prediction is obtained by\n. 
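To make the divide-and-conquer structure above concrete, the following is a minimal numerical sketch of GDFL and DGDFL on a grid discretization of the domain. All concrete choices here (the grid, the Brownian-motion kernel standing in for the Mercer kernel, the polynomially decaying step sizes, and the function names) are illustrative assumptions rather than the exact setting of this paper.

import numpy as np

# Grid discretization of the domain [0, 1]; sizes and kernel are illustrative.
grid = np.linspace(0.0, 1.0, 101)
dt = grid[1] - grid[0]
K = np.minimum(grid[:, None], grid[None, :])   # a simple Mercer kernel on the grid

def gdfl(X, Y, n_iter=200, gamma1=1.0, theta=0.5):
    """One local GDFL run on the discretized slope function.

    X: (n, len(grid)) functional predictors sampled on the grid; Y: (n,) responses.
    The update mimics f <- f - gamma_t * L_K[(1/n) sum_i (<X_i, f> - Y_i) X_i],
    assuming step sizes gamma_t = gamma1 * t^(-theta) small enough for stability.
    """
    n = X.shape[0]
    f = np.zeros(len(grid))
    for t in range(1, n_iter + 1):
        residual = X @ f * dt - Y              # <X_i, f>_{L^2} - Y_i for each i
        grad = residual @ X / n                # (1/n) sum_i residual_i * X_i(s)
        f -= gamma1 * t ** (-theta) * (K @ grad * dt)   # apply the operator L_K
    return f

def dgdfl(X, Y, m):
    """Divide-and-conquer: run GDFL on m disjoint splits, then average the local
    estimators with weights proportional to the local sample sizes."""
    splits = np.array_split(np.arange(X.shape[0]), m)
    return sum(len(idx) / X.shape[0] * gdfl(X[idx], Y[idx]) for idx in splits)

Note that only the local estimators, never the raw local data, need to be communicated, which is the privacy-preserving feature discussed next.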
We remark that, in the above model, the disjoint union also includes the case when the data are stored naturally across multiple local machines in a distributed way and are not combined at the very beginning, for reasons of protecting privacy and reducing potential costs. In that case the data partitioning step is not required, GDFL naturally cannot be carried out by a single machine or processor, and a divide-and-conquer approach (DGDFL) has to be considered. Many real-world examples belong to this scenario. For example, in financial markets, consumers\u2019 behavior data are stored in different institutions; these local institutions are not allowed to share data directly with one another, and their consumers\u2019 behavior data are not accessible to the public due to privacy considerations.\nIn medical systems, the clinical records from different medical institutions are often sensitive and cannot be shared, so it is difficult to analyze these sensitive data by directly combining them. Nevertheless, these institutions desire to collaboratively conduct training based on their clinical data to optimize medical decision-making while protecting their own clinical records.\nOur divide-and-conquer based learning algorithm DGDFL enables these local data holders to collaborate without directly sharing their data and improves the efficiency of analyzing functional data. For a given big data set whose elements are of the same type and are not divided in advance, there is no coercive restriction on the manner of allocating the data set . In our model, the data can be allocated with great freedom. For example, for a labeled data set which forms random copies of , we only need to allocate these data by randomly selecting elements from with at the -th step with according to any distribution. Then there are data naturally allocated to the -th local machine and for any . As far as we know, existing studies on stochastic gradient descent functional learning methods have appeared only very recently in the references [6 ###reference_b6###], [10 ###reference_b10###], [26 ###reference_b26###], which focus on online learning and perform sound convergence analysis. However, these works are essentially single-machine methods and are relatively restricted when the functional data scale is extremely large. As a divide-and-conquer based training scheme, our functional learning algorithm can overcome this limitation and substantially reduce the computational burden in time\nand memory.\nTo investigate the approximation ability of our algorithms, we first establish estimates for the estimator associated with a single processor and comprehensively study the learning rates of the excess risk\nof the estimator . The estimates related to play an important role in the further study of our main estimator and its associated excess risk\nWe briefly summarize the contributions of the current work. To the best of our knowledge, this work is the first to propose a divide-and-conquer iterative approach to the functional linear model via DGDFL and to provide a solid convergence analysis. Under some mild conditions, we first establish basic error analysis and optimal rates for the GDFL algorithm (3 ###reference_###). This part of the main results, for GDFL, is also foundational and meaningful for the field of functional data analysis. 
Based on the analysis of (3 ###reference_###), we comprehensively establish the convergence analysis of the DGDFL algorithm (4 ###reference_###). Optimal learning rates are obtained under mild conditions. Our proofs also reveal the influence of two types of noise conditions on the convergence results. It is shown that the noise condition on also influences the maximal number of local processors allowed to guarantee the optimal learning rates of the excess risk (6 ###reference_###). Our main results also indicate that GDFL and DGDFL can overcome the saturation phenomenon on the regularity index of the target function suffered by previous works in functional learning. Furthermore, based on our DGDFL algorithm, we also establish a semi-supervised DGDFL algorithm by introducing additional unlabeled functional data. We show that, with unlabeled data, the restriction on can be relaxed, even when satisfies a weaker regularity restriction (detailed discussions are provided in Section 2 ###reference_###)."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Main results and discussions",
"text": "We denote the standard inner product and -norm for any measurable functions , defined on . For a real, symmetric, square integrable and nonnegative definite function , use to denote the integral operator\nFor this operator, the spectral theorem implies that there exists a set of normalized eigenfunctions\n and a sequence of eigenvalues such that\n,\nand\n\nThen the square root operator of can be defined by\n,\nwhere\n.\nWe also define\n.\nThen it is easy to see . For any two self-adjoint operators and , we write if is positive semi-definite.\nIn functional learning, the covariance function of is an important object which is defined as\nIt is easy to see that the covariance function is symmetric and positive semi-definite. In this paper, we assume that is continuous and therefore is a Mercer kernel. Then the corresponding operator can be defined accordingly with replaced by in (7 ###reference_###) and is compact, positive semi-definite and of trace class. Due to the reason that and are Mercer kernels on the compact set , there exist positive finite constants and such that\nHence the spectral norms of and can be bounded by and . Given a Mercer kernel and covariance function , we define a composite operator\nIf we use to denote the closure of in , then it is well-known that is an isomorphism, namely, for . In this paper, for brevity, we assume that .\nWe use the effective dimension to measure the regularity of the operator that is defined to be the trace of the operator :\nWe assume the following capacity condition that there exists a constant and some such that for any ,\nThe effective dimension (9 ###reference_###) and the decaying condition (10 ###reference_###) have been widely considered in learning theory of kernel ridge regression problems (e.g. [5 ###reference_b5###], [20 ###reference_b20###], [11 ###reference_b11###], [51 ###reference_b51###], [38 ###reference_b38###],[41 ###reference_b41###]). The condition is slightly more general than the corresponding entropy assumption in the seminal work [4 ###reference_b4###] where a polynomial decaying condition on eigenvalues of the operator associated with the composite kernel ,\nis used for some constant and where are eigenpairs of . In fact, an easy calculation shows that implies with . Thus our assumption is more general than the entropy assumption. Additionally, the above decaying condition is satisfied for some well-known kernel function classes such as Sobolev classes and Besov classes that are commonly considered, thereby ensuring the meaningfulness of the capacity assumption in a large number of practical occasions.\nTo establish theoretical results for our GDFL and DGDFL algorithms, we also assume the following boundedness condition for predictor , that is, there is an absolute constant such that\nIntroduced by [40 ###reference_b40###], this technical assumption has been adopted in recent studies on functional linear models [40 ###reference_b40###], [41 ###reference_b41###], [42 ###reference_b42###] and in parts of main results in [24 ###reference_b24###]. Similar to the idea of assuming the input space of data samples to be a compact space in prior art of statistical learning theory e.g. [5 ###reference_b5###], [20 ###reference_b20###], [15 ###reference_b15###] this assumption can be understood as a natural extension of boundedness condition on the predictor in the scenario of functional linear model (1 ###reference_###). 
For example, an easy analysis shows that, if lies in the bounded subset , then it is easy to discern that . Additionally, just as pointed out by references e.g. [41 ###reference_b41###], [42 ###reference_b42###], the real-world data-sampling processes are usually bounded, so the assumption is reasonable and accessible in many practical settings.\nIn addition, we need a mild regularity condition on , that is, there exists some and function such that\nThe technical assumption (12 ###reference_###) can be treated as a regularity condition of the target function (functional) in the functional-linear-model scenario. This assumption has been considered in the learning theory of functional linear models of prior art such as [10 ###reference_b10###], [26 ###reference_b26###], [40 ###reference_b40###] and [41 ###reference_b41###] for establishing main results. If for some and , then it is easy to discern that the assumption (12 ###reference_###) is guaranteed when with . This form coincides with the widely-adopted regularity assumption with in a large literature of the kernel-based learning theory of prior art e.g. [5 ###reference_b5###], [20 ###reference_b20###], [21 ###reference_b21###], [11 ###reference_b11###], [15 ###reference_b15###], [38 ###reference_b38###] thereby showing an obvious relationship between the current assumption and these regularity assumptions. It is also well understood that, in learning theory, for algorithms to learn a target function based on a data set, a non-trivial learning rate (convergence rate) often depends on the regularity of the target function e.g. [5 ###reference_b5###], [20 ###reference_b20###], [40 ###reference_b40###], [11 ###reference_b11###], [15 ###reference_b15###], [41 ###reference_b41###], [51 ###reference_b51###], [38 ###reference_b38###], [10 ###reference_b10###].\nThe assumption (12 ###reference_###) in fact implies the target slope function lies in the underlying space . We also note that there also exist rather mature practical techniques to simulate the learning performance of algorithms in RKHS. Thus all the above discussions indicate the regularity assumption (12 ###reference_###) is a reasonable assumption in the current setting.\nIn this paper, we consider two types of noise conditions. The first is the second moment condition\nAssumption (13 ###reference_###) is a very common and standard technical assumption on random noise in functional regression e.g. [4 ###reference_b4###], [40 ###reference_b40###], [9 ###reference_b9###], [10 ###reference_b10###], [24 ###reference_b24###], [26 ###reference_b26###]. We also consider the following well-known moment condition which is slightly stricter than (13 ###reference_###). That is,\nthere exist and such that for any integer ,\nCondition (14 ###reference_###) is usually referred to as Bernstein condition and often appears in the setting of kernel-based learning theory by imposing restrictions on the performance of random variables e.g. [12 ###reference_b12###], [47 ###reference_b47###], [43 ###reference_b43###], [41 ###reference_b41###]. Noises that satisfy condition (14 ###reference_###) include the well-known noises encountered in practice such as Gaussian noises, sub-Gaussian noises, noises with compactly supported distributions and noises associated with some types of exponential distributions. Hence, in practical settings, the noise assumptions in this paper are reasonable and easily verifiable. 
In the subsequent sections of this paper, we aim to establish a comprehensive set of convergence results for GDFL, DGDFL, and semi-supervised DGDFL. To achieve this, we will establish our main theorems by considering these two widely recognized random noise conditions within a unified analytical framework."
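As a small illustration of the capacity condition above, the effective dimension can be computed directly from the eigenvalues of the (discretized) operator. The polynomially decaying spectrum below is an assumed example for illustration, not the paper's setting.

import numpy as np

def effective_dimension(eigvals, lam):
    # N(lambda) = trace((T + lambda I)^{-1} T), evaluated via the eigenvalues of T.
    return float(np.sum(eigvals / (eigvals + lam)))

# Assumed spectrum mu_i = i^{-2}: then N(lambda) grows like lambda^{-1/2},
# so a capacity condition with exponent 1/2 holds for this example.
mu = 1.0 / np.arange(1, 100001, dtype=float) ** 2
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, effective_dimension(mu, lam), lam ** -0.5)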
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Gradient descent functional learning algorithm",
"text": "Our first main result reveals the convergence ability of the basic GDFL algorithm (3 ###reference_###). We establish explicit optimal learning rates of the excess risk (5 ###reference_###) of .\nAssume conditions (10 ###reference_###)-(12 ###reference_###) hold. Let the stepsize be selected as , total iteration step .\nIf noise condition (13 ###reference_###) holds, we have, with probability at least ,\nand if noise condition (14 ###reference_###) holds, we have, with probability at least ,\nand are absolute constants given in the proof.\nTheorem 1 ###reference_1### establishes confidence-based convergence rates of GDFL (3 ###reference_###). We can see that when , optimal learning rates can be always derived. Even when , a confidence-based optimal rate up to logarithmic factor which is minimal effect can also be obtained. The results also enrich the understands of functional learning in the existing literature.\nThe next main result reveals confidence-based learning rates of the estimator generated from GDFL (3 ###reference_###) in terms of the RKHS norm .\nAssume conditions (10 ###reference_###), (11 ###reference_###) hold and (12 ###reference_###) holds for some . Let the stepsize be selected as , total iteration step . Then we have with probability at least ,\nwith absolute constant given in the proof."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Distributed gradient descent functional learning algorithm",
"text": "Our next result establishes explicit confidence-based learning rates of DGDFL.\nAssume conditions (10 ###reference_###)-(12 ###reference_###) hold. Let the stepsize be selected as , total iteration step and . If noise condition (13 ###reference_###) holds and total number of local machines satisfies\nthere holds\nwith being an absolute constant.\nAssume conditions (10 ###reference_###)-(12 ###reference_###) hold. Let the stepsize be selected as , total iteration step and . If noise condition (14 ###reference_###) holds and total number of local machines satisfies\nthen we have, with probability at least ,\nAfter establishing the results of Theorem 3 ###reference_3### and Theorem 4 ###reference_4###, the effectiveness of the DGDFL has been clearly understood. We observe from the results in Theorem 3 ###reference_3### and Theorem 4 ###reference_4###, there is an obvious difference in the requirements of the maximal number of local processors to guarantee the optimal learning rates of the excess risk in (15 ###reference_###) and (16 ###reference_###). This difference reflects the influence of the two types of noise conditions (13 ###reference_###) and (14 ###reference_###). The detailed reason for raising such a difference can be found in the estimates from the proof in Subsection (5.2 ###reference_###). In the literature of regression analysis for massive data, divide-and-conquer based kernel ridge regression has been intensively studied in the past decade [55 ###reference_b55###], [5 ###reference_b5###], [11 ###reference_b11###], [15 ###reference_b15###], [37 ###reference_b37###]. In the setting of functional linear regression, no result on divide-and-conquer based Tikhonov RLS functional linear regression (2 ###reference_###) has been established until the very recent works [41 ###reference_b41###], [24 ###reference_b24###]. However, the computational complexity of Tikhonov RLS functional linear regression scheme (2 ###reference_###) is which is much larger than of our DGDFL algorithm (see e.g. [48 ###reference_b48###]). Hence the proposed DGDFL algorithm largely reduces the computational cost in contrast to previous Tikhonov RLS functional linear regression methods.\nIt can be witnessed that, under current conditions, our convergence rates can nicely overcome the saturation phenomenon of the regularity index suffered in some previous works on functional linear regression e.g. [41 ###reference_b41###], [24 ###reference_b24###] and online functional learning algorithm [6 ###reference_b6###], [10 ###reference_b10###] in some aspects. The saturation means that, beyond a critical index , improvement of would not help to improve the convergence rates. Theorem 1 ###reference_1### shows that our regularity range satisfies , the convergence rates can always be improved when increases and remains optimal. In contrast, for example, in [41 ###reference_b41###], to obtain optimal rates of the RLS based functional regression method, a strict restriction on a narrow range of is required. The absence of the saturation phenomenon is mainly due to two ingredients. The first ingredient is the inherent advantage of the gradient descent type algorithm in overcoming the saturation phenomenon compared to the ridge regression/regularization schemes widely adopted in statistics, statistical learning theory and inverse problems. 
This advantage of gradient descent has been observed in various studies outside the realm of functional learning, such as those mentioned in [48 ###reference_b48###] and [21 ###reference_b21###]. It is also widely recognized in theoretical and practical studies on learning theory that regularization schemes tend to saturate when the regularity index exceeds a certain threshold.\nThe second ingredient is the novel utilization of integral operator techniques within the context of functional learning. The new incorporation of functional-learning-related error decomposition, along with the utilization of the integral operator techniques based on kernel and the covariance kernel also plays a crucial role in achieving a series of optimal learning rates without saturation. The regularity condition (12 ###reference_###) in fact implies that which is considered by previous works [54 ###reference_b54###], [4 ###reference_b4###], [41 ###reference_b41###]. The current existing works on divide-and-conquer functional learning in the empirical risk minimization (ERM) scheme (2 ###reference_###) are mainly [24 ###reference_b24###] and [42 ###reference_b42###]. The main goal of the current work is to provide an understanding of the learning rates of the first batch of divide-and-conquer gradient descent type algorithms for functional learning. In comparison to the two works, it is worth mentioning that, Theorem 3 ###reference_3### and Theorem 4 ###reference_4### demonstrate the significant impact of the noise moment condition of on the required maximum of the number of local processors to guarantee the optimal learning rates of the DGDFL estimator. In a nutshell, with the stricter moment performance satisfied by , there will be more a relaxed requirement on the maximum of . Such a key phenomenon has been observed through the study of DGDFL and can also be witnessed in the following results on semi-supervised DGDFL in Theorem 5 ###reference_5###. It would also be interesting and challenging to develop results of DGDFL for the case .\nThe following two direct corollaries on optimal learning rates in expectation and almost sure convergence can be easily obtained based on the result of confidence-based learning rates in Theorem 4 ###reference_4###.\nUnder the assumptions of Theorem 4 ###reference_4###, if and (16 ###reference_###) holds,\nthen we have, with probability at least ,\nUnder the assumptions of Theorem 4 ###reference_4###, if and (16 ###reference_###) holds,\nthen for arbitrary , there holds"
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Semi-supervised DGDFL algorithm",
"text": "To enhance the performance of our DGDFL algorithm, we propose the semi-supervised DGDFL in this subsection by introducing unlabeled data in our DGDFL algorithm. One of the goals of doing so is to relax the restriction on the maximal number of local machines. The idea of introducing unlabeled data is mainly inspired by our earlier work on semi-supervised learning with kernel ridge regularized least squares regression [5 ###reference_b5###]. We use the notation , .\nWe assume that, in each local machine, in addition to the labeled data, we have a sequence of unlabeled data denoted by\nThen we can introduce the training data set associated with labeled and unlabeled data in -th local machine (processor) as\nwith\nLet , then we can use the following semi-supervised divide-and-conquer gradient descent functional learning algorithm\nand the semi-supervised divide-and-conquer gradient descent functional learning estimator is given by\nThroughout the paper, we use the function to denote\nAssume conditions (10 ###reference_###)-(12 ###reference_###) hold. Let the stepsize be selected as , total iteration step , and . If the noise condition (13 ###reference_###) holds and satisfies\nwe have, with probability at least ,\nIf the noise condition (14 ###reference_###) holds and satisfies\nwe have, with probability at least ,\nIn Theorem 5 ###reference_5###, we establish confidence-based optimal learning rates for our semi-supervised DGDFL algorithm. We can see that, by introducing unlabeled data via this semi-supervised DGDFL algorithm, this result can relax the restriction on in contrast to Theorems 3 ###reference_3### and 4 ###reference_4###. For example, under the noise condition (14 ###reference_###), when , if , it is easy to see and . Then the condition (23 ###reference_###) reduces to\nwhich coincides with (16 ###reference_###). However, when we assign some larger , the merit of utilizing unlabeled data can be obviously witnessed even for the case . We demonstrate this idea by selecting the sample size of the training data set as\nThen we know from Theorem 5 ###reference_5### that the corresponding range of the number of local machines is\nIt is easy to see\ntherefore (25 ###reference_###) is weaker than (16 ###reference_###). Moreover, even when , (25 ###reference_###) reduces to\nIt is obvious to see that (26 ###reference_###) allows freer selection of since can be selected to be larger when increases, while in (16 ###reference_###), the range of is only limited to when . These facts indicate some advantages of establishing a semi-supervised DGDFL algorithm by introducing unlabeled data."
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "Some further remarks",
"text": ""
},
{
"section_id": "2.4.1",
"parent_section_id": "2.4",
"section_name": "2.4.1 Remarks on the notion \u201cdistributed learning\u201d",
"text": "We remark that, the adoption of the term \u201cdistributed\u201d in this paper is to emphasize that our divide-and-conquer-based algorithm DGDFL is mainly designed for scenarios where data is stored in a distributed manner and cannot be shared among local machines, while it is parallel since there is a global synchronization at a parameter server.\nThe literature has commonly referred to the classical distributed learning scheme where local machines/agents communicate with their neighbors to realize local information updating as \u201cdecentralized distributed learning\u201d e.g. [28 ###reference_b28###], [18 ###reference_b18###], [34 ###reference_b34###], [52 ###reference_b52###] while the parallel computation approach with a central server for distributively stored data is referred to as \u201ccentralized distributed learning\u201d or \u201cdivide-and-conquer distributed learning\u201d e.g. [55 ###reference_b55###], [20 ###reference_b20###], [23 ###reference_b23###], [37 ###reference_b37###], [41 ###reference_b41###], [38 ###reference_b38###]. In this paper, when we mention distributed gradient descent, it means the divide-and-conquer distributed gradient descent approach."
},
{
"section_id": "2.4.2",
"parent_section_id": "2.4",
"section_name": "2.4.2 Remarks on decentralized kernel-based distributed learning",
"text": "In the previous subsections, we have discussed the relationship between our current work and related studies on the learning theory of functional linear models. We remark that, in addition to the divide-and-conquer distributed learning scheme, there is another approach called decentralization that is used to develop distributed algorithms in RKHSs. This approach has been explored in works such as [18 ###reference_b18###], [34 ###reference_b34###] and [45 ###reference_b45###] that allow direct information communications among local agents in a decentralized manner. The earlier work on decentralized distributed algorithms mainly lies in the field of multi-agent consensus distributed optimization e.g. [28 ###reference_b28###]. Recent studies have just turned to designing decentralized algorithms by constructing consensus optimization models in the framework of RKHSs. In fact, our work, which considers functional learning based on intrinsically infinite-dimensional random function data, differs significantly from the works [18 ###reference_b18###], [34 ###reference_b34###], and [45 ###reference_b45###], which concentrate on Euclidean data. In the following, we describe some obvious differences in problem formulations and theoretical approaches to clearly distinguish our current work from these references.\nFirstly, we remark that, there is a significant and fundamental distinction in the problem formulation between the current work and the main works on decentralized distributed learning in RKHSs that have been mentioned. In this work and related work on functional linear models, the main objective is to recover the functional in the model (1 ###reference_###)\nfrom an input data space consisting of random functions (predictors) that lie in to an output (response) space. The sample of random functions forms a stochastic process/field with sample paths in (see also e.g. e.g. [4 ###reference_b4###], [40 ###reference_b40###], [9 ###reference_b9###], [41 ###reference_b41###], [6 ###reference_b6###], [10 ###reference_b10###], [24 ###reference_b24###], [26 ###reference_b26###], [42 ###reference_b42###]).\nOne notable characteristic of this model is that the functional covariates (random sample) in input space are intrinsically infinite-dimensional data which include random functions or curves frequently encountered in modern neuroscience and econometrics. This is in contrast to the works on decentralized distributed learning that primarily focus on conventional regression models involving Euclidean data and aim to recover the target function defined on a Euclidean space e.g. [18 ###reference_b18###], [34 ###reference_b34###], [45 ###reference_b45###]. Consequently, there exists a significant distinction in the problem formulation, namely the sampling process, the data form and the ultimate goal of approximation.\nTo further distinguish the current work from the work in decentralized distributed kernel-based learning, we describe the main theoretical frameworks of these references. In [18 ###reference_b18###], by imposing a consensus constraint among neighbors of two agents, the work successfully transforms the problem of learning the target function into a multi-agent consensus distributed optimization (MCDO) problem (e.g. [28 ###reference_b28###])\nwith the local objective functions in [18 ###reference_b18###] being the penalty functional consisting of the summation of Tikhonov-regularized expected loss functions based on local data and an approximation consensus term. 
The theoretical approach in [18 ###reference_b18###] mainly focuses on MCDO through consensus SGD within the framework of RKHSs. For conducting the convergence analysis of an online distributed gradient descent, [18 ###reference_b18###] imposes a conventional gradient boundedness condition for the objective function which is widely adopted in the literature on multi-agent optimization. Notably, the disagreement analysis in [18 ###reference_b18###] stands out as a main feature of consensus-based techniques. The work [45 ###reference_b45###] also formulates the problem as an MCDO problem in the random feature space, then utilizes a distributed ADMM-based communication-censored to solve it. Rademacher complexity is the main tool utilized by [45 ###reference_b45###] for the corresponding learning rate analysis.\nIn contrast, our current work takes a completely different and innovative approach by employing integral operators in the context of functional learning models based on random function samples. As a result, we are able to obtain optimal learning rates for the excess risk of all the functional estimators , and studied in this work. Hence, the advantages of integral operator-based theoretical approaches for deriving optimal learning rates in the framework of functional learning/data analysis are clearly reflected.\nOn the other hand,\nthe work [34 ###reference_b34###] introduces a doubly-stochastic communication matrix to construct a decentralized gradient descent. The basic procedure is that, each local agent performs a local gradient descent with respect to their own data, subsequently, each agent performs averaging operations with its neighbors, facilitating information communications through the utilization of a communication weight matrix. Based on these descriptions, it is easy to clearly distinguish the current work from the references on decentralized kernel learning.\nIn the context of functional linear model (1 ###reference_###), the aforementioned methods in references [18 ###reference_b18###], [34 ###reference_b34###], [45 ###reference_b45###] and the conventional techniques in [12 ###reference_b12###], [5 ###reference_b5###], [21 ###reference_b21###], [11 ###reference_b11###], [15 ###reference_b15###], [38 ###reference_b38###] cannot be directly applied. In contrast to previous kernel-based learning, the difficulty of the prediction problem in functional linear model depends on both the kernels and . The analysis and derived rates depend on the kernel complexity of (as observed in e.g. [4 ###reference_b4###]). Thus it can be witnessed the covariance kernel of the random predictor (random function), its empirical version and their associated integral operators integral operators , are introduced in this work and they play a significant role for deriving main theoretical results (such as optimal learning rates of , and in previous sections) throughout this work. The approaches of utilizing these operators are also essentially different from conventional kernel learning problems in [12 ###reference_b12###], [5 ###reference_b5###], [21 ###reference_b21###], [18 ###reference_b18###], [11 ###reference_b11###], [15 ###reference_b15###], [34 ###reference_b34###], [45 ###reference_b45###], [38 ###reference_b38###] which do not require these further approaches. 
The corresponding novelties are also clearly reflected throughout this paper.\nWe also remark that, the semi-supervised learning approach in subsection 2.3 ###reference_### is also another important novelty, compared with the aforementioned work in this remark on decentralized kernel-based learning which has not developed theoretical results. In this work, our main results demonstrate the significance of incorporating unlabeled data of random functions in order to increase the number of data subsets\n and potentially enhance scalability. Theorem 5 ###reference_5### shows that by further considering our semi-supervised learning scheme, one can still obtain optimal learning rates by utilizing our analytical approaches while allowing for much greater flexibility in the total number of local machines/processors. It is interesting to note that the decentralized approach has not been established for the learning theory of functional learning problems based on samples of random functions. It would be valuable to develop appropriate decentralized distributed learning schemes for the functional learning problem addressed in our work. Additionally, establishing a decentralized semi-supervised functional data analysis scheme would be a challenging and worthwhile endeavor. The basic algorithm GDFL (3 ###reference_###) and its associated main results in Theorem 1 ###reference_1### and Theorem 2 ###reference_2### provide a potential foundation for developing these decentralized functional learning algorithms in future work.\nIn summary, there are significant differences in problem formulation/background, theoretical approaches, and main results between our work and the previous work on decentralized kernel-based learning. Through the discussion above, we have effectively distinguished our work from theirs and highlighted the contributions of this work."
},
{
"section_id": "2.4.3",
"parent_section_id": "2.4",
"section_name": "2.4.3 Some advantages of DGDFL in privacy protection and discussion",
"text": "It is important to note that information communications among different local machines often come with the risk of privacy disclosure. However, the divide-and-conquer scheme considered in our work offers a high level of privacy protection because it does not allow direct data information communications among agents. This is particularly advantageous in scenarios such as the financial market, where consumer behavior data stored in different commercial institutions are not accessible to the public due to privacy considerations. Similarly, in the medical system, clinical records of a medical organization cannot be shared with or owned by different medical institutions to protect privacy. However, these medical organizations may need to collaboratively conduct classification based on the medical data to optimize medical decision-making, without compromising the privacy of their own clinical records. The methods proposed in our work provide effective solutions for these scenarios. Our divide-and-conquer based distributed learning algorithm DGDFL enables these local data holders (modeled as nodes) to collaborate without directly sharing their data information with their neighbors to realize a local updating process that many decentralized distributed learning schemes considered. This scheme has also contributed to the recent rapid development of federated learning (e.g. [27 ###reference_b27###], [46 ###reference_b46###]) which often utilizes an outer fusion center/master to aggregate the estimates of local processors/agents for protecting privacy.\nOn the other hand, by allowing information communications among local agents/processors in some decentralized schemes, the efficiency of the corresponding algorithms can be enhanced in certain settings. It is worth mentioning that the choice between the divide-and-conquer and decentralized approaches in applications depends on specific situations and requirements."
},
{
"section_id": "2.4.4",
"parent_section_id": "2.4",
"section_name": "2.4.4 Remarks on scalability and possible future kernel approximation approaches",
"text": "The time complexity of our proposed DGDFL is significantly lower, with a time complexity of , compared to the regularized ridge functional regression scheme (2 ###reference_###) with a time complexity of . This clearly demonstrates the scalability advantage of DGDFL over the regularized ridge functional regression scheme. For future treatment of our proposed algorithms for extremely large-scale applications in functional statistical models, it would be intriguing to incorporate kernel approximation tools into our proposed algorithms, namely GDFL, divide-and-conquer DGDFL, and semi-supervised DGDFL. To make our algorithms more scalable to extremely large-scale sample size, note that random features are important tools for parameterization in kernel spaces for the merit of reducing the memory footprint and hence reducing the computational burden and complexity. While random features have not been well developed in the functional learning problem addressed in this paper, we discuss potential future treatments and difficulties that involve applying random feature techniques\nto the algorithms GDFL, DGDFL and semi-supervised DGDFL. The fundamental concept behind utilizing random features is to parameterize functions in a kernel space using a set of finite-dimensional feature maps that map elements from the data space to a Euclidean space. One popular example is the random Fourier features which are commonly employed to approximate positive definite kernels like Gaussians. For Euclidean data points and a properly scaled kernel with its Fourier transform , one can take a feature map as an approximation of kernel (approximately ) where and are sampled independently from and uniformly from . Then one can parameterize an RKHS function by in terms of [30 ###reference_b30###], [35 ###reference_b35###], [34 ###reference_b34###], [25 ###reference_b25###]. In the context of functional learning based on random function samples, the situation becomes more complex. Direct utilization of random features would be challenging since the data sample we encountered in this work consists of random functions from instead of Euclidean data. Additionally, the kernel we need to rigorously handle in this paper is the composite kernel , rather than the simpler kernel . This fundamental difference significantly increases the difficulty of incorporating random feature techniques into the functional linear model. It is also worth noting that, an obvious feature of the kernel is that it is generally not a shift-invariant kernel, which further complicates the theoretical realization of our algorithm using random features. Thus, for the theoretical and practical realization of our algorithm via random features, one must address the crucial influence of the covariance kernel in addition to .\nAs far as we know, the theoretical understanding of the random feature approaches to the functional learning scenario discussed in this paper is still an open question and falls outside the scope of the current work.\nEven for GDFL, the implementation of random features has not been carried out. Similarly, for the establishment of other kernel approximation approaches such as Nystr\u00f6m approximation (e.g. [17 ###reference_b17###]) and kernel-based sparse projections (e.g. [19 ###reference_b19###]) for the function learning problem in this work, some issues mentioned above also need to be rigorously addressed, and we leave them for future work."
},
{
"section_id": "2.4.5",
"parent_section_id": "2.4",
"section_name": "2.4.5 Remarks on essential differences from conventional (regularized) linear regression",
"text": "The problem in this work is to recover the functional , based on intrinsically infinite-dimensional samples consisting of functions, in contrast to the conventional (regularized) linear regression which aims at regressing from Euclidean points (finite-dimensional space) to the output space. In the existing literature, handling random function samples and handling Euclidean samples follow totally different routes. That is also the reason why we introduce concepts such as covariance kernels associated with random function and the integral operators associated with the composite kernel . These elements are essential for constructing the theoretical framework and analyzing methods for the problem in current work, which are not required in the conventional (regularized) linear regression. Moreover, the recovery of a functional is also deeply related to the estimation of which is intrinsically an infinite-dimensional slope function, rather than a scalar in conventional linear regression. Hence, based on these facts, the approaches employed in this work differ significantly from conventional finite-dimensional (regularized) linear regression methods. We also refer to the reference [3 ###reference_b3###] for further details on essential distinctions."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Preliminary results",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Approximation error of a data-free iterative GDFL algorithm",
"text": "To perform further convergence analysis of GDFL and DGDFL, we need to first investigate a data-free iterative GDFL algorithm associated with the original algorithm (3 ###reference_###) defined by\nFor simplifying notations, denote the regularization polynomial by\nwhere\nAlso denote the residue polynomial (see e.g. [48 ###reference_b48###])\nThe following lemma is from [6 ###reference_b6###].\nLet be a compact positive semi-definite operator on some real separate Hilbert space, such that for some . Let and , , \u2026, . Then when , there holds,\nThe following result is about the approximation error related to the data-free function sequence generated by the data-free functional learning algorithm (27 ###reference_###). It is the foundation to further establish convergence analysis of GDFL and DGDFL in this paper.\nLet satisfy the regularity condition (12 ###reference_###). If the stepsizes are selected as , , with satisfying , then we have, for ,\nand\nwhere the constant\nFrom the iteration (27 ###reference_###), we know\nDue to the fact that , we know\nThen an iteration implies\nHence we have\nAfter taking -norms on both sides, we have\nFor , Lemma 1 ###reference_1### implies that and\nFor , Lemma 1 ###reference_1### implies that\nwhere\nAfter using the trivial fact that and noting that , we obtain the first inequality.\nFor the second inequality, note from (33 ###reference_###) that\nThen using Lemma 1 ###reference_1### again and similar procedures with the above inequalities yield\nWe conclude the proof by setting .\n\u220e"
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Empirical operator and basic lemmas",
"text": "Denote the empirical covariance function associated with the data set by\nthen we denote the corresponding empirical operator of by\nThe next result is a basic estimate on operators and that will be used later.\nFor the operators and , if the stepsizes satisfy , then for any , the following basic estimates hold,\nwith the constant .\nAccording to the representations of and in (8 ###reference_###) and (35 ###reference_###), we know the norms of and satisfy\nand\nThen we know from Lemma 1 ###reference_1### with , that\nand hence\nFinally, we have\nwhere . The estimates for operator follows in a similar way to that for .\n\u220e\nWe end this section with the following basic lemma from [15 ###reference_b15###] that will be used later.\nIf , , then\n, .\nIn particular, if , there holds,\n, where is an absolute constant defined by\nThe original lemma is expressed with . In fact, the original proof does not necessarily require and it is obvious that when , the lemma automatically holds. Hence, we state it in the above form."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Analysis of GDFL algorithm",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Error analysis and error decomposition",
"text": "Let us start with the error analysis of the GDFL algorithm (3 ###reference_###). For our GDFL algorithm\nusing our operator representations of and in (8 ###reference_###) and (35 ###reference_###), we can rewrite it as\nActing operation on both sides of the above equality and noting that\nwe have\nAn iteration implies\nReturn to data-free iteration (27 ###reference_###), rewrite (31 ###reference_###) to\nThen an iteration implies\nThen we know (36 ###reference_###) and (38 ###reference_###) together imply\nwhich further gives the following error decomposition for as\nDenote the following norms\nThe next result gives a general error bound for .\nLet and be defined in GDFL (3 ###reference_###) and data-free GDFL (27 ###reference_###) respectively. Assume conditions (11 ###reference_###) and (12 ###reference_###) hold. Let the stepsize be selected as . Then for any , we have\nwhere is defined as in (21 ###reference_###) and and are some absolute constants given in the proof.\nWe make a decomposition for as\nUsing the fact that , , and for any two positive self-adjoint operators , on a separable Hilbert space, the above inequality can be bounded by\nWhen , , Lemma 1 ###reference_1###, Lemma 3 ###reference_3### and the basic fact , imply that\nwhere and are defined as in (30 ###reference_###). By using Lemma 3 ###reference_3###,\nIf we denote\nthen\nAlso, an easy calculation shows\nThen if we denote ,\nwe have\nand\nWhen , following similar procedures as in (43 ###reference_###) and using Lemma 3 ###reference_3###, we have\nwhere . Also, it is easy to see when ,\nwhere . Finally, Combining the above results for the case and , we have\nwhere .\nNow we estimate by making the following decomposition,\nBy using Lemma 2 ###reference_2###, we obtain\nwhere .\nA similar procedure implies that\nCombining the above estimates for , and yields\nwhich concludes the proof.\n\u220e"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Deriving learning rates: proof of Theorem 1",
"text": "In this subsection, we derive learning rates of the GDFL algorithm.\nDenote as\nwith . Firstly, we need a confidence-based upper bound for in terms of . The following lemma is needed and it follows directly from [40 ###reference_b40###] and [41 ###reference_b41###].\nAssume condition (11 ###reference_###) holds. With probability at least ,\nThe next lemma [29 ###reference_b29###] on Hilbert-valued random variables is needed.\nFor a random variable on with values in a separable Hilbert space satisfying almost surely, and a random sample independent drawn according to , there holds with probability ,\nThe next proposition provides our required confidence-based estimate for .\nAssume conditions (11 ###reference_###) and (12 ###reference_###) hold. With probability at least , there holds\nRecall that the functional linear model gives\nwe know the following decomposition for holds,\nBy using Lemma 4 ###reference_4###, after scaling on , we know with probability at least ,\nWe turn to estimate the first term. Denote the random variable\nwhich takes values in the space . Note that\nwe know\nLet be a set of normalized eigenpairs of on with being an orthonormal basis of . Expand , we have\nAfter taking expectations, we have\nOn the other hand, it is easy to see\nThen using Lemma 5 ###reference_5###, we obtain\nwith with probability at least ,\nCombining (51 ###reference_###) and (53 ###reference_###), using the fact , , we complete the proof of the proposition.\n\u220e\nOn the other hand, from [41 ###reference_b41###], we know with probability at least , each of the following inequalities holds,\nTherefore, combining the above two estimates with Lemma 2 ###reference_2###, , , can be bounded together by in a high confidence level.\nIt is easy to see that for any prediction estimator based on data set associated with corresponding slope function via , the following fact holds,\nThen we know for our proposed estimator , for any , there holds\nwhere .\nCombine (54 ###reference_###), (55 ###reference_###) with Proposition 2 ###reference_2###, after scaling on , we know with probability at least , the following inequalities hold simultaneously,\nThen after combining these estimates with (56 ###reference_###), we know that, if noise condition (13 ###reference_###) holds, with probability at least ,\nIf noise condition (14 ###reference_###) holds, we also have, with probability at least ,\nWhen , , using the condition we can directly derive\nand\nThen we know if noise condition (13 ###reference_###) holds, we have, with probability at least ,\nand if noise condition (14 ###reference_###) holds, we have, with probability at least ,\nwhere\nand\nThis completes the proof of Theorem 1 ###reference_1###.\n\u220e"
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Convergence in the RKHS norm: proof of Theorem 2",
"text": "To establish the learning rates of in terms of the RKHS norm, we first consider an error decomposition for that will be used later. The proof of the results on the DGDFL algorithm in the next section also relies on this proposition.\nAssume conditions (11 ###reference_###) and (12 ###reference_###) hold. Let the stepsize be selected as . Then for any data set and any , we have\nWe start from the following decomposition,\nFor , we make the decomposition as\nThen following the same estimate as (42 ###reference_###), we can derive\nFor , we have following decomposition,\nThen Lemma 2 ###reference_2### implies that\nSimilarly,\nCombining estimates for , , , we have\nwhere .\n\u220e\nWe are ready to give the proof of Theorem 2 ###reference_2###.\nWe observe that the main difference between the error decompositions of and in Proposition 1 ###reference_1### comes from the additional terms . Other terms share the same estimates. Hence when taking and , we can directly use the established error bounds for , , in (57 ###reference_###), (58 ###reference_###), (59 ###reference_###) and the corresponding estimates for in (60 ###reference_###) and (61 ###reference_###) to obtain with probability at least ,\nwhere which is half of the first term of . Recall that Theorem 6 ###reference_6### and the basic fact for any give that\nThe triangle inequality finally implies\nwhere . This proves Theorem 2 ###reference_2###.\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Analysis of DGDFL algorithm",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Iterative procedures and error decompostion for DGDFL",
"text": "This section aims to provide corresponding representations and error decompositions related to the distributed estimator . We start with another representation of , that is,\nThen an iteration implies that\nRecalling the representation (32 ###reference_###) of data-free GDFL algorithm, we know\nApplying the above equality to the data set , with replaced by , we have\nSince ,\nNow we are ready to give the following general error bound estimate of the distributed estimator in the DGDFL algorithm.\nAssume conditions (11 ###reference_###) and (12 ###reference_###) hold. If , let the estimator be generated from DGDFL algorithm, there holds\nAfter taking norms on both sides of (62 ###reference_###), we have\nUsing Lemma 2 ###reference_2###, we can estimate as\nFor , we split it into three terms as follows,\nSince , it is easy to see\nwhere . Then we know that\nThen following the same procedure as that in getting (44 ###reference_###), we have\nwith given as before. For , using Lemma 2 ###reference_2###, we have\nwith defined as before. Now we estimate as\nApplying Proposition 3 ###reference_3### to data set , , we have, for and ,\nFollowing the same procedure in getting (45 ###reference_###) and (46 ###reference_###) with replaced by , we know, when ,\nwhere . Then we arrive at\nFinally, combining the above estimates for , , , , we obtain\nwhere .\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Convergence analysis: proofs of Theorem 3, Theorem 4",
"text": "This subsection aims to provide proofs of Theorems 3 ###reference_3### and 4 ###reference_4###. When and , Theorem 1 ###reference_1### directly implies the desired results in Theorem 3 ###reference_3### and Theorem 4 ###reference_4###. In this subsection, we focus on the case .\nAccording to (54 ###reference_###), (55 ###reference_###), Proposition 2 ###reference_2###, with probability at least , the following bounds hold simultaneously,\nwhere . Then we know with probability at least ,\nwhere\nNote that\nIf the noise condition (13 ###reference_###) is satisfied, then, when , , and the total number m of the local processors satisfy\nwe have\nand\nMeanwhile, we know that\nAfter scaling on , we have with probability at least , there holds,\nwhere the constant .\nFrom inequality (55 ###reference_###) and Proposition 2 ###reference_2###, with probability at least , the following inequalities hold simultaneously,\nThen we know, with probability at least ,\nwhere .\nIf the noise condition (14 ###reference_###) holds, then when , , and the total number of the local processors satisfy\nFollowing similar computations to (65 ###reference_###) and (66 ###reference_###), we have\nand\n.\nThen we also obtain\n. Accordingly,\nafter scaling on and using the condition , we have with probability at least , there holds,\nFrom inequality (55 ###reference_###) and Proposition 2 ###reference_2###, we know with probability at least , the following holds simultaneously\nThen we conclude that, with probability at least\nwith defined as before.\nFinally, when the noise condition (13 ###reference_###) holds and and the total number of the local processors satisfy\n(15 ###reference_###),\nwe have\nwhich concludes the proof of Theorem 3 ###reference_3###. Correspondingly, when the noise condition (14 ###reference_###) holds and the total number m of the local processors satisfy\n(16 ###reference_###), we have\nwhere . We conclude the proof of Theorem 4 ###reference_4###."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Proofs of Corollary 1, Corollary 2",
"text": "Set\nThen we know from Theorem 4 ###reference_4### that for any , there holds,\n.\nIf we set , then . It follows that when ,\nWhen , the above inequality also holds since the right hand side of the above inequality is greater than 1.\nHence we have\nwhere . \u220e\nTo prove Corollary 2 ###reference_2###, we need the following Borel-Cantelli lemma from [8 ###reference_b8###].\nLet be a sequence of events in some probability space and be a sequence of positive numbers satisfying . If\nthen converges to almost surely.\nDenote and set in Theorem 4 ###reference_4###. Then\nwhere\nIf we denote , then we know ,\nThen using Lemma 6 ###reference_6### yields our desired result.\n\u220e"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Analysis of semi-supervised DGDFL algorithm",
"text": "Recall the representation of in (41 ###reference_###). For convenience, we denote\nand we have the representation . According to the definition of , the condition and , we have\nThen we know\nFor the local data set , we have\nand then we know for the -th local machine, there holds,\nAccording to Proposition 4 ###reference_4### with the data set replaced by , we have\nUsing a similar argument as (58 ###reference_###), (59 ###reference_###), (63 ###reference_###) and the fact that \nwe know with probability at least ,\nwith\nand with probability at least , the following two inequalities hold simultaneously,\nAfter making these preparation, we are ready to give the proof of Theorem 5 ###reference_5###.\nWhen the noise condition (13 ###reference_###) holds, using the fact that , , we have\nAccording to condition (22 ###reference_###), we know\nand\nwhich further imply that\nIt is also easy to see from in (22 ###reference_###) and the fact , we know , and hence\nAlso recall\nThen we have\nThen we can return to inequality (71 ###reference_###). After using the size condition (22 ###reference_###) on , we get with probability at least ,\nwhere .\nFinally, combining the above estimates with (73 ###reference_###) and (74 ###reference_###), we have with probability at least ,\nwith .\nWe turn to handle the case of the noise condition (14 ###reference_###). Following similar procedures with the above calculations of and , we can derive (76 ###reference_###), (77 ###reference_###), (78 ###reference_###), (80 ###reference_###) under our size condition (23 ###reference_###). Return to inequality (71 ###reference_###) and use the size condition (23 ###reference_###). We obtain that, with probability at least ,\nwhere .\nWe finally conclude that,\nwith probability at least ,\nThe proof of Theorem 5 ###reference_5### is complete.\n\u220e"
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Numerical experiments",
"text": "In this section, we conduct some numerical experiments with simulated data to verify the effectiveness of our proposed algorithms, and compare the results with the previous methodologies for the functional linear model [54 ###reference_b54###, 24 ###reference_b24###]."
},
{
"section_id": "7.1",
"parent_section_id": "7",
"section_name": "A simulation of the DGDFL algorithm",
"text": "In this subsection, we conduct a numerical simulation to verify the effectiveness of our proposed algorithms and the corresponding theoretical results, with the assumptions described in the paper being satisfied. We use the similar numerical settings as the previous papers [4 ###reference_b4###],[10 ###reference_b10###] for the functional linear regression.\nWe consider the domain , the functional predictors are generated through the process\nwhere are utilized in our experiments, and are independent uniform random variables. Then, the covariance function is\nMoreover, we consider the RKHS induced by the Mercer kernel as\nwhere is the -th Bernoulli polynomial, with the fact that\nFurthermore, we set the slope function to make the regularity assumption (12 ###reference_###) being satisfied, where we choose , and\nThe random noise is assumed to be independent of and follows the normal distribution. This makes the noise assumptions (13 ###reference_###) and (14 ###reference_###) being satisfied. Moreover, since are bounded random variables, the assumption (11 ###reference_###) is also satisfied with some absolute constant .\n###figure_1### We then conduct the numerical experiments to examine the empirical performance of our proposed algorithms. For all the experiments, the stepsizes are selected as with . For each local machine, the iteration stops when . The excess generalization error of the final estimator is calculated using a testing sample with size 5000.\nFigure 1 ###reference_### and Figure 2 ###reference_### exhibit the excess risk w.r.t. the sample size for our proposed DGDFL algorithm with and respectively. We conduct several experiments with the choice of different numbers of local machines. When , this is in fact the GDFL algorithm. Firstly, we can observe that for both algorithms, the excess risk decreases quite fast with the increase of the sample size. This corresponds to our theoretical results that both algorithms can achieve the almost optimal learning rates for some . Secondly, when the sample size is small (e.g., ), the DGDFL algorithm performs worse when the number of local machines increases, this corresponds to our theoretical result that the restriction on the maximal number of local machines is strict when is small. Finally, when the sample size is large (e.g., ), the restriction on the maximal number of local machines is lenient, and the performances of the DGDFL algorithm are similar with the usage of whatever number of local machines satisfying such restriction. Therefore, for a large sample size, we might use more local machines to achieve unimpaired performance with even less computational cost on each local machine. This embodies the effectiveness of our proposed DGDFL algorithm.\n###figure_2###"
},
{
"section_id": "7.2",
"parent_section_id": "7",
"section_name": "Comparison with previous methods",
"text": "In this subsection, we compare our proposed GDFL and DGDFL algorithms with the previous methods for functional linear model, i.e., the classical reproducing kernel (RK) approach [54 ###reference_b54###], and a subsequently proposed divide-and-conquer version of it called the divide-and-conquer reproducing kernel (DRK) approach [24 ###reference_b24###], to further verify the effectiveness of our proposed algorithms.\nWe consider the simulation setting of [14 ###reference_b14###, 54 ###reference_b54###] where the domain . The true slope function is given by\nwhere and for . The functional predictors are generated through the process\nwhere , and are independently sampled from the uniform distribution on .\nThe random noise of the functional linear model is with .\n###table_1### We then conduct the numerical experiments to examine the empirical performance of our proposed algorithms and compare with the previous methods. For all the experiments, we use the Gaussian kernel with bandwidth , and the stepsizes are selected as with . For each local machine, the iteration stops when , with the tolerance chosen based on the training sample size in the local machines. The estimation error of the final estimator is calculated based on the true slope function , and the prediction error (excess risk) is calculated by a testing sample with size 1000. The computation time represents the average running time of the local machines.\nWe present the performance of different algorithms in Table 1 ###reference_###, with the training sample size chosen as 100, 200, and 500 respectively. The GDFL algorithm and the RK algorithm utilize the whole training sample for the training in one machine, while the DGDFL algorithm and the DRK algorithm are the divide-and-conquer algorithms based on them, and the number inside the square brackets indicates the number of local machines.\nIt can be observed that compared with the classical RK algorithm, which requires computational cost due to the computation of the inverse of the kernel matrix, the GDFL algorithm can achieve comparable estimation error and prediction error, with much less computational cost due to the avoidance of the calculation of the inverse matrix, especially when the sample size is quite large. For example, the RK algorithm would be very slow when , while the GDFL algorithm only needs one fifteenth of the running time of the RK algorithm.\nThe DGDFL algorithm we proposed is a divide-and-conquer approach of the GDFL algorithm, when local machines are utilized, each contains training samples, thus making the computational cost of the DGDFL algorithm much smaller than that of the original GDFL algorithm due to a smaller training sample size in each local machine. The DRK algorithm is a divide-and-conquer approach of the classical RK approach, and it can approximately diminish the computational cost to of the initial requirements [24 ###reference_b24###]. These are also verified by our numerical simulation: while increasing the number of local machines, the estimation error and the prediction error are almost unchanged or only getting slightly worse, but the mean and variance of the computation time are largely improved.\nMoreover, it can be noticed in Table 1 ###reference_### that our proposed DGDFL algorithm can achieve similar accuracy as the classical RK approach, while largely reducing the computational cost, especially when the sample size is quite large or more local machines are utilized. 
Furthermore, compared with the DRK algorithm with the same local machines, the DGDFL algorithm can also achieve comparable accuracy with a smaller computational cost, especially when the sample size is larger.\n###figure_3### ###figure_4### We further plot the excess risk and the computation time w.r.t. the sample size for our proposed GDFL and DGDFL algorithms and the previously proposed RK and DRK algorithms in Figure 3 ###reference_### and Figure 4 ###reference_### respectively. It can be observed that when the sample size becomes larger, the DGDFL algorithm can achieve almost the same excess risk as the GDFL algorithm, while the computation time can be largely improved. However, for the DRK algorithm, even though it can also largely improve the computation time compared with the RK algorithm, with the number of local machines increasing such as , the excess risk of the DRK algorithm would become slightly worse than that of the RK algorithm.\nAs for the computation time of different algorithms shown in Figure 4 ###reference_###, comparing the GDFL algorithm with the RK algorithm, or comparing the DGDFL algorithm with the DRK algorithm that utilize the same number of local machines, the running time of our proposed GDFL and DGDFL algorithms is always better. This advantage of the computation time is more remarkable when the sample size becomes larger, since our proposed algorithms have a lower order of the computational cost. Moreover, we can also notice that, when the sample size becomes quite large such as , the DGDFL algorithm with only two local machines can even be slightly faster than the DRK algorithm with five local machines, and obtains a slightly better excess risk in the meantime. These numerical simulations further demonstrate the effectiveness and advantage of our proposed GDFL and DGDFL algorithms."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S7.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S7.T1.3\">\n<tr class=\"ltx_tr\" id=\"S7.T1.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S7.T1.3.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Data</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Estimation Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Prediction Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.4.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Computation Time [s]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S7.T1.1.1.1\" rowspan=\"6\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S7.T1.1.1.1.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.1.1.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">GDFL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.1.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1936 (0.1628)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.1.1.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0788 (0.0703)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.1.1.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1.0650 (0.2991)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.5.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.5.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1948 (0.1581)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.5.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0774 (0.0678)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.5.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.5016 (0.1531)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.6.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.6.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.2252 (0.1467)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.6.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0888 (0.0698)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.6.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1509 (0.0442)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.7.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">RK</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.7.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.2179 (0.1656)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.7.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0851 (0.0694)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S7.T1.3.7.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">3.3786 (0.1084)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.8.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.8.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.2086 (0.1587)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.8.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0820 (0.0681)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.8.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.8256 (0.0288)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.9.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.9.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.2205 (0.1761)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.9.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0914 (0.0762)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.9.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1996 (0.0087)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S7.T1.2.2.1\" rowspan=\"6\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S7.T1.2.2.1.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">GDFL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.2.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1140 (0.0630)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.2.2.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0403 (0.0256)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.2.2.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">3.749 (0.4958)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.10.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.10.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1180 (0.0681)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.10.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0407 (0.0261)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.10.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1.1659 (0.2815)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.11.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.11.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1204 (0.0703)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.11.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0431 (0.0274)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.11.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.4562 (0.1049)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.12.1\" 
style=\"padding-left:8.0pt;padding-right:8.0pt;\">RK</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.12.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1189 (0.0905)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.12.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0420 (0.0305)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.12.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">17.0356 (0.4466)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.13.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.13.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1162 (0.0760)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.13.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0416 (0.0278)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.13.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">3.3752 (0.0767)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.14.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.14.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1294 (0.0840)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.14.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0458 (0.0303)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.14.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.5570 (0.0135)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S7.T1.3.3.1\" rowspan=\"6\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S7.T1.3.3.1.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">GDFL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0678 (0.0325)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0214 (0.0117)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T1.3.3.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">16.1347 (1.0137)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.15.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.15.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0732 (0.0317)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.15.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0220 (0.0113)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.15.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">5.1245 (0.5934)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.16.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DGDFL [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.16.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0792 (0.0342)</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.16.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0238 (0.0124)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.16.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1.1748 (0.1719)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.17.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">RK</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.17.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0625 (0.0452)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.17.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0206 (0.0150)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.17.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">315.816 (11.457)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.18.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [2]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.18.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0724 (0.0492)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.18.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0230 (0.0152)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T1.3.18.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">33.5947 (0.8937)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T1.3.19.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DRK [5]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T1.3.19.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0795 (0.0465)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T1.3.19.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.0250 (0.0148)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T1.3.19.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">3.3646 (0.0436)</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparison of the estimation error, prediction error, and computation time of different algorithms for the simulation, the number inside the square brackets indicates the number of local machines. We repeat the experiments for 100 times, with the mean and standard deviation displayed.</figcaption>\n</figure>",
"capture": "Table 1: Comparison of the estimation error, prediction error, and computation time of different algorithms for the simulation, the number inside the square brackets indicates the number of local machines. We repeat the experiments for 100 times, with the mean and standard deviation displayed."
}
},
"image_paths": {
"1": {
"figure_path": "2305.07408v3_figure_1.png",
"caption": "Figure 1: The excess risk w.r.t. the sample size for the DGDFL algorithm, with the number of local machines being m=1,10,50,100\ud835\udc5a11050100m=1,10,50,100italic_m = 1 , 10 , 50 , 100 respectively, and \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1. The experiments are repeated for 20 times.",
"url": "http://arxiv.org/html/2305.07408v3/x1.png"
},
"2": {
"figure_path": "2305.07408v3_figure_2.png",
"caption": "Figure 2: The excess risk w.r.t. the sample size for the DGDFL algorithm, with the number of local machines being m=1,10,50,100\ud835\udc5a11050100m=1,10,50,100italic_m = 1 , 10 , 50 , 100 respectively, and \u03c3=1.5\ud835\udf0e1.5\\sigma=1.5italic_\u03c3 = 1.5. The experiments are repeated for 20 times.",
"url": "http://arxiv.org/html/2305.07408v3/x2.png"
},
"3": {
"figure_path": "2305.07408v3_figure_3.png",
"caption": "Figure 3: The excess risk w.r.t. the sample size for the GDFL, DGDFL, RK, and DRK algorithms, with the the number in the label indicating the number of local machines. The experiments are repeated for 50 times.",
"url": "http://arxiv.org/html/2305.07408v3/x3.png"
},
"4": {
"figure_path": "2305.07408v3_figure_4.png",
"caption": "Figure 4: The computation time w.r.t. the sample size for the GDFL, DGDFL, RK, and DRK algorithms, with the the number in the label indicating the number of local machines. The y-axis is in the log scale. The experiments are repeated for 50 times.",
"url": "http://arxiv.org/html/2305.07408v3/x4.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2305.07408v3"
}

20240721/2306.06871v4.json
ADDED

The diff for this file is too large to render.
See raw diff

20240721/2306.13421v2.json
ADDED

The diff for this file is too large to render.
See raw diff

20240721/2307.16601v2.json
ADDED

The diff for this file is too large to render.
See raw diff

20240721/2308.02785v2.json
ADDED

@@ -0,0 +1,111 @@
{
"title": "Demystifying the RSA Algorithm: An Intuitive Introduction for Novices in Cybersecurity (Footnote 1: Copyright \u00a92022 by the Consortium for Computing Sciences in Colleges. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the CCSC copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Consortium for Computing Sciences in Colleges. To copy otherwise, or to republish, requires a fee and/or specific permission.)",
"abstract": "Given the escalating importance of cybersecurity, it becomes increasingly beneficial for a diverse community to comprehend fundamental security mechanisms. Among these, the RSA algorithm stands out as a crucial component in public-key cryptosystems. However, understanding the RSA algorithm typically entails familiarity with number theory, modular arithmetic, and related concepts, which can often exceed the knowledge base of beginners entering the field of cybersecurity. In this study, we present an intuitively crafted, student-oriented introduction to the RSA algorithm. We assume that our readers possess only a basic background in mathematics and cybersecurity. Commencing with the three essential goals of public-key cryptosystems, we provide a step-by-step elucidation of how the RSA algorithm accomplishes these objectives. Additionally, we employ a toy example to further enhance practical understanding. Our assessment of student learning outcomes, conducted across two sections of the same course, reveals a discernible improvement in grades for the students.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "The three most widely accepted security goals of cybersecurity are shorted as \u201cCIA triad\u201d, which stands for Confidentiality, Integrity and Availability. Cryptographic algorithms play a pivotal role in achieving confidentiality through private-key and public-key cryptographic algorithms. Public-key cryptographic algorithms, exemplified by the RSA algorithm, also contribute significantly to attaining another vital security goal\u2014non-repudiation, particularly crucial in scenarios like electronic mail, where digital signatures are employed. Remarkably, the RSA algorithm was originally designed to address both confidentiality and non-repudiation goals in electronic mail [rivest1978method, wahab2021hiding].\nDeveloped by Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology (MIT) in 1976, the RSA algorithm stands as a pioneering implementation of the public-key cryptosystem, conceptualized by Diffie and Hellman [diffie2022new]. Operating with two keys\u2014a private key and a public key\u2014the RSA algorithm facilitates secure communication. For instance, when two parties, Alice and Bob, aim to exchange messages covertly, Alice encrypts the message using Bob\u2019s public key, creating ciphertext . This ciphertext is then sent to Bob, who decrypts it with their private key to retrieve the original plaintext .\nWhile this process may appear straightforward, generating the public and private keys involves intricate mathematical concepts such as number theory and modular arithmetic. These topics often pose challenges for beginners in cybersecurity, especially undergraduate students. In our work, we offer an intuitive and accessible perspective on understanding the RSA algorithm. Beginning with the three primary goals the RSA algorithm aims to achieve, we employ a student-oriented approach to elucidate the step-by-step design of the system. We acknowledge the potential lack of background knowledge in readers regarding number theory, modular arithmetic etc., and hence, we aim to simplify the mathematical rigor to make the content more approachable.\nAdditionally, we provide a practical toy example of the RSA algorithm to enhance readers\u2019 understanding. Towards the end of the paper, we present a real-world student learning outcome assessment conducted on students from two different sections of the same course. Our results demonstrate that the proposed student-oriented approach outperforms the traditional method of explaining the RSA algorithm in terms of assignment grades.\nThe paper is organised as follows: the necessary foundational information of the RSA algorithm is provided in Section 2 ###reference_###. Then the detailed student-oriented style introduction of the algorithm is elaborated in Section 3 ###reference_###. In Section 4 ###reference_### we employed a specific toy example to demonstrate how to encrypt and decrypt the message in RSA from a practical perspective. We concluded the paper in Section 6 ###reference_###."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Background and Preliminaries",
"text": "In this section, we provide necessary background that gives the context and mathematical foundations of the RSA algorithm. Readers can also skip this section and use this section as a reference while reading Section 3 ###reference_###.\n###figure_1### ###figure_2###"
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Symmetric-key and Public-key Cryptosystems",
"text": "One of the major challenges modern cryptographies want to address is how to ensure two end users, let\u2019s say Alice and Bob, could secretly exchange messages in an open and potentially unsafe environment. We have two strategies to tackle this challenge[imam2021systematic].\nThe first strategy is to let both Alice and Bob share a secret key and make sure any one of them can encrypt the plaintext into ciphertext using , while the other can recover from using the same key . This strategy is also known as symmetric-key cryptography [anusha2020symmetric]. It is similar with a real-world padlock example in which we use a key to lock a cabinet. When someone wants to open the cabinet, they need to get the same key to unlock the padlock. The process of Alice using the symmetric-key cryptography to send a message to Bob is shown in Fig. 1 ###reference_###(a).\nOne of the major problems with the symmetric-key cryptography is that end users have to share the same key in advance, which is often impractical in modern communication systems such as computer networks due to: :\nIn computer network systems, communication connections are usually random and instantaneously. Requiring a shared key among all the communication connections would be costly;\nAny information of the shared key sent over the open environment could be intercepted by malicious attackers, which will put the encryption out of work. Therefore, it is unrealistic to require all end users to share the same secret key in advance when they want to exchange information.\nIn 1976, Diffie and Hellman [diffie2022new] proposed the second strategy named as public-key cryptosystems to tackle these challenges. The basic idea is that both Alice and Bob will still share the same cryptograhic algorithm, but they no longer need to share the same secret key. Instead, the system will maintain two keys: a private key and a public key. The private key is only known to the owner while the public key can be accessed by anyone who wants to communicate with the owner.\nEvery time if Alice wants to send a message to Bob, Alice will use Bob\u2019s public key to encrypt the message . On Bob\u2019s side, the ciphertext can be decrypted using Bob\u2019s private key . Since only Bob has , thus no one else could recover . The process of Alice using the public-key cryptosystem to send a message to Bob is shown in Fig. 1 ###reference_###(b).\nIn this system, the two communication entities no longer need to communicate a shared key in advance, which addresses the major problem in symmetric-key cryptography. However, one of the major disadvantages is the public-key cryptography algorithms is usually more computationally costly than symmetric-key cryptography algorithms [katz2020introduction, fotohi2020securing, liestyowati2020public].\nThe public-key cryptosystem is similar with our self-service drop box mechanism used in shipping industry. Anyone can put an envelope or a package (messages) into a public drop box (public key) provided by the shipping company (anyone could use the receiver\u2019s public key to encrypt the message in public-key cryptosystems). However, only authorised personnel (receiver) from the shipping company that has the key (private) could open the drop box to get the mails/packages.\nUsing public-key cryptosystems, two end users will no longer be required to share a secret key in advance when they need to exchange information. 
All the sender needs to know is the public key of the receiver and the cryptographic algorithm the receiver used, both of which are public information. The RSA algorithm is an implementation of the public-key cryptosystem concept."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Modular Arithmetic",
"text": "Modular arithmetic is a branch of arithmetic for integers, where numbers \u201cwrap around\u201d when reaching a certain value. If we have a modulus , which is an integer larger than 1, is the remainder of divided by . For example, . The result of for any number will always be less than and greater than or equal to 0, i.e., . In our example, obviously . If , then will always equal to itself. For example, . In the case where integers and have the same remainder when divided by , i.e., , we have the following definition:\nIf and are integers and is a positive integer, then is congruent to modulo if divides . We use the notation to indicate that is congruent to modulo .\nFor example, as 24 and 14 have the same remainder when divided by 5, we call 24 and 14 are congruent modulo 5, which can be represented as . In modular arithmetic, we use \"\" rather than \"\" to denote the equivalence of modulo results. There is an important theorem of congruence that we will use in explaining the RSA algorithm:\nIf for integers and , then and for any integer .\nThis can be proved by the definition of congruence. Since , then , i.e., for integers and . Further this can be written as for an integer . We multiply both sides by an integer to get , and perform modulo on both sides will get , i.e., , which completes the proof. We can use similar strategies to prove for any integer .\n\u220e\nAnother important theorem that we will use in proving the RSA algorithm is B\u00e9zout\u2019s theorem,\nIf and are positive integers, then there exist integers and such that the greatest common divisor of , i.e., , can be represented as .\nThe detailed proof of this theorem can be found in [rosen2011elementary]. The pair of and could be found using the Extended Euclidean Algorithm. For example, . Now we give the definition of modular multiplicative inverse.\nIf there exist integers such that , then is said to be an inverse of modulo and vice versa.\nBased on this definition of modular multiplicative inverse and B\u00e9zout\u2019s theorem, we can derive the following theorem:\nAn inverse of modulo is guaranteed to be existed whenever and are relatively prime.\nAs and are relatively prime, . According to B\u00e9zout\u2019s theorem, there are integers and such that . This implies that As it follows that Consequently, is an inverse of modulo .\n\u220e\nTo simplify the readability, we leave the proofs of these properties, such as the Extended Euclidean Algorithm in modular arithmetic, to the reader\u2019s interest. For those who wish to explore modular arithmetic and related theorems and proofs in greater depth, please refer to [rosen2019discrete] for a detailed explanation."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Prime Factorisation",
"text": "Prime factorization means the decomposition, if possible, of a positive integer into a product of prime integers. For example, the prime factorization of 15 is , in which both 3 and 5 are prime numbers. Prime factorization is an important problem in number theory because still no efficient enough way has been discovered to find the prime factorization of an extremely large integer with existing classical computer systems.\nThe RSA algorithm embeds prime factorization in its design to ensure there exists no efficient way to decipher the ciphertext in non-quantum computing systems. However, it does not mean that we would not find an efficient way to perform prime factorization in the future based on nowadays computer technology (a lot of mathematicians are still working on this problem); it also does not mean that we would not find an efficient way on future computers, such as quantum computing [national2019quantum, hidary2019quantum, easttom2022quantum]. In fact, an efficient way to perform prime factorization on quantum computers has already been found [shor1994algorithms]. The problem is that a workable quantum computer is still estimated to be at least decades away [bernstein2017post]. Therefore, we can safely say the RSA algorithm is secure at least for the time being."
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "Euler\u2019s Theorem",
"text": "Before introducing Euler\u2019s theorem, let\u2019s first provide the definition of Euler\u2019s totient function:\nThe Euler\u2019s totient function is the number of positive integers that are less than and relatively prime to this integer, i.e., .\nFor example, given an integer 8, there exist four integers that are relatively prime to 8, thus Euler\u2019s totient function value . You might have already realised that Euler\u2019s totient function value for a prime number is always , i.e., , as all the positive integers less than are relative prime to . An important mathematical property of Euler\u2019s totient function is that:\nIf and are relatively prime integers, then .\nFor example, . We\u2019ll skip the proof here and the detailed proof of this theorem can be found in [rosen2011elementary]. This property offers a convenient way to calculate Euler\u2019s totient function value if an integer can be factorized into the product of two prime numbers and . In this case as are also relatively prime to each other, which we will use later in proving the RSA algorithm. The challenge here is that no efficient way has been found on modern computers to do prime factorization (as discussed in Section 2.3 ###reference_###).\nIt is worth noting that the complexity of prime factorization and computing the Euler\u2019s totient function is equivalent for arbitrary integers. Essentially, both require evaluating whether the integer is relative prime to all the positive integers less than it. Therefore, it is also computationally difficult to calculate Euler\u2019s totient function for large enough integers. Now we\u2019re ready to introduce Euler\u2019s Theorem.\nIf two integers and are relatively prime, i.e., , and , then .\nFor example, let and , then they are relatively prime and we have . Further we have , thus, . We leave the proof of Euler\u2019s theorem to the readers due to the abundance of online resources on this topic [rosen2011elementary]. It is worth noting that Euler\u2019s theorem provides a fast way to calculate when are relatively prime. This property plays a significant role in the RSA algorithm as we will see in the following section.\nAfter all the background information introduction, now we\u2019re ready to start the introduction of the RSA algorithm, which is an implementation of the public-key cryptosystem."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "The RSA algorithm",
"text": "The RSA system was introduced in 1976. Now it is one of the most widely used public-key encryption methods in computer networks. To materialise a public-key cryptosystem, as we introduced in Section 2.1 ###reference_###, we want to achieve the following three basic goals [rivest1978method]:\nEfficiency: The encryption and decryption process should be easy to compute for legitimate users who have the required key information.\nPlaintext recovery: We should be able to get the original plaintext through decrypting the ciphertext .\nComputational difficulty: Without the private key information, there is no known efficient way to perform the decryption process.\nThese three goals are critical in the success of the public-key systems. With these three goals in mind, we introduce the core encryption and decryption process of the RSA algorithm. The corresponding ciphertext of the plaintext is computed from\nand is the public key information of the receiver. The decryption process is similar, which is\nThe private key information consists of and . We use , not directly in Eq. (2 ###reference_###) because we want to highlight that this is the result we obtained from the decryption process. We will ensure in the plaintext recovery goal.\nSuppose Alice wants to send a secret message to Bob using the RSA algorithm. Bob\u2019s public key is and the corresponding private key is , which means that the ciphertext . Alice will send out to Bob. Bob can then decrypt the ciphertext to recover the plaintext through , which achieved the goal of . The detailed encryption and decryption process of the RSA algorithm is shown as follows in Algorithm 1 ###reference_###.\nWe now need to understand what conditions must be satisfied and how this process could achieve the three goals mentioned above. We will explain each goal with the associated conditions as follows."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Goal 1: Efficiency",
"text": "Both encryption and decryption procedures are identical from an implementation perspective, making them straightforward to implement in practice. Additionally, private and public keys can be determined using standard and efficient methods on modern computers [moriarty2016pkcs].\nWe also need to be able to find and efficiently without using an excessive amount of memory given that are all large numbers. Directly computing the exponentiation operation of or is impractical, as their results can be very extremely large and require significant memory to store. Fortunately, this problem can be addressed using the fast modular exponentiation algorithm, which reduces the computational complexity to a logarithmic level. The detailed algorithm is provided in [rosen2019discrete].\nHowever, despite the RSA algorithm\u2019s careful design for efficiency, it is generally accepted that public-key cryptosystems are usually less efficient than symmetric-key cryptosystems. Therefore, in real-world scenarios, the RSA algorithm is primarily used for delivering the pre-shared key in symmetric-key cryptosystems, which is often a short message. When encrypting large amounts of information, symmetric-key cryptosystems are still preferred for their efficiency [katz2020introduction]."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Goal 2: Plaintext Recovery",
"text": "The second goal is to guarantee the accurate recovery of original plaintext from ciphertext using receiver\u2019s private key , i.e., to ensure . Substituting in the encryption process as shown in Eq.(1 ###reference_###) to the decryption process as shown in Eq.(2 ###reference_###), it yields\nAs we know from Section 2.2 ###reference_###, could also be written as\nTherefore, the goal can be reinterpreted as finding the conditions to guarantee\nAs long as , the above equation will hold. According to Euler\u2019s theorem (Section 2.4 ###reference_###), if and are relatively prime, then . By the modular arithmetic properties (Section 2.2 ###reference_###), we can raise both sides to the -th power, with being a positive integer, to get . Multiplying both sides by yields,\nComparing Eq.(3 ###reference_###) to Eq.(6 ###reference_###), to ensure the correct recovery , we would now require\ni.e., we need\nUp until now, we found that we have two conditions need to be satisfied in order to make above equations hold: (1) and (2) and are relatively prime. As long as these two conditions are satisfied, the above derivation from Eq.(3 ###reference_###) to Eq.(8 ###reference_###) will hold. To satisfy the first condition, in real world, after choosing the large positive number , we need to break long messages into small blocks such that each block can be represented as an integer that is less than . We will explain how to ensure the second condition in Section 3.3 ###reference_###.\nWe now know that if we could find a pair of such that , is a positive integer. The two conditions for and are satisfied, then we\u2019re confident that the original plaintext could be recovered from . In the next section, we\u2019ll see how these conditions are met and at the same time the computational difficulty goal is also achieved."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Goal 3: Computational Difficulty",
"text": "Now the challenge is reduced to a problem of finding appropriate values of and , which are the major components of the public and private key respectively. The only clue we have now is , where is a positive integer.\nTo achieve the third goal of computational difficulty, we will start with the challenge of how to choose and . Let\u2019s first manipulate the equation a little bit.\nGiven that , when the modulus is , we have\nwhere the last congruent relation comes from the fact that is a positive integer. The congruence we get from the above manipulation reveals that if and are relatively prime, then is an inverse of modulo and the existence of is guaranteed according to the B\u00e9zout\u2019s theorem (Section 2.2 ###reference_###).\nNow we just need to find a number that is relatively prime to , and the corresponding inverse modulo , denoted by . Finding a number that is relatively prime to should not be a difficult problem if given . Finding the corresponding inverse of modulo could be done through the Extended Euclidean Algorithm efficiently as .\nWe have successfully found a way to find an appropriate and . However, this does not conclude the problem. In the third goal of public-key cryptosystems, it requires that there exists no known efficient way to calculate given the information of and . Obviously, we still have not reached that goal. If is not chosen carefully, an attacker might be able to easily figure out the value of and further efficiently figure out based on .\nAchieving the last goal of the public-key cryptosystems is one of the most elegant parts of the RSA algorithm. We know that there exist no known efficient method to perform prime factorisation(Section 2.3 ###reference_###). If the receiver can first find two large random prime numbers and privately and let , then there will exist no efficient way to reverse this process to get and from only . Further, it will be computationally difficult to get the value of as stated in Section 2.4 ###reference_###.\nHowever, it will be super easy for the valid receiver to calculate as . This is also known as the \u201ctrap-door one-way function\u201d, which is similar with how our shipping drop box works.\nFinally we have achieved all the three goals mentioned at the beginning. The receiver just needs to first choose two large enough prime numbers and , and get and . Then and can be destroyed to prevent potential leaks. The receiver can further get the public key by choosing a large enough that is relative prime to and then the private key could be computed based on . As there\u2019s no efficient way to compute based on as it requires a prime factorization, thus the third goal of computation difficulty will be achieved.\nWe still have one last question left unanswered from Section 3.2 ###reference_###. How can we ensure and to be relatively prime? Unfortunately, we cannot ensure it directly. However, we know that with being prime, which means will be relatively prime to all numbers less than except and their multiples. The only case in which and are not relatively prime is when is a multiple of or or both, which has an extremely low chance in terms of probability considering we also require in Goal 2.\nUp until this point, all the requirements to achieve the three goals of public-key cryptosystems are satisfied. In the following section we provide a toy example to sort out the process."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Toy Example",
"text": "The detailed implementation specifications of the RSA algorithm in real world can be found in [moriarty2016pkcs]. Suppose Alice wants to send a message \u201cTue 7PM\u201d to Bob secretly using the RSA algorithm. First, Bob needs to decide his private key and public key for the communication. Bob will choose two large random prime numbers and . Let\u2019s assume and . In real world, these two numbers should be much larger such that it is unrealistic for modern computers to obtain the prime factors and from . can be computed as . We can also obtain Euler\u2019s totient function of as .\nThe next step for Bob is to choose a public key , which is a number relatively prime to . For example, the standard sizes for RSA keys starts from 512 bits. To get a very high-strength key, the key size requires 4096 bits. Here in our toy example we choose . Now Bob needs to compute the private key . Based on the equation , we could get the inverse of modulo as using the Extended Euclidean Algorithm. After and are determined, and can be destroyed or hidden for the sake of security. Bob can release his public key to the public while keep private.\nFrom Alice\u2019s perspective, Alice needs to first obtain Bob\u2019s public key , then she could convert the message she wants to send into its numerical representations. Here we use ASCII (American Standard Code for Information Interchange) to convert \u201cTue 7PM\u201d into numerical representation as: 084 117 101 032 055 080 077.\nIf the message is too long, Alice could divide the message into smaller blocks, then encode each block separately. Here we divide the message into blocks that has 3 digits in each of them. There are seven blocks in the message including the space. With the public key , Alice could obtain the ciphertext through to get The complete ciphertext is shown as \"0469428 0547387 2687822 1878793 0330764 1501041 1232817\". When Bob receives the ciphertext, he will decrypt the ciphertext using his own private key to get .\nFinally he recovers the original message by looking up the ASCII table to get the plaintext message \u201cTue 7PM\u201d."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Student Learning Outcome Assessment",
"text": "To study the effectiveness of the proposed student-oriented approach in explaining the RSA algorithm, we conducted a comparative analysis with the traditional method outlined in [rosen2019discrete]. In the traditional method, the encryption and decryption process are presented upfront to the students, followed by the corresponding proof utilising number theory knowledge to enhance comprehension of the algorithm. The explanatory style from [rosen2019discrete] presents the conventional approach to teaching the RSA algorithm.\nThe comparison involved two sections of the same course, namely CSC 140 Discrete Structures at Rider University. These sections comprised 24 and 26 undergraduate students, respectively, all majoring in computer science or cybersecurity. Given that this is a 100-level course and a prerequisite for several higher-level courses, the majority of students are either freshmen or sophomores, aligning with the target readership of this paper.\nIn these two sections, all course content, excluding the RSA algorithm section, followed the same instructional format. Equal lecture time was allocated to each topic in both sections. Student performance was compared based on related assignment grades. Both sections were presented with identical assignment problems and grading criteria.\nThe study involved initially employing the proposed student-oriented method outlined in this work for students in Section I and the traditional method from [rosen2019discrete] for students in Section II. Subsequently, a related assignment was administered. Following this, both sections were exposed to an alternative introduction method\u2014Section I students were presented with the traditional explanation, while Section II students were introduced to the proposed student-oriented approach. Finally, a makeup opportunity for the assignment was extended to all students. Detailed results are presented in Fig. 2 ###reference_###.\n###figure_3### ###figure_4### In Fig. 2 ###reference_### (a), we initially compared two categories of student grades: \"Grades Without RSA\" and \"Grades of RSA.\" The former represents the averaged grades for all assignments throughout the semester, excluding the one related to the RSA algorithm. With a total of 9 assignments for the entire semester, all topics pertaining to these assignments are taught in the same way. Our analysis revealed that students from Section I performed, on average, 4 points higher than those from Section II (each assignment is out of 100 points).\nOn the other hand, \"Grades of RSA\" focuses solely on the assignment related to the RSA algorithm, considering a single assignment. Our findings indicated that students in Section I outperformed those in Section II by an impressive average margin of 14 points. If the effectiveness of the teaching methods were equal for both sections, we would anticipate a much smaller average grade difference than the observed 14 points. Consequently, these results underscore the effectiveness of the student-oriented approach in explaining the RSA algorithm compared to the traditional method.\nUpon offering both sections the alternative teaching method, we observed an improvement in grades for both groups (Fig. 2 ###reference_### (b)). However, the gap in grades between the two sections narrowed from 14 points to 6 points. This reduction further validates the efficacy of the student-oriented teaching approach."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "As the significance of cybersecurity continues to rapidly increase across various facets of society, comprehending the fundamental logic behind widely used security mechanisms becomes essential not only for cybersecurity students but also for a broader audience. In this study, we present a self-contained and student-oriented interpretation of the RSA algorithm, a cornerstone in public-key cryptosystems. Beginning with three goals of public-key cryptosystems, we guide readers through a step-by-step explanation of how the RSA algorithm satisfies and implements each of these three goals. Our student learning outcome assessment, conducted across two different course sections, demonstrated the effectiveness of our approach, with an average grade difference of 14 points compared to the traditional method of teaching the RSA algorithm.We envision this work serving as a more approachable channel for readers to grasp the intricacies of the RSA algorithm."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1(a)": {
"figure_path": "2308.02785v2_figure_1(a).png",
"caption": "(a) Symmetric-key cryptography\nFigure 1: The information flow when Alice sends a message to Bob using symmetric and public key cryptography.",
"url": "http://arxiv.org/html/2308.02785v2/extracted/5746014/symmetric.png"
},
"1(b)": {
"figure_path": "2308.02785v2_figure_1(b).png",
"caption": "(b) Public-key cryptography\nFigure 1: The information flow when Alice sends a message to Bob using symmetric and public key cryptography.",
"url": "http://arxiv.org/html/2308.02785v2/extracted/5746014/public.png"
},
"2(a)": {
"figure_path": "2308.02785v2_figure_2(a).png",
"caption": "(a) \"Grades Without RSA\" refers to the average grades of assignments unrelated to the RSA algorithm, which are taught in the same manner; \"Grades of RSA\" represents the average grades related to the RSA algorithm, which are taught differently.\nFigure 2: Students learning outcome comparison in terms of assignment grades from two sections of the same course.",
"url": "http://arxiv.org/html/2308.02785v2/x1.png"
},
"2(b)": {
"figure_path": "2308.02785v2_figure_2(b).png",
"caption": "(b) \"First Grades of RSA\" represent the averaged grades of the assignment related to the RSA algorithm for the two sections; \"Second Grades of RSA\" refer to the averaged grades students received after the alternative way is offered.\nFigure 2: Students learning outcome comparison in terms of assignment grades from two sections of the same course.",
"url": "http://arxiv.org/html/2308.02785v2/x2.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2308.02785v2"
}
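
For orientation, the file above (2308.02785v2) walks students through textbook RSA. A minimal, self-contained Python sketch of that flow follows; the toy primes, exponent, and message are illustrative assumptions chosen for readability, not values taken from the dataset:

# Toy RSA walk-through; real deployments use far larger primes plus padding.
p, q = 61, 53                      # secret primes (toy-sized)
n = p * q                          # public modulus (here n = 3233)
phi = (p - 1) * (q - 1)            # Euler's totient of n (here 3120)
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent: modular inverse (Python 3.8+)

message = 42                       # plaintext encoded as an integer below n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # decrypting with (d, n) recovers it
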
20240721/2308.07867v2.json
ADDED
@@ -0,0 +1,101 @@
{
"title": "Fast Risk Assessment in Power Grids through Novel Gaussian Process and Active Learning",
"abstract": "This paper presents a graph-structured Gaussian process (GP) model for data-driven risk assessment of critical voltage constraints. The proposed GP is based on a novel kernel, named the vertex-degree kernel (VDK), that decomposes the voltage-load relationship based on the network graph. To estimate the GP efficiently, we propose a novel active learning scheme that leverages the additive structure of VDK. Further, we prove a probabilistic bound on the error in risk estimation using VDK-GP model that demonstrates that it is statistically comparable to using standard AC power flow (AC-PF), but does not require computing a large number of ACPF solutions. Simulations demonstrate that the proposed VDK-GP achieves more than two fold sample complexity reduction, compared to a generic GP on medium scale 500-Bus and large scale 1354-Bus power systems. Moreover, active learning achieves an impressive reduction of over 15 times in comparison to the time complexity of Monte-Carlo simulations (MCS), and have risk estimation error of order for both 500-Bus and 1354-Bus system, demonstrating its superior efficiency in risk estimation.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Increase in uncertain power sources and variable loads means that ensuring secure power system operation has become more challenging than before [1 ###reference_b1###]. An important problem in this context is voltage risk assessment (VRA) that aims to quantify the likelihood of a bus voltage exceeding its operational limit due to uncertainty [2 ###reference_b2###]. The problem of VRA can also be viewed as uncertainty quantification (UQ) [3 ###reference_b3###] for the distribution of output voltage under uncertain load for a given operating condition. Computationally, performing VRA is a challenge as Alternating Current Power Flow (ACPF) equations are nonlinear and are not expressed as analytical (or closed-form) expressions of nodal voltages with bus load vector as input [4 ###reference_b4###, 5 ###reference_b5###]. Instead, iterative methods such as the Newton-Raphson load flow (NRLF) must be employed, and can lead to significant computational overhead since accurate VRA requires a large number of power flow samples. On the other hand, the direct current approximation for PF [6 ###reference_b6###, 7 ###reference_b7###] neglects the voltage information, and therefore cannot be utilized to estimate voltage risk.\nRecently, machine learning (ML) methods, in particular Deep Neural Networks (DNNs), have made significant advancements as universal function approximators, especially in conjunction with the idea of physics-informed ML [8 ###reference_b8###, 9 ###reference_b9###]. They have also been explored for PF learning and UQ [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###]. The idea behind learning ACPF is that a fast PF solver can be used as an Oracle to provide large number of voltage solutions for risk assessment [10 ###reference_b10###]. However, DNNs require extremely large number of samples to learn the PF approximator. For instance, more than 10,000 training samples are used to learn voltage solutions for the 118-Bus system in [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###]. Another limitation is that ML methods, in general, do not provide confidence bounds on their prediction. Such bounds are necessary for reliable constraint enforcement but requires a large number of NRLF solutions for out-of-sample validation.\nIn this work, we take an alternate modeling approach using Gaussian process (GP) for modeling voltage-injection characteristic and use it for VRA. Gaussian process (GP) learning [13 ###reference_b13###] is a versatile probabilistic method for function estimation that enables flexible representation of uncertainty [13 ###reference_b13###]. It has been applied to various power system applications [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 5 ###reference_b5###]. One notable drawback, common to all these GP works, is that their applicability has been restricted to small to medium-sized systems. This scale limitation is because exact GP inference has cubic complexity with respect to number of samples [13 ###reference_b13###], which grows with system size (or input dimension). Moreover, previous DNN and standard GP models in grid applications do not provide any criteria to apriori\ndecide the number of required training samples. 
The ability to assess the training quality \u2018on the go\u2019 and having a stopping criterion is of great importance when risk-assessment needs to be done within a short amount of time.\nIn this paper, we propose to use a Vertex-Degree Kernel (VDK) based GP model for risk assessment of voltage, given an operating condition and load uncertainty set. VDK learns the voltage-load function in large-scale systems efficiently by breaking the Kernel function into additive lower-dimensional latent functions based on neighborhoods in the network topology. Further, the VDK-GP model is amenable for Active Learning (AL), an approach to reduce the overall sample complexity of training [19 ###reference_b19###], where successive training points containing the maximum information about the unknown target function are selected [20 ###reference_b20###]. AL for standard GP [5 ###reference_b5###] suffers from the curse of dimensionality as the search space grows exponentially with the dimension of input, which is the number of loads. We leverage the additive low-dimensional functions inside VDK-GP\u2019s kernel for a novel network-swipe active learning (AL) algorithm to bypass the curse of dimensionality and further improve its efficiency.\nFinally, we establish probabilistic error bounds on expected value estimation with generic ML models that apply to probabilistic risk estimation using VDK-GP. We show that VDK-GP\u2019s voltage outputs provide a fast alternative to solving standard AC-PF for computing the probability of voltage violations, while its theoretical error bound eliminates the need for any out-of-sample validation. In summary, the main contributions of this study can be delineated as:\nDevelopment of a graph-neighborhood structured kernel, the vertex-degree kernel (VDK), for effectively learning the voltage-load function in large-scale power systems.\nA novel network-swipe Active Learning (AL) algorithm for VDK-GP that intelligently selects training data, eliminating the need for solving numerous ACPF. The proposed AL method also provides a stopping criterion without requiring out-of-sample testing, facilitating its use for operational risk assessment.\nA conservative probabilistic bound on expected estimation error which establishes VDK-GP\u2019s statistical equivalence with ACPF-based risk estimation up to the same error threshold. We demonstrate that the proposed GP-based model reduces the computational burden in achieving this probabilistic bound.\nTo evaluate the proposed method, we conduct benchmark experiments on medium to large-sized power networks, considering uncertain loads at all load buses. Our findings demonstrate that:\na) VDK-GP achieves comparable accuracy with less than 50% of the samples required by standard kernel GP;\nb) AL with VDK-GP achieves target model accuracy with fewer data points than VDK-GP; and\nc) The proposed model can be probabilistically guaranteed to achieve a similar level of accuracy as ACPF-based MCS while requiring 15 times less computational time.\nThe remainder of this paper is organized as follows. In Section II, we provide a brief overview of power flow learning and present the proposed VDK-GP and reduced representation of VDK. In Section III, we describe the idea of information gain, outline challenges in designing AL methods, and present the proposed network-swipe AL algorithm. In Section IV, we present the results for benchmark experiments and uncertainty quantification for medium to large systems and discuss insights. 
Finally, in Section V, we conclude the paper and discuss future work."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Voltage Risk Assessment",
"text": "Notation: Consider a power grid with denoting the set of buses/nodes and denoting the set of transmission lines/edges with impedances per edge . We denote the complex node voltage as with magnitude and phase angle . We denote load in the net-form i.e. 111This is only a convention adopted and does not affect the model. and refer to complex power as . Here, () is real (reactive) load at -th node. The space of loads is denoted by sub-space . The following AC PF equations represent the set of voltages as an implicit function of the nodal injections:\nIn power system operations, risk assessment involves evaluating the risk of constraint violation under load uncertainties in , given a generator policy (i.e. dispatch and participation factor ). Formally, we define voltage risk assessment (VRA) as follows:\nGiven dispatch decision set , and node voltage limits, what is the expected value by which a node voltage , with and having probability distribution , will exceed (or fall short) of limit? Mathematically, VRA, with (for lower limit , is given by\nHere, is the expectation operator. Note that if is an identity function i.e. (with omitted for brevity), then VRA models expected constraint violation [21 ###reference_b21###]. On the other hand, taking VRA to be can model the average size of violation.\nNonetheless, it is challenging to compute the VRA as voltage in Eq. 1 ###reference_### is an implicit non-linear function. This is further complicated when renewable sources and variable loads like EVs are involved, such that we load is sampled from the set and load distribution does not follow a well-defined analytical distribution. This negates the possibility of using robust optimization formulation to calculate maximum node voltage with load uncertainty sub-space [22 ###reference_b22###]. The alternative approach is to compute the risk empirically using a high number of AC-PF solutions for load uncertainty samples. As an illustrative example, consider the problem of violation estimation for a system, within an error margin of pu (0.1% for a 1000kV system). For a 95% confidence in the estimate, the required number of PF solutions222as prescribed by estimation theory is greater than . Solving such a large number of power flows for realistic transmission network with thousand plus buses, within a time-constraint of minutes333interval between consecutive real-time SCEDs is computationally prohibitive [1 ###reference_b1###].\nA prominent way to solve the risk-assessment problem is probabilistic power flow (PPF) [23 ###reference_b23###, 24 ###reference_b24###], where upon estimating the output distribution (e.g. node voltage magnitude), probability of violation is calculated. A variety of methods for PPF and risk-estimation use numerical methods that revolve around the Monte-Carlo Simulation (MCS) methods [25 ###reference_b25###]. The MCS-based PPF works rely on numerous simulations and majority of works propose different sampling methods and Quasi-MCS methods [26 ###reference_b26###, 24 ###reference_b24###] to improve computational complexity. However, to achieve the statistical guarantee for arbitrary input injections, the computational burden is very high as shown in Fig. 1 ###reference_###. Other type of numerical methods are approximation methods such as point estimate method (PEM) [27 ###reference_b27###, 28 ###reference_b28###]. 
They suffer similar limitations as MCS and estimation of complete distribution is difficult due to the requirement of the series expansion method [23 ###reference_b23###]. Further, risk-estimation using PEM-based approaches does not provide any guarantee [28 ###reference_b28###]. Other techniques include analytical methods such as Gaussian mixture approximation [29 ###reference_b29###], and cumulant methods [30 ###reference_b30###]. Although these analytical method provide better understanding of system behavior under uncertainty, they require convolution, various transforms which are time consuming, particularly under correlated uncertain inputs. Additionally, most of these methods are developed for a particular type of input uncertainty, thus difficult to generalize.\n###figure_1### To overcome this computational bottleneck, we propose a novel Gaussian Process (GP) based explicit model for voltages as a function of input loads, that can be (a) accurately estimated/learned using limited training data, and consequently, (b) used for fast VRA computation within a fraction of the time required for standard PF-based MCS approaches as highlighted in Fig. 1 ###reference_###. Crucially, we are able to reinforce our extensive simulation results with theoretical bounds on the sample requirement for GP-based VRA computation, and prove its correctness using statistical estimation theory. The next section introduces our proposed Vertex-Degree Kernel based Gaussian Process that uses characteristics of power flow physics to efficiently learn AC voltages, compared to alternate data-driven approaches."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Vertex-Degree Kernel Gaussian Process (VDK-GP) for Voltage Learning",
"text": "This section begins by reviewing PF in functional form and describing a standard Gaussian Process model for estimating voltage. Our proposed model, VDK-GP, is described subsequently. For modeling, we describe output voltage as a function of the input load vector as:\nwhere, is th node voltage measurement for load (active and reactive) vector and is the unknown underlying function with being i.i.d. Gaussian noise. In Gaussian Process (GP), we consider that function (sub-script omitted for brevity) belongs to a zero mean Gaussian distribution [13 ###reference_b13###] i.e.\nwith defining the covariance matrix or kernel matrix over the training samples and being design matrix having load vector samples from training set . The kernel matrix is constructed using covariance or kernel function working over the load vectors at different samples, i.e. the -th element , (see [13 ###reference_b13###] for details). As voltage is a smooth function of loads, square exponential kernel has been extensively used for PF learning [31 ###reference_b31###, 5 ###reference_b5###, 18 ###reference_b18###, 32 ###reference_b32###]. The square exponential kernel is defined as\nwhere is the Euclidean norm.\nThe square exponential kernel has two hyper-parameters , which are selected by maximizing the marginal log likelihood (MLL) for exact inference [13 ###reference_b13###]. This MLL maximization aims to find the optimal set of hyperparameters that best fit the observed data while penalizing complex models, thereby striking a balance between model complexity and goodness of fit [13 ###reference_b13###]. Upon learning, the GP provides mean and variance predictions of the function as\nHere, is the training voltage vector and is the estimated kernel matrix over samples of . The vector is obtained by applying the kernel function over and 444See appendix in [5 ###reference_b5###] for more details.. Note that GP not only provides point prediction as Eq. (6 ###reference_###), but also gives confidence level via predictive variance .\nIt is worth noting that the MLL for the standard Squared exponential kernel is done over a dimensional space (double of system size), and requires a high number of samples for accurate learning. Further, if kernel design problem is solved using optimization, the overall process becomes computationally expensive [31 ###reference_b31###]. In the next section, we introduce a novel GP Kernel inspired by the network structure and locality of voltages, that is able to reduce the training sample requirement for voltages, without sacrificing its accuracy."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Proposed Kernel Design",
"text": "We design an additive GP kernel using sub-kernels, each being a squared exponential-kernel limited to the neighborhood of a nodal load, i.e., for node we have set . The intuition behind the proposed kernel design is that a node\u2019s voltage is affected by all loads but their correlated effect555\u2018correlated\u2019 effect is the change in voltage due to simultaneous change in two or more nodal loads. is limited to loads that are near one another. In other words, effect of two far-away loads on nodal voltages can be considered uncorelated/independent. As maximum degree of a node is much less than the size of a power grid, each sub-kernel is low-dimensional. The complete additive kernel, the sum of these sub-kernels, is termed as Vertex Degree Kernel (VDK), and is defined as\nHere, is the sub-kernel working over . Note that by relying on the grid structure, neighborhood based correlations are kept intact in VDK, but complex design choices for kernels are avoided. Fig. 2 ###reference_### shows the idea of VDK construction. Each sub-kernel has hyper-parameters that form the full hyper-parameter vector for VDK as . As the sum of valid kernel functions is a kernel function, standard exact inference can be performed via optimizing MLL using [33 ###reference_b33###, 34 ###reference_b34###, 13 ###reference_b13###]. However, as square exponential kernel has two hyper-parameters (5 ###reference_###), the total number of hyper-parameters in VDK (8 ###reference_###) will be twice the number of network nodes. In the next section, we show that the neighborhood based additive form of VDK lends itself to a simple active learning algorithm for fast hyperparameter estimation."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Active Learning for VDK-GP",
"text": "We are interested in rapid estimation of (-th node\u2019s voltage function) in Eq. 4 ###reference_### by learning the hyper-parameter , and further in determining a tractable stopping criterion for terminating the learning that does not rely on computationally expensive out of sample AC-PF solutions.\nTo answer the first part, we propose an active learning (AL) [19 ###reference_b19###] mechanism that sequentially selecting training samples to maximize the information about the voltage function, and predictive variance as an indicator to terminate the learning process. In GP modeling (or Bayesian setting in general) \u2018information gain\u2019 is used as a measure to quantify the informativeness of a training set or sample. Let, where is the complete training set or space. The Information gained by samples in is the information theoretic mutual information between voltage function and (vector of voltage samples following Eq. (3 ###reference_###)) [35 ###reference_b35###], and is given by . Here is the kernel matrix constructed using samples in set . Importantly, finding the best with given cardinality is an NP-hard problem [35 ###reference_b35###]. However, two results facilitate tractable AL. First, in [36 ###reference_b36###] information gain has been shown to be a submodular function of set , implying that greedy algorithm for sample selection is at least within of optimal solution. Second, the information gain in a new sample is given by predictive variances Eq. (7 ###reference_###). Hence, the next training sample, for -th node voltage function learning, can be obtained by solving\nHere, is predictive standard deviation of the GP trained on first samples, as given by Eq. (7 ###reference_###). As eluded in the introduction, for large networks, the non-convex function makes Eq. (9 ###reference_###) quickly intractable for standard GP [5 ###reference_b5###], as the input vector in Kernels has size [37 ###reference_b37###]. While VDK is separable in terms of lower-dimensional sub-kernels given in Eq. (8 ###reference_###), they have overlapping input groups for any two sub-kernels at nodes that are within two graph hops. This means that a simple parallelization into sub-kernels isn\u2019t possible for AL.\nInstead, we propose a block-descent type iterative network-swipe algorithm to exploit VDK\u2019s form for optimizing Eq. (9 ###reference_###).\nAt each iteration of the network-swipe algorithm, at the first step, we solve Eq. (9 ###reference_###) with respect to (load at the node where voltage is being predicted), while keeping all other loads fixed. The load at is updated with the optimized value. In the second step, we solve Eq. (9 ###reference_###) for all loads at nodes such that (1 hop neighbors of ). All other loads are kept unchanged. In the next step, loads at two-hop neighbors of are chosen to solve Eq. (9 ###reference_###), and so on till all loads have been updated. For elucidation, we use to denote distinct node groups at a graph distance of from node , with max-distance . Hence, , while . Mathematically, the iteration of network-swipe solves the following non-linear optimization problems sequentially for\nHere, . Also, represents the hyper-cube slice with respect to loads present in . The algorithm then starts a new iteration () to determine the next sample. The pseudo-code for AL is listed in Algorithm 1 ###reference_### and a graphical representation of steps for target node is shown in Fig. 
3 ###reference_###.\nIn Algorithm 1 ###reference_###, we have used a time budget of , and a predictive variance threshold of . While we present a single sequence of network swipe steps to determine the next injection sample , multiple swipes can be performed across network to improve the information gain in . Further, for solving Eq. (10 ###reference_###), in Algorithm 1 ###reference_### we use function evaluation with different batch sizes and select the best candidate of injection among the random samples. Further, the function evaluation-based approach will allow to build parallelizable optimization methods, helping to scale and improve performance of the proposed network-swipe AL in future works. In the next section, we provide guarantees on using VDK-GP for risk estimation in the grid.\nIt is crucial to emphasize the effectiveness of the proposed network-swipe AL method, which is also related to the incorporation of the VDK structure. This Algorithm 1 ###reference_### enables the optimization problem, as defined in (10 ###reference_###), to remain low-dimensional. Utilizing a conventional GP kernel, standard active learning methods typically encounter the curse of dimensionality during sample-based optimization, particularly when seeking maximum variance across a load space of dimension , where denotes the number of buses within the system. Consequently, the proposed network-swipe AL design requires sample-based optimization over load variables set at a particular depth (defined as ) at a time. This low-dimensionality of optimization is particularly advantageous for applying proposed active learning method to large-scale networks."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Guarantees on Risk Assessment using VDK-GP",
"text": "As discussed in Section II ###reference_### and Fig. 1 ###reference_###, empirical voltage risk assessment (VRA) using MCS of power flow samples is computationally expensive, but faster using VDK-GP due to its closed-form input-output map. Fig. 4 ###reference_### presents the complete idea of risk assessment by first estimating the VDK-GP to generate voltage samples and determining the empirical violation estimation (VE) as well as CDF of the violation. While we demonstrate the computational performance in the next section, we address the question of theoretical guarantee on the performance of VDK-GP for VRA. We first derive results for general ML-based methods and extend it to VDK-GP (VE and CDF of violation) as a special case of a ML model. Consider a function defined over node voltage . Let indicate its evaluation at voltage derived from solving NRLF at load , and let denote its evaluation for voltage derived from a ML predictor (e.g. GP, DNN etc.) at . We then have the following result.\nExpected Value Estimation Error Bound: Suppose that a given ML model with output satisfies\nwith for any . Let be a Lipschitz continuous function with Lipschitz constant . Then the error in estimating with the ML model with samples is bounded with probability greater than as\nwhere, , , is a uniform bound on the maximum error that satisfies , and is number of samples.\nHere, is the empirical estimate of expectation using samples.\nThe detailed proof is given in the Appendix A ###reference_###. Theorem 1 ###reference_orem1### states that if an ML model is accurate in estimating (voltage magnitude), then empirical estimation of expected violation using the ML-generated samples is close to the true expected violation. Further, the use of ML has potential computational speed-up since direct function evaluation is significantly faster than solving traditional ACPF for each sample. Consider the case where is given by the Sigmoid function to convert deviation from voltage limit calculated using power flow solution as\nNote that Sigmoid function has Lipschitz constant equal to one, and provides information about both extent of violation as well as level of security. We subtract so that implies violation. Theorem 1 ###reference_orem1### provides concentration or error bounds on violation estimation (2 ###reference_###). By ensuring preservation of violation level information, the Sigmoid function allows for effective critical load point sampling [38 ###reference_b38###].\nTheorem 1 ###reference_orem1### requires a probabilistic error bound on the ML model given in Eq. (11 ###reference_###), but validating this bound would require extensive out-of-sample testing, through Hoeffding\u2019s inequality666Minimum number of ACPF-MCS samples require to obtain statistical error in VRA estimation below , with confidence level , is given as\n [39 ###reference_b39###].. This poses a challenge when generating ground-truth solutions (e.g., voltage solutions from AC-PF) is difficult within the ISO\u2019s time-constraints for risk assessment. Contrary to ML models with point-prediction (e.g. DNN), GP automatically offers a measure of confidence around the mean prediction, through the predictive variance in voltage, [13 ###reference_b13###]. This crucial feature enables GP to probabilistically upper-bound voltage solutions (V(s)) using mean and variance, ( and ) as described in Eq. 7 ###reference_### and eliminates the need for out-of-sample testing. 
The next corollary extends Theorem 1 ###reference_orem1### for VRA using GP\u2019s predictive variance guarantee.\nSuppose that the GP assumption holds for PF such that voltage values, for any two arbitrary load vectors, are jointly Gaussian. Then, where for any . And with being Sigmoid function, error in VE using GP is probabilistically bounded as\nwhere, definitions of variables are same as in Theorem 1 ###reference_orem1### and is expected fraction of voltage values outside the range given by .\nThe proof follows directly from Theorem 1 ###reference_orem1### and properties of Gaussian distribution. Note that in Corollary 1.1 ###reference_orem1.Thmcorollary1###, the GP model error probability is a function of variance multiplier in . The value of decreases rapidly with increase in values. At we have while will give . As discussed in Fig. 1 ###reference_### , performing the estimation by solving AC-PF over multiple load samples is not feasible due to high computational burden. Using in Hoeffding\u2019s inequality [39 ###reference_b39###] and conditions in Corollary 1.1 ###reference_orem1.Thmcorollary1###, we will have is 777The confidence bound of the GP is valid for any . However, for simplicity and to maintain consistency, we chose . and confidence of (). Further, using and 888As we use Sigmoid function (13 ###reference_###), can be used to satisfy the condition . More details of are with proof of Theorem 1 ###reference_orem1### in Appendix A ###reference_###, the VE will be bounded as . To generate same accuracy using AC-PF samples, we will require NRLF solutions that is much more computationally expensive.\nAdditionally, we can use AL-VDK GP to generate the empirical CDF of violation . This CDF can provide information on the probability of violation (PoV). formalized below.\nGiven dispatch decision set , and node voltage limit , PoV is defined as\nFor AL-VDK model\u2019s applicability, it is important to have confidence that the procedure in Fig. 4 ###reference_### will not underestimate the PoV. We present the theorem below which certifies that proposed GP-based predictive model will always overestimate the PoV i.e. provide a conservative estimate of security.\nThe GP-based predictive model overestimates probability of voltage violation i.e.\nwith confidence .\nDetailed proof is given in Appendix A ###reference_###\n\u220e"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Results and Discussion",
"text": "In this section, simulation results demonstrate a) that our Graph-structured kernel (VDK) (8 ###reference_###) outperforms standard GP model [5 ###reference_b5###] in voltage prediction at same sample complexity, b) active learning (AL) with VDK (AL-VDK) efficiently learns voltage functions with acceptable error using fewer samples than VDK and c) AL-VDK voltage predictor exhibits significantly lower time complexity than NRLF for statistical estimation of voltage violation (VE) (Corollary 1.1 ###reference_orem1.Thmcorollary1###), while the proposed model is a conservative estimator of risk of violation (Theorem 2 ###reference_orem2###). The VE value indicate the extent by which voltage goes beyond the lower voltage limits, calculated using the Sigmoid (13 ###reference_###) function. To obtain the node voltages for given uncertainty set and decision set , we use\nPowerModels.jl, for running ACPF. For this, upon sampling a load vector , we update the generator points in the data file using the participation factors i.e. with being sum of load change from base-point. To validate the model, 1000 out-of-sample testing points are used in all cases unless stated otherwise. Three different systems from pglib-library[40 ###reference_b40###] are used for validation (118-Bus, 500-Bus, and 1354-Bus). We use the Square Exponential Kernel, both for standard GP and VDK-GP and model them in Julia. Additionally, we use a DNN, deep neural network, of three layers and 1000 neurons in each layer [10 ###reference_b10###]999We use Flux.jl with standard settings of ADAM solver to optimize hyper-parameters. Batch size is 5 and Epochs are set as 200. for comparison. We use mean absolute error (MAE) to validate the performance of proposed models: for .\n###figure_2###"
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "VII Conclusion",
"text": "This paper introduces a novel graph-structured kernel, the vertex degree kernel (VDK), designed to estimate violation risk using Gaussian Process (GP) with an additive structure inspired by physics. VDK effectively leverages network graph information, providing a promising approach for improved voltage-load relationship modeling. The results obtained demonstrate the superiority of VDK-GP over the full GP, showcasing a substantial reduction in sample complexity. Also, the proposed network-swipe AL algorithm further enhances model performance by employing intelligent data selection, maximizing information gain without reliance on labeled data. Results show that proposed method achieves more than 10 fold reduction in time complexity to perform risk estimation with error while conservatively over-estimating the probability of violation. Both these provide numerical evidence to support the theoretical results presented in the paper. For future directions, the additive structure and active learning capabilities of VDK-GP pave the way for developing Bayesian optimization-based methods tailored for large-scale power systems."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Proofs",
"text": "The expresssion in the theorem can be written as\nUsing Jensen\u2019s inequality, we can upper bound as follows\nUnder the theorem\u2019s assumption, is a Lipschitz continuous function with Lipschitz constant , and is a uniform bound satisfying . Using the assumption on the ML model\u2019s performance, i.e., for and , we have for any ,\nHere, is the indicator function (1 if for ). Let such that , and . Then (16 ###reference_###) can be expressed as\nHere (18 ###reference_###) follows from definition of , and , and (19 ###reference_###) follows from (A-A ###reference_###) and definition of . Next, through direct application of Hoeffding\u2019s inequality on in (15 ###reference_###), we get\nwhere, and is the number of sample evaluations. Using (19 ###reference_###) and (20 ###reference_###) in (15 ###reference_###) proves the theorem.\n\u220e\nas joint probability is always less than individual probabilities.\n.\nHere, by GP confidence bound for any , where decides the confidence level like for 99.7% success or . Thus,\nNow breaking the joint probability as conditional probability\nIf and then . Thus,\nConverting conditional into joint probability\nAgain, as joint probability is always less than individual probabilities\nApplying Hoffding\u2019s inequality to GP-based probability estimation with\nThus,\nwith confidence\n\u220e"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Training Sample Requirement and Risk Estimation Results in 500-Bus System</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.9\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.1.1.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.1.3\">Training Samples</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.1.4\">Time(s)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2\">4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.2.2.3\">67 - 70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.2.2.4\">28 - 30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.2.2.1\">7.8 0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.3.3.2\">181</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.3.3.3\">71 - 76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.3.3.4\">30 - 33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.3.3.1\">8.0 0.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.4.4.2\">268</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.3\">102 - 109</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.4\">53 - 58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.1\">7.9 0.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.5.5.2\">320</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.5.5.3\">72 - 76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.5.5.4\">30 - 33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.5.5.1\">7.8 0.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.6.6.2\">321</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.6.6.3\">70 - 77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.6.6.4\">30 - 33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.6.6.1\">6.8 0.5</td>\n</tr>\n</tbody>\n<tfoot class=\"ltx_tfoot\">\n<tr class=\"ltx_tr\" id=\"S6.T1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"4\" id=\"S6.T1.7.7.1\">Mean evaluation time for 82000 samples is 33.2 sec</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" colspan=\"4\" id=\"S6.T1.8.8.1\">NRLF running time for 20500 samples is 4205 sec</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" colspan=\"4\" id=\"S6.T1.9.9.1\">\n: Difference in VE values using NRLF and AL-VDK</th>\n</tr>\n</tfoot>\n</table>\n</figure>",
"capture": "TABLE I: Training Sample Requirement and Risk Estimation Results in 500-Bus System"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Estimated Value Probability of Violation (POV) for 500-Bus System Voltages with Difference in Estimation using AL-VDK</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T2.11\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.11.12.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S6.T2.11.12.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" colspan=\"5\" id=\"S6.T2.11.12.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.11.12.1.2.1\">Node</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.11.13.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T2.11.13.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.11.13.2.1.1\">PoV</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.11.13.2.2\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.11.13.2.3\">181</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.11.13.2.4\">268</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.11.13.2.5\">320</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.11.13.2.6\">321</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.1\">Estimated<sup class=\"ltx_sup\" id=\"S6.T2.1.1.1.1\">\u2020</sup>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.2\">0.0487</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.3\">0.0115</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.4\">0.6879</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.5\">0.8092</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.6\">0.9108</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T2.2.2.1\">Difference<sup class=\"ltx_sup\" id=\"S6.T2.2.2.1.1\">#</sup>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.3.3.2\">\n0.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.3\">\n0.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.5.5.4\">\n0.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.6.6.5\">\n0.15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.7.7.6\">\n0.10</td>\n</tr>\n</tbody>\n<tfoot class=\"ltx_tfoot\">\n<tr class=\"ltx_tr\" id=\"S6.T2.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" colspan=\"6\" id=\"S6.T2.8.8.1\">\n<sup class=\"ltx_sup\" id=\"S6.T2.8.8.1.1\">\u2020</sup>using 82000 and mean over AL trials;</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"6\" id=\"S6.T2.9.9.1\">\n<sup class=\"ltx_sup\" id=\"S6.T2.9.9.1.1\">#</sup>between 20050 ACPF and 82000 AL-VDK evaluations</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"6\" id=\"S6.T2.11.11.2\">\n<sup class=\"ltx_sup\" id=\"S6.T2.11.11.2.1\">#</sup>Positive Difference Overestimation</th>\n</tr>\n</tfoot>\n</table>\n</figure>",
"capture": "TABLE II: Estimated Value Probability of Violation (POV) for 500-Bus System Voltages with Difference in Estimation using AL-VDK"
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Risk Estimation Results in 1354-Bus System</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T3.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1\">\n<td class=\"ltx_td ltx_border_r\" id=\"S6.T3.1.1.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3\">Samples</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4\">Time(s)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2\">183</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.3\">77 - 81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.4\">159 - 168</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.1\">8.0 0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.3.3.2\">287</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.3.3.3\">77 - 81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.3.3.4\">154 - 164</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.3.3.1\">8.2 0.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"4\" id=\"S6.T3.4.4.1\">Mean evaluation time for 8200 samples is 29.8 sec</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.5.5\">\n<td class=\"ltx_td ltx_align_center\" colspan=\"4\" id=\"S6.T3.5.5.1\">NRLF running time for 2050 samples is 3879 sec</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" colspan=\"4\" id=\"S6.T3.6.6.1\">\n: Difference in VE using NRLF and AL-VDK</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE III: Risk Estimation Results in 1354-Bus System"
}
},
"image_paths": {
"1": {
"figure_path": "2308.07867v2_figure_1.png",
"caption": "Figure 1: Relationship between required number of samples N\ud835\udc41Nitalic_N and error \u03b5\ud835\udf00\\varepsilonitalic_\u03b5 for MCS based VRA. Our proposed approach replaces these N\ud835\udc41Nitalic_N ACPF solutions with 4\u2062N4\ud835\udc414N4 italic_N GP model evaluations to achieve error of same order. The key advantage lies in the fact that GP evaluations are much faster than solving ACPF, due to GP model\u2019s closed-form. For example, it takes \u22484205absent4205\\approx 4205\u2248 4205 sec for obtaining 20050 ACPF solutions, while 82000 GP evaluations takes only \u224833.2absent33.2\\approx 33.2\u2248 33.2 sec, which corresponds to a speedup greater than 120x.",
"url": "http://arxiv.org/html/2308.07867v2/x1.png"
},
"5": {
"figure_path": "2308.07867v2_figure_5.png",
"caption": "Figure 5: Comparison of MAE performance of different methods, demonstrating the efficiency and low sample-complexity of AL-VDK, on three different nodes of 118-Bus system. GP, VDK and AL-VDK results are of 50 trials, and DNN results are of 10 trials. AL-VDK uses significantly fewer samples as dictated by Algorithm 1. AL-VDK training samples for all 50 trials are within 43 \u2013 48, 43 \u2013 47 and 42 \u2013 47 for nodes 21, 44, and 95 respectively.",
"url": "http://arxiv.org/html/2308.07867v2/x2.png"
},
"6": {
"figure_path": "2308.07867v2_figure_6.png",
"caption": "Figure 6: Comparison of MAE performance of different methods, demonstrating the efficiency and low sample-complexity of AL-VDK, on three different nodes of 500-Bus system. AL-VDK uses significantly fewer samples as dictated by Algorithm 1 and details are given in Table I. Our target is to achieve MAE <10\u22123absentsuperscript103<10^{-3}< 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT (0.1% error for a 1000kV system), and ACPF samples are generated till that threshold is reached.",
"url": "http://arxiv.org/html/2308.07867v2/x3.png"
},
"7": {
"figure_path": "2308.07867v2_figure_7.png",
"caption": "Figure 7: Distributions of violation h\u2062(\ud835\udc2c)\u210e\ud835\udc2ch(\\mathbf{s})italic_h ( bold_s ) obtained using 20050 NRLF solutions and 82000 AL-VDK evaluations after a using a random training instance. The right hand side shift of blue distributions shows that proposed AL-VDK always provides an overestimation of risk, as proven in Theorem 2.",
"url": "http://arxiv.org/html/2308.07867v2/x4.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2308.07867v2"
}
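
For orientation, the file above (2308.07867v2) builds its Gaussian process on an additive, neighborhood-structured kernel. A minimal Python sketch of such an additive kernel follows; the toy 3-bus neighborhoods, lengthscales, and load vectors are illustrative assumptions, not values from the paper or the dataset:

import numpy as np

# One low-dimensional sub-kernel per node neighborhood; the full kernel is their sum.
neighborhoods = [np.array([0, 1]), np.array([0, 1, 2]), np.array([1, 2])]

def rbf(xa, xb, lengthscale=1.0, variance=1.0):
    # squared-exponential kernel on the selected load coordinates
    return variance * np.exp(-0.5 * np.sum((xa - xb) ** 2) / lengthscale ** 2)

def additive_kernel(sa, sb):
    # sum of RBF sub-kernels, each restricted to one neighborhood's loads
    return sum(rbf(sa[nb], sb[nb]) for nb in neighborhoods)

s1 = np.array([1.0, 0.5, 0.2])  # toy load vectors
s2 = np.array([0.9, 0.6, 0.1])
print(additive_kernel(s1, s2))  # scalar covariance between the two load vectors
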
20240721/2308.09718v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2309.13289v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2311.07172v2.json
ADDED
@@ -0,0 +1,410 @@
{
"title": "VerityMath: Advancing Mathematical Reasoning by Self-Verification Through Unit Consistency",
"abstract": "Large Language Models (LLMs), combined with program-based solving techniques, are increasingly demonstrating proficiency in mathematical reasoning. For example, closed-source models such as OpenAI GPT-4 and Claude show excellent results in solving math word problems. However, progress in math word problem-solving for open-source LLMs is limited, and the challenges these models face are not well-studied. In this paper, we study the performance of strong open-source LLMs, including Llama 2 (7B), Code Llama (7B), and Mistral (7B) on math word problems using program-based solving techniques. Specifically, we analyze the outputs of these models when applied to math word problems and identify a category of problems that pose a significant challenge, particularly those involving quantities spanning multiple units. To address this issue, we propose a systematic approach by defining the units for each quantity and ensuring the consistency of these units during mathematical operations. We developed Unit Consistency Programs (UCPs), an annotated dataset of math word problems, each paired with programs containing unit specifications and unit verification routines. We fine-tuned Llama 2 (7B), Code Llama (7B), and Mistral (7B) models with UCPs to produce their VerityMath variants. Our findings indicate that our approach, which incorporates unit consistency, currently slightly underperforms compared to an approach that does not. To understand the reasons behind this, we conduct an in-depth error analysis and suggest options for future improvements. Our code and dataset are available at https://github.com/vernontoh/VerityMath.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "The ability to reason during the process of thinking and decision-making is a fundamental aspect of human intelligence. Replicating this ability in machines has been an objective in the field of Natural Language Processing. Large language models (LLMs) (OpenAI, 2023 ###reference_b19###; Anil et al., 2023 ###reference_b1###) mark significant progress toward this goal, demonstrating remarkable proficiency across a range of tasks, including mathematical reasoning (Zhou et al., 2023 ###reference_b32###; Zhao et al., 2023 ###reference_b30###; Zheng et al., 2023 ###reference_b31###). Specifically, methods like Program Aided Language Model (PAL) (Gao et al., 2023 ###reference_b9###) as well as Program of Thoughts (PoT) (Chen et al., 2023 ###reference_b4###) have demonstrated improvements in LLMs\u2019 ability to solve complex mathematical problems. These methodologies empower LLMs to formulate programs as intermediate reasoning steps and delegate the execution of these steps to a Python interpreter, thereby enhancing computational accuracy.\nHowever, open-source LLMs like those referenced in (Touvron et al., 2023 ###reference_b23###; Rozi\u00e8re et al., 2023 ###reference_b21###; Jiang et al., 2023 ###reference_b13###) demonstrate limited success in math reasoning tasks. For example, after fine-tuning on the GSM8K-PAL dataset provided by Jie & Lu (2023 ###reference_b14###), Mistral (7B) achieves just 70.4% accuracy on GSM8K (Cobbe et al., 2021 ###reference_b5###) (Ref Table 4 ###reference_###). Our analysis of the fine-tuned Llama 2 (7B), Code Llama (7B) and Mistral (7B) reveals challenges in solving math word problems with multi-unit quantities. These issues are more pronounced in multi-step reasoning, where early errors can lead to incorrect final solutions. Our study thus identifies specific challenges the model faces.\n###figure_1### We propose a methodological framework to enhance the reasoning capabilities of LLMs by introducing a unit system for quantities and enforcing unit consistency. Ensuring unit consistency is crucial for accurate solutions in the context of mathematical word problems. To achieve this, we introduce Unit Consistency Programs (UCPs) (Figure 1 ###reference_###) designed to enhance LLMs\u2019 reasoning abilities by enabling them to self-verify unit consistency within equations. UCPs consist of Counter objects responsible for tracking variable units and assert statements generated following each equation involving an operation. These assert statements verify the consistency of units within the equation and can trigger an assert error when inconsistent units are detected.\nWe have developed a dataset that pairs math word problems with unit consistency programs containing unit specifications and verification routines. Our preliminary study presents the outcomes of fine-tuning Llama 2 (7B), Code Llama (7B) , and Mistral (7B) using these programs. Although our approach, which incorporates unit consistency, currently slightly underperforms compared to a non-unit-consistent approach, we conducted an in-depth error analysis to understand the reasons behind this discrepancy and proposed several options for future improvements."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Motivating Analysis",
"text": "Recent studies have utilized the concept of program-based prompting to generate pseudo-gold programs as an intermediary step for training smaller models (Jie & Lu, 2023 ###reference_b14###; Zhu et al., 2023 ###reference_b33###). Notably, this approach has shown promising outcomes, although these results still fall short of the performance achieved by larger models like GPT-4 (OpenAI, 2023 ###reference_b19###). To better comprehend the gaps in the mathematical reasoning abilities of smaller models, we fine-tuned Llama 2 (7B) (Touvron et al., 2023 ###reference_b23###), Code Llama (7B) (Rozi\u00e8re et al., 2023 ###reference_b21###), and Mistral (7B) (Jiang et al., 2023 ###reference_b13###) using the GSM8K-PAL dataset provided by Jie & Lu (2023 ###reference_b14###), and conducted a comprehensive analysis of the fine-tuned models. The GSM8K-PAL dataset contains approximately 6.8k word problems paired with their PAL annotations in the training dataset as shown in Table 1 ###reference_###.\nAfter fine-tuning these models on GSM8K-PAL, we observed that they struggle with math word problems involving multiple different units. As illustrated in Figure 1 ###reference_### (top), the example illustrates a unit mismatch in the model trained on the PAL-based approach. Specifically, the subtraction operation between variables and discount_amount is incorrect. The units are incompatible: the former is in dollar, and the latter is in .\nTo support our observation that the model struggles with problems containing multiple units, we employed GPT-3.5 Turbo111GPT-3.5 Turbo annotations were obtained in September 2023. to categorize the examples from both the train and test splits into two distinct groups. The first group comprises of questions involving a single unit, while the second group comprises of questions with multiple units. This classification was achieved using few-shot prompting, with GPT-3.5 Turbo serving as the backend engine. The specifics of the few-shot prompt utilized are detailed in Section A.2 ###reference_###, and the distribution of these categories is presented in Table 2 ###reference_###. Our analysis reveals that approximately 40% of the problems in both training and test splits involve multiple units.\nTo further evaluate the accuracy of GPT-3.5 Turbo in identifying questions with multiple units, we conducted a small-scale human assessment, detailed in Table 3 ###reference_###. The first author manually annotated 100 randomly selected test examples from GSM8K and compared the annotations with the classifications made by GPT-3.5 Turbo. The results demonstrated a precision of 80.4%, indicating that GPT-3.5 Turbo generally excels in predicting questions involving multiple units. We have extended this analysis to the SVAMP (Patel et al., 2021 ###reference_b20###), as presented in Section A.3 ###reference_###, to demonstrate that this phenomenon is not exclusive to GSM8K.\nBased on the test dataset split we collected, we divided the accuracy of the fine-tuned models into two categories: one for questions with a single unit and another for questions with multiple units. This categorization is shown in Table 4 ###reference_###. A detailed examination of Table 4 ###reference_### reveals that our observations remained consistent across all three fine-tuned models, indicating superior performance on single-unit problems compared to those with multiple units. 
Motivated by these findings, we developed Unit Consistency Programs (UCPs) aimed at addressing the limitations inherent in PAL-based solutions.\nDataset\n#Train\n#Program\n#Valid\n#Test\n\nGSM8K-PAL\n07,473\n6,877 (92.0%)\n-\n1,319\n\nUCPs\n07,473\n4,480 (59.9%)\n-\n1,319\n###table_1###"
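To make the failure mode above concrete, here is a short PAL-style program of the kind Figure 1 (top) depicts. It is an illustrative reconstruction, not an actual model output: the variable names follow Figure 1 and the numbers are invented. Python executes the mismatched subtraction without complaint, so the error surfaces only as a wrong final answer.

```python
# Illustrative PAL-style solution with a unit mismatch (assumed example).
shirts_count = 10
cost_per_shirt = 20                                          # dollars per shirt
total_cost_before_discount = shirts_count * cost_per_shirt   # 200 dollars
discount = 10                                                # a percentage, not dollars
# Bug: subtracting a percentage from a dollar amount. Python computes
# 200 - 10 = 190, but the intended answer is 200 * (1 - 10 / 100) = 180.
final_cost = total_cost_before_discount - discount
```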
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Methodology",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Unit Consistency Programs",
"text": "Unit consistency checks are essential safeguards, helping to identify and prevent errors from inconsistent units in mathematical equations. In contrast to PAL/PoT approaches that directly generate programs to solve math word problems, our method enhances these programs by integrating specialized Counter objects. These objects are responsible for tracking variable units and ensuring the correct handling of operations with differing units. Additionally, we incorporate assert statements after each equation, as illustrated in Figure 1 ###reference_### (bottom). These assert statements verify unit consistency within equations, triggering an error if unit mismatches are detected.\nConsider the example in Figure 1 ###reference_### (bottom), illustrating a multiplication operation between shirts_count (measured in \u2018shirts\u2019) and cost_per_shirt (measured in \u2018dollars per shirt\u2019). In this operation, the units of \u2018shirts\u2019 from shirts_count and \u2018per shirt\u2019 from cost_per_shirt naturally cancel each other out, resulting in a unit of \u2018dollars\u2019. An assert statement is used to verify this expected cancellation of units. In our notation, the exponent of a unit in the numerator is represented as +1, and in the denominator as -1. Therefore, in this multiplication, the positive exponent of \u2018shirts\u2019 in shirts_count cancels with the negative exponent of \u2018per shirt\u2019 in cost_per_shirt, aligning the product\u2019s right-hand side (RHS) with the expected left-hand side (LHS) unit of total_cost_before_discount, confirming it is in \u2018dollars\u2019. The example also illustrates a unitless quantity, specifically a percentage. In this case, there won\u2019t be any units specified in the Counter initialization. Our methodology requires the development of a specialized Counter class, details of which are elaborated in the Section A.4.2 ###reference_.SSS2###."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Training Data Annotations",
"text": "Adopting the methodology used in PAL/PoT, we sampled programs for each math word problem, adding them to our training data when their execution yielded the correct answer. For each math word problem in the training dataset , we performed greedy decoding at temperature to synthesize program . Upon executing the program , if the predicted answer matched the ground-truth answer and consists of Counter objects and assert statements, we included the tuple in our new training dataset . Any math word problem for which a matching program couldn\u2019t be obtained was discarded."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Fine-tuning Small Models",
"text": "We fine-tuned smaller models with our annotated dataset through standard causal language modeling techniques. The objective is to generate a corresponding Python program for a given math word problem . After fine-tuning, the model was used to generate Python programs, which were then executed using a Python interpreter to obtain the final answer. We employed strong open-source LLMs such as Llama 2 (7B), Code Llama (7B), and Mistral (7B) as our models to fine-tune."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Dataset",
"text": "We conducted our experiments primarily on GSM8K, employing few-shot prompting with GPT-4 for the first 1,000 examples222GPT-4 annotations obtained in September 2023. and GPT-4 Turbo for the remaining 6,473 examples333GPT-4 Turbo annotations obtained in December 2023. in the GSM8K train dataset. We used six manually crafted Unit Consistency Programs (UCPs) samples, as detailed in Section A.1 ###reference_###. We successfully annotated 59.9% of the GSM8K train dataset, creating our annotated UCPs dataset, . Table 1 ###reference_### presents the statistics of our UCPs dataset."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Baseline",
"text": "Our baseline models consist of different models such as Llama 2 (7B), Code Llama (7B), and Mistral (7B) fine-tuned on GSM8K-PAL. We use this as a direct baseline to our method as it provides a more effective comparison between our method UCPs and existing methods like PAL/POT since our UCPs serve as extensions to typical Python programs used for solving mathematical problems, as demonstrated in PAL/POT.\nModel\nSingle\nMultiple\nOverall\n\nClosed-Source Models\n\nGPT-4\n-\n-\n92.0\n\nGPT-3.5-Turbo\n-\n-\n80.8\n\nOpen-Source Models 7B\n\nLlama-2 (PAL)\u2020\n58.5 3.1\n51.2 4.2\n55.4\n\nCode-Llama (PAL)\u2020\n65.6 2.5\n59.8 3.3\n63.1\n\nMistral (PAL)\u2020\n72.2 1.8\n68.1 2.3\n70.4\n\nVerityMath-Llama-2\n51.9 5.7\n38.7 7.5\n46.2\n\nVerityMath-Code-Llama\n58.4 4.2\n48.6 5.6\n54.2\n\nVerityMath-Mistral\n71.5 3.3\n63.7 4.5\n68.2"
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Implementation",
"text": "We conducted fine-tuning experiments on GSM8K-PAL and UCPs, details of both datasets can be found in Table 1 ###reference_###. In our fine-tuning experiments, we utilized the QLoRA technique (Dettmers et al., 2023 ###reference_b6###) for enabling efficient fine-tuning. All QLoRA hyper-parameters were set as presented in Dettmers et al. (2023 ###reference_b6###). In all our experiments we use NF4 with double quantization and bf16 computation datatype. We set LoRA , and add LoRA modules on all linear layers of the base model. We also use max grad norm of 0.3 and LoRA dropout of 0.1. We use AdamW optimizer and set the learning rate to , with a batch size of and a maximum context length of . We trained the model for epochs using A100 40 GB GPUs which took roughly 14 hours and evaluated it on the test dataset."
},
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "Main Results",
"text": "Our model, VerityMath-Mistral (7B), fine-tuned on UCPs achieved an overall accuracy of on the GSM8K test dataset. Specifically, it attained accuracy for problems involving a single unit and accuracy for those with multiple units, as detailed in Table 4 ###reference_###.\nWhen compared to the Mistral (7B) (PAL) baseline, VerityMath-Mistral (7B) exhibits a slight overall accuracy decrease of 2.2%. Meanwhile, VerityMath-Code Llama (7B) and VerityMath-Llama 2 (7B) experienced more significant declines in their overall accuracy, approximately 9% lower than their respective PAL counterparts.\nSpecifically, VerityMath-Code-Llama achieved 54.2% overall accuracy, with 58.4% for single unit problems and 48.6% for multiple units, while VerityMath-Llama-2 achieved an overall accuracy of 46.2%, with 51.9% for single unit and 38.7% for multiple units.\n###figure_2###"
},
{
"section_id": "4.5",
"parent_section_id": "4",
"section_name": "Analysis",
"text": "In this section, we conducted an in-depth analysis of the potential causes for the decline in overall accuracy in the GSM8K test dataset. We focused on VerityMath-Mistral (7B) for all of our analysis.\nIn an error analysis of VerityMath-Mistral (7B) outputs from the test dataset, we observed some challenges that led to decreased performance, specifically, the correctness of Counter and assert statements.\nWe reran the whole evaluation but this time, when we were met with a program that raised an assertion error, we removed the Counter and assert statements and executed the programs again.\nIf the program compiles and produces the correct answer after this modification, it indicates that the program was originally incorrect due to incorrect Counter or assert statements.\nReferring to Figure 2 ###reference_###, we observed a notable percentage of output programs that contained incorrect Counter or assert statements in VerityMath-Mistral (7B) outputs. Specifically, of the problems with single units and of the problems with multiple units have incorrect Counter and assert which caused correct programs that would have resulted in the correct answer to have a false assertion error resulting in the wrong answer.\nExamples of such cases with incorrect Counter and assert are shown in Section A.5.2 ###reference_.SSS2###.\n###figure_3### We further conducted a detailed analysis of code solutions categorized by the number of assert statements, as shown in Figure 3 ###reference_###.\nEach bar represents the total number of code solution that consists of a specific number of assert statements.\nThe green segments of the bars indicate the count of code solutions that resulted in the correct answer, while the red segments represent those that resulted in an incorrect answer. The percentage of correct answers is annotated on each bar for clarity.\nIt is evident from the plot that the percentage of correct answers generally decreases as the number of assert statements increases, from code solutions with 2 to 4 assert statements having approximately 70% accuracy to code solutions with 5, 6, and 7 assert statements having 55.7%, 62.5%, and 40.9% respectively.\nHighlighting a trend where more complex code solutions with more assert statements are more likely to result in incorrect answers.\nThis aligns with the earlier observations regarding the correctness of assert statements, and suggests that with more assert statements in the code solution, it is more prone to having errors due to the incorrect assert statements which would then result in a wrong answer.\n###figure_4### Due to the difference in the number of training examples between GSM8k-PAL and UCPs of 2397 as shown in Table 1 ###reference_###. It is crucial to also understand the implications of the number of training examples with respect to the performance. We fine-tuned Mistral (7B) on both GSM8k-PAL and UCPs with an interval of 1000 training examples and showed the results in Figure 4 ###reference_###. The performance of Mistral (7B) when fine-tuned on GSM8k-PAL or UCPs demonstrates a clear trend of improvement with the increase in the number of training examples.\nFor GSM8k-PAL, the test accuracy starts at 63.8% with 1,000 training examples and steadily increases to 70.4% with 6,877 examples. On the other hand, The UCPs exhibit a more pronounced improvement curve, starting at 56.0% accuracy with 1,000 training examples, the performance increases significantly to 68.2% with 4,480 examples. 
This rate of improvement indicates that with limited examples, the concept of UCPs is harder to grasp for Mistral (7B) as compared to PAL.\nThe difference in performance gains suggests that UCPs might have untapped potential that could be realized with an increased number of training examples and it implies that with sufficient training examples, UCPs could potentially surpass PAL in performance.\nIn our in-depth anaysis, we identified a notable bottleneck in our current method, which is the correctness of Counter and assert statements. This issue led to a slight decrease in performance. Our method, UCPs, is a relatively more complex method for existing 7B LLMs to learn, but with a significant increase in dataset annotations, it is highly possible that our method will outshine the existing PAL method. Another approach could involve data augmentation using synthetic examples (Wu et al., 2021 ###reference_b27###).\nExamples showcasing the efficacy of UCPs are available in Section A.5.1 ###reference_.SSS1###."
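The sketch below reconstructs the error-attribution procedure referenced above. The helper execute (running a program string and returning its answer) is hypothetical, and the line-based stripping of Counter and assert statements is our assumption, since the paper does not specify the removal mechanics.

```python
# Sketch of the rerun: strip unit bookkeeping from a failing program and
# execute it again to attribute the failure to a wrong Counter/assert.
def strip_unit_checks(program: str) -> str:
    kept = []
    for line in program.splitlines():
        s = line.strip()
        if s.startswith("assert ") or "Counter(" in s:
            continue                      # drop unit-verification lines
        kept.append(line)
    return "\n".join(kept)

def classify_failure(program: str, gold_answer, execute) -> str:
    try:
        answer = execute(program)
    except AssertionError:
        # A tripped assertion: does the program succeed without unit checks?
        try:
            if execute(strip_unit_checks(program)) == gold_answer:
                return "wrong Counter or assert"
        except Exception:
            return "compilation error"
        return "wrong answer"
    except Exception:
        return "compilation error"
    return "correct answer" if answer == gold_answer else "wrong answer"
```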
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Related Work",
"text": "Our research builds upon the Program of Thoughts (PoT) approach (Chen et al., 2023 ###reference_b4###) and the Program Aided Language Model (PAL) (Gao et al., 2023 ###reference_b9###) methodologies, which have shown effectiveness in solving mathematical problems. These approaches have outperformed techniques like the Chain-of-Thought (CoT) (Wei et al., 2022 ###reference_b25###), which can struggle with computational inaccuracies (Lewkowycz et al., 2022 ###reference_b15###). We extend their work by focusing on the use of programs for solving math word problems and the concept of self-verification to improve LLMs\u2019 reasoning capabilities.\nThe advancement of GPT models (Brown et al., 2020 ###reference_b2###) has inspired various studies (Ho et al., 2023 ###reference_b12###; Fu et al., 2023 ###reference_b8###; Magister et al., 2023 ###reference_b17###; Shridhar et al., 2023 ###reference_b22###) on creating synthetic datasets for fine-tuning smaller models (Hinton et al., 2015 ###reference_b11###). Notably, Zhu et al. (2023 ###reference_b33###) used PAL annotations in this context, (Magister et al., 2023 ###reference_b17###; Ho et al., 2023 ###reference_b12###; Yu et al., 2023 ###reference_b28###) employed CoT annotations, and Yue et al. (2023 ###reference_b29###) used a hybrid of CoT and PoT rationales.\nIn mathematical problem-solving, ensuring solution validity is crucial due to hallucinations in LLMs (Bubeck et al., 2023 ###reference_b3###) and challenges in executing multiplications (Dziri et al., 2023 ###reference_b7###). Prior research has focused on training additional verifiers for answer accuracy (Cobbe et al., 2021 ###reference_b5###), providing feedback for each intermediate reasoning step (Lightman et al., 2023 ###reference_b16###), and integrating tools to agents(Gou et al., 2024 ###reference_b10###). However, Weng et al. (2023 ###reference_b26###) and Miao et al. (2023 ###reference_b18###) have shown potential for LLMs to self-verify solutions. Our approach builds on these insights, incorporating programs for solving math word problems and leveraging self-verification to enhance LLM reasoning."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusion and Future Work",
"text": "In this study, we analyzed open-source Large Language Models (LLMs) and pinpointed their struggle with math problems involving multiple units, highlighting a key improvement area. We introduced Unit Consistency Programs (UCPs) as a novel method to address LLMs\u2019 reasoning and verification abilities, especially in complex math problems. We identified some limitations in our current approach. Future work will focus on advancing unit check methodologies in UCPs to address these limitations."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Appendix",
"text": "###table_2### The SVAMP dataset comprises a total of 1000 examples, with 700 allocated to the train dataset and 300 to the test dataset. The dataset encompasses four problem types: subtraction, addition, common-division, and multiplication. However, our analysis focuses solely on multiplication and common-division, as problems involving only addition or subtraction are defined to only consist of a single unit. We can observe from 5 ###reference_### that 46.9% and 58% of the problems are classified as multiple units in the train and test dataset respectively."
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<p class=\"ltx_p ltx_align_center\" id=\"S2.T1.2\"><span class=\"ltx_text\" id=\"S2.T1.2.1\" style=\"font-size:80%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.2.1.1\" style=\"width:232.1pt;height:54pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S2.T1.2.1.1.1\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.2.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.2.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S2.T1.2.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1.1.1.1.1.1.1\">Dataset</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.2.1.1.1.1.1.1.2\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1.1.1.2.1\">#<span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1.1.1.1.1.2.1.1\">Train</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.2.1.1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1.1.1.1.1.3.1\">#Program</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.2.1.1.1.1.1.1.4\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1.1.1.4.1\">#<span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1.1.1.1.1.4.1.1\">Valid</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.2.1.1.1.1.1.1.5\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1.1.1.5.1\">#<span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1.1.1.1.1.5.1.1\">Test</span></span></span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.2.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.1.1.1.1.2.1\">GSM8K-PAL</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.1.1.1.2.2\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1.1.2.2.1\" style=\"color:#FFFFFF;\">0</span>7,473</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.1.1.1.2.3\">6,877 (92.0%)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.1.1.1.2.4\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.1.1.1.2.5\">1,319</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.2.1.1.1.1.1.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.2.1.1.1.1.1.3.1\">UCPs</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.2.1.1.1.1.1.3.2\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1.1.1.1.3.2.1\" style=\"color:#FFFFFF;\">0</span>7,473</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.2.1.1.1.1.1.3.3\">4,480 (59.9%)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.2.1.1.1.1.1.3.4\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.2.1.1.1.1.1.3.5\">1,319</span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S2.T1.2.2\" style=\"font-size:80%;\"></span></p>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.9.1.1\" style=\"font-size:113%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S2.T1.10.2\" style=\"font-size:113%;\">Comparison of dataset size of GSM8K-PAL by <cite class=\"ltx_cite ltx_citemacro_citep\">(Jie & Lu, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07172v2#bib.bib14\" 
title=\"\">2023</a>)</cite> and UCPs.</span></figcaption>\n</figure>",
"capture": "Table 1: Comparison of dataset size of GSM8K-PAL by (Jie & Lu, 2023) and UCPs."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S2.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T2.2\">\n<tr class=\"ltx_tr\" id=\"S2.T2.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T2.2.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.1.1.1\" style=\"font-size:80%;\">Train Dataset</span><span class=\"ltx_text\" id=\"S2.T2.2.1.1.2\" style=\"font-size:80%;\"> (7473)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T2.2.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.1.2.1\" style=\"font-size:80%;\">Test Dataset</span><span class=\"ltx_text\" id=\"S2.T2.2.1.2.2\" style=\"font-size:80%;\"> (1319)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.2.1.1\" style=\"font-size:80%;\">Single</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.2.2.1\" style=\"font-size:80%;\">Multiple</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.2.3.1\" style=\"font-size:80%;\">Single</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.2.2.4.1\" style=\"font-size:80%;\">Multiple</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.3.1\"><span class=\"ltx_text\" id=\"S2.T2.2.3.1.1\" style=\"font-size:80%;\">4479</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.3.2\"><span class=\"ltx_text\" id=\"S2.T2.2.3.2.1\" style=\"font-size:80%;\">2994</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.3.3\"><span class=\"ltx_text\" id=\"S2.T2.2.3.3.1\" style=\"font-size:80%;\">755</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.2.3.4\"><span class=\"ltx_text\" id=\"S2.T2.2.3.4.1\" style=\"font-size:80%;\">564</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.2.4.1\"><span class=\"ltx_text\" id=\"S2.T2.2.4.1.1\" style=\"font-size:80%;\">(59.9%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.2.4.2\"><span class=\"ltx_text\" id=\"S2.T2.2.4.2.1\" style=\"font-size:80%;\">(40.1%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.2.4.3\"><span class=\"ltx_text\" id=\"S2.T2.2.4.3.1\" style=\"font-size:80%;\">(57.2%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.2.4.4\"><span class=\"ltx_text\" id=\"S2.T2.2.4.4.1\" style=\"font-size:80%;\">(42.8%)</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T2.5.1.1\" style=\"font-size:113%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S2.T2.6.2\" style=\"font-size:113%;\">Classification of GSM8K into two categories: single unit and multiple units.</span></figcaption>\n</figure>",
"capture": "Table 2: Classification of GSM8K into two categories: single unit and multiple units."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.2\" style=\"width:208.1pt;height:81.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-11.3pt,4.4pt) scale(0.902407806322274,0.902407806322274) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.2.1\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.1.2.1\" style=\"font-size:80%;background-color:#FFFFFF;\">Positive Predicted</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.1.3.1\" style=\"font-size:80%;background-color:#FFFFFF;\">Negative Predicted</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.2.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.2.1.1\" style=\"font-size:80%;\">Actual Positive</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.1.2.2\"><span class=\"ltx_text\" id=\"S3.T3.2.1.2.2.1\" style=\"font-size:80%;\">37</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.1.2.3\"><span class=\"ltx_text\" id=\"S3.T3.2.1.2.3.1\" style=\"font-size:80%;\">16</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.2.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.3.1.1\" style=\"font-size:80%;\">Actual Negative</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.1.3.2\"><span class=\"ltx_text\" id=\"S3.T3.2.1.3.2.1\" style=\"font-size:80%;\">9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.1.3.3\"><span class=\"ltx_text\" id=\"S3.T3.2.1.3.3.1\" style=\"font-size:80%;\">38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.1.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.4.1.1\" style=\"font-size:80%;\">Precision</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.1.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.4.2.1\" style=\"font-size:80%;\">Recall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.1.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.4.3.1\" style=\"font-size:80%;\">Accuracy</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.1.5.1\"><span class=\"ltx_text\" id=\"S3.T3.2.1.5.1.1\" style=\"font-size:80%;\">80.4%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.1.5.2\"><span class=\"ltx_text\" id=\"S3.T3.2.1.5.2.1\" style=\"font-size:80%;\">69.8%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.1.5.3\"><span class=\"ltx_text\" id=\"S3.T3.2.1.5.3.1\" style=\"font-size:80%;\">75.0%</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T3.5.1.1\" style=\"font-size:113%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S3.T3.6.2\" style=\"font-size:113%;\">Small human evaluation compared on GPT-3.5 Turbo classification on 100 randomly sampled test 
examples from GSM8K. Human annotations were done by the first author.</span></figcaption>\n</figure>",
"capture": "Table 3: Small human evaluation compared on GPT-3.5 Turbo classification on 100 randomly sampled test examples from GSM8K. Human annotations were done by the first author."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T4.15\"><span class=\"ltx_text\" id=\"S4.T4.15.15\" style=\"font-size:80%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T4.15.15.15\" style=\"width:229.5pt;height:198pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S4.T4.15.15.15.15\"><span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.15.15.15.15.15.15\">\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.16\">\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T4.15.15.15.15.15.15.16.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.15.15.15.15.15.15.16.1.1\">Model</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.15.15.15.15.15.15.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.15.15.15.15.15.15.16.2.1\">Single</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.15.15.15.15.15.15.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.15.15.15.15.15.15.16.3.1\">Multiple</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.15.15.15.15.15.15.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.15.15.15.15.15.15.16.4.1\">Overall</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.17\">\n<span class=\"ltx_td ltx_align_center ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S4.T4.15.15.15.15.15.15.17.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.15.15.15.15.15.15.17.1.1\">Closed-Source Models</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.18\">\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.15.15.15.15.15.15.18.1\">GPT-4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.15.15.15.15.15.15.18.2\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.15.15.15.15.15.15.18.3\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.15.15.15.15.15.15.18.4\">92.0</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.19\">\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.15.15.15.15.15.15.19.1\">GPT-3.5-Turbo</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.15.15.15.15.15.15.19.2\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.15.15.15.15.15.15.19.3\">-</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.15.15.15.15.15.15.19.4\">80.8</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.20\">\n<span class=\"ltx_td ltx_align_center ltx_border_t ltx_colspan ltx_colspan_4\" id=\"S4.T4.15.15.15.15.15.15.20.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.15.15.15.15.15.15.20.1.1\">Open-Source Models 7B</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.3.3.3.3.3.3.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1.1.1.1.1.1\">Llama-2 (PAL)<sup class=\"ltx_sup\" id=\"S4.T4.1.1.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.1.1.1.1.1.1.1.1.1.1\">\u2020</span></sup></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.2.2.2.2.2.2\">58.5 <span class=\"ltx_text\" id=\"S4.T4.2.2.2.2.2.2.2.2.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.2.2.2.2.2.2.2.2.1.pic1\" 
overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.2.2.2.2.2.2.2.2.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>3.1</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.3.3.3.3.3.3.3\">51.2 <span class=\"ltx_text\" id=\"S4.T4.3.3.3.3.3.3.3.3.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.3.3.3.3.3.3.3.3.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.3.3.3.3.3.3.3.3.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>4.2</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.3.3.3.3.3.3.4\">55.4</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.6.6.6.6.6.6.6\">\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.4.4.4.4.4.4.4.1\">Code-Llama (PAL)<sup class=\"ltx_sup\" id=\"S4.T4.4.4.4.4.4.4.4.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.4.4.4.4.4.4.4.1.1.1\">\u2020</span></sup></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.5.5.5.5.5.5.5.2\">65.6 <span class=\"ltx_text\" id=\"S4.T4.5.5.5.5.5.5.5.2.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.5.5.5.5.5.5.5.2.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 
C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.5.5.5.5.5.5.5.2.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>2.5</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.6.6.6.6.6.3\">59.8 <span class=\"ltx_text\" id=\"S4.T4.6.6.6.6.6.6.6.3.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.6.6.6.6.6.6.6.3.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.6.6.6.6.6.6.6.3.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>3.3</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.6.6.6.6.6.4\">63.1</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.9.9.9.9.9.9.9\">\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.7.7.7.7.7.7.7.1\">Mistral (PAL)<sup class=\"ltx_sup\" id=\"S4.T4.7.7.7.7.7.7.7.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.7.7.7.7.7.7.7.1.1.1\">\u2020</span></sup></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.8.8.8.8.8.8.8.2\">72.2 <span class=\"ltx_text\" id=\"S4.T4.8.8.8.8.8.8.8.2.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.8.8.8.8.8.8.8.2.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.8.8.8.8.8.8.8.2.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; 
bottom:0.5pt;\"></span>1.8</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.9.9.9.9.9.9.9.3\">68.1 <span class=\"ltx_text\" id=\"S4.T4.9.9.9.9.9.9.9.3.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.9.9.9.9.9.9.9.3.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.9.9.9.9.9.9.9.3.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>2.3</foreignobject></g></g></svg></span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.9.9.9.9.9.9.9.4\">70.4</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.11.11.11.11.11.11.11\">\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.11.11.11.11.11.11.11.3\"><span class=\"ltx_text\" id=\"S4.T4.11.11.11.11.11.11.11.3.1\" style=\"background-color:#DCDCDC;\">VerityMath-Llama-2</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.10.10.10.10.10.10.10.1\"><span class=\"ltx_text\" id=\"S4.T4.10.10.10.10.10.10.10.1.1\" style=\"background-color:#DCDCDC;\">51.9 <span class=\"ltx_text\" id=\"S4.T4.10.10.10.10.10.10.10.1.1.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.10.10.10.10.10.10.10.1.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.10.10.10.10.10.10.10.1.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>5.7</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.11.11.11.11.11.11.11.2\"><span class=\"ltx_text\" id=\"S4.T4.11.11.11.11.11.11.11.2.1\" style=\"background-color:#DCDCDC;\">38.7 <span class=\"ltx_text\" id=\"S4.T4.11.11.11.11.11.11.11.2.1.1\" style=\"font-size:88%;\"> <svg 
class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.11.11.11.11.11.11.11.2.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.11.11.11.11.11.11.11.2.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>7.5</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.11.11.11.11.11.11.11.4\"><span class=\"ltx_text\" id=\"S4.T4.11.11.11.11.11.11.11.4.1\" style=\"background-color:#DCDCDC;\">46.2</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.13.13.13.13.13.13.13\" style=\"background-color:#DCDCDC;\">\n<span class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.13.13.13.13.13.13.13.3\"><span class=\"ltx_text\" id=\"S4.T4.13.13.13.13.13.13.13.3.1\" style=\"background-color:#DCDCDC;\">VerityMath-Code-Llama</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.12.12.12.12.12.12.12.1\"><span class=\"ltx_text\" id=\"S4.T4.12.12.12.12.12.12.12.1.1\" style=\"background-color:#DCDCDC;\">58.4 <span class=\"ltx_text\" id=\"S4.T4.12.12.12.12.12.12.12.1.1.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.12.12.12.12.12.12.12.1.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.12.12.12.12.12.12.12.1.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>4.2</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.13.13.13.13.13.13.13.2\"><span class=\"ltx_text\" id=\"S4.T4.13.13.13.13.13.13.13.2.1\" style=\"background-color:#DCDCDC;\">48.6 <span class=\"ltx_text\" id=\"S4.T4.13.13.13.13.13.13.13.2.1.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.13.13.13.13.13.13.13.2.1.1.pic1\" overflow=\"visible\" 
version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.13.13.13.13.13.13.13.2.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>5.6</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.13.13.13.13.13.4\"><span class=\"ltx_text\" id=\"S4.T4.13.13.13.13.13.13.13.4.1\" style=\"background-color:#DCDCDC;\">54.2</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.15.15.15.15.15.15.15\" style=\"background-color:#DCDCDC;\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T4.15.15.15.15.15.15.15.3\"><span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15.15.15.3.1\" style=\"background-color:#DCDCDC;\">VerityMath-Mistral</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.14.14.14.14.14.14.14.1\"><span class=\"ltx_text\" id=\"S4.T4.14.14.14.14.14.14.14.1.1\" style=\"background-color:#DCDCDC;\">71.5 <span class=\"ltx_text\" id=\"S4.T4.14.14.14.14.14.14.14.1.1.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.14.14.14.14.14.14.14.1.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.14.14.14.14.14.14.14.1.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>3.3</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.15.15.15.15.15.15.15.2\"><span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15.15.15.2.1\" style=\"background-color:#DCDCDC;\">63.7 <span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15.15.15.2.1.1\" style=\"font-size:88%;\"> <svg class=\"ltx_picture\" height=\"6.97\" id=\"S4.T4.15.15.15.15.15.15.15.2.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"18.6\"><g fill=\"#000000\" stroke=\"#000000\" 
stroke-width=\"0.4pt\" transform=\"translate(0,6.97) matrix(1 0 0 -1 0 0) translate(0,0.08)\"><g fill=\"#FFFFFF\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill=\"#FFE6E6\" fill-opacity=\"1.0\"><path d=\"M 0 4.15 L 0 2.74 C 0 5.04 1.86 6.9 4.15 6.9 L 14.45 6.9 C 16.74 6.9 18.6 5.04 18.6 2.74 L 18.6 4.15 C 18.6 1.86 16.74 0 14.45 0 L 4.15 0 C 1.86 0 0 1.86 0 4.15 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.0\" transform=\"matrix(1.0 0.0 0.0 1.0 1.38 -0.73)\"><foreignobject color=\"#000000\" height=\"6.9\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"15.84\"><span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15.15.15.2.1.1.pic1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1\" style=\"position:relative; bottom:0.5pt;\"></span>4.5</foreignobject></g></g></svg></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.15.15.15.15.15.15.15.4\"><span class=\"ltx_text\" id=\"S4.T4.15.15.15.15.15.15.15.4.1\" style=\"background-color:#DCDCDC;\">68.2</span></span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S4.T4.15.16\" style=\"font-size:80%;\"></span></p>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.30.2.1\" style=\"font-size:113%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S4.T4.17.1\" style=\"font-size:113%;\">Comparison of test accuracy on GSM8K of different 7B open-source models fine-tuned on PAL and UCP. The <span class=\"ltx_text\" id=\"S4.T4.17.1.1\" style=\"background-color:#E6FFE6;\">green</span> and <span class=\"ltx_text\" id=\"S4.T4.17.1.2\" style=\"background-color:#FFE6E6;\">red</span> boxes represent the increase and decrease in accuracy compared to its overall score. <sup class=\"ltx_sup\" id=\"S4.T4.17.1.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T4.17.1.3.1\">\u2020</span></sup>We fine-tune the model using GSM8K-PAL by <cite class=\"ltx_cite ltx_citemacro_citet\">Jie & Lu (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.07172v2#bib.bib14\" title=\"\">2023</a>)</cite>.</span></figcaption>\n</figure>",
"capture": "Table 4: Comparison of test accuracy on GSM8K of different 7B open-source models fine-tuned on PAL and UCP. The green and red boxes represent the increase and decrease in accuracy compared to its overall score. \u2020We fine-tune the model using GSM8K-PAL by Jie & Lu (2023)."
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"A1.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T5.2\">\n<tr class=\"ltx_tr\" id=\"A1.T5.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A1.T5.2.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.1.1.1\" style=\"font-size:80%;\">Train Dataset</span><span class=\"ltx_text\" id=\"A1.T5.2.1.1.2\" style=\"font-size:80%;\"> (192)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A1.T5.2.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.1.2.1\" style=\"font-size:80%;\">Test Dataset</span><span class=\"ltx_text\" id=\"A1.T5.2.1.2.2\" style=\"font-size:80%;\"> (81)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.2.1.1\" style=\"font-size:80%;\">Single</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.2.2.1\" style=\"font-size:80%;\">Multiple</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.2.3.1\" style=\"font-size:80%;\">Single</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T5.2.2.4.1\" style=\"font-size:80%;\">Multiple</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.3.1\"><span class=\"ltx_text\" id=\"A1.T5.2.3.1.1\" style=\"font-size:80%;\">102</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.3.2\"><span class=\"ltx_text\" id=\"A1.T5.2.3.2.1\" style=\"font-size:80%;\">90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.3.3\"><span class=\"ltx_text\" id=\"A1.T5.2.3.3.1\" style=\"font-size:80%;\">34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T5.2.3.4\"><span class=\"ltx_text\" id=\"A1.T5.2.3.4.1\" style=\"font-size:80%;\">47</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T5.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T5.2.4.1\"><span class=\"ltx_text\" id=\"A1.T5.2.4.1.1\" style=\"font-size:80%;\">(53.1%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T5.2.4.2\"><span class=\"ltx_text\" id=\"A1.T5.2.4.2.1\" style=\"font-size:80%;\">(46.9%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T5.2.4.3\"><span class=\"ltx_text\" id=\"A1.T5.2.4.3.1\" style=\"font-size:80%;\">(42.0%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T5.2.4.4\"><span class=\"ltx_text\" id=\"A1.T5.2.4.4.1\" style=\"font-size:80%;\">(58.0%)</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T5.5.1.1\" style=\"font-size:113%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"A1.T5.6.2\" style=\"font-size:113%;\">SVAMP Dataset split. We only considered the portion which has type Multiplication or Common-Division.</span></figcaption>\n</figure>",
|
| 117 |
+
"capture": "Table 5: SVAMP Dataset split. We only considered the portion which has type Multiplication or Common-Division."
|
| 118 |
+
}
|
| 119 |
+
},
|
| 120 |
+
"image_paths": {
|
| 121 |
+
"1": {
|
| 122 |
+
"figure_path": "2311.07172v2_figure_1.png",
|
| 123 |
+
"caption": "Figure 1: Comparison between PAL-based Programs and Unit Consistency Programs. Unit Consistency Programs contain unit specifications using Counter objects and unit verification routines using assert statements.",
|
| 124 |
+
"url": "http://arxiv.org/html/2311.07172v2/extracted/5745445/images/unit_consistency.png"
|
| 125 |
+
},
|
| 126 |
+
"2": {
|
| 127 |
+
"figure_path": "2311.07172v2_figure_2.png",
|
| 128 |
+
"caption": "Figure 2: Error analysis of VerityMath-Mistral (7B). Correct Answer: The program compiles and produces the correct answer. Wrong Answer: The program compiles but produces an incorrect answer. Wrong Counter or assert : After removing Counter and assert statements, the program produces the correct answer. Compilation Error: The program is unable to compile.",
|
| 129 |
+
"url": "http://arxiv.org/html/2311.07172v2/extracted/5745445/images/pie_chart.png"
|
| 130 |
+
},
|
| 131 |
+
"3": {
|
| 132 |
+
"figure_path": "2311.07172v2_figure_3.png",
|
| 133 |
+
"caption": "Figure 3: Performance of VerityMath-Mistral (7B) on the GSM8K test dataset based on the number of assert statements in the code solution. The percentage shown in each bar represents the percentage of correct answers given the number of assert statements in the code solution.",
|
| 134 |
+
"url": "http://arxiv.org/html/2311.07172v2/extracted/5745445/images/number_assert.png"
|
| 135 |
+
},
|
| 136 |
+
"4": {
|
| 137 |
+
"figure_path": "2311.07172v2_figure_4.png",
|
| 138 |
+
"caption": "Figure 4: Performance of VerityMath-Mistral (7B) as we scale the number of training examples of GSM8K-PAL and UCPs. GSM8K-PAL has a total of 6877 annotated training examples whereas UCPs have 4480 annotated training examples.",
|
| 139 |
+
"url": "http://arxiv.org/html/2311.07172v2/extracted/5745445/images/training_examples.png"
|
| 140 |
+
}
|
| 141 |
+
},
|
| 142 |
+
"validation": true,
|
| 143 |
+
"references": [
|
| 144 |
+
{
|
| 145 |
+
"1": {
|
| 146 |
+
"title": "Palm 2 technical report, 2023.",
|
| 147 |
+
"author": "Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., and et al., E. C.",
|
| 148 |
+
"venue": null,
|
| 149 |
+
"url": null
|
| 150 |
+
}
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"2": {
|
| 154 |
+
"title": "Language models are few-shot learners, 2020.",
|
| 155 |
+
"author": "Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D.",
|
| 156 |
+
"venue": null,
|
| 157 |
+
"url": null
|
| 158 |
+
}
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"3": {
|
| 162 |
+
"title": "Sparks of artificial general intelligence: Early experiments with gpt-4, 2023.",
|
| 163 |
+
"author": "Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., and Zhang, Y.",
|
| 164 |
+
"venue": null,
|
| 165 |
+
"url": null
|
| 166 |
+
}
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"4": {
|
| 170 |
+
"title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks.",
|
| 171 |
+
"author": "Chen, W., Ma, X., Wang, X., and Cohen, W. W.",
|
| 172 |
+
"venue": "Transactions on Machine Learning Research, 2023.",
|
| 173 |
+
"url": null
|
| 174 |
+
}
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"5": {
|
| 178 |
+
"title": "Training verifiers to solve math word problems, 2021.",
|
| 179 |
+
"author": "Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J.",
|
| 180 |
+
"venue": null,
|
| 181 |
+
"url": null
|
| 182 |
+
}
|
| 183 |
+
},
|
| 184 |
+
{
|
| 185 |
+
"6": {
|
| 186 |
+
"title": "Qlora: Efficient finetuning of quantized llms, 2023.",
|
| 187 |
+
"author": "Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L.",
|
| 188 |
+
"venue": null,
|
| 189 |
+
"url": null
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"7": {
|
| 194 |
+
"title": "Faith and fate: Limits of transformers on compositionality, 2023.",
|
| 195 |
+
"author": "Dziri, N., Lu, X., Sclar, M., Li, X. L., Jiang, L., Lin, B. Y., West, P., Bhagavatula, C., Bras, R. L., Hwang, J. D., Sanyal, S., Welleck, S., Ren, X., Ettinger, A., Harchaoui, Z., and Choi, Y.",
|
| 196 |
+
"venue": null,
|
| 197 |
+
"url": null
|
| 198 |
+
}
|
| 199 |
+
},
|
| 200 |
+
{
|
| 201 |
+
"8": {
|
| 202 |
+
"title": "Specializing smaller language models towards multi-step reasoning.",
|
| 203 |
+
"author": "Fu, Y., Peng, H., Ou, L., Sabharwal, A., and Khot, T.",
|
| 204 |
+
"venue": "In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 10421\u201310430. PMLR, 23\u201329 Jul 2023.",
|
| 205 |
+
"url": null
|
| 206 |
+
}
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"9": {
|
| 210 |
+
"title": "PAL: Program-aided language models.",
|
| 211 |
+
"author": "Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G.",
|
| 212 |
+
"venue": "In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 10764\u201310799. PMLR, 23\u201329 Jul 2023.",
|
| 213 |
+
"url": null
|
| 214 |
+
}
|
| 215 |
+
},
|
| 216 |
+
{
|
| 217 |
+
"10": {
|
| 218 |
+
"title": "ToRA: A tool-integrated reasoning agent for mathematical problem solving.",
|
| 219 |
+
"author": "Gou, Z., Shao, Z., Gong, Y., yelong shen, Yang, Y., Huang, M., Duan, N., and Chen, W.",
|
| 220 |
+
"venue": "In The Twelfth International Conference on Learning Representations, 2024.",
|
| 221 |
+
"url": null
|
| 222 |
+
}
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"11": {
|
| 226 |
+
"title": "Distilling the knowledge in a neural network.",
|
| 227 |
+
"author": "Hinton, G. E., Vinyals, O., and Dean, J.",
|
| 228 |
+
"venue": "CoRR, abs/1503.02531, 2015.",
|
| 229 |
+
"url": null
|
| 230 |
+
}
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"12": {
|
| 234 |
+
"title": "Large language models are reasoning teachers.",
|
| 235 |
+
"author": "Ho, N., Schmid, L., and Yun, S.-Y.",
|
| 236 |
+
"venue": "In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14852\u201314882, Toronto, Canada, July 2023. Association for Computational Linguistics.",
|
| 237 |
+
"url": null
|
| 238 |
+
}
|
| 239 |
+
},
|
| 240 |
+
{
|
| 241 |
+
"13": {
|
| 242 |
+
"title": "Mistral 7b, 2023.",
|
| 243 |
+
"author": "Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E.",
|
| 244 |
+
"venue": null,
|
| 245 |
+
"url": null
|
| 246 |
+
}
|
| 247 |
+
},
|
| 248 |
+
{
|
| 249 |
+
"14": {
|
| 250 |
+
"title": "Leveraging training data in few-shot prompting for numerical reasoning.",
|
| 251 |
+
"author": "Jie, Z. and Lu, W.",
|
| 252 |
+
"venue": "In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 10518\u201310526, Toronto, Canada, July 2023. Association for Computational Linguistics.",
|
| 253 |
+
"url": null
|
| 254 |
+
}
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"15": {
|
| 258 |
+
"title": "Solving quantitative reasoning problems with language models.",
|
| 259 |
+
"author": "Lewkowycz, A., Andreassen, A. J., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V. V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B., Gur-Ari, G., and Misra, V.",
|
| 260 |
+
"venue": "In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022.",
|
| 261 |
+
"url": null
|
| 262 |
+
}
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"16": {
|
| 266 |
+
"title": "Let\u2019s verify step by step, 2023.",
|
| 267 |
+
"author": "Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K.",
|
| 268 |
+
"venue": null,
|
| 269 |
+
"url": null
|
| 270 |
+
}
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"17": {
|
| 274 |
+
"title": "Teaching small language models to reason.",
|
| 275 |
+
"author": "Magister, L. C., Mallinson, J., Adamek, J., Malmi, E., and Severyn, A.",
|
| 276 |
+
"venue": "In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 1773\u20131781, Toronto, Canada, July 2023. Association for Computational Linguistics.",
|
| 277 |
+
"url": null
|
| 278 |
+
}
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"18": {
|
| 282 |
+
"title": "Selfcheck: Using llms to zero-shot check their own step-by-step reasoning, 2023.",
|
| 283 |
+
"author": "Miao, N., Teh, Y. W., and Rainforth, T.",
|
| 284 |
+
"venue": null,
|
| 285 |
+
"url": null
|
| 286 |
+
}
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"19": {
|
| 290 |
+
"title": "Gpt-4 technical report, 2023.",
|
| 291 |
+
"author": "OpenAI.",
|
| 292 |
+
"venue": null,
|
| 293 |
+
"url": null
|
| 294 |
+
}
|
| 295 |
+
},
|
| 296 |
+
{
|
| 297 |
+
"20": {
|
| 298 |
+
"title": "Are NLP models really able to solve simple math word problems?",
|
| 299 |
+
"author": "Patel, A., Bhattamishra, S., and Goyal, N.",
|
| 300 |
+
"venue": "In Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y. (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2080\u20132094, Online, June 2021. Association for Computational Linguistics.",
|
| 301 |
+
"url": null
|
| 302 |
+
}
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"21": {
|
| 306 |
+
"title": "Code llama: Open foundation models for code, 2023.",
|
| 307 |
+
"author": "Rozi\u00e8re, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Remez, T., Rapin, J., Kozhevnikov, A., Evtimov, I., Bitton, J., Bhatt, M., Ferrer, C. C., Grattafiori, A., Xiong, W., D\u00e9fossez, A., Copet, J., Azhar, F., Touvron, H., Martin, L., Usunier, N., Scialom, T., and Synnaeve, G.",
|
| 308 |
+
"venue": null,
|
| 309 |
+
"url": null
|
| 310 |
+
}
|
| 311 |
+
},
|
| 312 |
+
{
|
| 313 |
+
"22": {
|
| 314 |
+
"title": "Distilling reasoning capabilities into smaller language models.",
|
| 315 |
+
"author": "Shridhar, K., Stolfo, A., and Sachan, M.",
|
| 316 |
+
"venue": "In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 7059\u20137073, Toronto, Canada, July 2023. Association for Computational Linguistics.",
|
| 317 |
+
"url": null
|
| 318 |
+
}
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"23": {
|
| 322 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models, 2023.",
|
| 323 |
+
"author": "Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., and et al., S. B.",
|
| 324 |
+
"venue": null,
|
| 325 |
+
"url": null
|
| 326 |
+
}
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"24": {
|
| 330 |
+
"title": "Self-consistency improves chain of thought reasoning in language models, 2023.",
|
| 331 |
+
"author": "Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D.",
|
| 332 |
+
"venue": null,
|
| 333 |
+
"url": null
|
| 334 |
+
}
|
| 335 |
+
},
|
| 336 |
+
{
|
| 337 |
+
"25": {
|
| 338 |
+
"title": "Chain of thought prompting elicits reasoning in large language models.",
|
| 339 |
+
"author": "Wei, J., Wang, X., Schuurmans, D., Bosma, M., brian ichter, Xia, F., Chi, E. H., Le, Q. V., and Zhou, D.",
|
| 340 |
+
"venue": "In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022.",
|
| 341 |
+
"url": null
|
| 342 |
+
}
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"26": {
|
| 346 |
+
"title": "Large language models are better reasoners with self-verification, 2023.",
|
| 347 |
+
"author": "Weng, Y., Zhu, M., Xia, F., Li, B., He, S., Liu, K., and Zhao, J.",
|
| 348 |
+
"venue": null,
|
| 349 |
+
"url": null
|
| 350 |
+
}
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"27": {
|
| 354 |
+
"title": "Lime: Learning inductive bias for primitives of mathematical reasoning.",
|
| 355 |
+
"author": "Wu, Y., Rabe, M. N., Li, W., Ba, J., Grosse, R. B., and Szegedy, C.",
|
| 356 |
+
"venue": "In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11251\u201311262. PMLR, 18\u201324 Jul 2021.",
|
| 357 |
+
"url": null
|
| 358 |
+
}
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"28": {
|
| 362 |
+
"title": "Metamath: Bootstrap your own mathematical questions for large language models.",
|
| 363 |
+
"author": "Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W.",
|
| 364 |
+
"venue": "arXiv preprint arXiv:2309.12284, 2023.",
|
| 365 |
+
"url": null
|
| 366 |
+
}
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"29": {
|
| 370 |
+
"title": "Mammoth: Building math generalist models through hybrid instruction tuning, 2023.",
|
| 371 |
+
"author": "Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., and Chen, W.",
|
| 372 |
+
"venue": null,
|
| 373 |
+
"url": null
|
| 374 |
+
}
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"30": {
|
| 378 |
+
"title": "Automatic model selection with large language models for reasoning, 2023.",
|
| 379 |
+
"author": "Zhao, X., Xie, Y., Kawaguchi, K., He, J., and Xie, Q.",
|
| 380 |
+
"venue": null,
|
| 381 |
+
"url": null
|
| 382 |
+
}
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"31": {
|
| 386 |
+
"title": "Progressive-hint prompting improves reasoning in large language models, 2023.",
|
| 387 |
+
"author": "Zheng, C., Liu, Z., Xie, E., Li, Z., and Li, Y.",
|
| 388 |
+
"venue": null,
|
| 389 |
+
"url": null
|
| 390 |
+
}
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"32": {
|
| 394 |
+
"title": "Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification, 2023.",
|
| 395 |
+
"author": "Zhou, A., Wang, K., Lu, Z., Shi, W., Luo, S., Qin, Z., Lu, S., Jia, A., Song, L., Zhan, M., and Li, H.",
|
| 396 |
+
"venue": null,
|
| 397 |
+
"url": null
|
| 398 |
+
}
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"33": {
|
| 402 |
+
"title": "Pad: Program-aided distillation specializes large models in reasoning, 2023.",
|
| 403 |
+
"author": "Zhu, X., Qi, B., Zhang, K., Long, X., and Zhou, B.",
|
| 404 |
+
"venue": null,
|
| 405 |
+
"url": null
|
| 406 |
+
}
|
| 407 |
+
}
|
| 408 |
+
],
|
| 409 |
+
"url": "http://arxiv.org/html/2311.07172v2"
|
| 410 |
+
}
|
20240721/2311.08919v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2311.17101v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2312.02175v2.json
ADDED
|
@@ -0,0 +1,113 @@
| 1 |
+
{
|
| 2 |
+
"title": "Wavefront Transformation-based Near-field Channel Prediction for Extremely Large Antenna Array with Mobility",
|
| 3 |
+
"abstract": "This paper addresses the mobility problem in extremely large antenna array (ELAA) communication systems.\nIn order to account for the performance loss caused by the spherical wavefront of ELAA in the mobility scenario, we propose a wavefront transformation-based matrix pencil (WTMP) channel prediction method.\nIn particular, we design a matrix to transform the spherical wavefront into a new wavefront, which is closer to the plane wave.\nWe also design a time-frequency projection matrix to capture the time-varying path delay.\nFurthermore, we adopt the matrix pencil (MP) method to estimate channel parameters.\nOur proposed WTMP method can mitigate the effect of near-field radiation when predicting future channels.\nTheoretical analysis shows that the designed matrix is asymptotically determined by the angles and distance between the base station (BS) antenna array and the scatterers or the user when the number of BS antennas is large enough.\nFor an ELAA communication system in the mobility scenario, we prove that the prediction error converges to zero with the increasing number of BS antennas.\nSimulation results demonstrate that our designed transform matrix efficiently mitigates the near-field effect, and that our proposed WTMP method can overcome the ELAA mobility challenge and approach the performance in stationary setting.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The fifth generation (5G) mobile communication systems attain superior spectral and energy efficiencies by introducing the massive multiple-input multiple-output (MIMO) technology [1 ###reference_b1###].\nCompared to 5G, the future sixth generation (6G) wireless communication systems is expected to achieve high throughput by utilizing some new promising technologies, e.g., extremely large antenna array (ELAA) [2 ###reference_b2###], Terahertz communications [3 ###reference_b3###], and reconfigurable intelligent surface (RIS) [4 ###reference_b4###].\nThe ELAA deploys enormous antennas, significantly increasing the array aperture [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nThe radiative fields of the array contain the near field and far field, and the boundary between the two fields is Rayleigh distance, defined as with denoting array aperture and being wavelength [7 ###reference_b7###].\nThe near-field region expands with the increasing array aperture, bringing in near-field effects that significantly impact the channel conditions.\nThe user equipment (UE) or the scatterers are even located within the near-field region, which makes the conventional plane wavefront assumption invalid [8 ###reference_b8###].\nConsidering the near-field effects of ELAA, the spherical wavefront assumption can model the near-field channel more accurately [9 ###reference_b9###].\nThe advantage of ELAA on the spectral efficiency (SE) relies on the accurate channel state information (CSI).\nRecently, several works have studied ELAA channel estimation.\n[10 ###reference_b10###] proposes a Bayesian channel estimation scheme, and points out that the discrete Fourier transform (DFT) matrix may not perform as expected in the near-field channel because of the unattainable angular-domain sparsity.\nThe work in [11 ###reference_b11###] supposes the ELAA has directionality, and estimates the ELAA channel by orthogonal matching pursuit (OMP) algorithm.\nThe above works aim to estimate the exact ELAA channel. 
However, the spherical wavefront model (SWM) is very complex and needs to be simplified [12 ###reference_b12###].\nTo simplify the SWM, the authors in [13 ###reference_b13###] approximate the spherical wavefront as a parabolic wavefront, which is more accurate than the plane wavefront and less complex than the spherical wavefront.\nThe approximation radiative region is called the \u201cFresnel region\u201d with a range of .\n[14 ###reference_b14###] first calculates the fourth-order cumulant of the array to decouple the distance and angles, and then separately estimates the parameters based on the multiple signal classification (MUSIC) algorithm.\nOther subspace-based methods, e.g., estimation of signal parameters via rational invariance techniques (ESPRIT), can also estimate channel parameters when the angles and distance are decoupled by the channel covariance matrix [15 ###reference_b15###].\nHowever, the above estimation algorithms are high-complexity.\nSome neural network (NN) algorithms, e.g., complex-valued neural network (CVNN) [16 ###reference_b16###] and convolutional neural network (CNN) [17 ###reference_b17###], are trained to estimate the near-field channel.\nYet, the generalization of NN algorithms still needs to be enhanced.\nDifferent from the conventional DFT matrix, [18 ###reference_b18###] designs a polar-domain transform matrix containing angle and distance information to describe channel sparsity.\nBy exploiting the polar-domain sparsity, the authors design an OMP-based method to achieve more accurate channel estimation.\nHowever, the above literature does not consider the mobility problem.\nThe mobility problem (or \u201ccurse of mobility\u201d) [19 ###reference_b19###, 20 ###reference_b20###] is one typical problem that degrades the performance of massive MIMO.\nThe UE movement and CSI delay are two main reasons causing the performance decline.\nSpecifically, the UE movement makes the channel fast-varying, and a large CSI delay causes the estimated CSI to be outdated, making the precoder unusable.\nChannel prediction is an effective solution to the mobility problem.\nThe authors in [19 ###reference_b19###] propose a Prony-based angular-delay domain (PAD) channel prediction method, which is asymptotically error-free when the number of antennas is large enough.\nWith the 2D-DFT matrices, the PAD method exploits the angular-delay-domain sparsity.\nHowever, the ELAA communication system introduces an extra parameter, i.e., distance, and the DFT matrix cannot describe the angular-domain sparsity.\nAdditionally, the movement of UEs introduces the time-varying path delays.\nThe discrete prolate spheroidal (DPS) sequence can capture the slightly varying path delay in a WiFi communication scenario [21 ###reference_b21###].\nHowever, in the mobility environment, the path delay may vary substantially, which causes the DPS sequence not to achieve the expected performance.\nTherefore, the existing channel prediction methods are unsuitable under the spherical wavefront assumption in the ELAA channel.\nIn order to fill the above gaps and address the mobility problem of the ELAA channel in this paper, we propose a novel wavefront transformation-based matrix pencil (WTMP) channel prediction method.\nNotice that the steering vectors of the near-field channel and far-field channel share the same angles, and the steering vector of the near-field channel contains an extra distance parameter.\nThe key idea is designing a matrix to transform the spherical wavefront and make it closer to 
the plane wave.\nIn such a way, the near-field effects may be mitigated.\nIn the literature, several works have designed methods to transform the near-field estimation to the far-field estimation, e.g., exploiting the fourth-order cumulant of the array [14 ###reference_b14###] and calculating the channel covariance matrix [15 ###reference_b15###].\nDifferent from the existing methods that aims to simplify the near-field parameters estimation, our proposed WTMP method transforms the near-field channel to the far-field channel.\nIn this paper, by utilizing the OMP algorithm, we first estimate the channel parameters, i.e., the number of paths, distance, elevation angle of departure (EOD), and azimuth angle of departure (AOD).\nThen, based on the estimated parameters, we design a wavefront-transformation matrix containing the angles and distance information.\nNext, to capture the time-varying path delay, we design a time-frequency projection matrix containing the time-varying path delay information.\nThe designed matrix is a block matrix, with each sub-block matrix containing the Doppler and path delay information at a certain moment.\nThe different sub-block matrices are designed based on the Doppler and delay information at different moments.\nAfter that, we project the channel onto the angular-time-frequency domain by constructing an angular-time-frequency projection matrix that consists of the designed wavefront-transformation matrix, time-frequency projection matrix, and DFT matrix.\nFinally, we adopt the matrix pencil (MP) method to estimate the Doppler using the angular-time-frequency-domain CSI.\nTo the best of our knowledge, our proposed WTMP method is the first attempt to transform the spherical wavefront and predict the ELAA channel.\nThe contributions of this paper are summarized as follows:\nWe propose a WTMP prediction method to address the mobility problem with time-varying path delay in the ELAA channel by designing a wavefront-transformation matrix.\nWithout straightly estimating the near-field channel, our designed matrix transforms the complex near-field channel estimation into the far-field channel estimation.\nThe simulations show that our WTMP method significantly outperforms the existing method.\nWe prove that the designed transform matrix depends on the elevation, angle, azimuth angle, and distance between the BS antenna and the scatterers or the UE, as the number of the base station (BS) antennas is large enough. Therefore, the transform matrix can be constructed with estimated angles and distance.\nWe analyze the asymptotic performance under enough channel samples and a finite number of the BS antennas, and prove that the WTMP method is asymptotically error-free for an arbitrary CSI delay.\nWe further prove that if the number of the BS antennas is large enough and only finite samples are available, the prediction error of our WTMP method asymptotically converges to zero for an arbitrary CSI delay.\nThis paper is organized as follows: We introduce the channel model in Sec. II ###reference_###. Sec. III ###reference_### describes our proposed WTMP channel prediction method. The performance of the WTMP method is analyzed in Sec. IV ###reference_###. The simulation results are illustrated and discussed in Sec. V ###reference_###. Finally, Sec. VI ###reference_### concludes the paper.\nNotations: We use boldface to represent vectors and matrices. Specifically, , and denote identity matrix, zero matrix and one matrix. 
, , , and denote the transpose, conjugate, conjugate transpose, Moore-Penrose pseudo inverse and inverse of a matrix , respectively.\n is Dirac\u2019s delta function.\n denotes the Fourier transform operation.\n stands for the norm of a vector, and means the Frobenius norm of a matrix.\n represents the rank of a matrix.\n denotes the diagonal operation of a matrix. is the expectation operation, and denotes the eigenvalue decomposition operation (EVD). takes the angle of a complex number.\n represents the inner product of vector and . is the kronecker product of and . is used to define a new formula."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Channel Model",
|
| 15 |
+
"text": "We consider a TDD massive MIMO system where the BS deploys an ELAA to serve multiple UEs.\nThe BS estimates CSI from the UL pilot. The DL CSI is acquired from the BS by utilizing channel reciprocity [2 ###reference_b2###].\n###figure_1### Fig. 1 depicts the near-field channel between the BS and the UE. The BS has a uniform planar array (UPA) consisting of columns and rows.\nThe UE has two antenna elements with and polarization angles.\nThe BS is equipped with antennas.\nAssume and are even.\nThe horizontal and vertical apertures of the BS array are and .\nIn the TDD mode, the UL and DL channels share the same bandwidth , which consists of subcarriers with spacing .\nThe channel has propagation paths, and each path has certain EOD, AOD, delay, distance, Doppler and amplitude.\nFor the -th path, we denote the elevation angle of arrival (EOA), azimuth angle of arrival (AOA), EOD and AOD as , , and , respectively.\nThe ranges of angles are , , and .\nDenote the spherical unit vector of the UE antenna by :\nLet denote the complex amplitude of the -th path.\nThe Doppler of the -th path is defined as , where is the velocity vector of the UE.\nThe wavelength is defined as , where is the speed of light and is the central carrier frequency.\nDenote the location vector of the UE antenna as .\nThe BS antenna array is located on the plane.\nLet the antenna element in the center of the BS antenna array be the coordinate origin, which is located at .\nThe location of the -th column and the -th row of the BS antenna array is\nwhere and are horizontal and vertical antenna spacings, respectively.\nThe ranges of and are and .\nFor notational simplicity, and are abbreviated as and .\nThe location of the -th scatterer is:\nwhere is the distance from the central BS antenna element to the -th scatterer with a range of .\nEq. (3 ###reference_###) can also denote the location of the -th UE, if is replaced with , where denotes the distance between the central BS antenna element to the -th antenna of the UE.\nLet denote the channel impulse response between the -th column and the -th row of the BS antenna array and the -th antenna of the UE, which is modelled as [22 ###reference_b22###]\nwhere is the delay of the -th path [22 ###reference_b22###]\nwhere and are the initial value and the changing rate of delay.\nThe time-varying path delay can be viewed as the Doppler effect in the frequency domain.\nNotice that different paths have different delays, i.e., .\nTo describe the effect of path delays, we may also transform to a phase by using the Fourier transform, where is the frequency.\nTherefore, is transformed to the channel frequency response :\nwhere denotes the distance from the -th column and the -th row of the BS antenna array to the -th scatterer:\nwith\nand\nSince , we may obtain and .\nWith a Fresnel approximation expansion , we may approximate the distance as\nNext, we will determine an approximation region where the error of the distance in Eq. 
(10 ###reference_###) is negligible.\nApplying a binomial expansion , the distance under the far-field assumption is approximated by:\nWith a three-order Taylor expansion , the distance is approximated by\nDenote the phases of the exact spherical wavefront, the approximated near-field spherical wavefront and the far-field plane wavefront as , , and , respectively.\nTherefore, the phase discrepancy between the spherical wavefront and the approximated near-field spherical wavefront is calculated by\nSince , , and , we may obtain that if , and , the maximum phase discrepancy may be achieved as , where\nand is the maximum value of for a range of .\nThe boundary between the approximated spherical wavefront and the exact spherical wavefront is determined by the condition that the largest phase discrepancy is no more than [6 ###reference_b6###], i.e., . Therefore, we may obtain:\nSimilarly, we calculate the phase discrepancy between the approximated near-field spherical wavefront and the far-field plane wavefront as\nWhen , , and , we obtain the maximum phase discrepancy as .\nLet the phase discrepancy more than : [6 ###reference_b6###]. We may obtain:\nEventually, the approximation region is determined by , where the error of the distance is negligible.\nThe 3-D near-field steering vector containing the distance, EOD, and AOD is\nwhere is a distance response vector:\nThe two matrices and are expressed as:\nand\nTherefore, the 3-D far-field steering vector is expressed as:\nDenote the channel between all BS antennas and the -th UE antenna at time and frequency as .\nThe channels at all subcarriers are , where is the -th subcarrier frequency.\nWe rewrite as\nwhere with .\nThe matrix consists of delay-and-Doppler vectors:\nwhere\nThe matrix contains the 3-D near-field steering vectors of all paths:\nwhere is a block matrix:\nThe diagonal matrix is composed of the distance response vectors of all paths:\nThe matrix contains the 3-D far-field steering vectors of all paths:\nwhere is the -th column vector of .\nThe vectorized form of is given by\nwhere ."
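As a quick numerical illustration of the geometry described in this section, the following sketch (our own toy code, using a uniform linear array with half-wavelength spacing instead of the paper's UPA, and invented parameter values) compares the exact spherical-wavefront response with the far-field plane-wave response around the Rayleigh distance.

```python
# Sketch: spherical vs. plane wavefront for a ULA; illustrative only.
import numpy as np

c, fc = 3e8, 39e9                 # carrier frequency matches the simulations
lam = c / fc
N, d = 256, lam / 2               # antenna count and spacing are assumptions
pos = (np.arange(N) - (N - 1) / 2) * d
L = (N - 1) * d
rayleigh = 2 * L**2 / lam
print(f"aperture = {L:.2f} m, Rayleigh distance = {rayleigh:.1f} m")

def a_near(r, theta):
    """Exact spherical wavefront (common phase at the array center removed)."""
    dist = np.sqrt(r**2 + pos**2 - 2 * r * pos * np.sin(theta))
    return np.exp(-2j * np.pi * (dist - r) / lam)

def a_far(theta):
    """Plane wavefront: linear phase across the aperture."""
    return np.exp(2j * np.pi * pos * np.sin(theta) / lam)

for r in (30.0, 300.0, 10 * rayleigh):
    corr = abs(np.vdot(a_far(np.pi / 6), a_near(r, np.pi / 6))) / N
    print(f"r = {r:8.1f} m -> correlation with plane wave = {corr:.3f}")
```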
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III The Proposed WTMP Channel Prediction Method",
|
| 21 |
+
"text": "In this section, we introduce our proposed WTMP channel prediction method.\nIn an ELAA communication system, due to the near-field effects, the spherical wavefront assumption is true in place of the plane wavefront assumption and introduces phase fluctuations among array elements.\nTo coping with the phase fluctuations challenge, we propose a WTMP method based on the structures of the near-field and far-field channels.\nThe key to the WTMP method is designing a matrix that transforms the phase of the near-field channel into a new phase.\nCompared to the phase of the near-field channel, the new phase is closer to the one of the far-field channel.\nIn general, we first estimate the parameters, i.e., EOD, AOD, and distance, via the OMP algorithm.\nThen, basing on the steering vector estimation of the near-field and far-field channels, we design a wavefront-transformation matrix.\nNext, another time-frequency-domain projection matrix is constructed to track the time-varying path delays.\nFinally, we adopt the MP method to estimate Doppler.\nThe details will be shown below."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A The Parameters Estimation",
|
| 27 |
+
"text": "The near-field channel is still compressible even though the angle sparsity does not hold because the number of paths is usually less than the number of array elements, i.e., .\nHere we adopt the OMP algorithm to estimate the angles and distance.\nThe channels at different subcarriers share the same parameters, i.e., EOD, AOD, and distance.\nFor simplification, we use the channel at the first subcarrier to estimate angles and distance.\nThe observation channel at the first subcarrier is , where\nThe matrix may be viewed as a dictionary matrix depending on the tuple . The parameters estimation problem is transformed into a vector reconstructing problem by discretizing the EOA, AOD, and distance with a grid:\nwhere , , and are the resolutions of EOD, AOD, and distance. Also, , , and are ranges of EOD, AOD, and distance.\nThe numbers of sampling grid points , , and are , , and .\nThe dictionary matrix is expressed as\nUtilizing the OMP algorithm, we may determine a pair of distance and angles in each iteration.\nAfter iterations, the number of paths, EOD, AOD, and distance are estimated as , , , and , respectively."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Wavefront Transformation",
|
| 33 |
+
"text": "Based on the estimated parameters in Sec. III-A ###reference_###, we now design a wavefront-transformation matrix in this section.\nFor ease of exposition, we first determine the transform matrix based on the mapping relationship between the near-field and far-field steering vectors.\nThen, we generate the matrix and determine each entry.\nFinally, we design the transform matrix by normalizing each entry. As we focus on the phase fluctuation, only the phase of the generated matrix is needed.\nWe start by describing the mapping relationship between the near-field and far-field steering vectors.\nDenote a matrix containing the 3-D far-field steering vectors of all paths:\nwhere and are given in Eq. (27 ###reference_###) and Eq. (29 ###reference_###).\nDefine a matrix describing the mapping relationship between and in Eq. (26 ###reference_###):\nBy substituting Eq. (26 ###reference_###) and Eq. (34 ###reference_###) into Eq. (35 ###reference_###),we may rewrite Eq. (35 ###reference_###) as\nwhere is expressed as\nand , is expressed as:\nFrom Eq. (37 ###reference_###), we may notice that matrix is the wavefront-transformation matrix.\nThen, we will calculate the matrix .\nPerform the SVD of : , where the unitary matrix contains the left singular vectors:\nand the rank of is . In other words, , where , the diagonal matrix contains the first singular values, and consists of the first column vectors of .\nBased on the matrix structure of in Eq. (29 ###reference_###), the -th column vector of is computed by:\nand\nFrom Eq. (36 ###reference_###), we may obtain\nwhere is orthogonal to .\nDenote a space .\nTherefore, the matrix falls into the null space of .\nUp to now, the matrix is determined.\nNext, we will generate the transform matrix and determine each entry.\nAccording to Eq. (37 ###reference_###), we may first design matrix and then generate matrix .\nFor matrix , there are many potential matrices that fall into the null space of .\nFortunately, we only need to generate a suitable matrix in the null space of .\nFor derivation simplicity, we assume that the energy of the -th row vector in matrix is concentrated in only one element.\nFor example, the energy of vector is one, where is close to one, and the other elements are close to zero.\nFrom Eq. (42 ###reference_###), it is clear that is orthogonal to all column vectors of .\nWithout loss of generality, we assume that the energy of is concentrated in and that the last elements of are zero.\nSince the first elements of , are zero, it is easily obtained that , .\nTo determine the first elements of , we may formulate an optimization problem as\nHowever, it is difficult and very complex to determine non-zero entries one by one.\nTo simplify the optimization problem of Eq. (43 ###reference_###), we assume\nwhere , , , , and are real variables.\nDenote the -th element of in Eq. (40 ###reference_###) as .\nSince the last elements of are zero, we may obtain and .\nEventually, in Eq. (40 ###reference_###) is calculated as:\nBasing on Eq. (44 ###reference_###), we may compute\nFrom , we readily obtain\nAccording to the assumptions and equalities between Eq. (44 ###reference_###) and Eq. (49 ###reference_###), the optimization problem in Eq. (43 ###reference_###) is reformulated as\nwhere\nDefine , where\nLetting , we may obtain\nIf , may be simplified as a real variable:\nBased on , Eq. (44 ###reference_###), and Eq. 
(49 ###reference_###), the rest elements are calculated by\nand\nwhere\nUntil now, the vector is calculated, and the matrix is designed as\nwhere\nand .\nSince the bulk energy of is concentrated on the diagonal elements, we select the diagonal elements to approximate :\nNotice that such an approximation is coincident with the asymptotic performance of , proved in the next section.\nThen, basing on Eq. (37 ###reference_###), we may generate the matrix as\nwhere is generated according to the procedure between Eq. (43 ###reference_###) and Eq. (60 ###reference_###). The matrices and are shown in Eq. (27 ###reference_###) and Eq. (28 ###reference_###), respectively, where the number of paths and distances may be estimated by OMP algorithm in Sec. III-A ###reference_###.\nSince and , the matrix is full-rank: .\nBy substituting , , and into Eq. (61 ###reference_###), we may easily generate , which is a diagonal matrix.\nDue to our focus on the performance decline caused by phase fluctuations in the near-field channel, the effective proportion of is diagonal elements phases.\nTherefore, we obtain the final wavefront-transformation matrix by normalizing all elements in .\nThe detailed design process is illustrated in Algorithm 1 ###reference_###.\nNote that the designed matrix may transform the spherical wavefront to a new wavefront closer to the plane wave.\nTherefore, our designed wavefront-transformation matrix can mitigate the near-field effect in the ELAA channel."
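The following sketch illustrates the intent of this construction in the simplest possible setting, a single path, where a diagonal phase-only transform built from the estimated distance and angle reduces to an exact per-element phase correction; the paper's derivation above generalizes this to multiple paths. All numbers are assumptions of ours.

```python
# Diagonal, phase-only wavefront transform for one path; illustrative sketch.
import numpy as np

c, fc = 3e8, 39e9
lam = c / fc
N = 256
pos = (np.arange(N) - (N - 1) / 2) * (lam / 2)

def ph_near(r, theta):
    dist = np.sqrt(r**2 + pos**2 - 2 * r * pos * np.sin(theta))
    return -2 * np.pi * (dist - r) / lam            # spherical-wavefront phases

def ph_far(theta):
    return 2 * np.pi * pos * np.sin(theta) / lam    # plane-wavefront phases

r_hat, th_hat = 45.0, 0.3                           # estimates from Sec. III-A
theta_diag = np.exp(1j * (ph_far(th_hat) - ph_near(r_hat, th_hat)))

a_near = np.exp(1j * ph_near(r_hat, th_hat))
a_far = np.exp(1j * ph_far(th_hat))
before = abs(np.vdot(a_far, a_near)) / N
after = abs(np.vdot(a_far, theta_diag * a_near)) / N
print(f"match with plane wave: before = {before:.3f}, after = {after:.3f}")
```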
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-C Time-frequency-domain Projection",
|
| 39 |
+
"text": "Because of the Doppler effect on the time domain and frequency domain simultaneously in the ELAA communication system, time-varying path delay causes performance degradation that the conventional DFT matrix is unable to address.\nThis section aims to design a time-frequency-domain projection matrix to track the time-varying path delay.\nThe key is to determine the Doppler and delay sampling intervals.\nMore specifically, we first determine the Doppler interval by calculating the column coherence of two response vectors related to the Doppler.\nThen, to capture the Doppler and path delays of different paths, we design a matrix that contains the delay and Doppler information at a moment.\nFinally, by utilizing multiple samples, we compute a time-frequency-domain matrix to track the time-varying path delay and Doppler, which contains the time-varying path delay information.\nWe first denote the Doppler and delay sampling intervals as and .\nFrom Eq. (24 ###reference_###) and Eq. (25 ###reference_###), we may calculate the column coherence between and :\nIn order to achieve the time-frequency-domain sparsity, the column coherence should be as small as possible.\nLet , we may obtain , which is coincident with the DFT matrix [23 ###reference_b23###].\nThen, we will determine the Doppler sampling interval , which can be calculated by the column coherence between two time-domain vectors.\nDenote the number of samples as and the duration of the channel sample as .\nSince in Eq. (25 ###reference_###) contains the Doppler information at a moment, we select the phase , , and construct a time-domain response vector at the -th subcarrier frequency as:\nThe column coherence between two time-domain response vectors is calculated by\nSimilar to the procedure of calculating , we let and may obtain\nDue to , is approximated as . Therefore, we may obtain at all subcarrier frequencies.\nNext, since the conventional DFT matrix fails to track the Doppler effect in the frequency domain, we may design some matrices containing the Doppler and path delay.\nAdditionally, the channels at each subcarrier frequency have different Doppler effect in different paths.\nAs a result, we design a time-frequency-domain projection matrix at time as:\nwhere , , and consists of the delay-and-Doppler response vectors:\nwhere denotes the delay response vector at time . Specifically, is used to capture the path delay, and can capture the Doppler effect of different paths.\nThe physical meaning of is introducing the -th Doppler sampling interval and delay sampling intervals at time to track the delays of different paths.\nIf , the channel is static without Doppler effect.\nIn this case, is a DFT matrix with a size of .\nIn Eq. (66 ###reference_###), the physical meaning of is a time-frequency-domain matrix containing Doppler sampling intervals and delay sampling intervals at time , which may track the path delays and various Doppler of different paths.\nFinally, to track the time-varying path delay and Doppler, we may extend the matrix in Eq. 
(66 ###reference_###) at time to other moments, and design the time-frequency-domain projection matrix as a block matrix:\nThe physical meaning of is a time-frequency-domain matrix containing the Doppler information and time-varying path delay information of samples.\nIn the mobility problem, the effect of phase shift, brought from the Doppler effect, enhances as time passes.\nOur designed time-frequency-domain projection matrix can mitigate the phase shift effect.\nFor clarification, the detailed generation process is summarized in Algorithm 2 ###reference_###.\nNote that the designed matrix only depends on the number of time samples , time sampling interval , central carrier frequency and bandwidth .\nWith matrix , we may track the Doppler and time-varying path delay."
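A sketch of the idea behind this projection matrix follows: at each sample time the frequency-domain basis vectors carry an initial path delay plus a delay that drifts linearly in time, so the basis can follow a time-varying path delay. The grid sizes, variable names, and the choice of drift step (one DFT bin over the K samples at the carrier frequency) are our own simplifications, not the paper's exact construction.

```python
# Sketch of a time-frequency projection block with drifting path delays.
import numpy as np

Nf, B = 64, 20e6                      # subcarriers and bandwidth (toy values)
K, T = 16, 0.5e-3                     # samples and slot duration
fc = 39e9
freqs = fc + np.arange(Nf) * (B / Nf) - B / 2
d_tau = 1.0 / B                       # delay sampling interval
d_rho = 1.0 / (fc * K * T)            # dimensionless delay-drift step

def block(t):
    """Columns exp(-j*2*pi*f*(tau0 + rho*t)) over a small delay/drift grid."""
    cols = []
    for n in range(4):                # a few drift bins (toy size)
        for m in range(Nf):           # initial-delay bins
            tau_t = m * d_tau + n * d_rho * t
            cols.append(np.exp(-2j * np.pi * freqs * tau_t))
    return np.stack(cols, axis=1) / np.sqrt(Nf)

V = [block(k * T) for k in range(K)]  # one block per sample time
print(V[0].shape, len(V))             # (64, 256) per block, K blocks
```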
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.4",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-D Doppler Estimation",
|
| 45 |
+
"text": "Basing on the wavefront-transformation matrix designed in Sec. III-B ###reference_###, we first mitigate the effect of phase fluctuations introduced by spherical wavefront.\nSince the channels at different subcarrier frequencies share the same distance, the wavefront-transformation matrix in the frequency domain is expressed as\nBy using the time-frequency-domain projection matrix and two DFT matrices, i.e., and ,\nthe joint angular-time-frequency basis is computed by\nwhere is an orthogonal angular-domain basis, and is a time-frequency-domain basis.\nAfter mitigating the near-field effects, with the angular-time-frequency basis S, the vectorized channel in Eq. (30 ###reference_###) is projected onto the angular-time-frequency domain:\nwhere is the vectorized channel in the angular-time-frequency domain, and .\nMost of the entries in may be close to zero because the number of paths is less than the size of , i.e, .\nDefine a positive threshold that is close to 1. The number of non-negligible entries is determined by\nwhere is the -th entry of and is expressed as\nand\nThe projection of channel in the angular domain is time-invariant and generated by the .\nAlso, is the -th entry of the -th non-negligible row vector in .\nThe vectorized channel is approximated by\nwhere is the -th column vector of .\nNext, we adopt the MP method to estimate Doppler.\nFor notational simplicity, we rewrite as .\nDefine an MP matrix at the -th subcarrier frequency as\nwhere the pencil size satisfies , , and\nSelect the first and the last columns of as and , respectively.\nThe matrix is estimated by\nThe Doppler of the -th path is estimated as\nwhere is the -th entry of .\nAccording to Eq. (77 ###reference_###) and Eq. (78 ###reference_###), we may easily obtain the estimations of and as and , respectively.\nFrom Eq. (76 ###reference_###), we also estimate as .\nDefine a new MP matrix as\nwhich is estimated by\nBy selecting the last entry from , we may estimate .\nDenote the number of predicted samples as .\nUpdate Eq. (82 ###reference_###) by removing the first column and appending a new column at last based on the last predictions.\nThen, repeat Eq. (83 ###reference_###) times by replacing with .\nWe may predict , which is a simplified notation of in Eq. (74 ###reference_###).\nFurthermore, predict at each subcarrier frequency by repeating the prediction process of between Eq. (76 ###reference_###) and Eq. (83 ###reference_###) times.\nWe may predict as:\nThe vectorized channel at time () is predicted as\nThe details of our proposed WTMP channel prediction method are summarized in Algorithm 3 ###reference_###.\nNotice that , and in step 1 may also be estimated by some super-resolution methods, e.g., MUSIC and ESPRIT.\nHowever, the super-resolution methods may introduce enormous computational complexity due to multi-dimensional search.\nCompared to the super-resolution methods, our adopted OMP algorithm in step 1 needs less computational complexity.\nBy increasing the sampling grid points of EOD, AOD, and distance in step 1, the estimation accuracy of angles and distance may increase.\nIn Algorithm 3, the computation complexity is dominated by step 1, step 9, and step 14.\nIn the step 1, the OMP algorithm needs iterations, and the computation complexity of the -th iteration is .\nStep 9 has a complexity order of .\nRepeating step 9 times, step 14 has a complexity order of .\nThe global complexity of the WTMP method is ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "IV Performance Analysis of The WTMP Prediction Method",
|
| 51 |
+
"text": "In this section, we start the performance analysis of our proposed WTMP prediction method by proving the asymptotic performance of the designed matrix .\nThen, the asymptotic prediction error of the WTMP method is derived, for the case of enough number of samples are available and the BS has finite antennas.\nFinally, we derive the asymptotic prediction error under the condition with the enough BS antennas and finite samples.\nMore details will be shown below.\nIf the number of the BS antennas is large enough, the designed wavefront-transformation matrix is determined by the angles and distance between the BS antenna array and the scatterer or the UE.\n\nAccording to Eq. (61 ###reference_###), the wavefront-transformation matrix is determined by , and , where is independent of the angles, and is related to angles and distance.\nNext, we aim to prove that the matrix is asymptotically independent of angles and distance.\nThus, we transform the proof to a sub-problem:\nAccording to Eq. (54 ###reference_###), Eq. (55 ###reference_###), and Eq. (56 ###reference_###), with the number of antennas increasing, the entries of the -th row vector of in Eq. (59 ###reference_###) are calculated by\nand\nTherefore,\nIn other words,\nThus, Proposition 1 is proved.\nRemarks:\nThe energy of the designed matrix is concentrated on the diagonal entries.\nWhen the number of the BS antenna elements is large enough, we may capture nearly all energy of matrix , provided that we select the diagonal elements to approximate .\nAs a result, this Proposition is in line with the approximation of in Sec. III-B ###reference_###.\nProposition 1 is also the prior basis of the following performance analysis.\nDenote the vectorized form of the observation sample at time by : ,\nwhere is the temporally independent identically distributed (i.i.d.) Gaussian white noise with zero mean and element-wise variance.\nConsidering an ELAA with a finite number of BS antennas, if the number of channel samples is large enough, the performance of our proposed WTMP method will be analyzed in Proposition 2.\nFor an arbitrary CSI delay , the asymptotic prediction error of the WTMP method yields:\nproviding that the pencil size satisfies .\nThis Proposition is a generalization of Theorem 1 in [24 ###reference_b24###] when the noise is temporal i.i.d., and the number of samples is large enough.\nAccording to Eq. (76 ###reference_###), denote an MP matrix generated by observation samples as , where is a noise matrix.\nWe may prove the Proposition as follows: Firstly, compute the correlation matrix of :\n, where the expectation is taken over time.\nThen, perform the SVD of and estimate the Doppler. 
One may easily obtain that and the prediction error converges to zero.\nThe detailed proof is omitted.\nRemarks:\nGiven enough samples, Proposition 2 ###reference_position2### indicates that the channel prediction error converges to zero when the noise is i.i.d..\nHowever, Proposition 2 ###reference_position2### requires too many samples and disregards the fact that the ELAA deploys a large number of BS antennas.\nIn the following, we will break these constraints and derive the asymptotic performance with enough BS antennas.\nBefore the analysis, we introduce a technical assumption.\nThe normalized relative error of the transform matrix yields:\n\nRemarks:\nThe sizes of and are .\nIn an arbitrary path, transforms one column vector of and the normalized relative error ought to be finite.\nFurthermore, due to the limited number of paths, the normalized relative error should be finite when transforms .\nTherefore, the assumption is generally valid.\nBefore the following derivation, if , we denote the vectorized form of a narrowband far-field channel as .\nAfter being transformed by matrix , the narrowband near-field channel may be asymptotically quantified as\nwhere is an error vector, and .\nThe vector is time-invariant and may not affect the estimation accuracy of Doppler.\nBased on Eq. (94 ###reference_###), the asymptotic performance of our proposed WTMP method will be derived in Theorem 1 ###reference_orem1###.\nUnder Assumption 1 ###reference_umption1###, for a narrowband channel, if the number of the BS antennas is large enough, and the pencil size satisfies , the asymptotic performance of our WTMP prediction method yields:\nproviding that samples are accurate enough, i.e.,\n\nThe detailed proof can be found in Appendix -A ###reference_###.\nRemarks:\nThe assumption in Eq. (96 ###reference_###) is a mild technology assumption, which can be fulfilled by some non-linear signal processing technologies even in the case of pilot contamination existing in the multi-user multi-cell scenario [25 ###reference_b25###].\nCompared to Proposition 2 ###reference_position2###, with the help of more BS antennas, we obtain a better result that only finite samples are needed to achieve asymptotically error-free performance."
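The flavor of Proposition 2 can be checked numerically with a toy experiment (parameters invented, single Doppler mode, i.i.d. noise): the matrix pencil estimate tightens as the number of samples grows.

```python
# Toy check: MP Doppler error vs. number of samples under i.i.d. noise.
import numpy as np

rng = np.random.default_rng(2)
T, nu = 0.5e-3, 123.0

def mp_estimate(K, snr_lin=100.0):
    t = np.arange(K) * T
    g = np.exp(2j * np.pi * nu * t)
    g = g + (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2 * snr_lin)
    P = K // 2
    Y = np.stack([g[i:i + P] for i in range(K - P)])
    z = np.linalg.eigvals(np.linalg.pinv(Y[:, :-1]) @ Y[:, 1:])
    z_dom = max(z, key=abs)                      # dominant signal mode
    return np.angle(z_dom) / (2 * np.pi * T)

for K in (8, 16, 32, 64):
    errs = [abs(mp_estimate(K) - nu) for _ in range(50)]
    print(K, f"mean |error| = {np.mean(errs):.3f} Hz")
```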
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Numerical results",
|
| 57 |
+
"text": "In this section, we first describe the simulation channel model and then provide numerical results to show the performance of our proposed scheme.\nBasing on the clustered delay line (CDL) channel model of 3GPP, we add an extra distance parameter to generate the simulation channel model.\nThe channel model consists of 9 scattering clusters, 20 rays in each cluster, and 180 propagation paths.\nThe extra distance parameter is the distance from the BS antenna array to the scatterers or the UE, which is modelled as a random variable uniformly distributed during the interval .\nThe root mean square (RMS) angular spreads of AOD, EOD, AOA and EOA are , , and .\nThe detailed simulation parameters are listed in Table 1.\nWe consider a 3D Urban Macro (3D UMa) scenario, where the UEs move at 60 km/h and 120 km/h.\nThe carrier frequency is 39 GHz, and the bandwidth is 20 MHz with 30 kHz subcarrier spacing.\nOne slot contains 14 OFDM symbols and has a duration of 0.5 ms.\nEach UE sends one sequence of Sounding Reference Signal (SRS) in a time slot.\nOne channel sample is available for each slot.\nThe antenna configuration is , where is the number of horizontal antenna elements, is the number of vertical antenna elements, and denotes the number of polarization.\nThe horizontal and vertical antenna spacings are both .\nThe BS antenna is equipped with a UPA.\nBased on Eq. (15 ###reference_###) and Eq. (17 ###reference_###), the approximation region is .\nIn the OMP algorithm, the numbers of sampling grid points for EOD, AOD and distance are 30, 900 and 360, respectively.\nThe DL precoder is eigen-based zero-forcing (EZF) [26 ###reference_b26###].\nTo assess the prediction method performance, we introduce three metrics, i.e., the DL SE, the DL prediction error, and the normalized mean square error (NMSE) of the near-field channel after being transformed by matrix .\n###figure_2### Fig. 2 ###reference_### depicts the performance of different prediction methods when the UEs move at 60 km/h, 120 km/h and 150 km/h.\nThe CSI delay is relatively large, i.e., 16 ms.\nThe DL SE is calculated by averaged over time and frequency, where is the signal-to-noise ratio of the -th UE and is the number of UEs.\nThe ideal setting is referred as \u201cStationary channel\u201d, where the DL SE achieves an upper bound of performance.\nThe curves labelled as \u201cNo prediction\u201d are the results without channel prediction.\nWe select the PAD channel prediction method in [19 ###reference_b19###] as reference curves.\nWe may observe that the PAD method only achieves moderate prediction gains, given that the path delays are time-varying and the wavefront is spherical.\nIt may also be observed that our proposed method approaches the ideal setting even at a speed of 150 km/h and a CSI delay of 16 ms.\nIt is because our proposed method may effectively address the effects brought by the time-varying path delay and near-field radiation.\n###figure_3### ###figure_4### Fig. 3 ###reference_### compares the prediction errors of different methods as the number of BS antennas increases.\nThe DL prediction error is computed as , which is averaged over time, frequency and UEs.\nOur proposed WTMP method outperforms the PAD method, and the prediction error asymptotically converges to zero.\nIt is also in line with Theorem 1 ###reference_orem1###.\nFig. 
4 ###reference_### gives the SEs of different prediction methods as multiple UEs move at different velocities, i.e., every four UEs at 30 km/h, 60 km/h, 90 km/h and 120 km/h, respectively.\nThe curve labelled as \u201cWTMP-SOMP\u201d is the result of the SOMP algorithm to estimate the distance and angles.\nWe may also observe that our proposed method still outperforms the PAD method and is close to the upper bound of SE.\n###figure_5### In Fig. 5 ###reference_###, we show the SEs of different prediction methods when the BS is equipped with different antenna arrays, e.g., and .\nWe also observe that our proposed method still outperforms the PAD method when the BS antenna configuration is a UPA or a ULA.\n###figure_6### In Fig. 6 ###reference_###, we compare the NMSE against the distance to show the advantage of the transform matrix , where the distances between the BS and the scatterers increase from 30 m to 255 m.\nThe curve labelled by \u201cWith matrix \u201d is the NMSE when is introduced to transform the spherical wavefront.\nWe calculate the NMSE by averaged over UEs.\nThe other curve is named \u201cNo matrix \u201d to show the NMSE between and , which is calculated by .\nThe BS antenna configuration is .\nWe may notice that after introducing , the NMSE decreases obviously, and the near-field channel is nearly transformed to a far-field channel.\nTherefore, our designed matrix can effectively mitigate the near-field effects.\n###figure_7### Finally, we adopt a new simulation model consisting of a line-of-sight (LoS) path and 7 clusters.\nEach cluster contains 20 rays, and the total number of propagation paths is 141.\nThe RMS angular spreads of AOD, EOD, AOA and EOA are updated as , , and .\nFig. 7 ###reference_### shows the SEs of different prediction methods under this model.\nIt is clear that our proposed method addresses the near-field effects and time-varying path delay, as the SE of our proposed method is close to the upper bound."
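A note for readers reproducing these metrics: the elided formulas above follow the standard conventions, i.e., a sum-rate SE of the form sum_k log2(1 + SINR_k) averaged over time and frequency, and a normalized squared channel error. A minimal NumPy sketch under that assumption (array shapes and names are illustrative, not taken from the paper):

    import numpy as np

    def downlink_se(sinr):
        """sinr: array of shape (T, F, K) with per-UE SINR samples.
        Sum-rate SE per resource element, averaged over time and frequency."""
        return np.log2(1.0 + sinr).sum(axis=-1).mean()

    def nmse(h_pred, h_true):
        """Normalized mean square error between predicted and true channels."""
        return (np.linalg.norm(h_pred - h_true) / np.linalg.norm(h_true)) ** 2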
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "6",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "VI Conclusion",
|
| 63 |
+
"text": "In this paper, we address the mobility problem in ELAA communication systems.\nWe propose a wavefront transformation-based near-field channel prediction method by transforming the spherical wavefront.\nWe also design a time-frequency-domain projection matrix to capture the time-varying path delay in the mobility scenario, which projects the channel onto the time-frequency domain.\nIn the theoretical analysis, we prove that our proposed WTMP method asymptotically converges to be error-free as the number of BS antennas is large enough, given a finite number of samples.\nWe also prove that the angles and distance parameters asymptotically determine the designed wavefront-transformation matrix with the increasing number of BS antennas.\nSimulation results show that in the high-mobility scenario with large CSI delay, our designed wavefront-transformation matrix provides significant gain, and the performance of our proposed WTMP method is close to the ideal stationary setting."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {
|
| 68 |
+
"1": {
|
| 69 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>The main simulation parameters.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.7.8.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.8.1.1\">Scenario</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.7.8.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.8.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.8.1.2.1.1\" style=\"width:120.0pt;\">3D Urban Macro (3D UMa)</span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.7.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.9.1.1\">Carrier frequency (GHz)</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.9.1.2.1.1\" style=\"width:120.0pt;\">39</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.10.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.10.2.1\">Bandwidth (MHz)</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.10.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.10.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.10.2.2.1.1\" style=\"width:120.0pt;\">20</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.11.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.11.3.1\">Subcarrier spacing (kHz)</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.11.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.11.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.11.3.2.1.1\" style=\"width:120.0pt;\">30</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.12.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.12.4.1\">Number of UEs</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.12.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.12.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.12.4.2.1.1\" style=\"width:120.0pt;\">16</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.5\">BS antenna configuration</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.4.4.4.4\">\n<span class=\"ltx_p\" id=\"S5.T1.4.4.4.4.4\" style=\"width:120.0pt;\">, , , </span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.7.4\">UE antenna configuration</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.7.3.3\">\n<span 
class=\"ltx_p\" id=\"S5.T1.7.7.3.3.3\" style=\"width:120.0pt;\">, the polarization angles are and </span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.13.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.13.5.1\">CSI delay (ms)</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S5.T1.7.13.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.13.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.13.5.2.1.1\" style=\"width:120.0pt;\">16</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.14.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.7.14.6.1\">UEs speed (km/h)</th>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.7.14.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.14.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.14.6.2.1.1\" style=\"width:120.0pt;\">60, 120, 150</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 70 |
+
"capture": "TABLE I: The main simulation parameters."
|
| 71 |
+
}
|
| 72 |
+
},
|
| 73 |
+
"image_paths": {
|
| 74 |
+
"1": {
|
| 75 |
+
"figure_path": "2312.02175v2_figure_1.png",
|
| 76 |
+
"caption": "Figure 1: The typical UL near-field channel of ELAA communication system.",
|
| 77 |
+
"url": "http://arxiv.org/html/2312.02175v2/x1.png"
|
| 78 |
+
},
|
| 79 |
+
"2": {
|
| 80 |
+
"figure_path": "2312.02175v2_figure_2.png",
|
| 81 |
+
"caption": "Figure 2: The SE versus SNR, the BS has 512 antennas.",
|
| 82 |
+
"url": "http://arxiv.org/html/2312.02175v2/x2.png"
|
| 83 |
+
},
|
| 84 |
+
"3": {
|
| 85 |
+
"figure_path": "2312.02175v2_figure_3.png",
|
| 86 |
+
"caption": "Figure 3: The prediction error versus the number of BS antennas, the UEs move at 120 km/h.",
|
| 87 |
+
"url": "http://arxiv.org/html/2312.02175v2/x3.png"
|
| 88 |
+
},
|
| 89 |
+
"4": {
|
| 90 |
+
"figure_path": "2312.02175v2_figure_4.png",
|
| 91 |
+
"caption": "Figure 4: The SE versus SNR, the BS is equipped with 512 antennas, multiple velocity levels of UEs, i.e., four at 30 km/h, four at 60 km/h, four at 90 km/h and four at 120 km/h.",
|
| 92 |
+
"url": "http://arxiv.org/html/2312.02175v2/x4.png"
|
| 93 |
+
},
|
| 94 |
+
"5": {
|
| 95 |
+
"figure_path": "2312.02175v2_figure_5.png",
|
| 96 |
+
"caption": "Figure 5: The SE versus SNR, the UEs move at 120 km/h.",
|
| 97 |
+
"url": "http://arxiv.org/html/2312.02175v2/x5.png"
|
| 98 |
+
},
|
| 99 |
+
"6": {
|
| 100 |
+
"figure_path": "2312.02175v2_figure_6.png",
|
| 101 |
+
"caption": "Figure 6: The NMSE versus the distances, the BS has 256 antennas, and the UEs move at 120 km/h.",
|
| 102 |
+
"url": "http://arxiv.org/html/2312.02175v2/x6.png"
|
| 103 |
+
},
|
| 104 |
+
"7": {
|
| 105 |
+
"figure_path": "2312.02175v2_figure_7.png",
|
| 106 |
+
"caption": "Figure 7: The SNR versus SE, the BS has 512 antennas, and the UEs move at 120 km/h.",
|
| 107 |
+
"url": "http://arxiv.org/html/2312.02175v2/x7.png"
|
| 108 |
+
}
|
| 109 |
+
},
|
| 110 |
+
"validation": true,
|
| 111 |
+
"references": [],
|
| 112 |
+
"url": "http://arxiv.org/html/2312.02175v2"
|
| 113 |
+
}
|
20240721/2312.06646v4.json
ADDED
|
@@ -0,0 +1,545 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Computational Copyright: Towards A Royalty Model for Music Generative AI",
|
| 3 |
+
"abstract": "The advancement of generative AI has given rise to pressing copyright challenges, especially within the music industry. This paper focuses on the economic aspects of these challenges, emphasizing that the economic impact constitutes a central issue in the copyright arena. Furthermore, the complexity of the black-box generative AI technologies not only suggests but necessitates algorithmic solutions. Yet, such solutions have been largely missing, exacerbating regulatory hurdles in this landscape. We seek to address this gap by proposing viable royalty models for revenue sharing on AI music generation platforms. We start by examining existing royalty models utilized by platforms like Spotify and YouTube, and then discuss how to adapt them to the unique context of AI-generated music. A significant challenge emerging from this adaptation is the attribution of AI-generated music to influential copyrighted content in the training data. To this end, we present algorithmic solutions employing data attribution techniques. We also conduct a range of experiments to verify the effectiveness and robustness of these solutions. This research is one of the early attempts to integrate technical advancements with economic and legal considerations in the field of music generative AI, offering a computational copyright solution for the challenges posed by the opaque nature of AI technologies.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Recent advancements in generative AI have significantly impacted creative industries, leading to a surge in AI-generated content across art, music, literature, and software. This rapid evolution has raised complex legal challenges, especially concerning copyright issues Henderson et al. (2023 ###reference_b14###); Samuelson (2023 ###reference_b34###); Sag (2023 ###reference_b33###); Franceschelli and Musolesi (2022 ###reference_b8###). A notable instance of these challenges is the recent lawsuit filed by New York Times against Microsoft and OpenAI NYT (2023 ###reference_b2###). Copyright laws cover a range of rights, including protection of original works, controlling their reproduction, and managing the distribution of profits from these works. The emergence of generative AI poses multifaceted challenges in this regard, as it blurs the lines of authorship and originality.\nArguably, central to these challenges is the economic impact. Taking the music industry as an example, a vast collection of music has been publicly available on platforms like Spotify and YouTube, where copyright owners are compensated through royalties. This practice not only suggests that economic incentives are a primary reason for making music publicly accessible, but also highlights the centrality of economic rights in copyright protections. This trend is reflective of a broader truth: economic considerations are at the heart of the U.S. copyright law, where a primary goal is to stimulate creativity by ensuring that creators are adequately compensated. There has also been ongoing debate about whether training generative AI with copyrighted content aligns with the fair use doctrine111See Section 107 of the Copyright Act: https://www.copyright.gov/title17/92chap1.html#107 ###reference_html#107###.. However, it is increasingly argued that fair use may not apply if the AI generated content competes with the original market for the data Henderson et al. (2023 ###reference_b14###). These issues underscore the economic impact as a crucial aspect of copyright challenges in generative AI.\nHowever, effective technical solutions addressing the aforementioned challenge have been limited or nonexistent. Existing efforts have focused on preventing generative AI from generating content similar to its training data Vyas et al. (2023 ###reference_b42###); Chu et al. (2023 ###reference_b3###); Li et al. (2023 ###reference_b20###). This approach, while helpful, may not fully address the broader economic implications of AI-generated content. Addressing the economic aspect of the copyright challenges is challenging as it requires a solution that integrates technical advancement into business agreements.\nThe challenge is also pressing. Without effective technical solutions for proper royalty distribution, regulatory bodies are faced with a dilemma between stifling innovation and compromising the interests of copyright owners. As it stands, numerous AI music generation platforms are navigating these uncharted waters, operating in legal gray areas and leaving the rights of copyright owners inadequately protected Drott (2021 ###reference_b6###); Clancy (2021 ###reference_b4###).\nThis paper aims to bridge this crucial gap by proposing potential royalty models for revenue sharing from AI music generation platforms. Specifically, we design the royalty model by addressing the following key questions: 1) Who are the stakeholders? 2) What are the sources of revenue? 
3) How to determine the royalty distribution for revenue sharing?\nTo answer these questions, we start with case studies of Spotify and YouTube, which are the leading platforms in music streaming and video sharing respectively. We investigate their royalty models and examine feasibility of adapting these models to AI music generation platforms. A critical technical challenge for such adaptation we identify is the difficulty in attributing the AI generated music to the influential copyrighted content used in the model training data. In response, we develop algorithmic solutions using data attribution techniques to mitigate these challenges. Our experimental results demonstrate that the proposed solutions are reasonably effective.\nThe proposed approach represents an early effort to navigate the complex intersection of technological innovation and economic considerations in copyright law for generative AI. The complexity of the black-box generative AI technologies necessitates a computational copyright solution. This paper showcases a promising prototype towards this goal."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Exploring Digital Music Royalty Models Through Case Studies",
|
| 15 |
+
"text": "In this section, we examine the royalty models in the digital music industry through a couple of case studies. Please refer to Appendix A ###reference_### for fundamental concepts and a few major types of music royalties that are prevalent in the industry. In order to understand the intricacies of the implementation of royalty models and their applicability to AI music generation platforms, we delve into case studies of two major platforms: Spotify and YouTube. Spotify is the largest music streaming platform in the world while YouTube is the largest video sharing platform. Both platforms have a significant amount of music content and generate revenue through multiple sources. Furthermore, despite various existing criticisms on these royalty models Marshall (2015 ###reference_b23###); Trendacosta (2020 ###reference_b39###), they represent the status quo of how the current digital music market works. Therefore, designing royalty models for AI music generation platforms mimicking the ones for Spotify and YouTube would be a reasonable initial step in this area. In the following sections, we will examine the royalty models of these two platforms in detail."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Case Study: Spotify",
|
| 21 |
+
"text": "Spotify employs a centralized method for sharing its revenue with copyright owners, primarily via streaming royalties. The process involves determining Spotify\u2019s total revenue from various sources and subsequently calculating the royalty distribution for copyright owners.\nSpotify\u2019s royalty model involves several key groups of stakeholders, in addition to the streaming platform itself. These groups222Please refer to Appendix B ###reference_### for detailed description of these groups of stakeholders. are (1) artists and creators, (2) record labels and music publishers, (3) music rights societies and collecting agencies, (4) listeners and subscribers, and (5) advertisers.\nStakeholders in groups 1, 2, and 3 receive revenue shares from Spotify, while groups 4 and 5 contribute to the generation of Spotify\u2019s revenue. Typically, Spotify directly interacts with stakeholders in groups 2 and 3. Individual artists and creators often have contracts with these labels, publishers, or music rights agencies, and do not directly engage with Spotify in the financial aspect of their music streaming.\nThe major revenue sources of Spotify can be divided into two categories: subscription and advertisement. In 2021, premium subscriptions accounted for 88% of Spotify\u2019s revenue while advertisements accounted for the remaining 12% Johnston (2023 ###reference_b17###). The two revenue sources lead to the formation of separate revenue pools, which are also calculated separately for different countries or regions.\nSpotify employs a straightforward pro rata model to calculate the royalty distribution for each revenue pool. The royalty for each artist or track is calculated by applying their stream share to each revenue pool. This method ensures that royalty distribution is directly proportional to the popularity and streaming frequency of each artist\u2019s or track\u2019s work on the platform."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Case Study: YouTube",
|
| 27 |
+
"text": "YouTube\u2019s model for compensating music copyright owners is multifaceted, offering various methods for monetizing the content: (1) YouTube Partner Program: Music copyright owners can join the YouTube Partner Program, uploading music (videos) to their official channels. Revenue is shared based on user views of their content; (2) Content ID: Owners can earn from videos using their music through the Content ID system. This system uses fingerprinting and machine learning to identify copyrighted content in uploaded videos and allocates revenue from these videos to the copyright owners; (3) Licensing: Owners can also license their music directly to a YouTube video for a one-time payment.\nThe first method resembles Spotify\u2019s royalty model. The second and the third methods are different as they involve a third-party video creator.\nThe stakeholders involved in the first method above are similar to those in Spotify\u2019s royalty model. However, the second and third methods introduce additional parties: video creators and third-party licensing platforms. Video creators are the ones who upload videos (incorporating copyrighted music) to YouTube. Third-party licensing platforms are companies that help video creators obtain licenses for music used in their videos. These companies often have direct licensing agreements with YouTube and music rights owners, offering a streamlined process for video creators to legally use music in their videos.\nFor the first two methods, royalties come from YouTube\u2019s revenue streams. YouTube generates revenue primarily through advertisements and, to a lesser extent, through premium subscriptions. The advertisement model is diverse, including in-video ads, banner ads, and sponsored content. Premium subscriptions, offering ad-free viewing and other benefits, also contribute to YouTube\u2019s revenue.\nA crucial aspect of YouTube\u2019s royalty model is the challenge of attributing copyright ownership in the videos. Unlike Spotify, where content attribution is straightforward through stream counts, the incorporation of copyrighted music in the user-generated videos makes this task technically demanding. The Content ID system serves as the technical foundation that enables the second and third methods of YouTube\u2019s revenue sharing. In the second method, it identifies music in videos and allocates revenue to the copyright owners. In the third method, while synchronization licenses might be obtained through third-party licensing platforms outside of YouTube, the presence of the Content ID system encourages them to secure these licenses. Although the system has its share of flaws and has faced criticism Van der Sar (2021 ###reference_b41###); McKay (2011 ###reference_b25###); Trendacosta (2020 ###reference_b39###); Saadatpanah et al. (2020 ###reference_b31###), including issues with false positives and negatives, it is still broadly embraced by the music industry.\nA notable pattern of the royalty models on digital music/video platforms is that the payment is not directly made to each individual piece of music work333The licensing model on YouTube, where video creators directly buy synchronization licenses for individual music, is an exception. However, note that this model is also enabled by the existence of the Content ID system.. Rather, the platforms typically follow two main steps: 1) formation of revenue pools and 2) distribution based on access frequency."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Potential Royalty Models for AI Music Generation Platforms",
|
| 33 |
+
"text": "This section explores potential royalty models for AI music generation platforms. We start by understanding the business models of these platforms, which involves summarizing their services, identifying key stakeholders, and highlighting various revenue sources integral to their operations. This foundation sets the stage for discussing the design of proper royalty distribution mechanisms."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "The Business Models",
|
| 39 |
+
"text": "While the landscape of AI music generation is still rapidly evolving, there have been a few common business models emerging McFarland (2023 ###reference_b24###). We summarize these business models in terms of services, stakeholders, and revenue sources444Please refer to Appendix C ###reference_### for more detailed AI music generation platforms business models..\nThe backbone of AI music generation platforms is generative AI trained on a large corpus of existing music, which often includes copyrighted music. With the generative AI, the platforms offer a variety of services to meet different needs of end users. The potential stakeholders involved in AI music generation platforms have significant overlaps with those on traditional music platforms, as summarized in the five groups in Section 2.1 ###reference_###. The platforms have several different ways for generating revenues, such as subscription fees, licensing, advertisements and costom composition fees."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Potential Royalty Model Designs",
|
| 45 |
+
"text": "Given the similarity of the stakeholders and revenue sources between the AI music generation platforms and traditional music platforms, it is logical to consider adopting and adapting existing royalty models from platforms like Spotify and YouTube.\nParticularly, the business models for AI music generation platforms align with the pattern identified in the case study, where the revenue is channeled through the platform, rather than directly compensating for each individual piece of music.\nSimilar to Spotify and YouTube, these platforms would first accumulate revenue, forming distinct pools based on different criteria such as revenue sources (subscriptions, advertisements, licensing fees).\nThe revenue from each pool would then be distributed based on the frequency at which each copyrighted work included in the training corpus is accessed during the service.\nThe key question is how the copyrighted training content is \u201caccessed\u201d in the services provided by the platforms. Here, the music generated by a generative AI is influenced by the copyrighted works included in its training corpus. This scenario is analogous to YouTube, where copyrighted music is used as ingredients for new creations like videos or remixes. In the generative AI scenario, end users can be viewed to access the copyrighted training content indirectly through the generated music.\nRecalling YouTube\u2019s model, the first step involves calculating the frequency of video views. Subsequently, these views are attributed to the copyrighted music used in the videos. For AI music generation platforms, a similar method could be employed: first determining the usage frequency of the generated music and then attributing this usage back to the original copyrighted works that influenced the creation of this music.\nEnforcing such a royalty model presents an open technical challenge: accurately attributing the influence of original copyrighted works on the generated music. However, if this attribution challenge can be effectively addressed, the remaining elements of the royalty model can closely mirror those of YouTube\u2019s Content ID system.\nIn the following section, we propose an algorithmic solution to mitigate this challenge of attribution, aiming to create an effective royalty model for AI music generation platforms."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Attributing AI-Generated Music to Copyrighted Content",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Formulation of the Attribution Problem",
|
| 57 |
+
"text": "Attributing the influence of copyrighted training content on generated music essentially asks the question: \u201cTo what extent does each piece of music in the training corpus influence a specific piece of AI-generated music?\u201d The quantifiable definition of \u201cinfluence\u201d can potentially be subjective. We suggest two perspectives to define the \u201cinfluence\u201d, one inspired by the machine learning literature and the other comes from the domain knowledge of music.\nThe data attribution problem in machine learning refers to the task of identifying and quantifying the contribution of each data point in the training dataset to a given output of a machine learning model. Formally, this problem is often framed as follows: How does the removal of a particular data point from the training dataset and subsequent retraining of the model affect its output? This change in output serves as a measure of the removed data point\u2019s influence on that specific model output Koh and Liang (2017 ###reference_b18###). In the context of AI music generation, we can define the influence of a piece of training music on a piece of generated music in terms of the change in the likelihood of the model producing that generated music, assuming the model is retrained after removing the training music piece.\nThe second perspective considers the influence from a musical standpoint, focusing on how one musician\u2019s work might affect another\u2019s. Such influence spans multiple aspects, including musical styles (such as genres, rhythms, melodies, or harmonies), technical and instrumental methods (how a musician plays an instrument or sings), or thematic elements (such as themes, messages, or lyrical content).\nIn Section 4.2 ###reference_###, we introduce an algorithm designed to estimate influence from the data attribution perspective. Then, in Section 4.3 ###reference_###, we evaluate the proposed method using metrics from both perspectives, highlighting a potential synergy between these two viewpoints."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Data Attribution for Music Generation",
|
| 63 |
+
"text": "In this section, we first introduce formal definitions of AI music generation and the data attribution problem for music generation. Subsequently, we propose an algorithmic solution to quantitatively estimate the influence of each training piece on a particular piece of generated music.\nWe start by introducing the notations and the formal definitions of symbolic music generation. Symbolic music is characterized as a series of discrete events that collectively represent a segment of music. A music segment with events can be expressed as . Here, each represents a specific event from a vocabulary set that defines all the valid symbolic events.\nA symbolic music generation model, denoted as , takes a prompt music segment and calculates the subsequent event\u2019s conditional probability distribution , where indicates the length of prompt. Suppose the size of the vocabulary set is . The model can be represented as a classification neural network with output units correspond to a probability distribution over the vocabulary set. This type of model formulation is known as autoregressive model, which is one of the most popular model family for symbolic music.\nNow we formalize the data attribution problem for symbolic music generation. Suppose we have a training dataset with segments of music and a generation model trained on this dataset. For any piece of music segment and any model , we define a utility function that maps the music segment and the model to a real value. The influence of a training piece () on a new piece of AI-generated music , which is also called an attribution score, can then be defined as\nwhere is the model retrained on the dataset with removed. In practice, the utility function can be defined as the (log-)likelihood of music segment being generated by model . In this case, measures the change of likelihood for being generated when is removed from the training corpus.\nWe can define two instances of the data attribution problem, respectively event-level attribution and segment-level attribution. The event-level attribution corresponds to a special case where has a single event, i.e., . The segment-level attribution corresponds to the general case where has multiple events. The two instances provide different granularity of attribution scores. In an autoregressive symbolic music generation model, the music is generated event by event. Therefore, the training data points could have different influences when generating different events in a segment. The event-level attribution provides a way to capture this nuance. On the other hand, the segment-level attribution looks at the influence of training data on a larger scale, focusing on the overall structure and composition of a generated music segment.\nDirectly calculating requires retraining a model for for each training data point , which is computationally prohibitive. Fortunately, there has been a rich literature on efficient data attribution methods Hammoudeh and Lowd (2022 ###reference_b12###), primarily designed for classification models. Furthermore, we shall see that these methods can be easily adapted to the autoregressive generative models like Music Transformer.\nIn particular, since generating one event can be viewed as a classification problem, we can directly apply existing data attribution methods for classification models to event-level attribution. For the segment-level attribution, when the utility function is defined as the log-likelihood, i.e., . 
We observe that, assuming for some , then by Bayes\u2019 rule,\nwhich is exactly the sum of all the event-level attribution scores. Therefore, we can apply any data attribution method that can attribute the log-likelihood of a classification model in the segment-level attribution when using log-likelihood as the utility function.\nSeveral off-the-shelf data attribution methods can be applied to estimate the attribution scores for the autoregressive symbolic music generation models. We denote the estimated attribution score for on as ."
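To make the definition of the attribution score concrete, a brute-force leave-one-out sketch in Python (the `train` and `next_event_prob` callables are assumed stand-ins for the model training and scoring routines, not an implementation from the paper; the point above is precisely that this retraining loop is prohibitive at scale, which is why TracIN/TRAK-style approximations are used instead):

    import math

    def segment_log_likelihood(model, segment, prompt_len=0):
        # log p(segment) decomposes into a sum of per-event conditional
        # log-probs, so the segment-level score is the sum of event-level ones.
        return sum(math.log(model.next_event_prob(segment[:t], segment[t]))
                   for t in range(prompt_len, len(segment)))

    def loo_attribution(train_set, x, train, prompt_len=0):
        """Ground-truth influence of each training piece on generated music x."""
        base = segment_log_likelihood(train(train_set), x, prompt_len)
        scores = []
        for i in range(len(train_set)):
            ablated = train(train_set[:i] + train_set[i + 1:])  # drop piece i
            scores.append(base - segment_log_likelihood(ablated, x, prompt_len))
        return scores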
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.1",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "4.2.1 AI Music Generation",
|
| 69 |
+
"text": "In the field of AI music generation, there are two major paradigms: waveform music generation and symbolic music generation Manzelli et al. (2018 ###reference_b22###). Waveform music generation involves the direct synthesis of a music\u2019s waveform, with examples including WaveNet Oord et al. (2016 ###reference_b27###). Symbolic music generation involves creating music in a symbolic format, such as the Musical Instrument Digital Interface (MIDI) format. This paper focuses on symbolic music generation.\nWe start by introducing the notations and the formal definitions of symbolic music generation. Symbolic music is characterized as a series of discrete events that collectively represent a segment of music. A music segment with events can be expressed as . Here, each represents a specific event from a vocabulary set that defines all the valid symbolic events.\nA symbolic music generation model, denoted as , takes a prompt music segment and calculates the subsequent event\u2019s conditional probability distribution , where indicates the length of prompt. Suppose the size of the vocabulary set is . The model can be represented as a classification neural network with output units correspond to a probability distribution over the vocabulary set. This type of model formulation is known as autoregressive model, which is one of the most popular model family for symbolic music."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.2",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "4.2.2 Data Attribution for Symbolic Music Generation",
|
| 75 |
+
"text": "Now we formalize the data attribution problem for symbolic music generation. Suppose we have a training dataset with segments of music and a generation model trained on this dataset. For any piece of music segment and any model , we define a utility function that maps the music segment and the model to a real value. The influence of a training piece () on a new piece of AI-generated music , which is also called an attribution score, can then be defined as\nwhere is the model retrained on the dataset with removed. In practice, the utility function can be defined as the (log-)likelihood of music segment being generated by model . In this case, measures the change of likelihood for being generated when is removed from the training corpus.\nWe can define two instances of the data attribution problem, respectively event-level attribution and segment-level attribution. The event-level attribution corresponds to a special case where has a single event, i.e., . The segment-level attribution corresponds to the general case where has multiple events. The two instances provide different granularity of attribution scores. In an autoregressive symbolic music generation model, the music is generated event by event. Therefore, the training data points could have different influences when generating different events in a segment. The event-level attribution provides a way to capture this nuance. On the other hand, the segment-level attribution looks at the influence of training data on a larger scale, focusing on the overall structure and composition of a generated music segment.\nDirectly calculating requires retraining a model for for each training data point , which is computationally prohibitive. Fortunately, there has been a rich literature on efficient data attribution methods Hammoudeh and Lowd (2022 ###reference_b12### ###reference_b12###), primarily designed for classification models. Furthermore, we shall see that these methods can be easily adapted to the autoregressive generative models like Music Transformer.\nIn particular, since generating one event can be viewed as a classification problem, we can directly apply existing data attribution methods for classification models to event-level attribution. For the segment-level attribution, when the utility function is defined as the log-likelihood, i.e., . We observe that, assuming for some , then by Bayes\u2019 rule,\nwhich is exactly the sum of all the event-level attribution scores. Therefore, we can apply any data attribution method that can attribute the log-likelihood of a classification model in the segment-level attribution when using log-likelihood as the utility function.\nSeveral off-the-shelf data attribution methods can be applied to estimate the attribution scores for the autoregressive symbolic music generation models. We denote the estimated attribution score for on as ."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Experimental Evaluation",
|
| 81 |
+
"text": "We conduct experiments that extend two data attribution methods, TracIN Pruthi et al. (2020 ###reference_b30###) and TRAK Park et al. (2023 ###reference_b28###), for the Music Transformer model Huang et al. (2018 ###reference_b15###), evaluated on the MAESTRO dataset Hawthorne et al. (2019 ###reference_b13###)555Please refer to Appendix D ###reference_### for detailed experimental setup..\nTo evaluate the estimated attribution scores from the data attribution perspective, we can compare them with the ground truth attribution scores defined in Eq. (1 ###reference_###). In the data attribution literature Koh and Liang (2017 ###reference_b18###); Ilyas et al. (2022 ###reference_b16###), the comparison is typically measured by Spearman\u2019s rank correlation. Formally, for a training dataset with data points, one will calculate the rank correlation between and .\nHowever, calculating involves retraining a model for removing each data point, which becomes computationally impractical on large datasets. Following Ilyas et al. (2022 ###reference_b16###), we adopt an approximated version of this rank correlation metric. Instead of retraining for removing each data point, we randomly select a set of subsets of the training dataset, , and retrain a model on for each . Slightly overloading the notation, we define a subset attribution score as . Correspondingly, we use the summation of the estimated attribution scores on each subset as the estimated attribution score for that whole subset, i.e., . Then we can calculate a rank correlation between and .\nFor the musical influence perspective, there are multiple aspects mentioned in Section 4.1 ###reference_###. In our study, we focus on the similarity of musical styles. A common approach to quantitatively evaluate musical style similarity is by extracting features from the music Slaney et al. (2008 ###reference_b35###). In this study, we identify three features used in Spotify API666https://developer.spotify.com/documentation/web-api/ ###reference_n/web-api/### to characterize a piece of music. Loudness measures the overall velocity of a music segment. We define it as the average velocity of events within the segment. Key measures the average pitch height of all events in the music segment. Duration measures the total length of the music segment in time, calculated as the sum of the time deltas of all events.\nWe extract these features from both the generated music and the training samples. Then we can evaluate the attribution methods by investigating if the most influential training music pieces are more similar to the generated music in terms of musical styles. Formally, for each musical style feature, we calculate the Pearson correlation over pairs of generated music and training music pieces.\nWe form the set with 100 random subsets, each contains 50% of the training samples. We calculate the rank correlations on 178 generated music and report the average rank correlations for different data attribution methods in Table 1 ###reference_###.\nIn comparison to the random baseline, both attribution methods have achieved significantly positive correlations with the ground-truth scores at the event level, and TRAK also works well at the segment level. This indicates that there exist computationally feasible solutions that can reasonably attribute the generated music to the copyrighted training content, thus solving the key technical bottleneck for establishing a royalty model. 
In addition, we observe that event-level attribution seems to be easier than segment-level attribution. This leads to an interesting question about the proper granularity of attributing generated music, which we leave for future exploration. For the rest of the paper, we conduct all the experiments with the TRAK-based attribution method.\nFor each generated music, we order and group the training music pieces by their attribution scores. Figure 1 ###reference_### shows the results of musical similarity correlation between the generated music and training music groups. We observe a clearly decreasing trend of musical similarity for training music group with lower attribution scores. This suggests that the data attribution methods also capture some influence in terms of musical styles (see Appendix E ###reference_### for detailed discussion).\n###figure_1### ###figure_2### ###figure_3###"
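As an illustration of the two evaluation metrics, a small sketch (the event tuple layout `(pitch, velocity, time_delta)` is an assumed encoding of the MIDI-like events, not necessarily the paper's exact representation):

    import numpy as np
    from scipy.stats import spearmanr

    def style_features(events):
        """events: list of (pitch, velocity, time_delta) tuples for a segment."""
        pitch, velocity, dt = (np.asarray(v, dtype=float) for v in zip(*events))
        return {"loudness": velocity.mean(),  # average velocity
                "key": pitch.mean(),          # average pitch height
                "duration": dt.sum()}         # total length in time

    def retraining_rank_corr(true_subset_scores, est_subset_scores):
        """Spearman rank correlation between ground-truth subset scores
        (from retraining) and the summed estimated scores per subset."""
        return spearmanr(true_subset_scores, est_subset_scores).correlation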
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.3.1",
|
| 85 |
+
"parent_section_id": "4.3",
|
| 86 |
+
"section_name": "4.3.1 Evaluation Metrics",
|
| 87 |
+
"text": "We introduce evaluation metrics formalizing the two perspectives of influence in Section 4.1 ###reference_###.\nTo evaluate the estimated attribution scores from the data attribution perspective, we can compare them with the ground truth attribution scores defined in Eq. (1 ###reference_### ###reference_###). In the data attribution literature Koh and Liang (2017 ###reference_b18### ###reference_b18###); Ilyas et al. (2022 ###reference_b16### ###reference_b16###), the comparison is typically measured by Spearman\u2019s rank correlation. Formally, for a training dataset with data points, one will calculate the rank correlation between and .\nHowever, calculating involves retraining a model for removing each data point, which becomes computationally impractical on large datasets. Following Ilyas et al. (2022 ###reference_b16### ###reference_b16###), we adopt an approximated version of this rank correlation metric. Instead of retraining for removing each data point, we randomly select a set of subsets of the training dataset, , and retrain a model on for each . Slightly overloading the notation, we define a subset attribution score as . Correspondingly, we use the summation of the estimated attribution scores on each subset as the estimated attribution score for that whole subset, i.e., . Then we can calculate a rank correlation between and .\nFor the musical influence perspective, there are multiple aspects mentioned in Section 4.1 ###reference_### ###reference_###. In our study, we focus on the similarity of musical styles. A common approach to quantitatively evaluate musical style similarity is by extracting features from the music Slaney et al. (2008 ###reference_b35### ###reference_b35###). In this study, we identify three features used in Spotify API666https://developer.spotify.com/documentation/web-api/ ###reference_n/web-api/### ###reference_n/web-api/### to characterize a piece of music. Loudness measures the overall velocity of a music segment. We define it as the average velocity of events within the segment. Key measures the average pitch height of all events in the music segment. Duration measures the total length of the music segment in time, calculated as the sum of the time deltas of all events.\nWe extract these features from both the generated music and the training samples. Then we can evaluate the attribution methods by investigating if the most influential training music pieces are more similar to the generated music in terms of musical styles. Formally, for each musical style feature, we calculate the Pearson correlation over pairs of generated music and training music pieces."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.3.2",
|
| 91 |
+
"parent_section_id": "4.3",
|
| 92 |
+
"section_name": "4.3.2 Experimental Results",
|
| 93 |
+
"text": "We form the set with 100 random subsets, each contains 50% of the training samples. We calculate the rank correlations on 178 generated music and report the average rank correlations for different data attribution methods in Table 1 ###reference_### ###reference_###.\nIn comparison to the random baseline, both attribution methods have achieved significantly positive correlations with the ground-truth scores at the event level, and TRAK also works well at the segment level. This indicates that there exist computationally feasible solutions that can reasonably attribute the generated music to the copyrighted training content, thus solving the key technical bottleneck for establishing a royalty model. In addition, we observe that event-level attribution seems to be easier than segment-level attribution. This leads to an interesting question about the proper granularity of attributing generated music, which we leave for future exploration. For the rest of the paper, we conduct all the experiments with the TRAK-based attribution method.\nFor each generated music, we order and group the training music pieces by their attribution scores. Figure 1 ###reference_### ###reference_### shows the results of musical similarity correlation between the generated music and training music groups. We observe a clearly decreasing trend of musical similarity for training music group with lower attribution scores. This suggests that the data attribution methods also capture some influence in terms of musical styles (see Appendix E ###reference_### ###reference_### for detailed discussion).\n###figure_4### ###figure_5### ###figure_6###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Robustness of Attribution Scores",
|
| 99 |
+
"text": "In this section, we analyze the robustness of the TRAK attribution scores for music generative AI, which is crucial for establishing reliable royalty distribution.\nIn Section 5.1 ###reference_###, we examine the robustness against the randomness inherently existing in the data attribution process, which we term as stochastic robustness. In Section 5.2 ###reference_###, we further investigate the adversarial robustness of the attribution scores against malicious actors that seek to adversarially manipulate the attribution scores."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.1",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Stochastic Robustness",
|
| 105 |
+
"text": "Data attribution methods for deep neural networks naturally come with randomness due to, e.g., model initialization and training dynamics S\u00f8gaard et al. (2021 ###reference_b36###); Nguyen et al. (2023 ###reference_b26###).\nIn this subsection, we examine the stability of the attribution scores against such natural randomness. Specifically, we run the data attribution method multiple times with independent model initialization and training processes. Then we carry out Student\u2019s t-test on the score of each training-generation pair (with the null hypothesis that the score equals to zero) as a way to quantify the stochastic robustness of the attribution scores.\n###figure_7### ###figure_8### Figure 2(a) ###reference_sf1### shows the histogram of the p-values for t-tests on the training-generation pairs. We find that only a small portion of the pairs have p-values smaller than 0.05777Rigorously claiming statistical significance requires false-discovery-rate control for the multiple hypothesis testing. But we are only using the distribution of p-values to measure the stability of attribution scores, and do not intend to claim statistical significance for individual scores.. Furthermore, we group the training-generation pairs by the relative rankings of the average attribution scores for these pairs, and Figure 2(b) ###reference_sf2### shows the boxplots of p-values for each group: the p-value is correlated with the rankings of the attribution scores. This result suggests that while the data attribution scores are not always stable, the ones with the top attribution scores tend to be reliable.\nThis result has an implication on the royalty mechanism design: the revenue of a generated music should be distributed to the training pieces with top attribution scores."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.2",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "Adversarial Robustness",
|
| 111 |
+
"text": "As with any system where financial interests are involved, there might be malicious actors who seek to increase their royalty shares by adversarially manipulating their contributed data.\nIn this section, we evaluate the adversarial robustness of the TRAK attribution scores under two potential adversarial attack methods.\nDuplicating a training sample multiple times is an intuitive way to increase its total attribution. In the experiment, we add multiple duplicate copies of a training sample into the dataset and then recalculate the attribution scores. Table 2 ###reference_### shows the total attribution score of the duplicated training samples over 178 generated music. We find that having duplications of a training sample in fact mostly decreases the total attribution score of this training sample and its duplications, indicating the attribution scores are robust against this type of attack (see Appendix F ###reference_### for explanations).\nReplacing part of one\u2019s music with a segment from a highly influential training sample is another viable method to increase the attribution score of the altered music. In our experiment, we copy a segment from a training sample (source) that has top attribution scores and replace a segment of another training sample (target) with it. Table 3 ###reference_### presents how many times the modified target appears in the top-50 attribution scores among 178 generated music. It indicates that the attack can be effective even with a relatively small number of copied events.\nThe results of adversarial robustness demonstrate that the current best attribution method, TRAK, is robust to certain adversarial attacks. However, some attacks may still be successful if additional information, such as the attribution scores of other samples, is available. Enhancing the adversarial robustness of attribution scores represents a crucial direction for future research888Please refer to Appendix F ###reference_### for detailed experimental setup.."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Related Work",
|
| 117 |
+
"text": "In this section, we discuss how the computational copyright solution and AI music royalty model in this paper are connected to prior works. These works can fall into three categories: (1) law, (2) economy, and (3) technology."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "7",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "Conclusion and Discussion",
|
| 123 |
+
"text": "In conclusion, this paper has explored the intricate landscape of copyright challenges posed by generative AI, with a particular emphasis on the music industry. We have highlighted the pressing need for computational copyright solutions to manage the economic implications of AI-generated content, which could be crucial in tackling regulatory challenges.\nOur case studies of existing digital music platforms have set the stage for discussing potential royalty models applicable to AI music generation platforms. Along the way, we have addressed the challenge of attributing the generated music to the copyrighted training content by leveraging data attribution techniques. This study offers a promising prototype of a computational copyright solution in the field of generative AI.\nFurthermore, we believe the proposed economic solution has the potential to extend beyond the scope of copyright law, addressing the broader labor displacement issues led by advancements in modern AI. If these issues are not adequately addressed, they may threaten the financial stability of creative professionals or lead to increased unemployment in the near future. In the long run, this could \u201ckill the goose\u201d of the creative industries, as there is no solid evidence that AI improvement can be entirely driven by synthetic data. Devaluing human labor may ultimately harm future AI development. In light of these challenges, it is imperative to go beyond existing legal and economic doctrines and develop new paradigms of wealth distribution and labor valuation. By proactively creating frameworks that ensure equitable compensation and recognize the indispensable role of human creativity, we can foster a sustainable ecosystem where both AI and human talent thrive. Such innovative approaches will not only mitigate the risks of labor displacement but also drive future growth and innovation in the creative industries."
|
| 124 |
+
}
|
| 125 |
+
],
|
| 126 |
+
"appendix": [
|
| 127 |
+
{
|
| 128 |
+
"section_id": "Appendix 1",
|
| 129 |
+
"parent_section_id": null,
|
| 130 |
+
"section_name": "Appendix A A Primer on the Concepts of Music Royalties",
|
| 131 |
+
"text": "It is essential to familiarize ourselves with the fundamental concepts and a few major types of music royalties that are prevalent in the industry."
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"section_id": "Appendix 2",
|
| 135 |
+
"parent_section_id": null,
|
| 136 |
+
"section_name": "Appendix B Detailed Stakeholder Description",
|
| 137 |
+
"text": "Artists and Creators: Musicians, songwriters, and producers who create the content streamed on Spotify.\nRecord Labels and Music Publishers: Organizations that own the copyrights to music recordings and compositions.\nMusic Rights Societies and Collecting Agencies: Organizations responsible for collecting royalties and distributing them to copyrights owners.\nListeners and Subscribers: The end-users whose subscription fees and advertising views generate revenue.\nAdvertisers: Companies that pay Spotify to advertise on its free-tier platform."
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"section_id": "Appendix 3",
|
| 141 |
+
"parent_section_id": null,
|
| 142 |
+
"section_name": "Appendix C AI Music Generation Platforms Business Models",
|
| 143 |
+
"text": ""
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 4",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix D More Details of Experimental Setup",
|
| 149 |
+
"text": ""
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 5",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix E Discussion on Musical Similarity and Attribution Scores",
|
| 155 |
+
"text": "Our experiments reveal a synergy between musical similarity and attribution scores, supporting the intuitive belief that training music pieces more akin to generated ones should receive greater attribution. This section discusses the distinctions between musical similarity and attribution scores, illustrating why relying solely on similarity assessments might be inadequate for addressing the challenge of attribution.\nInitially, the concept of musical similarity is model-independent, indicating that it consistently assigns similar attributes regardless of the model used. This characteristic is at odds with the fact that different models, even when trained on the same dataset, can exhibit varied behaviors. Furthermore, musical similarity fails to account for the \u201cinteractions\u201d within the training dataset. For instance, the presence of multiple similar music pieces in the dataset can influence the contribution of each piece."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"section_id": "Appendix 6",
|
| 159 |
+
"parent_section_id": null,
|
| 160 |
+
"section_name": "Appendix F Experiment Settings and Further discussion for Robustness",
|
| 161 |
+
"text": ""
|
| 162 |
+
}
|
| 163 |
+
],
|
| 164 |
+
"tables": {
|
| 165 |
+
"1": {
|
| 166 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.6\" style=\"width:260.2pt;height:36.8pt;vertical-align:-1.3pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-68.0pt,9.3pt) scale(0.656636823386474,0.656636823386474) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.7.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.7.1.2\">Random</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.7.1.3\">TracIN</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.7.1.4\">TRAK</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.3.3.3.4\">Segment-level</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.1.1\">0.00910.007</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.2.2.2\">-0.036 0.031</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.3.3.1\">0.3010.007</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.6.6.6.4\">Event-level</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.4.4.4.1\">-0.00040.008</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.5.5.5.2\">0.1270.008</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.6.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.6.3.1\">0.3590.010</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The average retraining rank correlation among 178 generated music for different data attribution methods. The error bar represent the standard error of the mean. \u201cRandom\u201d refers to a baseline that employs random attribution scores. \u201cSegment-level\u201d and \u201cEvent-level\u201d refer to the two levels of attribution discussed in Section\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.06646v4#S4.SS2.SSS1\" title=\"4.2.1 AI Music Generation \u2023 4.2 Data Attribution for Music Generation \u2023 4 Attributing AI-Generated Music to Copyrighted Content \u2023 Computational Copyright: Towards A Royalty Model for Music Generative AI\"><span class=\"ltx_text ltx_ref_tag\">4.2.1</span></a>.</figcaption>\n</figure>",
|
| 167 |
+
"capture": "Table 1: The average retraining rank correlation among 178 generated music for different data attribution methods. The error bar represent the standard error of the mean. \u201cRandom\u201d refers to a baseline that employs random attribution scores. \u201cSegment-level\u201d and \u201cEvent-level\u201d refer to the two levels of attribution discussed in Section\u00a04.2.1."
|
| 168 |
+
},
|
| 169 |
+
"2": {
|
| 170 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.1\" style=\"width:303.5pt;height:61.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-27.2pt,5.5pt) scale(0.84825023034198,0.84825023034198) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.1.1\">Duplicate Copies</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.2\">0 (original)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.3\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.4\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.5\">5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.6\">10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1.1.7\">100</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.2.1.1\">Sample 1</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.2\">-4.187</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.3\">0.410</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.4\">-0.054</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.5\">-0.007</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.6\">0.115</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T2.1.1.2.1.7\">0.014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.1.3.2.1\">Sample 2</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.2\">0.188</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.3\">-0.308</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.4\">0.132</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.5\">-0.004</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.6\">-0.016</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T2.1.1.3.2.7\">-0.009</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T2.1.1.4.3.1\">Sample 3</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.2\">5.128</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.3\">-0.112</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.4\">-0.058</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.5\">-0.054</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.6\">0.005</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T2.1.1.4.3.7\">0.004</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The total attribution score of the duplicated sample over 178 generated music. Three training music sequences are chosen to be duplicated independently. 
Experiments with 1, 2, 5, 10, and 100 extra copies are conducted to compare against the baseline scenario of 0 copies, representing the original setting.</figcaption>\n</figure>",
|
| 171 |
+
"capture": "Table 2: The total attribution score of the duplicated sample over 178 generated music. Three training music sequences are chosen to be duplicated independently. Experiments with 1, 2, 5, 10, and 100 extra copies are conducted to compare against the baseline scenario of 0 copies, representing the original setting."
|
| 172 |
+
},
|
| 173 |
+
"3": {
|
| 174 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.1\" style=\"width:260.2pt;height:48.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-14.0pt,2.6pt) scale(0.902860455181158,0.902860455181158) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.1.1\">Copied Events</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.2\">0 (original)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.3\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.4\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.5\">4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.6\">8</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.7\">16</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.1.1.1.1.8\">32</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.2.1.1\">Source 0</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.2\">0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.3\">17</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.4\">25</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.5\">24</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.6\">41</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.7\">23</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T3.1.1.2.1.8\">33</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S5.T3.1.1.3.2.1\">Source 1</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.2\">0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.3\">19</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.4\">18</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.5\">22</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.6\">21</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.7\">26</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T3.1.1.3.2.8\">26</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>The number of times that modified target appears in the top-50 attribution scores among 178 generated music (the higher the more influential). Two different music sequences with high attribution scores are selected independently as sources. Experiments on 1, 2, 4, 8, 16, and 32 copied events are conducted to compare against the baseline scenario of 0 events, representing the original setting.</figcaption>\n</figure>",
|
| 175 |
+
"capture": "Table 3: The number of times that modified target appears in the top-50 attribution scores among 178 generated music (the higher the more influential). Two different music sequences with high attribution scores are selected independently as sources. Experiments on 1, 2, 4, 8, 16, and 32 copied events are conducted to compare against the baseline scenario of 0 events, representing the original setting."
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
"image_paths": {
|
| 179 |
+
"1(a)": {
|
| 180 |
+
"figure_path": "2312.06646v4_figure_1(a).png",
|
| 181 |
+
"caption": "Figure 1: Musical similarity in terms of loudness, key, and duration. The x-axis represents groups of training music pieces with decreasing TRAK attribution scores.",
|
| 182 |
+
"url": "http://arxiv.org/html/2312.06646v4/x1.png"
|
| 183 |
+
},
|
| 184 |
+
"1(b)": {
|
| 185 |
+
"figure_path": "2312.06646v4_figure_1(b).png",
|
| 186 |
+
"caption": "Figure 1: Musical similarity in terms of loudness, key, and duration. The x-axis represents groups of training music pieces with decreasing TRAK attribution scores.",
|
| 187 |
+
"url": "http://arxiv.org/html/2312.06646v4/x2.png"
|
| 188 |
+
},
|
| 189 |
+
"1(c)": {
|
| 190 |
+
"figure_path": "2312.06646v4_figure_1(c).png",
|
| 191 |
+
"caption": "Figure 1: Musical similarity in terms of loudness, key, and duration. The x-axis represents groups of training music pieces with decreasing TRAK attribution scores.",
|
| 192 |
+
"url": "http://arxiv.org/html/2312.06646v4/x3.png"
|
| 193 |
+
},
|
| 194 |
+
"2(a)": {
|
| 195 |
+
"figure_path": "2312.06646v4_figure_2(a).png",
|
| 196 |
+
"caption": "(a) Histogram of the p-values for t-tests on attribution scores.\nFigure 2: P-values for t-tests on attribution scores.",
|
| 197 |
+
"url": "http://arxiv.org/html/2312.06646v4/x4.png"
|
| 198 |
+
},
|
| 199 |
+
"2(b)": {
|
| 200 |
+
"figure_path": "2312.06646v4_figure_2(b).png",
|
| 201 |
+
"caption": "(b) P-value v.s. the ranking group of attribution scores.\nFigure 2: P-values for t-tests on attribution scores.",
|
| 202 |
+
"url": "http://arxiv.org/html/2312.06646v4/x5.png"
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
"validation": true,
|
| 206 |
+
"references": [
|
| 207 |
+
{
|
| 208 |
+
"1": {
|
| 209 |
+
"title": "https://www.create.ac.uk/blog/2021/06/11/21-for-2021-copyright-re-use-and-digital-business-models/, 2021.",
|
| 210 |
+
"author": "21 for 2021: Copyright, re-use and digital business models.",
|
| 211 |
+
"venue": "Accessed: 2024-01-31.",
|
| 212 |
+
"url": null
|
| 213 |
+
}
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"2": {
|
| 217 |
+
"title": "U.S. District Court, Southern District of New York, 2023.",
|
| 218 |
+
"author": "New york times co. v. microsoft corp et al.",
|
| 219 |
+
"venue": "No. 23-11195.",
|
| 220 |
+
"url": null
|
| 221 |
+
}
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"3": {
|
| 225 |
+
"title": "How to protect copyright data in optimization of large language models?",
|
| 226 |
+
"author": "Timothy Chu, Zhao Song, and Chiwun Yang.",
|
| 227 |
+
"venue": "arXiv preprint arXiv:2308.12247, 2023.",
|
| 228 |
+
"url": null
|
| 229 |
+
}
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"4": {
|
| 233 |
+
"title": "Reflections on the financial and ethical implications of music generated by artificial intelligence.",
|
| 234 |
+
"author": "Martin Clancy.",
|
| 235 |
+
"venue": "PhD thesis, PhD Thesis. Trinity College, Dublin, 2021.",
|
| 236 |
+
"url": null
|
| 237 |
+
}
|
| 238 |
+
},
|
| 239 |
+
{
|
| 240 |
+
"5": {
|
| 241 |
+
"title": "Understanding midi: A painless tutorial on midi format.",
|
| 242 |
+
"author": "H\u00e9lio Magalh\u00e3es de Oliveira and RC de Oliveira.",
|
| 243 |
+
"venue": "arXiv preprint arXiv:1705.05322, 2017.",
|
| 244 |
+
"url": null
|
| 245 |
+
}
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"6": {
|
| 249 |
+
"title": "Copyright, compensation, and commons in the music ai industry.",
|
| 250 |
+
"author": "Eric Drott.",
|
| 251 |
+
"venue": "Creative Industries Journal, 14(2):190\u2013207, 2021.",
|
| 252 |
+
"url": null
|
| 253 |
+
}
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"7": {
|
| 257 |
+
"title": "Can copyright be reduced to privacy?",
|
| 258 |
+
"author": "Niva Elkin-Koren, Uri Hacohen, Roi Livni, and Shay Moran.",
|
| 259 |
+
"venue": "arXiv preprint arXiv:2305.14822, 2023.",
|
| 260 |
+
"url": null
|
| 261 |
+
}
|
| 262 |
+
},
|
| 263 |
+
{
|
| 264 |
+
"8": {
|
| 265 |
+
"title": "Copyright in generative deep learning.",
|
| 266 |
+
"author": "Giorgio Franceschelli and Mirco Musolesi.",
|
| 267 |
+
"venue": "Data & Policy, 4:e17, 2022.",
|
| 268 |
+
"url": null
|
| 269 |
+
}
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"9": {
|
| 273 |
+
"title": "Remix rights and negotiations over the use of copy-protected works.",
|
| 274 |
+
"author": "Joshua S Gans.",
|
| 275 |
+
"venue": "International Journal of Industrial Organization, 41:76\u201383, 2015.",
|
| 276 |
+
"url": null
|
| 277 |
+
}
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"10": {
|
| 281 |
+
"title": "Copyright policy options for generative artificial intelligence.",
|
| 282 |
+
"author": "Joshua S. Gans.",
|
| 283 |
+
"venue": "2024.",
|
| 284 |
+
"url": null
|
| 285 |
+
}
|
| 286 |
+
},
|
| 287 |
+
{
|
| 288 |
+
"11": {
|
| 289 |
+
"title": "Data shapley: Equitable valuation of data for machine learning.",
|
| 290 |
+
"author": "Amirata Ghorbani and James Zou.",
|
| 291 |
+
"venue": "In International conference on machine learning, pages 2242\u20132251. PMLR, 2019.",
|
| 292 |
+
"url": null
|
| 293 |
+
}
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"12": {
|
| 297 |
+
"title": "Training data influence analysis and estimation: A survey.",
|
| 298 |
+
"author": "Zayd Hammoudeh and Daniel Lowd.",
|
| 299 |
+
"venue": "arXiv preprint arXiv:2212.04612, 2022.",
|
| 300 |
+
"url": null
|
| 301 |
+
}
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"13": {
|
| 305 |
+
"title": "Enabling factorized piano music modeling and generation with the MAESTRO dataset.",
|
| 306 |
+
"author": "Curtis Hawthorne, Andriy Stasyuk, Adam Roberts, Ian Simon, Cheng-Zhi Anna Huang, Sander Dieleman, Erich Elsen, Jesse Engel, and Douglas Eck.",
|
| 307 |
+
"venue": "In International Conference on Learning Representations, 2019.",
|
| 308 |
+
"url": null
|
| 309 |
+
}
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"14": {
|
| 313 |
+
"title": "Foundation models and fair use.",
|
| 314 |
+
"author": "Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A Lemley, and Percy Liang.",
|
| 315 |
+
"venue": "arXiv preprint arXiv:2303.15715, 2023.",
|
| 316 |
+
"url": null
|
| 317 |
+
}
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"15": {
|
| 321 |
+
"title": "Music transformer, 2018.",
|
| 322 |
+
"author": "Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck.",
|
| 323 |
+
"venue": null,
|
| 324 |
+
"url": null
|
| 325 |
+
}
|
| 326 |
+
},
|
| 327 |
+
{
|
| 328 |
+
"16": {
|
| 329 |
+
"title": "Datamodels: Predicting predictions from training data, 2022.",
|
| 330 |
+
"author": "Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry.",
|
| 331 |
+
"venue": null,
|
| 332 |
+
"url": null
|
| 333 |
+
}
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"17": {
|
| 337 |
+
"title": "How spotify makes money.",
|
| 338 |
+
"author": "Matthew Johnston.",
|
| 339 |
+
"venue": "Investopedia, 2023.",
|
| 340 |
+
"url": null
|
| 341 |
+
}
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"18": {
|
| 345 |
+
"title": "Understanding black-box predictions via influence functions.",
|
| 346 |
+
"author": "Pang Wei Koh and Percy Liang.",
|
| 347 |
+
"venue": "In International conference on machine learning, pages 1885\u20131894. PMLR, 2017.",
|
| 348 |
+
"url": null
|
| 349 |
+
}
|
| 350 |
+
},
|
| 351 |
+
{
|
| 352 |
+
"19": {
|
| 353 |
+
"title": "Talkin\u201dbout ai generation: Copyright and the generative-ai supply chain.",
|
| 354 |
+
"author": "Katherine Lee, A Feder Cooper, and James Grimmelmann.",
|
| 355 |
+
"venue": "arXiv preprint arXiv:2309.08133, 2023.",
|
| 356 |
+
"url": null
|
| 357 |
+
}
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"20": {
|
| 361 |
+
"title": "Mitigate replication and copying in diffusion models with generalized caption and dual fusion enhancement.",
|
| 362 |
+
"author": "Chenghao Li, Dake Chen, Yuke Zhang, and Peter A Beerel.",
|
| 363 |
+
"venue": "arXiv preprint arXiv:2309.07254, 2023.",
|
| 364 |
+
"url": null
|
| 365 |
+
}
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"21": {
|
| 369 |
+
"title": "Generative ai and copyright: principles, priorities and practicalities, 2023.",
|
| 370 |
+
"author": "Daryl Lim.",
|
| 371 |
+
"venue": null,
|
| 372 |
+
"url": null
|
| 373 |
+
}
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"22": {
|
| 377 |
+
"title": "An end to end model for automatic music generation: Combining deep raw and symbolic audio networks.",
|
| 378 |
+
"author": "Rachel Manzelli, Vijay Thakkar, Ali Siahkamari, and Brian Kulis.",
|
| 379 |
+
"venue": "In Proceedings of the musical metacreation workshop at 9th international conference on computational creativity, Salamanca, Spain, 2018.",
|
| 380 |
+
"url": null
|
| 381 |
+
}
|
| 382 |
+
},
|
| 383 |
+
{
|
| 384 |
+
"23": {
|
| 385 |
+
"title": "\u2019let\u2019s keep music special. f\u2014spotify\u2019: on-demand streaming and the controversy over artist royalties.",
|
| 386 |
+
"author": "Lee Marshall.",
|
| 387 |
+
"venue": "Creative Industries Journal, 8(2):177\u2013189, 2015.",
|
| 388 |
+
"url": null
|
| 389 |
+
}
|
| 390 |
+
},
|
| 391 |
+
{
|
| 392 |
+
"24": {
|
| 393 |
+
"title": "9 best ai music generators (december 2023).",
|
| 394 |
+
"author": "Alex McFarland.",
|
| 395 |
+
"venue": "https://www.unite.ai/best-ai-music-generators/, 2023.",
|
| 396 |
+
"url": null
|
| 397 |
+
}
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"25": {
|
| 401 |
+
"title": "Youtube copyfraud and abuse of the content id system.",
|
| 402 |
+
"author": "Patrick McKay.",
|
| 403 |
+
"venue": "http://fairusetube.org/youtube-copyfraud, 2011.",
|
| 404 |
+
"url": null
|
| 405 |
+
}
|
| 406 |
+
},
|
| 407 |
+
{
|
| 408 |
+
"26": {
|
| 409 |
+
"title": "A bayesian perspective on training data attribution.",
|
| 410 |
+
"author": "Elisa Nguyen, Minjoon Seo, and Seong Joon Oh.",
|
| 411 |
+
"venue": "arXiv preprint arXiv:2305.19765, 2023.",
|
| 412 |
+
"url": null
|
| 413 |
+
}
|
| 414 |
+
},
|
| 415 |
+
{
|
| 416 |
+
"27": {
|
| 417 |
+
"title": "Wavenet: A generative model for raw audio.",
|
| 418 |
+
"author": "Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu.",
|
| 419 |
+
"venue": "arXiv preprint arXiv:1609.03499, 2016.",
|
| 420 |
+
"url": null
|
| 421 |
+
}
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"28": {
|
| 425 |
+
"title": "Trak: Attributing model behavior at scale, 2023.",
|
| 426 |
+
"author": "Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry.",
|
| 427 |
+
"venue": null,
|
| 428 |
+
"url": null
|
| 429 |
+
}
|
| 430 |
+
},
|
| 431 |
+
{
|
| 432 |
+
"29": {
|
| 433 |
+
"title": "The economics of copyright in the digital age.",
|
| 434 |
+
"author": "Christian Peukert and Margaritha Windisch.",
|
| 435 |
+
"venue": "2023.",
|
| 436 |
+
"url": null
|
| 437 |
+
}
|
| 438 |
+
},
|
| 439 |
+
{
|
| 440 |
+
"30": {
|
| 441 |
+
"title": "Estimating training data influence by tracing gradient descent.",
|
| 442 |
+
"author": "Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan.",
|
| 443 |
+
"venue": "Advances in Neural Information Processing Systems, 33:19920\u201319930, 2020.",
|
| 444 |
+
"url": null
|
| 445 |
+
}
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"31": {
|
| 449 |
+
"title": "Adversarial attacks on copyright detection systems.",
|
| 450 |
+
"author": "Parsa Saadatpanah, Ali Shafahi, and Tom Goldstein.",
|
| 451 |
+
"venue": "In International Conference on Machine Learning, pages 8307\u20138315. PMLR, 2020.",
|
| 452 |
+
"url": null
|
| 453 |
+
}
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"32": {
|
| 457 |
+
"title": "The new legal landscape for text mining and machine learning.",
|
| 458 |
+
"author": "Matthew Sag.",
|
| 459 |
+
"venue": "J. Copyright Soc\u2019y USA, 66:291, 2018.",
|
| 460 |
+
"url": null
|
| 461 |
+
}
|
| 462 |
+
},
|
| 463 |
+
{
|
| 464 |
+
"33": {
|
| 465 |
+
"title": "Copyright safety for generative ai.",
|
| 466 |
+
"author": "Matthew Sag.",
|
| 467 |
+
"venue": "Forthcoming in the Houston Law Review, 2023.",
|
| 468 |
+
"url": null
|
| 469 |
+
}
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"34": {
|
| 473 |
+
"title": "Generative ai meets copyright.",
|
| 474 |
+
"author": "Pamela Samuelson.",
|
| 475 |
+
"venue": "Science, 381(6654):158\u2013161, 2023.",
|
| 476 |
+
"url": null
|
| 477 |
+
}
|
| 478 |
+
},
|
| 479 |
+
{
|
| 480 |
+
"35": {
|
| 481 |
+
"title": "Learning a metric for music similarity.",
|
| 482 |
+
"author": "Malcolm Slaney, Kilian Weinberger, and William White.",
|
| 483 |
+
"venue": "In International Symposium on Music Information Retrieval (ISMIR), volume 148, 2008.",
|
| 484 |
+
"url": null
|
| 485 |
+
}
|
| 486 |
+
},
|
| 487 |
+
{
|
| 488 |
+
"36": {
|
| 489 |
+
"title": "Revisiting methods for finding influential examples.",
|
| 490 |
+
"author": "Anders S\u00f8gaard et al.",
|
| 491 |
+
"venue": "arXiv preprint arXiv:2111.04683, 2021.",
|
| 492 |
+
"url": null
|
| 493 |
+
}
|
| 494 |
+
},
|
| 495 |
+
{
|
| 496 |
+
"37": {
|
| 497 |
+
"title": "Diffusion art or digital forgery? investigating data replication in diffusion models.",
|
| 498 |
+
"author": "Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein.",
|
| 499 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6048\u20136058, 2023a.",
|
| 500 |
+
"url": null
|
| 501 |
+
}
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"38": {
|
| 505 |
+
"title": "Understanding and mitigating copying in diffusion models.",
|
| 506 |
+
"author": "Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein.",
|
| 507 |
+
"venue": "arXiv preprint arXiv:2305.20086, 2023b.",
|
| 508 |
+
"url": null
|
| 509 |
+
}
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"39": {
|
| 513 |
+
"title": "Unfiltered: How youtube\u2019s content id discourages fair use and dictates what we see online.",
|
| 514 |
+
"author": "Katharine Trendacosta.",
|
| 515 |
+
"venue": "Electronic Frontier Foundation, 2020.",
|
| 516 |
+
"url": null
|
| 517 |
+
}
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"40": {
|
| 521 |
+
"title": "You made this? i made this: Practices of authorship and (mis) attribution on tiktok.",
|
| 522 |
+
"author": "D Bondy Valdovinos Kaye, Aleesha Rodriguez, Katrin Langton, and Patrik Wikstrom.",
|
| 523 |
+
"venue": "International Journal of Communication, 15:3195\u20133215, 2021.",
|
| 524 |
+
"url": null
|
| 525 |
+
}
|
| 526 |
+
},
|
| 527 |
+
{
|
| 528 |
+
"41": {
|
| 529 |
+
"title": "Youtube processes 4 million content id claims per day, transparency report reveals.",
|
| 530 |
+
"author": "Ernesto Van der Sar.",
|
| 531 |
+
"venue": "https://torrentfreak.com/youtube-processes-4-million-content-id-claims-per-day-transparency-report-reveals-211207/, 2021.",
|
| 532 |
+
"url": null
|
| 533 |
+
}
|
| 534 |
+
},
|
| 535 |
+
{
|
| 536 |
+
"42": {
|
| 537 |
+
"title": "Provable copyright protection for generative models.",
|
| 538 |
+
"author": "Nikhil Vyas, Sham Kakade, and Boaz Barak.",
|
| 539 |
+
"venue": "arXiv preprint arXiv:2302.10870, 2023.",
|
| 540 |
+
"url": null
|
| 541 |
+
}
|
| 542 |
+
}
|
| 543 |
+
],
|
| 544 |
+
"url": "http://arxiv.org/html/2312.06646v4"
|
| 545 |
+
}
|
20240721/2312.08224v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2312.09863v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2312.14024v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2402.03119v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2402.10698v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2402.11111v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2402.14646v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2402.16399v2.json
ADDED
|
@@ -0,0 +1,251 @@
| 1 |
+
{
|
| 2 |
+
"title": "Temporal Persistence and Intercorrelation of Embeddings Learned by an End-to-End Deep Learning Eye Movement-driven Biometrics Pipeline",
|
| 3 |
+
"abstract": "What qualities make a feature useful for biometric performance?\nIn prior research, pre-dating the advent of deep learning (DL) approaches to biometric analysis, a strong relationship between temporal persistence, as indexed by the intraclass correlation coefficient (ICC), and biometric performance (Equal Error Rate, EER) was noted.\nMore generally, the claim was made that good biometric performance resulted from a relatively large set of weakly intercorrelated features with high ICC.\nThe present study aimed to determine whether the same relationships are found in a state-of-the-art DL-based eye movement biometric system (\u201cEye-Know-You-Too\u201d), as applied to two publicly available eye movement datasets.\nTo this end, we manipulate various aspects of eye-tracking signal quality, which produces variation in biometric performance, and relate that performance to the temporal persistence and intercorrelation of the resulting embeddings.\nData quality indices were related to EER with either linear or logarithmic fits, and the resulting model was noted.\nAs a general matter, we found that temporal persistence was an important predictor of DL-based biometric performance, and also that DL-learned embeddings were generally weakly intercorrelated.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Biometric systems are increasingly replacing traditional (i.e., password-based, PIN-based, etc) authentication systems for security due to their reliability and convenience [1 ###reference_b1###].\nAccording to Nelson [2 ###reference_b2###], biometric features should be \u201c\u2026reliable, unique, collectible, convenient, long term, universal, and acceptable\u201d.\nFeatures with relative permanence are required for good biometric performance [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###].\nFriedman et al. [7 ###reference_b7###] was the first to evaluate the intraclass correlation coefficient (ICC) to index the reliability (or \u201ctemporal persistence\u201d) of biometric features.\nTemporal persistence ensures that the chosen biometric features remain relatively stable over the relevant period, enabling consistent and accurate authentication.\nIn the era before the wide advent of deep-learning (DL) based biometric systems, Friedman et al. found that more temporally persistent features lead to better performance in biometric systems i.e., feature sets with high ICC biometrically outperformed features with comparatively low ICC.\nThe extent to which the ICC is useful for DL-based biometric embeddings is unknown.\nThe main goal of the present study is to evaluate this question.\nIf it is found that temporal persistence is important for the biometric performance of DL-based systems, this would tend to generalize the importance of this quality to all biometric systems.\nFriedman et al [8 ###reference_b8###] assessed why temporal persistence affects biometric performance. Recall that the computation of the EER requires the creation of a distribution of genuine and impostor similarity scores. Friedman et al. found that the median of the genuine scores distribution increases and the spread (interquartile range) decreases with increasing ICC. However, the impostor distributions do not change. 
These changes in the genuine similarity score distributions lead to a better separation from the impostor distributions and therefore a lower EER.\nPhysiological and anatomical biometric systems, like fingerprint and facial recognition, rely on physical characteristics that can change over time due to aging or injury.\nThis can downgrade biometric performance.\nTo address this limitation, researchers have been exploring behavioral biometrics that are more likely to remain stable (e.g., voice, gait, signature recognition).\nOne such approach is eye movement biometrics, which has emerged as a promising behavioral biometric modality, attracting significant attention[9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nUnique and consistent patterns of eye movement offer advantages like user identification [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###], user authentication [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], high specificity[6 ###reference_b6###], disorder detection [22 ###reference_b22###, 23 ###reference_b23###], gender prediction [24 ###reference_b24###, 25 ###reference_b25###], resistance to impersonation [26 ###reference_b26###, 27 ###reference_b27###] and liveness detection [28 ###reference_b28###, 29 ###reference_b29###].\nBuilding on the foundation of temporal persistence in the traditional biometric approach, our study shifts the focus to DL-based behavioral biometric systems, particularly those that analyze eye movements.\nThis research aims to assess the role of temporal persistence and embedding\u2019s intercorrelation in a DL-based eye-movement biometric system.\nTo this end, various aspects of eye-movement signal quality were manipulated to produce variations in biometric performance (EER).\nThe relationship between the temporal persistence of DL-based embeddings and EER was assessed under several conditions.\nAlso, the intercorrelation of sets of embeddings is evaluated.\nIn this paper, we will try to address the following research questions:\nDo reductions in the sampling rate affect biometric performance, and are these changes related to the reliability of the learned embeddings?\nWe will employ decimation to achieve the desired sampling level, which reduces both the sampling rate and the number of samples. Comparing the effects of sample rate reduction to only the reduction in the number of samples will help us assess separately the effects of sample rate reduction and reduced data length. This consideration leads to RQ2.\nDoes reduced data length at a fixed sampling rate affect the reliability of the learned embeddings?\nDecimation reduces both data length and sampling rate. Here, we will investigate the effect of decreasing data length while maintaining a fixed sampling rate.\nDoes the number of sequences affect the reliability of the learned embeddings?\nComputational limitations require we derive embeddings for 5-seconds of data at a time.\nThese 5-second intervals are referred to as \u201csequences\u201d.\nOur baseline analyses involve averaging over either 12 or 9 sets of sequences. 
We wanted to evaluate the analysis results based on a range of sequences.\nDoes the quality of the eye-movement signal affect the reliability of the learned embeddings?\nWe will explore how degraded spatial precision of the eye-movement signal influences the embeddings.\nDoes any eye-tracking data manipulation affect the intercorrelation of the learned embeddings?\nWe will explore how data manipulation of various kinds affects the absolute value of the intercorrelation of the embeddings.\nThis paper provides a review of the relevant literature in Section II.\nOur methodology is detailed in Section III.\nThe design of our experiments is outlined in Section IV.\nSection V presents the results obtained from these experiments.\nAnalysis of these results and key insights are discussed in Section VI.\nThe paper concludes with final remarks in Section VII."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Prior Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Prior work on Temporal persistence/Biometric permanence",
|
| 21 |
+
"text": "In the biometric authentication field, it is widely accepted that human traits with high temporal persistence, encompassing temporal stability and permanence, are fundamental.[3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 7 ###reference_b7###, 8 ###reference_b8###, 6 ###reference_b6###].\nSome studies focused on evaluating the biometric permanence of a modality, comparing the biometric performance of the system at different times[30 ###reference_b30###, 4 ###reference_b4###, 31 ###reference_b31###, 32 ###reference_b32###].\nAs per our knowledge, there are relatively few studies (discussed below) that have explored the relationship between the temporal persistence of individual features and biometric performance and proposed an index to measure the temporal persistence of features.\nPrior research [7 ###reference_b7###] introduced the use of the intraclass correlation coefficient (ICC) to the biometric community as an index of temporal persistence, although ICC has long been used as a measure of feature reliability in various fields [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###].\nIt is a measure of how stable a feature is over time.\nFeatures with high ICC are more stable than those with low ICC.\nThe authors argued that using features with high ICC leads to better biometric performance.\nThey tested this on 14 datasets (8 of them were eye-movements-related data) and found that using features with high ICC resulted in superior biometric performance in most cases.\nFriedman et al. [8 ###reference_b8###] demonstrated that increased temporal persistence makes biometric systems work better for gait and eye movement datasets.\nThe median of the genuine scores distribution increases with increasing ICC, and the interquartile range (IQR) narrows which means that the genuine scores become more concentrated around a higher value as ICC increases.\nThe median of the imposter scores distribution does not change significantly with increasing ICC meaning that the imposter scores remain spread out across a similar range of values regardless of ICC.\nThese changes in the distributions lead to better separation between genuine and impostor scores, which makes it easier for a biometric system to correctly classify a sample.\nThe Equal Error Rate (EER), which is the point where the false acceptance rate (FAR) and false rejection rate (FRR) are equal, is also lower for higher ICC values.\nThis indicates that the system is less likely to make errors (accepting an imposter or rejecting a genuine user) when the temporal persistence is higher."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Prior work on Eye Movement Biometrics",
|
| 27 |
+
"text": "Kasprowski and Ober\u2019s introduction of eye movement as a biometric modality for human authentication [9 ###reference_b9###] marked a significant milestone.\nThis spurred extensive research in the field of eye movement biometrics [36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###], primarily aimed at developing a state-of-the-art (SOTA) approach for eye movement-based user authentication.\nTwo primary approaches have emerged in this domain: the statistical feature-based approach and the machine learning-based approach.\nIn the statistical feature-based approach, a standardized processing sequence is employed.\nIt involves segmenting recordings into distinct eye movement events using event classification algorithms, followed by the creation of a biometric template comprising a vector of discrete features from each event [39 ###reference_b39###].\nHowever, the challenge lies in the classification of events, which can vary in effectiveness depending on the classification algorithm used [40 ###reference_b40###]. Various algorithms for classifying eye-movement events have been suggested [41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###], aiming to enhance biometric performance.\nSeveral studies have utilized this approach, including [18 ###reference_b18###, 7 ###reference_b7###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###, 48 ###reference_b48###].\nMeanwhile, there\u2019s been a significant increase in the application of end-to-end machine learning approaches in eye movement biometrics.\nRecent studies have focused on deep learning, adopting two main approaches: processing pre-extracted features as in [49 ###reference_b49###, 17 ###reference_b17###], and learning embeddings directly from the raw eye tracking signals [50 ###reference_b50###, 19 ###reference_b19###, 15 ###reference_b15###, 16 ###reference_b16###, 51 ###reference_b51###].\nThe development of the Eye Know You Too (EKYT) [20 ###reference_b20###] model by Lohr et al. represents a significant advancement.\nAs per our knowledge, EKYT is a state-of-the-art (SOTA) user authentication system based on eye movement data.\nEKYT is developed in such a way that it is capable of learning meaningful embeddings.\nEmbeddings offer a way for deep learning models to represent complex data in a simplified, lower-dimensional space, preserving inherent patterns and relationships.\nThis approach allows the model to group similar data points closer together in a vector space, facilitating the discovery of underlying patterns that might be challenging for humans to identify directly.\nUnlike traditional feature extraction, embeddings enable the model to learn these representations, potentially unveiling complex relationships within the data that enhance authentication processes.\nOnce learned, these embeddings can be used in classification problems such as the authors did for eye-movement-based biometric authentication in [20 ###reference_b20###].\nConcluding our review, it\u2019s important to note that, based on our current understanding, there has been no investigation into the analysis of embeddings derived from a deep learning model in an EMB-driven pipeline. This paper will specifically address and explore this area."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Methodology",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Dataset",
|
| 39 |
+
"text": "We have employed two large datasets in our study.\nOne was collected with a high-end eye-tracker and the other was collected with an eye-tracking-enabled virtual reality (VR) headset.\nThe reason behind using two datasets is to ensure the generalizability of the findings.\nThe first dataset, we used in this study is the publicly available GazeBase(GB) dataset [52 ###reference_b52###].\nEye movement recordings of this dataset are collected with a high-end eye-tracker, EyeLink 1000 at a sampling rate of 1000 Hz.\nIt includes 12,334 monocular recordings (left eye only) from 322 college-aged subjects.\nThe data was collected over three years in nine rounds (Round 1 to Round 9).\nEach recording captures both horizontal and vertical movements of the left eye in degrees of visual angle.\nParticipants completed seven eye movement tasks: random saccades (RAN), reading (TEX), fixation (FXS), horizontal saccades (HSS), two video viewing tasks (VD1 and VD2), and a video-gaming task (Balura game, BLG).\nEach round comprised two recording sessions of the same tasks per subject, spaced by 20 minutes.\nFurther details about the dataset and recording procedure are available in [52 ###reference_b52###].\nThe second dataset is GazeBaseVR (GBVR) [53 ###reference_b53###], a GazeBase-inspired dataset collected with an eye-tracking-enabled virtual reality (VR) headset.\nIt includes 5020 binocular recordings from a diverse population of 407 college-aged subjects.\nThe data was collected over 26 months in three rounds (Round 1 to Round 3).\nAll the eye movements were recorded at a 250 Hz sampling rate.\nEach recording captures both horizontal and vertical movements of both eyes in degrees of visual angle.\nEach participant completed a series of five eye movement tasks: vergence (VRG), horizontal smooth pursuit (PUR), reading (TEX), video-viewing (VD), and a random saccade task (RAN).\nMore details about the dataset and how data were collected are available in [53 ###reference_b53###]."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Model Architecture and Data Handling",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "3.2.1 Data Preprocessing",
|
| 51 |
+
"text": "All the recordings from each dataset underwent a series of pre-processing steps before being input to the neural network architecture.\nEyeLink 1000 is unable to estimate gaze during blinks.\nIn these instances, the device returns a Not a Number (NaN) for the affected samples. Additionally, the range of possible horizontal and vertical gaze coordinates is limited to -23.3\u00b0 to +23.3\u00b0 and -18.5\u00b0 to 11.7\u00b0, respectively.\nAny gaze samples that fell outside these bounds were also set to NaN.\nTwo velocity channels (horizontal and vertical) were derived from the raw gaze data using a Savitzky-Golay differentiation filter [54 ###reference_b54###] with a window size of 7 and order of 2 [7 ###reference_b7###].\nSubsequently, the recordings were segmented into non-overlapping 5-second sequences (5000-sample) using a rolling window method.\nFor each task, the first twelve of these 5-second sequences were then combined into a single 60-second data stream for further analysis.\nTo mitigate the impact of noise on the data, velocity values were clamped between \u00b11000\u00b0/s.\nFinally, all velocity channels across all sequences and subjects were standardized using z-score normalization.\nIn other words, all velocity data from every sample from every sequence and every subject was combined into a single distribution.\nThe mean of this distribution was subtracted from every sample, and every sample was divided by the standard deviation of this distribution.\nAny remaining NaN values were replaced with 0, as recommended by Lohr et al. [19 ###reference_b19###].\nFurther details regarding data pre-processing are provided in Lohr et al. [20 ###reference_b20###]."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.2",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "3.2.2 Network Architecture",
|
| 57 |
+
"text": "In this research, we used the Eye Know You Too (EKYT) network architecture for eye-movement-based biometric authentication[20 ###reference_b20###].\nThis DenseNet-based [55 ###reference_b55###] architecture achieves SOTA biometric authentication using high-quality eye-tracking input data (collected at 1000 Hz).\nThe EKYT architecture incorporates eight convolutional layers.\nIn this design, the feature maps generated by each convolutional layer are concatenated with those from all preceding layers before advancing to the subsequent convolutional layer.\nThis process results in a concatenated set of feature maps, which are subsequently flattened.\nThese flattened maps undergo processing through a global average pooling layer and are subsequently input into a fully connected layer, resulting in a 128-dimensional embedding of the input sequence.\nThe 128-dimensional embedding generated by this architecture serves as the fundamental component for our analysis in this research.\nFor a more comprehensive understanding of the network architecture, readers are directed to Lohr et al. (2022) [20 ###reference_b20###]."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.3",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "3.2.3 Dataset Split & Training",
|
| 63 |
+
"text": "For the GB dataset, there were 322 participants in Round 1 and 59 participants in Round 6.\nAll 59 participants from Round 6 are a subset of all subjects in Round 1.\nAll the participants (59) common through Round 1 to 6 were treated as a held-out dataset and not used for the training and validation step.\nThe model underwent training using all data (except for heldout data) from Rounds 1-5, except the BLG (gaming) task.\nWe have used data from all three rounds of the GBVR dataset.\nRound 1 contained data from 407 subjects.\nTo enhance the integrity of our validation process, we segregated the data of 60 subjects from Round 1 and treated it as a held-out dataset.\nThis subset was not used in the training or validation phases.\nWe divided the participants from training data into four non-overlapping folds for cross-validation.\nThe goal was to distribute the number of participants and recordings as evenly as possible across folds.\nThe algorithm used for assigning folds is discussed in [19 ###reference_b19###].\nFour distinct models were trained, with each model using a different fold for validation and the remaining three folds for training.\nFor learning rate scheduling, we used the Adam[56 ###reference_b56###] optimizer, and PyTorch\u2019s OneCycleLR with cosine annealing [57 ###reference_b57###] in the training process.\nWe used the weighted sum of categorical cross-entropy loss (CE) and multi-similarity loss (MS) loss [58 ###reference_b58###] in the training procedure.\nWe adhered to the default values for the hyperparameters of the MS loss and other optimizer hyperparameters as recommended in [20 ###reference_b20###].\nOur input samples had both horizontal and vertical velocity channels.\nIn both the GB and GBVR datasets, the duration for each input sample was set to five seconds.\nGiven that GB was collected at a sampling rate of 1000 Hz, each input sample in this dataset includes a window encompassing 5000 time steps.\nConversely, for the GBVR dataset, which has a sampling rate of 250 Hz, each input sample comprises 1250 time steps.\nThe model was trained over 100 epochs.\nDuring the initial 30 epochs, the learning rate was gradually increased to from the initial rate of .\nIn the subsequent 70 epochs, the learning rate was gradually reduced to a minimum of . Each batch contained 64 samples (classes per batch = 8 samples per class per batch = 8)."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2.4",
|
| 67 |
+
"parent_section_id": "3.2",
|
| 68 |
+
"section_name": "3.2.4 Embedding Generation",
|
| 69 |
+
"text": "The method focused on creating centroid embeddings by averaging multiple subsequence embeddings from the first \u2018n\u2019 windows of a recording.\nAlthough the model was not trained directly on these centroid embeddings, it was designed to foster a well-clustered embedding space, ensuring that embeddings from the same class are closely grouped and distinct from others.\nThe primary process involved embedding the first 5-second sequence of an eye-tracking signal, with the possibility of aggregating embeddings across multiple sequences.\nThe training phase did not exclude subsequences with high NaNs, and each subsequence was treated individually.\nIn our approach, we formed the enrollment dataset by using the first 60 seconds of the session-1 TEX task from Round 1 for each subject in the test set.\nFor the authentication dataset, we used the first 60 seconds of the session-2 TEX task from Round 1 for each subject in the test set.\nIt is to be noted that we did not use 60 seconds at once, we split 60 seconds into 5-second subsequences, getting embeddings for each subsequence, and then computed the centroid of those embeddings.\nFor each sequence in the enrollment and authentication sets, 128-dimensional embeddings were computed with each of four different models trained using 4-fold cross-validation.\nFor simplicity, we are using 128-dimensional embeddings generated from a single-fold model in our study.\nThis model was then used to compute pairwise cosine similarities between the embeddings in different sets."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.3",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "Evaluation Metrics",
|
| 75 |
+
"text": "This study focuses on only three key metrics that assess: (1) the temporal persistence of embeddings, (2) the intercorrelation of embeddings, and (3) biometric performance.\nRecall that data from each subject was collected twice in two sessions approximately 20 min apart.\nTo assess the temporal persistence of embeddings across sessions, we initially considered employing the Intraclass Correlation Coefficient (ICC) as suggested by [7 ###reference_b7###].\nHowever, most, but not all of the embeddings were normally distributed. For this reason, we opted for the non-parametric Kendall\u2019s Coefficient of Concordance (KCC) [59 ###reference_b59###] instead of the ICC.\nIntercorrelations between embeddings were assessed using Spearman correlation coefficient (Spearman R).\nBiometric authentication performance was assessed using the equal error rate (EER).\nThe EER is the location on a receiver operating characteristic (ROC) curve where the False Rejection Rate and False Acceptance Rate are equal.\nThe lower the EER value, the better the performance of the biometric system is.\nThe goal of our analysis is to assess the relationships between these three metrics."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.4",
|
| 79 |
+
"parent_section_id": "3",
|
| 80 |
+
"section_name": "Hardware & Software",
|
| 81 |
+
"text": "All models were trained on a workstation equipped with an NVIDIA RTX A6000 GPU, an AMD Ryzen Threadripper PRO 5975WX with 32 cores, and 48 gigabytes of RAM. The system ran an Anaconda environment with Python 3.7.11, PyTorch 1.10.0, Torchvision 0.11.0, Torchaudio 0.10.0, Cudatoolkit 11.3.1, and Pytorch Metric Learning (PML) [60 ###reference_b60###] version 0.9.99."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Experiment Design",
|
| 87 |
+
"text": "We have designed our experiments based on the research question mentioned in the introduction.\n###figure_1###"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.1",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "RQ1: Decimation Do reductions in the sampling rate affect biometric\nperformance, and are these changes related to the reliability of the learned embeddings?",
|
| 93 |
+
"text": "As noted above, GB was initially collected at a sampling frequency of 1000 Hz.\nFor this analysis, we compared data collected at 1000 Hz to data decimated to frequencies of 500 Hz, 333 Hz, 250 Hz, 100 Hz, 50 Hz, 25 Hz, and 10 Hz.\nGBVR was initially collected at a frequency of 250 Hz, the data was subsequently decimated to frequencies of 125 Hz, 50 Hz, 25 Hz, and 10 Hz.\nThe decimation process was carried out using the scipy.signal.decimate function, which downsamples the signal after implementing an anti-aliasing filter.\nThe model in use was then trained on these decimated datasets to produce 128-dimensional embeddings at each decimation level.\nFollowing this, Kendall\u2019s Coefficient of Concordance (KCC), and Equal Error Rate (EER) were calculated based on the model and the generated embeddings. Readers are referred to Fig 1 ###reference_### (B1-B2)."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.2",
|
| 97 |
+
"parent_section_id": "4",
|
| 98 |
+
"section_name": "RQ2: Percentage Does reduced data length at a fixed sampling rate affect the reliability of the learned embeddings?",
|
| 99 |
+
"text": "In the baseline case, each 5-second sequence consisted of 5,000 (GB) or 1,250 (GBVR) samples.\nIn this study, we reduced the number of samples used for each sequence by specified percentage levels.\nEach sequence was initially regarded as 100% of the data.\nWe progressively reduced this amount to 50%, 33%, 25%, 10%, 5%, 2.5%, and 1% for the GB dataset, and to 50%, 20%, 10%, and 4% for the GBVR dataset.\nEach reduction was applied within a sequence.\nFor example, for a 50% reduction, we retained only the first 2.5 seconds of data and zero-padded the rest.\nHowever, the eye-tracking data was always centered in each sequence because convolutional layers tend to have an effective receptive field that is Gaussian in shape [61 ###reference_b61###].\nThat is, zero-padding was applied to both sides of the reduced data to make the sequence of 5 seconds again.\nFor a clearer understanding, readers are referred to Figure 1 ###reference_### (C1-C2).\nThe model was subsequently trained on these adjusted datasets to produce embeddings.\nWe then evaluated the EER, KCC, and intercorrelation based on the model and the generated embeddings."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.3",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "RQ3: # Sequences Does the number of sequences affect the reliability of the learned embeddings?",
|
| 105 |
+
"text": "In the baseline model, for GB we averaged embeddings over 12 consecutive 5-second sequences and for GBVR we averaged over 9 consecutive 5-second sequences.\nOur model generates 128-dimensional embeddings for each sequence. For the GB dataset, we evaluated results based on 1 to 12 sequences, and for the GBVR dataset, we evaluated results based on 1 to 9 sequences because of the limited amount of data.\nWe then analyzed key metrics such as KCC, EER, and the intercorrelation among embeddings derived from differing numbers of sequences.\nRefer to Fig 1 ###reference_### (A1-A2) for an infographic representation of the experiment."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.4",
|
| 109 |
+
"parent_section_id": "4",
|
| 110 |
+
"section_name": "RQ4: Signal Quality Does the quality of the eye-movement signal affect\nthe reliability of the learned embeddings?",
|
| 111 |
+
"text": "In this experiment, we inject Gaussian noise into the raw data to downgrade its spatial precision following [62 ###reference_b62###, 63 ###reference_b63###, 20 ###reference_b20###].\nWe have calculated the precision of the individual recordings.\nThe raw recording was segmented into 80 ms segments, as referenced in [64 ###reference_b64###].\nSegments containing any invalid samples were excluded.\nWe computed the Root Mean Square (RMS) for each valid segment.\nThe spatial precision for each recording was determined by calculating the median RMS, considering only the lowest fifth percentile of all RMS values for that recording.\nWe then calculated the spatial precision for each subject by taking the median of the spatial precision values from each of their recordings.\nTable 1 ###reference_### shows the spatial precision of GB and GBVR after injecting various amounts of noise."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.5",
|
| 115 |
+
"parent_section_id": "4",
|
| 116 |
+
"section_name": "RQ5: Intercorrelation Does any of the eye-tracking data manipulation affect the intercorrelation of the learned embeddings?",
|
| 117 |
+
"text": "We have calculated the absolute value of the intercorrelation (using scipy.stats.spearmanr) for each of the above analyses. We have investigated the effect of the eye-tracking data manipulation on the calculated intercorrelation."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "5",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "Results",
|
| 123 |
+
"text": "In this study, we employ four manipulations of eye-movement data (decimation, percentage, number of sequences, and signal quality degradation) and evaluate the effects of these manipulations on biometric performance (EER), temporal persistence (KCC) and the relationship between EER and KCC. We also evaluate the intercorrelations of embeddings after each manipulation.\nFor RQ1, RQ2 and RQ3, we found that the relationships between the manipulation and either biometric performance or reliability (KCC) were the best fit after taking the log of x, whereas for RQ4, we found that a linear fit was best. The following equations are used."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5.1",
|
| 127 |
+
"parent_section_id": "5",
|
| 128 |
+
"section_name": "RQ1: Do reductions in the sampling rate affect biometric\nperformance, and are these changes related to the reliability of the learned embeddings?",
|
| 129 |
+
"text": "Figure 2 ###reference_### illustrates how sampling rate affects the temporal persistence and the performance of the EMB system, comparing two different datasets. The provided figure illustrates the relationships between two performance metrics (KCC and EER) and the decimated level (in Hz) for two datasets: GB and GBVR.\nThe left column of plots shows that KCC decreases with a decreasing sampling rate for the GB dataset, while EER increases logarithmically, indicating that a lower sampling rate negatively impacts performance.\nThis is evidenced by values of 0.76 and 0.68 for KCC and EER, respectively, and a strong negative correlation between EER and KCC with a value of 0.99.\nSimilarly, the right column reveals that for the GBVR dataset, KCC also decreases and EER increases with a lower sampling rate, but with stronger fits ( values of 0.97 and 0.86 for KCC and EER, respectively) and nearly perfect negative correlation between EER and KCC ( value of 0.95).\nOverall, the figure indicates that higher sampling rates yield better temporal persistence and lower equal error rates in EMB, with GBVR appearing more sensitive to changes in sampling rate than GB.\nThe logarithmic models show strong correlations, emphasizing the importance of maintaining high sampling rates for better biometric performance.\n###figure_2### Decimation reduces both data length and sampling rate. So, after decimation, it is impossible to determine if the effects on our measures is due to the reduced sample rate or the reduced amount of data. To address this confound, we also performed the percentage analysis. For this, the same number of eye-movement signal points is used as for decimation but the sampling rate does not change."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "5.2",
|
| 133 |
+
"parent_section_id": "5",
|
| 134 |
+
"section_name": "RQ2: Does reduced data length at a fixed sampling rate affect the reliability of the learned embeddings?",
|
| 135 |
+
"text": "Figure 3 ###reference_### illustrates how reducing the percentage of raw eye movement data impacts temporal persistence and performance of the EMB system.\nThe figure shows the relationships between two performance metrics\u2014 KCC and EER \u2014 and the percentage level (%).\nIn the left column of plots, it is evident that for the GB dataset, KCC decreases with decreasing percentage levels, with an value of 0.96, while EER increases logarithmically, with an value of 0.93.\nThere is a strong negative correlation between EER and KCC, indicated by an value of 0.97.\nSimilarly, the right column of plots reveals that for the GBVR dataset, KCC decreases with decreasing percentage levels, with a value of 0.97, and EER increases logarithmically, with an value of 0.97.\nThe negative correlation between EER and KCC is almost perfect, with an value of 0.96.\nOverall, the figure suggests that a higher percentage of eye movement data leads to better temporal persistence and lower equal error rates in the EMB system.\n###figure_3### In Fig. 4 ###reference_###, we compare the biometric performance between decimation and percentage analysis. In the GB plot, for decimation, we can see the biometric performance degrades significantly when the number of samples is reduced to 250. On the other hand, for percentage analysis, the biometric performance degrades from the very beginning.\nA similar trend is observed in the GBVR plot, though the difference between decimation and percentage is more subtle.\n###figure_4###"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "5.3",
|
| 139 |
+
"parent_section_id": "5",
|
| 140 |
+
"section_name": "RQ3: Does the number of sequences affect the reliability of the learned embeddings?",
|
| 141 |
+
"text": "Figure 5 ###reference_### illustrates the impact of the number of sequences (GB: 5,000 samples, GBVR: 1,250 samples) on the temporal persistence of the learned embeddings and the performance of the EMB system. It consists of six subplots showcasing the relationships between two performance metrics (KCC and EER) and the number of sequences, both for GB and GBVR datasets.\n###figure_5### The left column focuses on the GB dataset, illustrating how KCC and EER change with decreasing number of sequences.\nThe KCC shows a logarithmic decrease with a decrease in the number of sequences (A1) with an value of 0.87, indicating a strong fit.\nConversely, EER increases with less number of sequences (A2), supported by an value of 0.75.\nThe relationship between KCC and EER (A3) reveals a strong negative logarithmic correlation, with an value of 0.95, indicating a strong fit.\nThe right column mirrors these analyses for the GBVR dataset, where KCC also decreases with the number of sequences reduced (B1) and EER increases (B2), with values of 0.99 and 0.97.\nThe EER vs. KCC plot (B3) shows a negative relationship with an value of 0.99.\nOverall, these results indicate that biometric performance improves with more sequences, reflected in better reliability (KCC) and lower error rates (EER), with the GBVR dataset demonstrating an even more robust fit than the GB dataset alone."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "5.4",
|
| 145 |
+
"parent_section_id": "5",
|
| 146 |
+
"section_name": "RQ4: Does the quality of the eye-movement signal affect\nthe reliability of the learned embeddings?",
|
| 147 |
+
"text": "Fig. 6 ###reference_### shows how degraded spatial precision affects KCC and EER, across embeddings learned from two datasets.\nFor the GB dataset, spatial precision values range from 0.00435 (original) to 2.3 with the injection of Gaussian noise.\nFor the GBVR dataset, spatial precision values range from 0.041(original) to 1.80 with the injection of Gaussian noise.\nA significant relationship is observed in both GB and GBVR; KCC drops and EER increases linearly with degraded spatial precision.\nA strong negative linear correlation exists between KCC and EER for GB and GBVR datasets, presenting higher temporal persistence associated with lower equal error rates.\n###figure_6###"
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"section_id": "5.5",
|
| 151 |
+
"parent_section_id": "5",
|
| 152 |
+
"section_name": "RQ5: Does any of the eye-tracking data manipulation affect the intercorrelation of the learned embeddings?",
|
| 153 |
+
"text": "We calculated the absolute value of the intercorrelation of the learned embeddings for each of the analyses above and found that data manipulation minimally affects intercorrelation. When combining all levels of manipulation, across all datasets, the mean absolute correlation value is 0.19 with an SD of 0.14. Detailed results are shown in Table. 2 ###reference_###."
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"section_id": "5.6",
|
| 157 |
+
"parent_section_id": "5",
|
| 158 |
+
"section_name": "Relation between Temporal Persistence (KCC) and Biometric Performance(EER) across All Manipulations",
|
| 159 |
+
"text": "We have seen that there is a strong relationship between KCC and EER in each of the above analyses. The relationship between these two parameters is very strong (Fig. 7, ), supporting our notion that temporal persistence (\u201creliability\u201d) is important for biometric performance in all cases.\n###figure_7###"
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"section_id": "6",
|
| 163 |
+
"parent_section_id": null,
|
| 164 |
+
"section_name": "Discussion",
|
| 165 |
+
"text": "The main findings of this report are listed in Table 3 ###reference_###. This table shows the effect of various signal manipulations on embeddings in terms of reliability (KCC), intercorrelation, and biometric performance (EER).\nResearch Question\nindicates a significant effect due to data manipulation\n\u201c\u2014\u201d Denotes negligible impact."
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"section_id": "6.1",
|
| 169 |
+
"parent_section_id": "6",
|
| 170 |
+
"section_name": "Summary of Findings",
|
| 171 |
+
"text": "RQ1: Decimation \nThe impact of different sampling rates on embeddings was profound.\nFor GB, the experiment showed that while decimation down to 100 Hz did not have a major impact on either stability (KCC) or biometric performance (EER), decimation below this frequency led to marked drops in reliability and decreases in biometric performance.\nFor GBVR, the point of transition was closer to 50 Hz.\nThis suggests that while some degree of decimation is tolerable, excessively low sampling rates compromise the efficacy of the biometric system to a significant degree.\nRQ2: Percentage We have found that reduced data length from the recording during the training process significantly influenced the embeddings.\nLowering the given data percentages resulted in less reliable embeddings, as evidenced by a downward trend of KCC from 100% to 1 % of data.\nA significant drop in biometric performance in terms of Equal Error Rate (EER) is also seen in the results.\nWhen we compare the effect of decimation to the effect of a reduced percentage of samples, we note that biometric performance is better with decimation than with the reduction of data samples alone. This is probably because the decimated signal still samples the entire signal whereas the percentage manipulation only samples a small part of the signal.\nRQ3: #Sequences We found that varying sequence sizes significantly influenced the embeddings.\nLonger sequences generally resulted in more reliable and consistent embeddings, as evidenced by higher KCC values for 12 consecutive sequences (60 seconds) compared to 1 sequence (5 seconds) of data.\nHowever, there was a diminishing return on increasing data length beyond a certain point.\nThe most significant impact was observed when comparing very short data sequences to moderately long ones.\nThe improvement in embedding reliability was marked, as evidenced by a noticeable shift in key metrics.\nA significant drop in biometric performance in terms of EER is also seen in the results.\nRQ4: Signal Quality The study also delved into how eye-tracking signal quality (spatial precision) affects embeddings. We noted its influence on the embeddings\u2019 KCC and EER.\nResults indicated that there is a significant effect on temporal persistence and biometric performance with downgrading spatial precision eye-tracking signal by injecting Gaussian noise.\nRQ5: Intercorrelation For all analyses, the effect of any of our manipulations had a minimal effect on the absolute value of the intercorrelations of the embeddings. In all cases, the absolute value of the intercorrelations was small (mean = 0.19, SD = 0.14).\nIt was interesting to note that when the relationship between KCC and EER was evaluated across all manipulations and both datasets, the relationship was very strong. Thus, it appears that there may be a universal strong relationship between these two entities."
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"section_id": "6.2",
|
| 175 |
+
"parent_section_id": "6",
|
| 176 |
+
"section_name": "Methodological Strength",
|
| 177 |
+
"text": "A significant methodological strength of our study lies in the selection of the Eye Know You Too (EKYT) network architecture, which is recognized for its state-of-the-art performance in biometric authentication\nThe EKYT architecture is based on a DenseNet framework, known for its efficient handling of complex data structures.\nAdditionally, we designed and conducted a series of experiments to manipulate the quality of eye movement signals. These experiments included altering the sampling rate, reducing the sample size, varying the number of sequence sizes, and degrading the spatial precision of the signal. These manipulations allowed us to thoroughly investigate the robustness of the relation between the temporal persistence and biometric performance of the DL-based EMB."
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"section_id": "6.3",
|
| 181 |
+
"parent_section_id": "6",
|
| 182 |
+
"section_name": "Limitations & Future Direction",
|
| 183 |
+
"text": "All of our biometric measurements emerge from an ROC analysis. It is well established that ROC-curve analyses are relatively inaccurate when based on small sample sizes [65 ###reference_b65###]. For GB, all biometric performance was assessed with N=59 subjects, and for GBVR, all biometric performance was assessed with N = 60 subjects. These are small samples. However, in the context of eye-tracking studies, our sample sizes are relatively large. Even with such small sample sizes, we did find important relationships among the variables of interest. However, a replication of this work with larger sample sizes would make an important contribution.\nObviously, good biometric performance results from embeddings with high temporal persistence.\nA key question arises for future research: Can the temporal persistence of embeddings be integrated into the overall biometric analysis? In other words, is there a method to enhance the temporal persistence of embeddings learned by a DL-based biometric pipeline?"
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"section_id": "7",
|
| 187 |
+
"parent_section_id": null,
|
| 188 |
+
"section_name": "Conclusion",
|
| 189 |
+
"text": "We have previously shown the importance of temporal persistence on biometric performance in traditional, non-DL based biometric systems [7 ###reference_b7###].\nThe findings in our present report extend this finding to embeddings learned by a DL-based biometric pipeline.\nOur study has shown a strong relation between the temporal persistence of learned embeddings as assessed by the KCC to the biometric performance (EER) of a DL-based biometric pipeline.\nWe have also documented the effects of various data manipulations on biometric performance and temporal persistence.\nData manipulation in any manner affects the learned embeddings from the temporal persistence and biometric efficacy perspective.\nIntercorrelations do not vary much throughout the conducted research."
|
| 190 |
+
}
|
| 191 |
+
],
|
| 192 |
+
"appendix": [],
|
| 193 |
+
"tables": {
|
| 194 |
+
"1": {
|
| 195 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Spatial precision of GB and GBVR at different levels of noise addition.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\">Added Noise (SD)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1\">Sp. Precision (GB)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.3.1\">Sp. Precision (GBVR)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.1\">0</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.2\">0.0044</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.3\">0.041</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.3.2.1\">0.05</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.2\">0.059</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.3\">0.070</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.4.3.1\">0.25</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.2\">0.289</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.3\">0.240</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.5.4.1\">0.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.2\">0.577</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.3\">0.460</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.6.5.1\">0.75</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5.2\">0.865</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5.3\">0.683</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.7.6.1\">1.0</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6.2\">1.151</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6.3\">0.905</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.8.7.1\">1.25</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.7.2\">1.438</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.7.3\">1.129</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.9.8.1\">1.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.8.2\">1.726</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.8.3\">1.351</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.10.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T1.1.10.9.1\">1.75</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.10.9.2\">2.013</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.10.9.3\">1.576</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.1.11.10.1\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.11.10.2\">2.301</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.11.10.3\">1.796</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 196 |
+
"capture": "Table 1: Spatial precision of GB and GBVR at different levels of noise addition."
|
| 197 |
+
},
|
| 198 |
+
"2": {
|
| 199 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.7\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.7.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.7.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.7.7.8.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.7.7.8.1.1.1\">Linear Fit</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.7.7.8.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.7.7.8.1.2.1\">Logarithmic Fit</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.1.1.1.2\">f(x) = ax + b</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.1.1.1.1\">f(x) = \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.7.7.9.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.7.7.9.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.7.7.9.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.7.7.9.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.7.7.9.1.1.1.1.1\">Where:</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.7.7.9.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.7.7.9.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.7.7.9.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.7.7.9.1.2.1.1.1\">Where:</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.3.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.2.2.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.2.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.2.2.2.1.1.1\"> is the independent variable,</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.3.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.3.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.3.3.3.2.1.1\"> is the independent variable,</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.5.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.4.4.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.4.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.4.4.4.1.1.1\"> is the slope of the line,</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.5.5.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.5.5.5.2.1\">\n<span class=\"ltx_p ltx_align_left\" id=\"S5.5.5.5.2.1.1\"> is the slope of the logarithmic curve,</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.7.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.6.6.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.6.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.6.6.6.1.1.1\"> is the y-intercept of the line.</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.7.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.7.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S5.7.7.7.2.1.1\"> is the y-intercept of the logarithmic curve.</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 200 |
+
"capture": "Figure 2: Relationship between KCC and EER with the decimated level (Hz) for two datasets: GB and GBVR.\nSubplots (A1) and (B1) show the logarithmic decrease in KCC for the GB and GBVR datasets, respectively, with lower sampling rates.\nSubplots (A2) and (B2) depict the logarithmic increase in EER for the GB and GBVR datasets, with lower sampling rates.\nSubplots (A3) and (B3) illustrate the strong negative logarithmic relationship between EER and KCC for the GB and GBVR datasets.\n, and coefficient values are added to each plot\u2019s legend.\nThe values across all plots indicate a strong fit, suggesting that a higher sampling rate improves biometric performance, with the GBVR dataset demonstrating a particularly robust fit."
|
| 201 |
+
},
|
| 202 |
+
"3": {
|
| 203 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Absolute Value of Intercorrelation</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T2.1.1.1.1\">Experiment name</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.2\">Mean (SD) - GB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.3\">Mean (SD) - GBVR</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.2.1.1\">Decimation</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1.2\">0.19 (0.14)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1.3\">0.19 (0.14)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.1.3.2.1\">Percentage</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.2\">0.20 (0.15)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.3\">0.20 (0.15)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.1.4.3.1\"># Sequences</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.2\">0.19 (0.14)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.3\">0.18 (0.14)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.1.5.4.1\">Degraded Signal Quality</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.5.4.2\">0.19 (0.14)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.5.4.3\">0.20 (0.15)</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 204 |
+
"capture": "Table 2: Absolute Value of Intercorrelation"
|
| 205 |
+
},
|
| 206 |
+
"4": {
|
| 207 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Impact on Embeddings: Performance Metrics Variation</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1\">RQ</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T3.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.3.3.4.1\">Description</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T3.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.1\">KCC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T3.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.3.3.5.1\">Spearman R*</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.3.3.3.1\">EER</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.5.5.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.5.5.4\">Decimate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.5.5.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.5.5.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.7.3\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.7.4\">Percentage</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.7.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.7.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.9.9.3\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.9.9.4\"># Sequences</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.9.9.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.9.9.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.11.11.3\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.11.11.4\">Sig. 
Quality</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.11.11.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.11.11.2\"></td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<ul class=\"ltx_itemize ltx_centering ltx_figure_panel\" id=\"S6.I1\">\n<li class=\"ltx_item\" id=\"S6.I1.i1\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u2022</span>\n<div class=\"ltx_para\" id=\"S6.I1.i1.p1\">\n<p class=\"ltx_p\" id=\"S6.I1.i1.p1.1\"> Research Question</p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S6.I1.i2\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u2022</span>\n<div class=\"ltx_para\" id=\"S6.I1.i2.p1\">\n<p class=\"ltx_p\" id=\"S6.I1.i2.p1.2\"> indicates a significant effect due to data manipulation</p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S6.I1.i3\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u2022</span>\n<div class=\"ltx_para ltx_noindent\" id=\"S6.I1.i3.p1\">\n<p class=\"ltx_p\" id=\"S6.I1.i3.p1.1\"> <span class=\"ltx_text ltx_font_bold\" id=\"S6.I1.i3.p1.1.1\">\u201c\u2014\u201d</span> Denotes negligible impact.</p>\n</div>\n</li>\n</ul>\n</div>\n<div class=\"ltx_flex_break\"></div>\n</div>\n</figure>",
|
| 208 |
+
"capture": "Table 3: Impact on Embeddings: Performance Metrics Variation"
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
"image_paths": {
|
| 212 |
+
"1": {
|
| 213 |
+
"figure_path": "2402.16399v2_figure_1.png",
|
| 214 |
+
"caption": "Figure 1: Visual Representation of the Experimental Design.\n(A1) The interval between the red-dotted lines is defined as a sequence, containing 5000 samples for GB (for GBVR it is 1250 samples, not shown in the figure).\n(A2) Displays the first sequence from plot (A1).\n(B1) The signal from the plot (A1) has been downsampled to 25 Hz for demonstration.\n(B2) Shows the first sequence from plot (B1).\n(C1) Analyze only the first 10% of the signal, but place it in the center of the sequence with zero-padding on both sides.\n(C2) Presents the last sequence from the plot (C1) as an example. The right column provides a clearer visualization of the specific sequences from each row.",
|
| 215 |
+
"url": "http://arxiv.org/html/2402.16399v2/x1.png"
|
| 216 |
+
},
|
| 217 |
+
"2": {
|
| 218 |
+
"figure_path": "2402.16399v2_figure_2.png",
|
| 219 |
+
"caption": "Figure 2: Relationship between KCC and EER with the decimated level (Hz) for two datasets: GB and GBVR.\nSubplots (A1) and (B1) show the logarithmic decrease in KCC for the GB and GBVR datasets, respectively, with lower sampling rates.\nSubplots (A2) and (B2) depict the logarithmic increase in EER for the GB and GBVR datasets, with lower sampling rates.\nSubplots (A3) and (B3) illustrate the strong negative logarithmic relationship between EER and KCC for the GB and GBVR datasets.\nR2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, and coefficient values are added to each plot\u2019s legend.\nThe R2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT values across all plots indicate a strong fit, suggesting that a higher sampling rate improves biometric performance, with the GBVR dataset demonstrating a particularly robust fit.",
|
| 220 |
+
"url": "http://arxiv.org/html/2402.16399v2/x2.png"
|
| 221 |
+
},
|
| 222 |
+
"3": {
|
| 223 |
+
"figure_path": "2402.16399v2_figure_3.png",
|
| 224 |
+
"caption": "Figure 3: \nRelationship between KCC and EER with the percentage level (%) for two datasets: GB and GBVR.\nSubplots (A1) and (B1) show the logarithmic decrease in KCC for the GB and GBVR datasets, respectively, with lower percentage levels.\nSubplots (A2) and (B2) depict the logarithmic increase in EER for the GB and GBVR datasets as the percentage levels decrease. The high R2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT values across these plots indicate a strong fit, suggesting that a higher percentage level improves biometric performance.\nSubplots (A3) and (B3) illustrate the strong negative logarithmic correlation between EER and KCC for the GB and GBVR datasets.\nR2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, and coefficient values are added to each plot\u2019s legend.",
|
| 225 |
+
"url": "http://arxiv.org/html/2402.16399v2/x3.png"
|
| 226 |
+
},
|
| 227 |
+
"4": {
|
| 228 |
+
"figure_path": "2402.16399v2_figure_4.png",
|
| 229 |
+
"caption": "Figure 4: Relationship between the number of eye-tracking (ET) samples and the Equal Error Rate (EER). The plots illustrate the EER for two datasets, GB and GBVR, across varying sample sizes (50 to 5000 samples for GB and 50 to 1250 samples for GBVR). The results from RQ1 and RQ2 have been compared in each plot.",
|
| 230 |
+
"url": "http://arxiv.org/html/2402.16399v2/x4.png"
|
| 231 |
+
},
|
| 232 |
+
"5": {
|
| 233 |
+
"figure_path": "2402.16399v2_figure_5.png",
|
| 234 |
+
"caption": "Figure 5: Relationship between KCC and EER with the number of sequences for two datasets: GB and GBVR.\nSubplots (A1) and (B1) show the logarithmic decrease in KCC GB and KCC GBVR, respectively, with reduced sequences.\nSubplots (A2) and (B2) depict the logarithmic increase in EER GB and EER GBVR with increasing sequences.\nSubplots (A3) and (B3) illustrate the strong negative logarithmic correlation between EER and KCC for GB and GBVR datasets, presenting higher temporal persistence associated with lower equal error rates.\nR2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, and coefficient values are added to each plot\u2019s legend.",
|
| 235 |
+
"url": "http://arxiv.org/html/2402.16399v2/x5.png"
|
| 236 |
+
},
|
| 237 |
+
"6": {
|
| 238 |
+
"figure_path": "2402.16399v2_figure_6.png",
|
| 239 |
+
"caption": "Figure 6: Relationship between KCC and EER with the variation in spatial precision for two datasets: GB and GBVR.\nSubplots (A1) and (B1) show the linear decrease in KCC GB and KCC GBVR, respectively, with the degradation of spatial precision.\nSubplots (A2) and (B2) depict the linear increase in EER GB and EER GBVR with the degradation of spatial precision.\nSubplots (A3) and (B3) illustrate the strong negative linear correlation between KCC and EER for GB and GBVR datasets, presenting higher temporal persistence associated with lower equal error rates.\nR2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, and coefficient values are added to each plot\u2019s legend.",
|
| 240 |
+
"url": "http://arxiv.org/html/2402.16399v2/x6.png"
|
| 241 |
+
},
|
| 242 |
+
"7": {
|
| 243 |
+
"figure_path": "2402.16399v2_figure_7.png",
|
| 244 |
+
"caption": "Figure 7: This graph compares the relationship between KCC and EER across all manipulations and datasets. A linear model provides a good fit with a model r2superscript\ud835\udc5f2r^{2}italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT of 0.92.",
|
| 245 |
+
"url": "http://arxiv.org/html/2402.16399v2/x7.png"
|
| 246 |
+
}
|
| 247 |
+
},
|
| 248 |
+
"validation": true,
|
| 249 |
+
"references": [],
|
| 250 |
+
"url": "http://arxiv.org/html/2402.16399v2"
|
| 251 |
+
}
|
20240721/2402.16832v2.json
ADDED
|
@@ -0,0 +1,365 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Cross-Modal Projection in Multimodal LLMs Doesn\u2019t Really Project Visual Attributes to Textual Space",
|
| 3 |
+
"abstract": "Multimodal large language models (MLLMs) like LLaVA and GPT-4(V) enable general-purpose conversations about images with the language modality. As off-the-shelf MLLMs may have limited capabilities on images from domains like dermatology and agriculture, they must be fine-tuned to unlock domain-specific applications. The prevalent architecture of current open-source MLLMs comprises two major modules: an image-language (cross-modal) projection network and a large language model. It is desirable to understand the roles of these two modules in modeling domain-specific visual attributes to inform the design of future models and streamline the interpretability efforts on the current models. To this end, via experiments on datasets and under 2 fine-tuning settings, we find that as the MLLM is fine-tuned, it indeed gains domain-specific visual capabilities, but the updates do not lead to the projection extracting relevant domain-specific visual attributes. Our results indicate that the domain-specific visual attributes are modeled by the LLM, even when only the projection is fine-tuned. Through this study, we offer a potential reinterpretation of the role of cross-modal projections in MLLM architectures.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "###figure_1### The recent wave of advancements in large language models (LLMs) has equipped them with the ability to \u201csee\u201d images, leading to multimodal large language models (MLLMs) like LLaVA Liu et al. (2023c ###reference_b22###), GPT-4(V) Achiam et al. (2023 ###reference_b1###), and Gemini Anil et al. (2023 ###reference_b6###). MLLMs unlock the potential to converse with visual data using language. However, existing MLLMs are trained and evaluated for general-purpose multimodal tasks like question-answering on natural images111We use \u2018natural images\u2019 or \u2018internet images\u2019 to refer to common images encountered on social media platforms and the Web and contrast them with domain-specific images. Liu et al. (2023c ###reference_b22###); AI (2024 ###reference_b3###), which limits their applicability in specific domains like agriculture and dermatology. MLLMs with domain-specific visual capabilities can transform workflows in several industries, including healthcare, agriculture, circuit design, and satellite imaging Miotto et al. (2018 ###reference_b23###); Ferentinos (2018 ###reference_b12###); Anilturk et al. (2023 ###reference_b7###); Kaselimi et al. (2022 ###reference_b15###). While fine-tuning can improve domain-specific visual capabilities of general-purpose MLLMs, we adopt domain-specific fine-tuning as a strategic approach to understand the roles that the MLLM\u2019s key architectural components play in modeling visual attributes. A better understanding of the roles of MLLM\u2019s components in modeling visual attributes can inform future design choices as well as direct interpretability efforts.\nArchitecturally, open-source MLLMs comprise two key components: (i) a cross-modal projection layer that connects image representations with the LLM, and (ii) the LLM that processes the projected image representation and the text tokens; see Figure 1 ###reference_### (left). In the context of the projection, researchers often consider the projection layer as the unit responsible for aligning features/concepts from the image to the LLM space Li et al. (2023 ###reference_b18###); Lin et al. (2023 ###reference_b19###); Moon et al. (2023 ###reference_b24###). Consequently, one prevalent fine-tuning strategy to adapt MLLMs for domain-specific visual tasks is to update the projection while keeping the LLM parameters frozen Moon et al. (2023 ###reference_b24###). Alternatively, the projection and the LLM parameters can be fine-tuned concurrently Liu et al. (2023b ###reference_b21###).\nIn this work, we use domain-specific fine-tuning using the above two strategies to understand the role of the projection and the LLM parameters in acquiring domain-specific image modeling capabilities. We posit that if the projection plays a critical role in acquiring domain-specific image modeling capabilities, the post-projection representation \u2013 i.e., the representation of the image transformed by the projection, should be richer222We use domain-specific richness to indicate the \u201cexpressive power\u201d of the representations Bengio et al. (2012 ###reference_b9###) towards the domain-specific task. 
in domain-specific features.\nConversely, if the post-projection representation is not richer in domain-specific features, the domain-specific features are being identified or modeled by the LLM parameters.333Project webpage: https://claws-lab.github.io/projection-in-MLLMs/ ###reference_MLLMs/###\nOur experiments and analysis with different datasets show that, as expected, both the fine-tuning strategies boost domain-specific closed-set image classification performance of the MLLM. However, none of the strategies lead to extraction of richer domain-specific features by the update in the projection layer; see Figure 1 ###reference_### (right). This indicates that as MLLMs are fine-tuned to classify domain-specific images, the identification of domain-specific image attributes occurs in the LLM parameters, whether frozen or not. More broadly, our results add to the existing evidence that deep neural networks can be inherently multimodal Goh et al. (2021 ###reference_b14###); Schwettmann et al. (2023 ###reference_b30###), and LLMs could model visual data with minimal assistance from the cross-modal projection.\nWe first discuss the fine-tuning strategies to improve the domain-specific capabilities of MLLMs (Section 2 ###reference_###) and then analyze the role of projection in acquiring the new domain-specific capabilities (Section 3 ###reference_###). Finally, we discuss the implications of our work and the future directions (Section 4 ###reference_###)."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Effect of Fine-tuning Projection Layer versus the Entire Multimodal LLM",
|
| 15 |
+
"text": "We are interested in exploring two potential fine-tuning strategies that could help an MLLM in gaining domain-specific visual capabilities. The first approach involves simply fine-tuning the vision-to-language projection, e.g., a simple two-layer MLP with 20M parameters. The second approach involves training the entire MLLM \u2013 i.e., the projection layer + the LLM with 7B parameters. We conduct all our experiments with the LLaVA-1.5 model Liu et al. (2023b ###reference_b21###), which uses the LLaMA-2-7B Touvron et al. (2023 ###reference_b32###) as the LLM backbone, as it is a strong representative of open-source state-of-the-art multimodal LLMs Ge et al. (2023 ###reference_b13###); Liu et al. (2023a ###reference_b20###); Yu et al. (2023 ###reference_b35###).\nSetting 1: Only fine-tuning the projection layer.\nLLaVA-1.5 involves pre-training the cross-modal projection layers to align image features with the pre-trained LLM\u2019s token embeddings by maximizing the next-token prediction likelihood of the MLLM.\nLet denotes the ground-truth output corresponding to the question regarding the image encoding , which is obtained from the frozen vision-encoder of CLIP Radford et al. (2021 ###reference_b28###). The projection layer, parameterized by , is trained to elicit the correct response from the frozen LLM, token-by-token while using the projected image-encoding , and considering previous tokens of the ground-truth answer. See Figure 2 ###reference_### (Appendix) for a pictorial illustration of the formulation. Since our focus is to perform domain-specific image classification using MLLMs, we consider for a given image and construct as:\nFor each example, we randomly shuffle the order of classes inside <classes_string> to avoid any position bias. We fine-tune the projection layers of the LLaVA-1.5 model for epoch using the default hyper-parameters (Liu et al., 2023b ###reference_b21###). During inference, we perform zero-shot classification using the same prompt above for the MLLM with the updated projection.\nSetting 2: Fine-tuning the MLLM end-to-end.\nAlternatively, we fine-tune all the MLLM parameters, i.e., the projection layers and the LLM parameters concurrently by maximizing the next token-prediction likelihood of the MLLM. In other words, we update both and , where denotes the LLM paramters. We use the same strategy to construct and as in the previous setting. Again, we fine-tune the LLaVA-1.5 model for epoch using the default hyper-parameters. Similar to the above setting, after training the MLLM, we perform zero-shot domain-specific image classification using the constructed above.\nWe fine-tune the MLLM using these strategies for each of the datasets from different domains.\nImage datasets. The image classification datasets correspond to the following tasks: leaf disease classification, visual texture detection, skin disease identification, and humanitarian category classification. Figure 3 ###reference_### (Appendix) provides an illustration of the datasets under consideration.\n(i) Agriculture: To enable scalable and early plant disease detection, Singh et al. (2020 ###reference_b31###) curated PlantDoc. The dataset comprises 2,598 images categorized into 17 classes of leaf diseases. \n(ii) Textures: With an aim to evaluate whether visual models can identify human-centric attributes like texture beyond detecting or describing objects/scenes, Cimpoi et al. 
(2014 ###reference_b10###) curated 5,640 images categorized into 47 texture-related classes (like polka-dotted, wrinkled, and honeycombed). \n(iii) Dermatology: We consider the DermNet dataset (Rimi et al., 2020 ###reference_b29###), which comprises 19,561 images categorized into 23 types of skin diseases like Acne, Melanoma, Seborrheic Keratoses, etc. \n(iv) Humanitarian: To aid development of computational methods that can help humanitarian organizations process images posted on social platforms during crises, Alam et al. (2018 ###reference_b4###) and Ofli et al. (2020 ###reference_b26###) curated the CrisisMMD dataset, which comprises 10,461 images categorized into different categories. This dataset comprises images that are the closest to natural/internet images.\nDomain-specific classification performance. Table 1 ###reference_### shows the image classification performance (macro-averaged scores and accuracy) of the MLLMs under various settings. For reference, we include zero-shot classification performance of CLIP444https://huggingface.co/openai/clip-vit-large-patch14-336 ###reference_ge-patch14-336### Wolf et al. (2019 ###reference_b34###), which is the visual encoder of the LLaVA-1.5 model (see Appendix A.1 ###reference_### for details). First, it is worth noting that the zero-shot performance of the original LLaVA-1.5 model is notably worse than CLIP\u2019s zero-shot performance. This indicates that while domain-specific image attributes are present in the pre-projection image embeddings that are obtained from a frozen vision encoder (i.e., ), they are not being used by the MLLM parameters. This can be attributed to the corpus used to train MLLMs like LLaVA, which comprises natural images. Second,\nclearly, the results show that finetuning indeed improves performance on domain-specific classification, with significant improvements made when fine-tuning the entire MLLM (\u2018FT-E2E\u2019) as opposed to only the projection layer (\u2018FT-Proj\u2019). The greater effectiveness of the FT-E2E can be attributed to greater representational space () over FT-Proj (). With these observations, next, we focus on investigating the role of projection in capturing domain-specific image attributes."
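A minimal sketch of the X_q construction with per-example class shuffling; the exact instruction wording is elided in the text above, so the template here is an illustrative stand-in:

import random

def build_x_q(class_names):
    # Shuffle the label list for every example to avoid position bias.
    names = random.sample(class_names, k=len(class_names))
    classes_string = ", ".join(names)
    return ("Classify this image into one of the following classes: "
            f"{classes_string}. Answer with the class name only.")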
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Role of Projection in Learning Domain-Specific Image Attributes",
|
| 21 |
+
"text": "Following up on results in Table 1 ###reference_###, we ask: does the projection learn to model the domain-specific image attributes on fine-tuning the MLLM?\nEstimating post-projection richness. To answer the above question, we develop a reliable-yet-simple way to estimate domain-specific richness of the projected image representation, i.e., the post-projection representation, denoted by . We do this by training an independent multilayer perceptron (MLP) to perform the image classification task using as the image representation. This classifier helps estimate the extent of domain-specific information (or expressive power Bengio et al. (2012 ###reference_b9###)) that can be extracted from the input, in this case the post-projection image representation . In other words, a better classification performance by this MLP will denote relative domain-specific richness of the post-projection embeddings used for training, and vice versa. We train one MLP each using the post-projection representation obtained from the following three settings: (i) original LLaVA-1.5, (ii) LLaVA-1.5 with fine-tuned projection, and (ii) LLaVA-1.5 with end-to-end fine-tuning, while keeping the architecture of the MLP the same for consistent comparison. We provide the additional details, including architecture and training hyper-parameters, in Appendix A.2 ###reference_###.\nComparing domain-specific richness of post-projection representation across different settings. Table 2 ###reference_### shows: (a) the domain-specific richness of post-projection representation (\u2018Post-proj MLP\u2019), and (b) the corresponding MLLM performance (\u2018MLLM\u2019), across the three settings mentioned above (i.e., \u2018Original\u2019, \u2018FT-Proj\u2019, and \u2018FT-E2E\u2019). We report the macro-averaged score on the test set of the respective dataset for both (a) and (b). There are two key trends in Table 2 ###reference_###: first, when the \u2018Original\u2019 LLaVA-1.5 model\u2019s projection layer is fine-tuned (\u2018FT-Proj\u2019), the domain-specific richness of the post-projection representation diminishes, while a boost in the MLLM performance is observed. Similarly, second, with end-to-end fine-tuning of LLaVA-1.5 (\u2018FT-E2E\u2019), the domain-specific richness of the post-projection representation worsens while the MLLM performance boosts notably. These two trends are consistent across all the datasets considered in our study.\nDomain-specific attributes are identified within the LLM. The two trends observed above reinforce the idea that as the MLLM gains previously-absent domain-specific image classification abilities via fine-tuning, the contribution of the projection layer in identifying relevant image attributes declines. Let us consider the two fine-tuning settings separately. In the first setting, the projection layer undergoes updates to assist the frozen LLM in more accurate label prediction, and yet captures lesser domain-specific image attributes. This indicates that the updates in projection layer merely facilitate better use of frozen LLM parameters for the domain-specific task and do not necessarily involve mapping image attributes to the frozen LLM space.\nIn the second setting as well, when both the LLM parameters and projection layer undergo updates concurrently, the projection layer captures lesser domain-specific attributes, which indicates that the updates in the LLM parameters are predominantly responsible for the acquired domain-specific image classification capabilities. 
In sum, our results indicate that the modeling of domain-specific image attributes in MLLMs is done by the LLM parameters, whether they are kept frozen or undergo updates."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Discussion and Implications",
|
| 27 |
+
"text": "Existing literature on interpretability of neural networks has discussed the notion of \u201cmultimodal neurons\u201d \u2013 neurons that trigger in response to particular concepts spanning disparate modalities Goh et al. (2021 ###reference_b14###); Schwettmann et al. (2023 ###reference_b30###); Pan et al. (2023 ###reference_b27###). For instance, Goh et al. (2021 ###reference_b14###) demonstrate that in the CLIP model, a single neuron could respond to the photographs, drawings, or images that relate to, let\u2019s say \u2018spiderman,\u2019 even though the input image may differ in terms of low-level visual attributes like color, edges, and corners. Similarly, Schwettmann et al. (2023 ###reference_b30###) show that a specific neurons within a frozen text-only Transformer are responsible for detecting visual concepts, let\u2019s say like \u2018horses,\u2019 in the input images that are projected to align with the text-only transformer. Our study adds to this literature by showing that even the acquired abilities to detect visual attributes in an MLLM are reliant on the LLM parameters. Notably, when the LLM parameters are frozen, the cross-modal projection layer adapts to facilitate detection of visual attibutes in the LLM without extracting domain-specific attributes. In other words, when the LLM is frozen and the projection is fine-tuned, the projection parameters are updated to leverage the pre-existing domain-specific knowledge in the LLM parameters. In the future, we aim to interpret the layer- & neuron-level contributions in LLMs towards acquired multimodal reasoning."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Limitations and Broader Perspective",
|
| 33 |
+
"text": "Limitations and future work: Our current work focuses on a representative cross-modal projection scheme (multilayer perceptron) in a state-of-the-art MLLM (LLaVA-1.5). Other open-source MLLMs have considered other projection schemes like a trainable linear layer (LLaVa-1; Liu et al. (2023c ###reference_b22###)), gated cross-attention (Flamingo; Alayrac et al. (2022 ###reference_b5###)), and Q-Former (InstructBLIP; Dai et al. (2023 ###reference_b11###)). Future work could extend the current study to other projection schemes and models. Beyond the adopted strategy of estimating the post-projection richness of image representations using an independent classifier, future work could also probe the MLLM using concept bottleneck methods Koh et al. (2020 ###reference_b17###), or analyze mutual information between representations Bachman et al. (2019 ###reference_b8###). Finally, while outside the scope of the current work, a holistic evaluation of the MLLM should focus on domain-specific capabilities as well as the general purpose capabilities.\nBroader social impact: The authors do not foresee\nany negative social impacts of this specific work. However, we acknowledge that existing LLMs and MLLMs demonstrate different forms of biases Wan et al. (2023 ###reference_b33###); Nwatu et al. (2023 ###reference_b25###) that could be inherited in domain-specific variants. In line with the ongoing effort towards mitigating social biases in deep neural networks, future efforts that aim to interface modality-specific reasoning with LLMs, should consider the additional biases that LLMs may introduce on top of the modality-specific networks.\nDatasets and code: The datasets used in this study are publicly available and were curated by previous research. We abide\nby their terms of use. We release the code for our experiments to aid reproducibility and enable future research on this topic: https://github.com/claws-lab/projection-in-MLLMs ###reference_-MLLMs###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "6",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Acknowledgements",
|
| 39 |
+
"text": "This research/material is based upon work supported in part by\nNSF grants CNS-2154118, ITE-2137724, ITE-2230692, CNS2239879, Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290102 (subcontract No. PO70745), CDC, and funding from Microsoft. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the position or policy of DARPA, DoD, SRI International, CDC, NSF, and no official endorsement should be inferred. Gaurav is partially supported by the JP Morgan AI Research PhD Fellowship and the Snap Research Fellowship. We thank the members of the CLAWS Lab for their helpful feedback."
|
| 40 |
+
}
|
| 41 |
+
],
|
| 42 |
+
"appendix": [
|
| 43 |
+
{
|
| 44 |
+
"section_id": "Appendix 1",
|
| 45 |
+
"parent_section_id": null,
|
| 46 |
+
"section_name": "Appendix A Appendix",
|
| 47 |
+
"text": "We perform zero-shot classification using the CLIP model (clip-vit-large-patch14-336; ), which is the same as the vision encoder used for obtaining pre-projection representation of the input image (i.e., ) by the LLaVA-1.5 model. The CLIP model embeds both image and text data into a common space using a contrastive learning objective. We use the pre-trained model to compute the cosine similarity between the image representations and the representation of the dataset-specific label strings obtained from the textual backbone of CLIP. Following this, we consider the most similar label string to be the predicted label for the given image, and compute classification metrics on the test set to quantify CLIP\u2019s zero-shot performance.\n###figure_2### We train a multilayer perceptron for estimating the domain-specific richness of the post-projection image representation (i.e., ). The MLP takes the tokens corresponding to the image as input and learns to perform the classification task using the examples from the standard train set. Architecturally, the MLP comprises a token-level average pooling step to obtain the image representation, followed by subsequent layers, and eventually the output layer of size equivalent to the number of classes in the dataset. We use ReLU activation Agarap (2018 ###reference_b2###) to induce non-linearity. We keep the architecture of this MLP fixed across all the settings to control for the number of learnable parameters and the representational power of the neural network, therefore allowing us to estimate the richness of the input embeddings with respect to the target task. Each model is trained with a batch size of 128. We use Adam optimizer Kingma and Ba (2014 ###reference_b16###) with a learning rate initialized at and adopt early stopping based on the loss values to avoid overfitting. As a sanity check, we note that an MLP trained using our setup on the post-projection embeddings obtained from the original LLaVA-1.5 model for the Humanitarian task (a natural images dataset), achieves close to the state-of-the-art performance reported on this task Alam et al. (2018 ###reference_b4###). This indicates that our setup enables a reliable estimate of the richness/expressive power of the post-projection representations.\nAs reference to the performance of MLLM\u2019s domain-specific capabilities (before and after fine-tuning), we include the performance of simple image-only classification models. We use the 1024-dimensional image embeddings obtained from a pre-trained CLIP model (clip-vit-large-patch14-336) and train a multilayer perceptron with layers of size ( (input layer), , , , , , # of classes (output layer)). We use the same design choices as used for training the MLPs described in Sec. A.2 ###reference_###, and evaluate the models on respective test sets of the dataset. The results are presented in Table 3 ###reference_###. Although it is not the primary focus of this work, it is interesting to note that for the domain-specific tasks \u2013 i.e., all the tasks except Humanitarian the MLP (with parameters) performs better than the fine-tuned MLLM (with parameters). Both the model use CLIP embeddings as input representation of the image and are fine-tuned with the same amount of labeled data.\n###figure_3### All the experiments discussed in this study were conducted using two NVIDIA A100 GPUs (80 GB). 
Each fine-tuning run of the MLLM took about 1 hour requiring both the GPUs, with additional time for inference; multiple inference runs could be carried over a single GPU. The training and evaluation of the MLPs took less than 20 minutes each. Each run of zero-shot evaluation of CLIP was done on a single GPU in less than 15 minutes."
|
| 48 |
+
}
|
| 49 |
+
],
|
| 50 |
+
"tables": {
|
| 51 |
+
"1": {
|
| 52 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.4\" style=\"width:433.6pt;height:102.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-49.4pt,11.7pt) scale(0.814478216634539,0.814478216634539) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.4\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S2.T1.4.4.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.4.5.1.1\">Models/Variants</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.4.4.5.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S2.T1.4.4.5.2.1\">Agriculture</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.4.4.5.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S2.T1.4.4.5.3.1\">Textures</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.4.4.5.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S2.T1.4.4.5.4.1\">Dermatology</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.4.4.5.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S2.T1.4.4.5.5.1\">Humanitarian</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.4\">\n<td class=\"ltx_td\" id=\"S2.T1.4.4.4.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4.6\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4.7\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4.8\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4.9\">Acc.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.6.1\">Random (Uniform)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.2\">0.0309</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.3\">0.0339</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.4\">0.0214</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.5\">0.0218</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.6\">0.0451</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.7\">0.0483</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.8\">0.2425</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.6.9\">0.2664</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.7.1\">CLIP (Zero-shot; LLaVA-1.5\u2019s vision encoder)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.2\">0.4165</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.3\">0.4492</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.4\">0.4582</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.5\">0.4984</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.6\">0.1783</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S2.T1.4.4.7.7\">0.2401</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.8\">0.4139</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.7.9\">0.4718</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.8.1\">LLaVA-1.5 (Zero-shot)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.2\">0.1064</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.3\">0.1255</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.4\">0.1882</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.5\">0.2138</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.6\">0.0658</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.7\">0.0672</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.8\">0.5169</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.8.9\">0.5678</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.4.9.1\">LLaVA-1.5 (FT-Proj with labels)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.2\">0.2221</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.3\">0.2478</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.4\">0.4505</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.5\">0.4654</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.6\">0.2932</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.7\">0.3403</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.8\">0.6227</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.9.9\">0.7151</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.1\">LLaVA-1.5 (FT-E2E with labels)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.2\">0.5984</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.3\">0.6525</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.4\">0.7446</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.5\">0.7496</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.6\">0.4947</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.7\">0.5464</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.8\">0.7950</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.4.10.9\">0.8554</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.12.1\">Performance on domain-specific image classification datasets.</span> Fine-tuning LLaVA-1.5 end-to-end leads to the best domain-specific performance, while only fine-tuning the projection leads to a notable gain over LLaVA\u2019s zero-shot capabilities across all the datasets. It is worth noting that CLIP\u2019s zero-shot performance, which is the pre-projection image representation that LLaVA uses, is notably better than LLaVA\u2019s zero-shot performance. 
All the values are averaged over experimental runs with different random seeds; the is for all values.\n</figcaption>\n</figure>",
|
| 53 |
+
"capture": "Table 1: Performance on domain-specific image classification datasets. Fine-tuning LLaVA-1.5 end-to-end leads to the best domain-specific performance, while only fine-tuning the projection leads to a notable gain over LLaVA\u2019s zero-shot capabilities across all the datasets. It is worth noting that CLIP\u2019s zero-shot performance, which is the pre-projection image representation that LLaVA uses, is notably better than LLaVA\u2019s zero-shot performance. All the values are averaged over experimental runs with different random seeds; the is for all values.\n"
|
| 54 |
+
},
|
| 55 |
+
"2": {
|
| 56 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.2\" style=\"width:433.6pt;height:355.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(57.8pt,-47.4pt) scale(1.36374940049239,1.36374940049239) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T2.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.3.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.4.1\">Setting</span></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T2.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S3.T2.1.1.1.1.2\"></span> <span class=\"ltx_text\" id=\"S3.T2.1.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.1.1.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S3.T2.1.1.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.1.1.1.1.1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1.1.1.2.1.1\">Post-proj MLP</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.1.1.1.1.1.1.1.1.1\">(LLaVA-1.5; )</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.1.1.1.1.3\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T2.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S3.T2.2.2.2.2.2\"></span> <span class=\"ltx_text\" id=\"S3.T2.2.2.2.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.2.2.2.2.1.1.1\">\n<span class=\"ltx_tr\" id=\"S3.T2.2.2.2.2.1.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.2.2.2.2.1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.2.1.1.1.2.1.1\">MLLM</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.2.2.2.2.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.2.2.2.2.1.1.1.1.1\">(LLaVA-1.5; )</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.2.2.2.2.3\"></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.2.2.3.1.1\">Agriculture</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.3.1.2\">Original</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.3.1.3\">0.5701 <span class=\"ltx_text\" id=\"S3.T2.2.2.3.1.3.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2013)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.3.1.4\">0.1064 <span class=\"ltx_text\" id=\"S3.T2.2.2.3.1.4.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2014-)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.4.2\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.4.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.4.2.2\">FT-Proj</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.4.2.3\">0.4134 <span class=\"ltx_text\" id=\"S3.T2.2.2.4.2.3.1\" style=\"color:#A8213D;\">(-27.49%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.4.2.4\">0.2221 <span class=\"ltx_text\" id=\"S3.T2.2.2.4.2.4.1\" style=\"color:#177345;\">(+108.74%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.5.3\">\n<td class=\"ltx_td\" 
id=\"S3.T2.2.2.5.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.5.3.2\">FT-E2E</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.5.3.3\">0.5346 <span class=\"ltx_text\" id=\"S3.T2.2.2.5.3.3.1\" style=\"color:#A8213D;\">(-06.22%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.5.3.4\">0.5984 <span class=\"ltx_text\" id=\"S3.T2.2.2.5.3.4.1\" style=\"color:#177345;\">(+462.41%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.6.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.2.2.6.4.1\">Textures</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.6.4.2\">Original</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.6.4.3\">0.6401 <span class=\"ltx_text\" id=\"S3.T2.2.2.6.4.3.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2013)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.6.4.4\">0.1882 <span class=\"ltx_text\" id=\"S3.T2.2.2.6.4.4.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2014-)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.7.5\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.7.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.7.5.2\">FT-Proj</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.7.5.3\">0.4736 <span class=\"ltx_text\" id=\"S3.T2.2.2.7.5.3.1\" style=\"color:#A8213D;\">(-26.01%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.7.5.4\">0.4505 <span class=\"ltx_text\" id=\"S3.T2.2.2.7.5.4.1\" style=\"color:#177345;\">(+139.37%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.8.6\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.8.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.8.6.2\">FT-E2E</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.8.6.3\">0.6212 <span class=\"ltx_text\" id=\"S3.T2.2.2.8.6.3.1\" style=\"color:#A8213D;\">(-02.95%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.8.6.4\">0.7446 <span class=\"ltx_text\" id=\"S3.T2.2.2.8.6.4.1\" style=\"color:#177345;\">(+295.64%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.9.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.2.2.9.7.1\">Dermatology</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.9.7.2\">Original</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.9.7.3\">0.3105 <span class=\"ltx_text\" id=\"S3.T2.2.2.9.7.3.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2013)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.9.7.4\">0.0658 <span class=\"ltx_text\" id=\"S3.T2.2.2.9.7.4.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2014-)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.10.8\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.10.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.10.8.2\">FT-Proj</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.10.8.3\">0.2182 <span class=\"ltx_text\" id=\"S3.T2.2.2.10.8.3.1\" style=\"color:#A8213D;\">(-29.72%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.10.8.4\">0.2932 <span class=\"ltx_text\" id=\"S3.T2.2.2.10.8.4.1\" style=\"color:#177345;\">(+345.59%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.11.9\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.11.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.11.9.2\">FT-E2E</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.11.9.3\">0.2525 <span class=\"ltx_text\" id=\"S3.T2.2.2.11.9.3.1\" 
style=\"color:#A8213D;\">(-18.67%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.11.9.4\">0.4947 <span class=\"ltx_text\" id=\"S3.T2.2.2.11.9.4.1\" style=\"color:#177345;\">(+651.82%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.12.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.2.2.12.10.1\">Humanitarian</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.12.10.2\">Original</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.12.10.3\">0.7498 <span class=\"ltx_text\" id=\"S3.T2.2.2.12.10.3.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2013)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.12.10.4\">0.5169 <span class=\"ltx_text\" id=\"S3.T2.2.2.12.10.4.1\" style=\"color:#808080;\">(\u2014\u2014\u2014\u2014-)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.13.11\">\n<td class=\"ltx_td\" id=\"S3.T2.2.2.13.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.13.11.2\">FT-Proj</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.13.11.3\">0.6025 <span class=\"ltx_text\" id=\"S3.T2.2.2.13.11.3.1\" style=\"color:#A8213D;\">(-19.64%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.2.2.13.11.4\">0.6227 <span class=\"ltx_text\" id=\"S3.T2.2.2.13.11.4.1\" style=\"color:#177345;\">(+020.47%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.14.12\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S3.T2.2.2.14.12.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.2.2.14.12.2\">FT-E2E</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.2.2.14.12.3\">0.7238 <span class=\"ltx_text\" id=\"S3.T2.2.2.14.12.3.1\" style=\"color:#A8213D;\">(-03.46%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.2.2.14.12.4\">0.7950 <span class=\"ltx_text\" id=\"S3.T2.2.2.14.12.4.1\" style=\"color:#177345;\">(+053.80%)</span>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.1\">Estimating the domain-specific richness of the post-projection image representation using an independent MLP.</span> Compared to the original LLaVA-1.5 model, both fine-tuning strategies lead to worsened domain-specific richness of the post-projection image representation (second-last column), while the MLLM performance (last column) improves consistently. This implies that the domain-specific attributes are identified in the LLM, even when the LLM parameters are kept frozen as the projection is updated (i.e., \u2018FT-Proj\u2019).\n</figcaption>\n</figure>",
|
| 57 |
+
"capture": "Table 2: Estimating the domain-specific richness of the post-projection image representation using an independent MLP. Compared to the original LLaVA-1.5 model, both fine-tuning strategies lead to worsened domain-specific richness of the post-projection image representation (second-last column), while the MLLM performance (last column) improves consistently. This implies that the domain-specific attributes are identified in the LLM, even when the LLM parameters are kept frozen as the projection is updated (i.e., \u2018FT-Proj\u2019).\n"
|
| 58 |
+
},
|
| 59 |
+
"3": {
|
| 60 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A1.T3.1\" style=\"width:325.2pt;height:178.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(80.8pt,-44.4pt) scale(1.98695246397678,1.98695246397678) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T3.1.1\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.2.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.1\">\n score</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T3.1.1.1.3\">Acc.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.1.2.1\">Agriculture</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.2\">0.6991</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.3\">0.7118</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.1.3.1\">Textures</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2\">0.7644</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.3\">0.7638</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.1.4.1\">Dermatology</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.2\">0.6046</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3\">0.6492</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A1.T3.1.1.5.1\">Humanitarian</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.5.2\">0.7506</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.5.3\">0.8238</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.7.1\">Classification performance of MLP-based image-only classifiers.</span> A simple MLP performs better on out of tasks than the fine-tuned multimodal LLM; see Table <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.16832v2#S2.T1\" title=\"Table 1 \u2023 2 Effect of Fine-tuning Projection Layer versus the Entire Multimodal LLM \u2023 Cross-Modal Projection in Multimodal LLMs Doesn\u2019t Really Project Visual Attributes to Textual Space\"><span class=\"ltx_text ltx_ref_tag\">1</span></a> for MLLM results.\n</figcaption>\n</figure>",
|
| 61 |
+
"capture": "Table 3: Classification performance of MLP-based image-only classifiers. A simple MLP performs better on out of tasks than the fine-tuned multimodal LLM; see Table 1 for MLLM results.\n"
|
| 62 |
+
}
|
| 63 |
+
},
|
| 64 |
+
"image_paths": {
|
| 65 |
+
"1": {
|
| 66 |
+
"figure_path": "2402.16832v2_figure_1.png",
|
| 67 |
+
"caption": "Figure 1: Overview of our study. While the MLLM\u2019s domain-specific visual capability can be improved using fine-tuning strategies, the domain-specific richness of the image\u2019s post-projection representation does not improve. Results indicate that domain-specific visual attributes are predominantly modeled by the LLM parameters (whether frozen or not) and the projection does not necessarily play a role in mapping visual attributes to the LLM space.",
|
| 68 |
+
"url": "http://arxiv.org/html/2402.16832v2/x1.png"
|
| 69 |
+
},
|
| 70 |
+
"2": {
|
| 71 |
+
"figure_path": "2402.16832v2_figure_2.png",
|
| 72 |
+
"caption": "Figure 2: Architecture of the MLLM considered in this study. \u03d5italic-\u03d5\\phiitalic_\u03d5 and \u03b8\ud835\udf03\\thetaitalic_\u03b8 denote tunable parameters of the projection and the large language model, respectively.",
|
| 73 |
+
"url": "http://arxiv.org/html/2402.16832v2/x2.png"
|
| 74 |
+
},
|
| 75 |
+
"3": {
|
| 76 |
+
"figure_path": "2402.16832v2_figure_3.png",
|
| 77 |
+
"caption": "Figure 3: Illustration of the 4444 domain-specific image classification datasets used in this study. The datasets are from diverse domains; for brevity we only show some of the representative labels from each of the datasets. Images best viewed with zoom.",
|
| 78 |
+
"url": "http://arxiv.org/html/2402.16832v2/x3.png"
|
| 79 |
+
}
|
| 80 |
+
},
|
| 81 |
+
"validation": true,
|
| 82 |
+
"references": [
|
| 83 |
+
{
|
| 84 |
+
"1": {
|
| 85 |
+
"title": "Gpt-4 technical report.",
|
| 86 |
+
"author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023.",
|
| 87 |
+
"venue": "arXiv preprint arXiv:2303.08774.",
|
| 88 |
+
"url": null
|
| 89 |
+
}
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"2": {
|
| 93 |
+
"title": "Deep learning using rectified linear units (relu).",
|
| 94 |
+
"author": "Abien Fred Agarap. 2018.",
|
| 95 |
+
"venue": "arXiv preprint arXiv:1803.08375.",
|
| 96 |
+
"url": null
|
| 97 |
+
}
|
| 98 |
+
},
|
| 99 |
+
{
|
| 100 |
+
"3": {
|
| 101 |
+
"title": "Introducing domain-specific large vision models.",
|
| 102 |
+
"author": "Landing AI. 2024.",
|
| 103 |
+
"venue": "https://landing.ai/blog/introducing-domain-specific-large\n-vision-models/.",
|
| 104 |
+
"url": null
|
| 105 |
+
}
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"4": {
|
| 109 |
+
"title": "Crisismmd: Multimodal twitter datasets from natural disasters.",
|
| 110 |
+
"author": "Firoj Alam, Ferda Ofli, and Muhammad Imran. 2018.",
|
| 111 |
+
"venue": "In Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM).",
|
| 112 |
+
"url": null
|
| 113 |
+
}
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"5": {
|
| 117 |
+
"title": "Flamingo: a visual language model for few-shot learning.",
|
| 118 |
+
"author": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022.",
|
| 119 |
+
"venue": "Advances in Neural Information Processing Systems, 35:23716\u201323736.",
|
| 120 |
+
"url": null
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
{
|
| 124 |
+
"6": {
|
| 125 |
+
"title": "Gemini: a family of highly capable multimodal models.",
|
| 126 |
+
"author": "Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023.",
|
| 127 |
+
"venue": "arXiv preprint arXiv:2312.11805.",
|
| 128 |
+
"url": null
|
| 129 |
+
}
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"7": {
|
| 133 |
+
"title": "Automatic defect classification (adc) solution using data-centric artificial intelligence (ai) for outgoing quality inspections in the semiconductor industry.",
|
| 134 |
+
"author": "Onder Anilturk, Edwin Lumanauw, James Bird, Juan Olloniego, Dillon Laird, Juan Camilo Fernandez, and Quinn Killough. 2023.",
|
| 135 |
+
"venue": "In Metrology, Inspection, and Process Control XXXVII, volume 12496, pages 830\u2013836. SPIE.",
|
| 136 |
+
"url": null
|
| 137 |
+
}
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"8": {
|
| 141 |
+
"title": "Learning representations by maximizing mutual information across views.",
|
| 142 |
+
"author": "Philip Bachman, R Devon Hjelm, and William Buchwalter. 2019.",
|
| 143 |
+
"venue": "Advances in neural information processing systems, 32.",
|
| 144 |
+
"url": null
|
| 145 |
+
}
|
| 146 |
+
},
|
| 147 |
+
{
|
| 148 |
+
"9": {
|
| 149 |
+
"title": "Unsupervised feature learning and deep learning: A review and new perspectives.",
|
| 150 |
+
"author": "Yoshua Bengio, Aaron C Courville, and Pascal Vincent. 2012.",
|
| 151 |
+
"venue": "CoRR, abs/1206.5538, 1(2665):2012.",
|
| 152 |
+
"url": null
|
| 153 |
+
}
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"10": {
|
| 157 |
+
"title": "Describing textures in the wild.",
|
| 158 |
+
"author": "M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, , and A. Vedaldi. 2014.",
|
| 159 |
+
"venue": "In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).",
|
| 160 |
+
"url": null
|
| 161 |
+
}
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"11": {
|
| 165 |
+
"title": "Instructblip: Towards general-purpose vision-language models with instruction tuning.",
|
| 166 |
+
"author": "Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Albert Li, Pascale Fung, and Steven C. H. Hoi. 2023.",
|
| 167 |
+
"venue": "ArXiv, abs/2305.06500.",
|
| 168 |
+
"url": "https://api.semanticscholar.org/CorpusID:258615266"
|
| 169 |
+
}
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"12": {
|
| 173 |
+
"title": "Deep learning models for plant disease detection and diagnosis.",
|
| 174 |
+
"author": "Konstantinos P Ferentinos. 2018.",
|
| 175 |
+
"venue": "Computers and electronics in agriculture, 145:311\u2013318.",
|
| 176 |
+
"url": null
|
| 177 |
+
}
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"13": {
|
| 181 |
+
"title": "Mllm-bench, evaluating multi-modal llms using gpt-4v.",
|
| 182 |
+
"author": "Wentao Ge, Shunian Chen, Guiming Chen, Junying Chen, Zhihong Chen, Shuo Yan, Chenghao Zhu, Ziyue Lin, Wenya Xie, Xidong Wang, et al. 2023.",
|
| 183 |
+
"venue": "arXiv preprint arXiv:2311.13951.",
|
| 184 |
+
"url": null
|
| 185 |
+
}
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"14": {
|
| 189 |
+
"title": "Multimodal neurons in artificial neural networks.",
|
| 190 |
+
"author": "Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. 2021.",
|
| 191 |
+
"venue": "Distill, 6(3):e30.",
|
| 192 |
+
"url": null
|
| 193 |
+
}
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"15": {
|
| 197 |
+
"title": "A vision transformer model for convolution-free multilabel classification of satellite imagery in deforestation monitoring.",
|
| 198 |
+
"author": "Maria Kaselimi, Athanasios Voulodimos, Ioannis Daskalopoulos, Nikolaos Doulamis, and Anastasios Doulamis. 2022.",
|
| 199 |
+
"venue": "IEEE Transactions on Neural Networks and Learning Systems.",
|
| 200 |
+
"url": null
|
| 201 |
+
}
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"16": {
|
| 205 |
+
"title": "Adam: A method for stochastic optimization.",
|
| 206 |
+
"author": "Diederik P Kingma and Jimmy Ba. 2014.",
|
| 207 |
+
"venue": "arXiv preprint arXiv:1412.6980.",
|
| 208 |
+
"url": null
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
{
|
| 212 |
+
"17": {
|
| 213 |
+
"title": "Concept bottleneck models.",
|
| 214 |
+
"author": "Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020.",
|
| 215 |
+
"venue": "In International conference on machine learning, pages 5338\u20135348. PMLR.",
|
| 216 |
+
"url": null
|
| 217 |
+
}
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"18": {
|
| 221 |
+
"title": "Llava-med: Training a large language-and-vision assistant for biomedicine in one day.",
|
| 222 |
+
"author": "Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023.",
|
| 223 |
+
"venue": "arXiv preprint arXiv:2306.00890.",
|
| 224 |
+
"url": null
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"19": {
|
| 229 |
+
"title": "Video-llava: Learning united visual representation by alignment before projection.",
|
| 230 |
+
"author": "Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. 2023.",
|
| 231 |
+
"venue": "arXiv preprint arXiv:2311.10122.",
|
| 232 |
+
"url": null
|
| 233 |
+
}
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"20": {
|
| 237 |
+
"title": "Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v (ision), llava-1.5, and other multi-modality models.",
|
| 238 |
+
"author": "Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023a.",
|
| 239 |
+
"venue": "arXiv preprint arXiv:2310.14566.",
|
| 240 |
+
"url": null
|
| 241 |
+
}
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"21": {
|
| 245 |
+
"title": "Improved baselines with visual instruction tuning.",
|
| 246 |
+
"author": "Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023b.",
|
| 247 |
+
"venue": "arXiv preprint arXiv:2310.03744.",
|
| 248 |
+
"url": null
|
| 249 |
+
}
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"22": {
|
| 253 |
+
"title": "Visual instruction tuning.",
|
| 254 |
+
"author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023c.",
|
| 255 |
+
"venue": "In Thirty-seventh Conference on Neural Information Processing Systems.",
|
| 256 |
+
"url": "https://openreview.net/forum?id=w0H2xGHlkw"
|
| 257 |
+
}
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"23": {
|
| 261 |
+
"title": "Deep learning for healthcare: review, opportunities and challenges.",
|
| 262 |
+
"author": "Riccardo Miotto, Fei Wang, Shuang Wang, Xiaoqian Jiang, and Joel T Dudley. 2018.",
|
| 263 |
+
"venue": "Briefings in bioinformatics, 19(6):1236\u20131246.",
|
| 264 |
+
"url": null
|
| 265 |
+
}
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"24": {
|
| 269 |
+
"title": "Anymal: An efficient and scalable any-modality augmented language model.",
|
| 270 |
+
"author": "Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Tushar Nagarajan, Matt Smith, Shashank Jain, Chun-Fu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, et al. 2023.",
|
| 271 |
+
"venue": "arXiv preprint arXiv:2309.16058.",
|
| 272 |
+
"url": null
|
| 273 |
+
}
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"25": {
|
| 277 |
+
"title": "Bridging the digital divide: Performance variation across socio-economic factors in vision-language models.",
|
| 278 |
+
"author": "Joan Nwatu, Oana Ignat, and Rada Mihalcea. 2023.",
|
| 279 |
+
"venue": "arXiv preprint arXiv:2311.05746.",
|
| 280 |
+
"url": null
|
| 281 |
+
}
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"26": {
|
| 285 |
+
"title": "Analysis of social media data using multimodal deep learning for disaster response.",
|
| 286 |
+
"author": "Ferda Ofli, Firoj Alam, and Muhammad Imran. 2020.",
|
| 287 |
+
"venue": "In 17th International Conference on Information Systems for Crisis Response and Management. ISCRAM, ISCRAM.",
|
| 288 |
+
"url": null
|
| 289 |
+
}
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"27": {
|
| 293 |
+
"title": "Finding and editing multi-modal neurons in pre-trained transformer.",
|
| 294 |
+
"author": "Haowen Pan, Yixin Cao, Xiaozhi Wang, and Xun Yang. 2023.",
|
| 295 |
+
"venue": "arXiv preprint arXiv:2311.07470.",
|
| 296 |
+
"url": null
|
| 297 |
+
}
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"28": {
|
| 301 |
+
"title": "Learning transferable visual models from natural language supervision.",
|
| 302 |
+
"author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021.",
|
| 303 |
+
"venue": "In International conference on machine learning, pages 8748\u20138763. PMLR.",
|
| 304 |
+
"url": null
|
| 305 |
+
}
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"29": {
|
| 309 |
+
"title": "Derm-nn: skin diseases detection using convolutional neural network.",
|
| 310 |
+
"author": "Tanzina Afroz Rimi, Nishat Sultana, and Md Ferdouse Ahmed Foysal. 2020.",
|
| 311 |
+
"venue": "In 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), pages 1205\u20131209. IEEE.",
|
| 312 |
+
"url": null
|
| 313 |
+
}
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"30": {
|
| 317 |
+
"title": "Multimodal neurons in pretrained text-only transformers.",
|
| 318 |
+
"author": "Sarah Schwettmann, Neil Chowdhury, Samuel Klein, David Bau, and Antonio Torralba. 2023.",
|
| 319 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2862\u20132867.",
|
| 320 |
+
"url": null
|
| 321 |
+
}
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"31": {
|
| 325 |
+
"title": "Plantdoc: A dataset for visual plant disease detection.",
|
| 326 |
+
"author": "Davinder Singh, Naman Jain, Pranjali Jain, Pratik Kayal, Sudhakar Kumawat, and Nipun Batra. 2020.",
|
| 327 |
+
"venue": "In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pages 249\u2013253.",
|
| 328 |
+
"url": null
|
| 329 |
+
}
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"32": {
|
| 333 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models.",
|
| 334 |
+
"author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023.",
|
| 335 |
+
"venue": "arXiv preprint arXiv:2307.09288.",
|
| 336 |
+
"url": null
|
| 337 |
+
}
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"33": {
|
| 341 |
+
"title": "\"kelly is a warm person, joseph is a role model\": Gender biases in llm-generated reference letters.",
|
| 342 |
+
"author": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng. 2023.",
|
| 343 |
+
"venue": "arXiv preprint arXiv:2310.09219.",
|
| 344 |
+
"url": null
|
| 345 |
+
}
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"34": {
|
| 349 |
+
"title": "Huggingface\u2019s transformers: State-of-the-art natural language processing.",
|
| 350 |
+
"author": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019.",
|
| 351 |
+
"venue": "arXiv preprint arXiv:1910.03771.",
|
| 352 |
+
"url": null
|
| 353 |
+
}
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"35": {
|
| 357 |
+
"title": "Mm-vet: Evaluating large multimodal models for integrated capabilities.",
|
| 358 |
+
"author": "Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2023.",
|
| 359 |
+
"venue": "arXiv preprint arXiv:2308.02490.",
|
| 360 |
+
"url": null
|
| 361 |
+
}
|
| 362 |
+
}
|
| 363 |
+
],
|
| 364 |
+
"url": "http://arxiv.org/html/2402.16832v2"
|
| 365 |
+
}
|
20240721/2402.17553v3.json
ADDED
|
@@ -0,0 +1,165 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web",
|
| 3 |
+
"abstract": "For decades, human-computer interaction has fundamentally been manual. Even today, almost all productive work done on the computer necessitates human input at every step. Autonomous virtual agents represent an exciting step in automating many of these menial tasks. Virtual agents would empower users with limited technical proficiency to harness the full possibilities of computer systems. They could also enable the efficient streamlining of numerous computer tasks, ranging from calendar management to complex travel bookings, with minimal human intervention. In this paper, we introduce OmniACT, the first-of-a-kind dataset and benchmark for assessing an agent\u2019s capability to generate executable programs to accomplish computer tasks. Our scope extends beyond traditional web automation, covering a diverse range of desktop applications. The dataset consists of fundamental tasks such as \u201cPlay the next song\", as well as longer horizon tasks such as \u201cSend an email to John Doe mentioning the time and place to meet\". Specifically, given a pair of screen image and a visually-grounded natural language task, the goal is to generate a script capable of fully executing the task. We run several strong baseline language model agents on our benchmark. The strongest baseline, GPT-4, performs the best on our benchmark However, its performance level still reaches only 15% of the human proficiency in generating executable scripts capable of completing the task, demonstrating the challenge of our task for conventional web agents. Our benchmark provides a platform to measure and evaluate the progress of language model agents in automating computer tasks and motivates future work towards building multimodal models that bridge large language models and the visual grounding of computer screens.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Performing computer tasks based on natural language instructions has been a long-standing goal of artificial intelligence [49 ###reference_b49###]. One concrete objective in the line of research is to develop generalist agents that can assist humans in doing computer tasks [21 ###reference_b21###], such as \u201cOrder a pizza from Domino\u2019s\" or \u201cWrite a message to John.\" The agent should be able to open the application and perform the task. Executing these actions on a personal computer involves a sequence of interactions with a mouse and keyboard. For example, the simple task of writing an email involves hovering over the application icon, clicking it, clicking the \u2018New Email\u2019 button, writing the content of the email, and clicking send. Successfully sending an email requires accurately predicting the correct action at each step and accurately executing it, which is a herculean task even for the best agents today [14 ###reference_b14###].\nA generalist agent for computer tasks must understand natural language instructions, process visual screenshots, and produce the correct sequence of actions to be performed to achieve the intended task. Several existing approaches focus on building agents based on the HTML model [40 ###reference_b40###, 9 ###reference_b9###, 62 ###reference_b62###]. However, this approach introduces several challenges and constraints. These agents are limited to web applications and often struggle with complex or long-context HTML code. They cannot interact with native desktop applications or perform tasks that span multiple applications, like drafting an email using text from a code editor, without significant alterations. Furthermore, HTML-based agents, which are inherently powered by text-only language models, typically underperform in tasks requiring visual cues, such as identifying and clicking a blue button on a desktop\u2019s top-right corner. In contrast, humans can easily understand UI elements like dropdown menus, typable areas, redirections, and options with just a glance.\nTowards the goal of developing a generalist autonomous agent with robust visual and user interface (UI) understanding capabilities, we introduce a new task and dataset, OmniACT, containing over 9.8K pairs of images and instructions (Figure 1 ###reference_###) across different operating systems and the web. This dataset includes screenshots of various UI screens and corresponding natural language instructions. The objective of these instructions is to generate executable commands using the PyAutoGUI Python library [1 ###reference_b1###]. PyAutoGUI enables the automation of the mouse and keyboard operations, which helps to facilitate interactions with various native applications across macOS, Windows, and Linux. This simplifies completing specified tasks across different web domains and native desktop applications.\nWe evaluate several language model-based agent baselines on this dataset, including LLaMA [47 ###reference_b47###], Vicuna [7 ###reference_b7###], Palmyra-X (43B) [2 ###reference_b2###], InstructPalmyra-30B [45 ###reference_b45###], GPT 3.5, and GPT-4 [32 ###reference_b32###]. We experiment with fine-tuning Vicuna-13B and LLaMA-13B models using QLoRA [10 ###reference_b10###]. We also benchmark multimodal baseline LLaVa-v1.5-7B, LLaVa-v1.5-13B [47 ###reference_b47###], Gemini-Pro [44 ###reference_b44###] and GPT-4-vision-preview [55 ###reference_b55###] for the task. 
Our findings highlight the necessity for a multimodal model capable of executing these tasks, and our analysis provides insights into promising future work in the space.\nOur key contributions are outlined as follows:\nWe release a novel dataset of desktop and website applications consisting of over 9.8K natural language tasks, UI screens, and corresponding code snippets collected through human annotation. We introduce custom performance metrics tailored for computer tasks.\nWe propose DetACT, a module for creating textual representations of the screen using signals from OCR, color, and icon-template matching.\nWe conduct a comprehensive benchmark and analysis of state-of-the-art LLMs and multimodal models on our benchmark. Our results show that OmniACT is a challenging task for even the best LLM agents today, and existing models are far below human performance."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "UI Understanding",
|
| 21 |
+
"text": "User interface (UI) understanding has garnered interest from researchers in the machine learning and human-computer interaction communities, evolving with various models focusing on understanding the semantics of mobile and web user interfaces. UIBert [3 ###reference_b3###], PixelBERT [16 ###reference_b16###], ActionBert [15 ###reference_b15###], VUT [25 ###reference_b25###], Screen2Words [48 ###reference_b48###], WidgetCaptioning [24 ###reference_b24###] and Pix2Act [39 ###reference_b39###] are notable models in this area. They propose approaches for learning the user-interface semantics of the mobile screen using the image and view hierarchy. These models have demonstrated effectiveness in tasks like capability prediction, screen segmentation and understanding, and screen caption generation. Lexi [4 ###reference_b4###] and Spotlight [22 ###reference_b22###] propose models that use vision-only inputs to minimize the reliance on metadata such as view hierarchy. Furata et al. [11 ###reference_b11###] demonstrates the use of fine-tuning for multimodal web navigation. The majority of machine learning models trained for UI understanding leverage the Rico dataset [8 ###reference_b8###] and its extensions, which contain 64,462 unique Android screens and metadata. In addition, [4 ###reference_b4###] released the UICaptions dataset, which consists of diverse image-captions pairs across a wide range of applications. PixelHelp [23 ###reference_b23###] also released a corpus to train models that can interpret natural language instructions and map them to mobile UI actions."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Autonomous Computer Agents",
|
| 27 |
+
"text": "The advent of large language models (LLMs) has been pivotal in the rapid advancement of agents that operate on web pages. Recent research such as ViperGPT [43 ###reference_b43###] Chameleon [29 ###reference_b29###], RCI Agent [18 ###reference_b18###], VisProg [12 ###reference_b12###], and [31 ###reference_b31###] employ LLMs for planning or action prediction in developing autonomous agents. Benchmark datasets, such as MiniWoB [40 ###reference_b40###], WebShop [56 ###reference_b56###],\nMacaw-LLM [30 ###reference_b30###],\nASH-Prompting [41 ###reference_b41###]\nMind2Web [9 ###reference_b9###], WebArena [62 ###reference_b62###], AgentBench [28 ###reference_b28###] and VisualWebArena [20 ###reference_b20###]\nhave also been proposed to measure the ability of LLM-based agents to automate web tasks. These methods mainly involve agents that operate on a text-based Document Object Model (DOM) of HTML scripts. This limits their understanding of screen context, which is crucial for the model\u2019s decision-making and action-taking processes. To address this limitation, [35 ###reference_b35###] released Android in the Wild, a dataset comprising screens, natural language instructions, and corresponding actions. Following this, [59 ###reference_b59###] proposed a multimodal model, AutoUI, which is designed to build an agent on the Android in the Wild dataset confined to the Android ecosystem. WebAgent [13 ###reference_b13###] utilized Flan-U-PaLM, for grounded code generation, and HTML-T5 and showed improvement on real-world websites.\nCurrent benchmarks for autonomous agents focus mainly on the Web or Android environments, posing challenges for tasks involving desktop applications or spanning multiple applications beyond the web domain. The absence of established benchmarks and datasets in this area, coupled with basic methods for extracting user interface (UI) elements, underscores the need for significant progress in developing more versatile autonomous agents capable of handling diverse tasks beyond the current scope. To highlight the unique features that OmniACT introduces in the assessment of capable autonomous agents, we provide a comparison between the existing benchmarks and our proposed benchmark, OmniACT, in Table 1 ###reference_###.\n###figure_1###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "OmniACT",
|
| 33 |
+
"text": "We introduce a novel dataset and benchmark, OmniACT, which measures the performance of autonomous agents on both web and desktop applications. Compared to previous benchmarks which focus on text-based reasoning [40 ###reference_b40###, 62 ###reference_b62###, 9 ###reference_b9###, 56 ###reference_b56###, 17 ###reference_b17###], our benchmark aims to measure multimodal agents that bridge large language model planners and UI understanding vision models. OmniACT can be accomplished as a standalone task as it is not under a mock environment.\nAll actions that a human can execute on the computer can be encoded in the PyAutoGUI [1 ###reference_b1###] Python framework. This framework allows a user to execute keyboard and mouse operations by running Python code. The PyAutoGUI code to execute these tasks is shown in the third column of Figure 1 ###reference_###. For other computer tasks, the PyAutoGUI library provides functions such as \u2018press\u2019, \u2018write\u2019, and \u2018scroll\u2019 which can be used to execute the task. Our dataset consists of parallel data of natural language tasks, UI screenshots, and ground truth PyAutoGUI scripts that achieve successful execution."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Task Formulation",
|
| 39 |
+
"text": "Given an input state of a computer defined by the screen and the task description in natural language, the goal of the task is to output a sequence of actions that can successfully accomplish the task within a screenshot . Formally, the task can be defined as learning the transition function . During dataset collection, we ensure that all task descriptions are feasible and can be accomplished in the current screenshot . To reduce ambiguity and facilitate better evaluation, we ensure that task descriptions are detailed and unambiguous. Tasks can also be visually grounded (e.g., \u2018Click the red button to start recording\u2019) or natural language based (e.g., \u2018Click the My Account button\u2019). We define the action space using the functionalities in the PyAutoGUI library: . The exhaustive list of actions is provided in Table 2 ###reference_###. Our action space is much larger than other benchmarks [40 ###reference_b40###, 9 ###reference_b9###, 62 ###reference_b62###] that resort to two or three interaction options. Mouse actions such as \u2018moveTo\u2019, \u2018click\u2019, \u2018rightClick\u2019, \u2018doubleClick\u2019, and \u2018dragTo\u2019, additionally require screen coordinates as arguments, which indicate the pixel location of the action.\nFigure 1 ###reference_### illustrates sample tasks and corresponding outputs for three applications within OmniACT: (1) Stocks (MacOS), (2) Apartments.com (web page), and (3) Weather (MacOS). The first column depicts the input image, and the second column shows the natural language task that is to be executed on the current screen. To execute these tasks, a user must accurately perform a series of operations using the mouse and keyboard. Eg: to check the rate of change in Google\u2019s stock price over the last month, the mouse has to be moved to the last month and dragged while holding the left-click button to the current month."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Dataset Preparation",
|
| 45 |
+
"text": "To prepare our dataset, we followed a pipelined approach, as summarized in Figure 2 ###reference_###. We first selected a variety of applications and websites. For each application or website, we created bounding boxes around key UI elements and labeled them according to their functionality, which is crucial for assisting human annotators in writing accurate PyAutoGUI scripts. After each script is written, we converted the labels back into numeric coordinates, allowing us to align the scripts precisely with the locations of the UI elements. Finally, we thoroughly reviewed each script, focusing on its executability and adherence to syntax standards. This ensured the high quality and functionality of our dataset, making it a valuable resource for training and evaluating autonomous agents."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "3.2.1 Application/Website Selection",
|
| 51 |
+
"text": "To test the computer agents\u2019 generalization ability across different tasks, we collect tasks across multiple domains on both desktop and web applications. In total, we collect and annotate 9802 data points (Table 3 ###reference_###), with the split between desktop and web applications approximately 3:1. The emphasis on desktop applications, which do not contain Document Object Model (DOM) hierarchies unlike HTML-based web pages, presents a more complex multimodal challenge where visual cues are crucial. We collect tasks from applications within the three most popular operating systems. We select 22 native applications from MacOS, and 8 each from Linux and Windows. We annotate roughly 3 to 4 screens for every application. The full list of applications is provided in the Appendix.\nMany common computer tasks today are still performed through web applications, so we also collect 3-4 screenshots from 27 different web applications. To ensure diversity in task intents, we categorize these tasks into one of the following 6 categories: (1) Shopping, (2) Entertainment, (3) Service, (4) Government, (5) Travel, (6) Health. Inspired by the methodology of [9 ###reference_b9###], these categories were selected to cover a wide range of user intents and functionalities."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.2",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "3.2.2 UI Screen Segmentation",
|
| 57 |
+
"text": "To collect gold-standard data, we first annotate and segment the screen by identifying the bounding boxes present on the screen. We employ slightly different techniques for web and desktop applications to create the bounding boxes:\nDesktop Applications: We build a custom annotation interface based on PyQt5111https://pypi.org/project/PyQt5/ ###reference_pypi.org/project/PyQt5/### to create bounding boxes manually over a screen image using a simple drag-and-click mechanism. This custom interface expedites the process and allows us to get highly accurate gold-label data for desktop images.\nWebsites: For webpages, we write JavaScript code to extract all interactable (click, type, etc.) regions from HTML source code. We also extract banners, dropdowns, submit, and radio buttons from the screen. We filter the elements to retain only those that are visible and interactable within the screen."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.3",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "3.2.3 Functionality Tagging",
|
| 63 |
+
"text": "To map each bounding box to its correct functional description, we leverage Amazon MTurk workers (see details in Appendix), who are given an image with a bounding box and are required to write the correct description or label of the bounding box\u2019s function. For example, given an image of an Amazon webpage with a search bar, the annotator labels it as \u201cfind-product-search-bar\". The logical descriptions are used to create tasks in a structured manner without the need to identify individual bounding box coordinates."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2.4",
|
| 67 |
+
"parent_section_id": "3.2",
|
| 68 |
+
"section_name": "3.2.4 Task Creation",
|
| 69 |
+
"text": "###figure_2### Our approach for each screen involves utilizing all human-annotated bounding boxes and their labels to create tasks that can be executed within the confines of a single screen. These tasks are designed to be visually grounded in order to measure the capabilities of multimodal agents. We plan to release the bounding box and their corresponding labels as the metadata for evaluation purposes.\nFor dataset compilation, college students with basic Python programming skills served as annotators, accessing API references for PyAutoGUI and examples of potential tasks. Each student generated multiple tasks, each accompanied by three alternative natural language reformulations. For instance, \u201cWhat is 3+2?\" might be reformulated as \u201cCalculate the sum of 2 and 3\" or \u201cAdd two to three\". To avoid train-test leakage, rephrased tasks were consistently placed in the same dataset split. Further details on the annotation process are available in the Appendix."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.2.5",
|
| 73 |
+
"parent_section_id": "3.2",
|
| 74 |
+
"section_name": "3.2.5 Reverse Mapping and Filtering",
|
| 75 |
+
"text": "To ensure high-quality data, we incorporate an additional step into the data collection pipeline. We build scripts to map the text-based labels of each bounding box back to their numeric coordinates, and then match the syntax and verify if the task will be executed on the screen. Using this filter, we remove all the non-working or syntactically incorrect data points and finally manually review the set of tasks.\nAfter filtering, we obtain 9802 human-annotated, gold-label data points across more than 200 desktop and web screens (Table 3 ###reference_###), split into train, validation, and test sets in a 7:1:2 ratio. All collected data will be publicly released to encourage future work on multimodal agents."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Evaluation Metrics",
|
| 81 |
+
"text": "In this section, we detail various evaluation metrics for benchmarking model performance on the OmniACT dataset. UI screens have additional constraints such as spatial relevance which are not factored in most conventional similarity-based metrics such as BLEU [34 ###reference_b34###], CodeBLEU [36 ###reference_b36###], BERTScore [58 ###reference_b58###] and CodeBERTScore [61 ###reference_b61###]. For example, a valid click action is usually not constrained to a single coordinate but can be any coordinate within a specified region. In the event of invalid coordinate predictions, an agent that predicts coordinates further away from the valid region should invoke a higher penalty compared to an agent that predicted coordinates close to the region. We propose two new metrics adapted: Sequence Score (Section 4.1 ###reference_###) and Action Score (Section 4.2 ###reference_###) aimed at utilizing UI information."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.1",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Sequence Score",
|
| 87 |
+
"text": "The sequence score measures whether the predicted action sequence (e.g., \u2018click\u2019, \u2018write\u2019, \u2018press\u2019) exactly matches the gold sequence. Since predicting the first action in the sequence is relatively straightforward and later actions are more difficult, we define sequence score as follows:\nwhere is the action sequence length, is set to 0.1 and is set to 1."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.2",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "Action Score",
|
| 93 |
+
"text": "The action score measures how well a code snippet containing the correct action sequence can perform the task. Specifically, for a script with a correct action sequence, we introduce penalties for inaccurate behavior. The penalties are described below:\nClick penalty (): For actions \u2018click\u2019, \u2018rightClick\u2019, \u2018doubleClick\u2019, \u2018moveTo\u2019, and \u2018dragTo\u2019, we penalize code snippets where predicted coordinates lie outside of the bounding box of the UI element. The click penalty for the action of the example is defined as:\nHere corresponds to the smallest Euclidean distance between the predicted coordinate and bounding box. is zero when the predicted coordinate lies within the target bounding box. is the Dirichlet smoothing coefficient which we dynamically set to the inverse of the length of the diagonal of the bounding box. This ensures that the penalty for points outside the bounding box varies based on the size of the bounding box. For two predicted points with the same , the metric penalizes more heavily if the box is larger. This is sound with the intuition that the chances of clicking on a larger box are higher and should be penalized more in case of a mistake.\nKey penalty (): For actions \u2018press\u2019 and \u2018hotkey\u2019, we check whether the set of keys in the target code (represented as ) and predicted code (represented as ) are the same. It is formally defined as:\nWrite penalty (): For action type \u2018write\u2019, we penalize the output for the sentence to be typed. Specifically, we the employ BLEU score [34 ###reference_b34###], and compute:\nHere, represents the actual sentence to be typed, and represents the sentence predicted by the model in the action of example .\nIn the above equations, () is the weighting factor:\nThis ensures that the action score . The mean action score is calculated as follows:"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": " DetACT: DETecting ACTions from UI",
|
| 99 |
+
"text": "###figure_3### Understanding UI screens is crucial for multimodal computer tasks. Web-based agents typically use language-only inputs from the HTML DOM. This is insufficient for comprehending the full extent of an application UI, as many components may not be easily described with HTML code. To address this, we propose DetACT, which allows us to convert images of UI layouts into structured code and text outputs for a downstream LLM. DetACT is a system comprised of three distinct modules: the text module, the icon module, and the color module.\nText Extraction: We use the EasyOCR model222https://github.com/JaidedAI/EasyOCR ###reference_### to parse over the UI screens and collect all text-based elements. Along with the text, we also note the locations of each of these elements. This is depicted in Figure 3 ###reference_###, along with a list of text elements found on the screen using the OCR Module. We segment and classify the different regions within the screenshot using the Segment Anything Model (SAM) [19 ###reference_b19###]. From the outputs, we filter out the non-textual segments for our icon and color detection.\nIcon Module: For matching with the appropriate icon, we use a pack of 1600 icons333https://icomoon.io/ ###reference_icomoon.io/### as templates. Each of these icons is labeled with their appropriate functionality and is matched with the filtered outputs SAM [19 ###reference_b19###]. For the similarity of the two images, we resize the reference icons and segmented region of interest (ROI) to the same size, and convert both images to grayscale. After this, we use the Structural Similarity Index (SSIM) [52 ###reference_b52###], to find the closest match of the ROI to the icons in our set, and select the ones above the SSIM threshold of 0.95. As seen in Figure 3 ###reference_###, a few icons matched on the screen are Globe icon, Calendar icon, Person icon, and Location icon; each depicting a different use case.\nColor Module: Finally, to place all segments of interest into appropriate buckets of colors, we average the RGB pixel values over the ROI and, based on that value, bucket them into different color categories. We categorize colors differently based on the human perspective of the ranges of each color. To avoid ambiguity, we consider eleven major colors, namely yellow, blue, green, red, pink, violet, white, black, orange, brown, and grey. We record the center of the element along with the color.\nOnce all the elements of each category are extracted with their coordinates, we then filter these UI elements by prompting GPT-4 [32 ###reference_b32###]. We ensure that the elements selected are suited only for our task, for which we also provide the task description in our prompts along with the list of elements. Full details of the prompt are provided in the appendix section of the paper. As we observe in Figure 3 ###reference_###, given an image from the Expedia application, and a task (\u201cClick on the Black Location icon and enter the destination as Paris.\"), the LLM filters out the elements to retain only \u201cGoing To\", \u201cLocation Icon\", and the Black colored elements from the screen. This is passed as input to the LLM or VLM backbone."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Baselines",
|
| 105 |
+
"text": "To evaluate the performance of existing language model-based agents, we conduct experiments with both language-based and multimodal baselines. The DetACT module takes in image and text descriptions of the task and outputs the color, icon, and text-based signals. This is concatenated to the prompt for the LLM prompt-based baselines (see Figure 4 ###reference_###). Every prompt starts with a role assignment [60 ###reference_b60###], followed by the detailed API reference of the PyAutoGUI function set, along with a textual description of their function. We then add five in-context examples from the training set that most closely match the task (based on the cosine similarity of the MiniLM [50 ###reference_b50###] embeddings of the reference task and the train examples). We add a list of UI elements filtered by the DetACT module to the prompt. Finally, we provide the rules with the task description. For multimodal baselines, we also pass the image pixels to the vision encoder. We choose coordinate-based UI elements in the prompt as recent techniques like the Set-of-Mark (SOM) [54 ###reference_b54###] prompting does not work for desktop settings since it is difficult to obtain interactive elements from the desktop screen images. We report the results of several baselines:\nFew-shot Generative LLM:\nWe experiment with models from LLaMA-2 [47 ###reference_b47###], Vicuna-1.5 [7 ###reference_b7###], CodeLLaMA-34B [37 ###reference_b37###], Palmyra [46 ###reference_b46###], and GPT [32 ###reference_b32###] series. We use the prompts structure as shown in Figure 4 ###reference_### to prompt the model. For LLaMA and CodeLLaMa, we reduce the prompt length to 2000 tokens by removing outputs from the DetACT module with lower confidence, as we observed poor performance on longer prompts. For the other models, we allow prompts with up to 4000 token sizes.\nFinetuned Generative LLM:\nWe fine-tuned the LLaMA-13B model and Vicuna-13B using QLoRa [10 ###reference_b10###] with rank 64 and scaling factor 16 for 300 steps to generate the code given screen description from the DetACT module and the instruction.\nFew-shot Generative Multimodal Models:\nAs OmniACT is predominantly multimodal, with a majority of tasks being visually grounded, we conduct experiments with large multimodal models. Given the limited research in this domain [57 ###reference_b57###, 51 ###reference_b51###], there is a scarcity of available multimodal models with significant size adept for this task. Here, we experiment with [27 ###reference_b27###, 26 ###reference_b26###], providing a similar prompt as well as the screen image."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "7",
|
| 109 |
+
"parent_section_id": null,
|
| 110 |
+
"section_name": "Results and Analysis",
|
| 111 |
+
"text": "As shown in Table 4 ###reference_###, we experiment with three different categories of models, namely Prompt-based LLMs, Fine-tuned LLMs, and Prompt-based Multimodal Models.\nGPT-4 is the best-performing approach, scoring higher on the sequence score and invoking lower penalties on coordinate predicting and text input.\nFor prompt-only LLMs, the GPT-3.5-turbo and GPT-4 models outperform the other LLM baselines, including the LLaMA [47 ###reference_b47###] and Vicuna [7 ###reference_b7###] models. We observe that CodeLLaMA-34B [38 ###reference_b38###], which is trained for code generation, also achieves a higher performance than other models of the same size at predicting the action sequences.\nFine-tuned models also perform much better than their few-shot prompt-only counterparts. Fine-tuning substantially improves LLaMA-13B\u2019s sequence score (4.80 to 8.92) and action score (1.62 to 2.14), as well as the other metrics.\nDespite this, we observed that both, prompt-based LLMs and finetuned LLMs face severe mouse penalties, especially on click coordinates. This is because they rely solely on text-based signals.\nTo address this, we experiment with multimodal language models (Table 4 ###reference_###). We observe that the coordinate prediction improves significantly when we provide the entire image as input to the multimodal LLM, as this enables it to fully utilize the screen representation. In addition to open sourced models, we also experiment with the GPT-4-vision API [55 ###reference_b55###] which shows that GPT-4 Vision [55 ###reference_b55###] outperforms GPT-4 significantly on the Action Score along with improving the sequence score, which we attribute to the strong reasoning abilities of GPT-4 coupled with the improved visual understanding capabilities of the GPT-4-vision model [55 ###reference_b55###]. These findings pave the way towards exciting new research directions on building multimodal models for long-horizon planning and code generation.\nHuman performance over the task: OmniACT consists of visually complicated tasks, and tests various types of computer skills. In order to get a gauge of how well humans perform, we collect evaluation data from human evaluators. We split the test set uniformly amongst 10 human evaluators, and provided them with the screenshot and task instruction. We record the actions taken by the annotators, and measure their performance on our predefined metrics (Table 4 ###reference_###).\nWe find that users generally exhibit a high level of proficiency when attempting most tasks for the first time. However, there are instances where users face difficulties in successfully completing certain tasks. These are due to factors including the user\u2019s inability to fully comprehend the task, difficulties in grounding the task to the provided screenshot, or a lack of familiarity with the UI."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "8",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Conclusion and Future Work",
|
| 117 |
+
"text": "Autonomous virtual agents offer the potential to automate routine tasks, benefiting users with limited technical expertise. To solve this task, we introduce OmniACT, a unique dataset of 9.8K human-labeled data points. OmniACT benchmarks autonomous agents across a range of tasks on web and desktop applications.\nLLM-based agents, like GPT-4, achieve a respectable action score of 11.6 on our dataset. However, OmniACT presents a challenge for the current state-of-the-art language and multimodal models. It provides a direction for future research on foundational multimodal models that seamlessly integrate language and visual understanding of computer screens and stands poised to drive the next wave of advancements in generalist autonomous agents offering omnipotent assistance to humans."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "9",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "Limitations",
|
| 123 |
+
"text": "This work introduces a valuable dataset, yet we recognize a few limitations that exist. State-of-the-art models like GPT-4, may exhibit susceptibility to hallucinations and bias towards specific data types, hindering broad applicability. Reliance on closed models like GPT-4V poses integration challenges due to high costs and time constraints. Despite efforts for equal representation and data collection without personal information, biases may be introduced as the dataset is exclusively in English, and human-curated content may have temporal biases."
|
| 124 |
+
}
|
| 125 |
+
],
|
| 126 |
+
"appendix": [],
|
| 127 |
+
"tables": {
|
| 128 |
+
"1": {
|
| 129 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S2.T1.3.2\" style=\"font-size:90%;\">Comparison of OmniACT with other related benchmarks.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.4\" style=\"width:433.6pt;height:167.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-200.0pt,77.3pt) scale(0.520171801526931,0.520171801526931) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S2.T1.4.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.1.1\">Datasets</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S2.T1.4.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.2.1\">Size</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.3.1\">Env Type</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.1.1.4.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.4.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.4.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.4.1.1.1.1\">Task</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.4.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.4.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.4.1.2.1.1\">Heterogeneity</span></td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.1.1.5.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.5.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.5.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.5.1.1.1.1\">Real-World</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.5.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.5.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.5.1.2.1.1\">Portayal</span></td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.1.1.6.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.6.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.6.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.6.1.1.1.1\">Executional</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.6.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.6.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.6.1.2.1.1\">Correctness</span></td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.7\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.1.1.7.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.7.1.1\">\n<td 
class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.7.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.7.1.1.1.1\">Supports</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.7.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.7.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.7.1.2.1.1\">Desktop</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.7.1.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.7.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.7.1.3.1.1\">Apps</span></td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.8\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.1.1.8.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.8.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.8.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.8.1.1.1.1\">Continuous Scale</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.8.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.8.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.8.1.2.1.1\">Adaptive</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1.1.8.1.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.1.1.8.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.8.1.3.1.1\">Evaluation</span></td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.1.1.9.1\">Task</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.1.2.1.1\">VisualWebArena\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib20\" title=\"\">20</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.1.2.1.2\">910</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.3\">Web</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.2.1.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.2.1.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.2.1.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.2.1.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.2.1.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.2.1.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.3.2.1\">WebArena\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib62\" title=\"\">62</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S2.T1.4.1.3.2.2\">812</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.3\">Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.3.2.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.3.2.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.3.2.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.3.2.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.3.2.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.3.2.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.3.2.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.4.3.1\">Mind2Web\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.4.3.2\">2350</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.3\">Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.4.3.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.4.3.5.1\" style=\"color:#32CB00;\"> Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.4.3.6.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.4.3.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.4.3.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.4.3.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.4.3.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.5.4.1\">WebShop\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib56\" title=\"\">56</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.5.4.2\">12000 Products</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.3\">Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.5.4.4.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.5.4.5.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.5.4.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.5.4.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.5.4.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.5.4.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r 
ltx_align_center\" id=\"S2.T1.4.1.5.4.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.6.5.1\">RUSS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib53\" title=\"\">53</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.6.5.2\">80</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.3\">Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.6.5.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.6.5.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.6.5.6.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.6.5.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.6.5.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.6.5.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.6.5.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.7.6.1\">WebSRC\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib6\" title=\"\">6</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.7.6.2\">2735</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.3\">Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.7.6.4.1\" style=\"color:#32CB00;\"> Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.7.6.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.7.6.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.7.6.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.7.6.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.7.6.9\">QA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.1.8.7.1\">MiniWoB++ <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib17\" title=\"\">17</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.1.8.7.2\">100</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.8.7.3.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.8.7.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.8.7.3.1.1.1\">Mobile</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.8.7.3.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.8.7.3.1.2.1\">Websites</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.4\"><span 
class=\"ltx_text\" id=\"S2.T1.4.1.8.7.4.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.8.7.5.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.8.7.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.8.7.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.8.7.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S2.T1.4.1.8.7.9\">Web Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.9.8.1\">PixelHelp\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib23\" title=\"\">23</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.9.8.2\">187</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.3\">Mobile</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.9.8.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.9.8.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.9.8.6.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.9.8.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.9.8.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.9.8.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.9.8.9\">UI Grounding</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.10.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.10.9.1\">MetaGUI \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib42\" title=\"\">42</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.10.9.2\">1125</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.3\">Mobile</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.10.9.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.10.9.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.10.9.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.10.9.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.10.9.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.10.9.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.10.9.9\">Mobile Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S2.T1.4.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.11.10.1\">MoTIF\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib5\" title=\"\">5</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.11.10.2\">756</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.3\">Mobile</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.11.10.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.11.10.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.11.10.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.11.10.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.11.10.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.11.10.8.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.11.10.9\">Mobile Navigation</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.12.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.12.11.1\">AITW\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib35\" title=\"\">35</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.4.1.12.11.2\">715142</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.3\">Mobile and Web</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.12.11.4.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.12.11.5.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.12.11.6.1\" style=\"color:#32CB00;\">Yes</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.12.11.7.1\" style=\"color:#FE0000;\">No</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.1.12.11.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.12.11.8.1\" style=\"color:#FE0000;\"> No</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.12.11.9\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.4.1.12.11.9.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.12.11.9.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.12.11.9.1.1.1\">Mobile/Web</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.12.11.9.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.1.12.11.9.1.2.1\">Navigation</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.13.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.1.1\">OmniACT</span> (Ours)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.2.1\">9802</span></th>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.3.1\">Desktop and Web</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.4\"><span class=\"ltx_text\" id=\"S2.T1.4.1.13.12.4.1\" style=\"color:#32CB00;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.4.1.1\">Yes</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.5\"><span class=\"ltx_text\" id=\"S2.T1.4.1.13.12.5.1\" style=\"color:#32CB00;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.5.1.1\">Yes</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.6\"><span class=\"ltx_text\" id=\"S2.T1.4.1.13.12.6.1\" style=\"color:#32CB00;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.6.1.1\">Yes</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.7\"><span class=\"ltx_text\" id=\"S2.T1.4.1.13.12.7.1\" style=\"color:#32CB00;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.7.1.1\">Yes</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.8\"><span class=\"ltx_text\" id=\"S2.T1.4.1.13.12.8.1\" style=\"color:#32CB00;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.8.1.1\">Yes</span></span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.4.1.13.12.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.4.1.13.12.9.1\">Code Generation</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 130 |
+
"capture": "Table 1: Comparison of OmniACT with other related benchmarks."
|
| 131 |
+
},
|
| 132 |
+
"2": {
|
| 133 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T2.3.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S3.T2.4.2\" style=\"font-size:90%;\">Action types supported by <span class=\"ltx_text\" id=\"S3.T2.4.2.1\" style=\"color:#FF8000;\">Omni<span class=\"ltx_text\" id=\"S3.T2.4.2.1.1\" style=\"color:#800080;\">ACT</span></span>\u00a0and the number of instances for each action in the dataset.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.5\" style=\"width:108.4pt;height:118.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-36.6pt,39.9pt) scale(0.59681492834118,0.59681492834118) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.5.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.5.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1\">Type</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.2.1\">Action</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.3.1\">%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.2.2.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.2.2.2.1\" style=\"background-color:#FFFFC7;\">Click</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.2.2.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.2.2.3.1\" style=\"background-color:#FFFFC7;\">63.73</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.3.3.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.3.3.2.1\" style=\"background-color:#FFFFC7;\">Double Click</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.3.3.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.3.3.3.1\" style=\"background-color:#FFFFC7;\">0.58</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.4.4\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.4.4.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.4.4.2.1\" style=\"background-color:#FFFFC7;\">Right Click</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.4.4.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.4.4.3.1\" style=\"background-color:#FFFFC7;\">0.77</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.5.5\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.5.5.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.5.5.2.1\" style=\"background-color:#FFFFC7;\">Move/Hover</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.5.5.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.5.5.3.1\" style=\"background-color:#FFFFC7;\">1.85</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.6.6\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.6.6.2\"><span 
class=\"ltx_text\" id=\"S3.T2.5.1.6.6.2.1\" style=\"background-color:#FFFFC7;\">Drag</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.6.6.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.6.6.3.1\" style=\"background-color:#FFFFC7;\">0.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.7.7\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.7.7.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.7.7.2.1\" style=\"background-color:#FFFFC7;\">Scroll</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.7.7.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.7.7.3.1\" style=\"background-color:#FFFFC7;\">1.68</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.8.8\" style=\"background-color:#FFFFC7;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.8.8.1.1\" style=\"background-color:#FFFFC7;\">Mouse</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.8.8.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.8.8.2.1\" style=\"background-color:#FFFFC7;\">Horizontal Scroll</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.8.8.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.8.8.3.1\" style=\"background-color:#FFFFC7;\">0.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.9.9\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.9.9.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.9.9.2.1\" style=\"background-color:#C3EFAF;\">Press</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.9.9.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.9.9.3.1\" style=\"background-color:#C3EFAF;\">16.28</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.10.10\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.10.10.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.10.10.2.1\" style=\"background-color:#C3EFAF;\">Hotkey</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.10.10.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.10.10.3.1\" style=\"background-color:#C3EFAF;\">3.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1.11.11\" style=\"background-color:#C3EFAF;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T2.5.1.11.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.11.11.1.1\" style=\"background-color:#C3EFAF;\">Keyboard</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.5.1.11.11.2\"><span class=\"ltx_text\" id=\"S3.T2.5.1.11.11.2.1\" style=\"background-color:#C3EFAF;\">Write</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.5.1.11.11.3\"><span class=\"ltx_text\" id=\"S3.T2.5.1.11.11.3.1\" style=\"background-color:#C3EFAF;\">11.65</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 134 |
+
"capture": "Table 2: Action types supported by OmniACT\u00a0and the number of instances for each action in the dataset."
|
| 135 |
+
},
|
| 136 |
+
"3": {
|
| 137 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T3.2.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S3.T3.3.2\" style=\"font-size:90%;\">Dataset distribution across splits and platforms.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.4\" style=\"width:173.4pt;height:69.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-47.9pt,19.2pt) scale(0.644244973411587,0.644244973411587) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T3.4.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1.1.1.1\">Domain</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.4.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1.1.2.1\">Train</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.4.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1.1.3.1\">Validation</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.4.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1.1.4.1\">Test</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.4.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1.1.5.1\">Total</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.2.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T3.4.1.2.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.4.1.2.1.2\"><span class=\"ltx_text\" id=\"S3.T3.4.1.2.1.2.1\" style=\"background-color:#FFFFC7;\">Mac OS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.4.1.2.1.3\"><span class=\"ltx_text\" id=\"S3.T3.4.1.2.1.3.1\" style=\"background-color:#FFFFC7;\">3028</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.4.1.2.1.4\"><span class=\"ltx_text\" id=\"S3.T3.4.1.2.1.4.1\" style=\"background-color:#FFFFC7;\">444</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.4.1.2.1.5\"><span class=\"ltx_text\" id=\"S3.T3.4.1.2.1.5.1\" style=\"background-color:#FFFFC7;\">786</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.4.1.2.1.6\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.2.1.6.1\" style=\"background-color:#FFFFC7;\">4258</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.3.2\">\n<td class=\"ltx_td\" id=\"S3.T3.4.1.3.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.3.2.2\"><span class=\"ltx_text\" id=\"S3.T3.4.1.3.2.2.1\" style=\"background-color:#FFFFC7;\">Linux</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.3.2.3\"><span class=\"ltx_text\" id=\"S3.T3.4.1.3.2.3.1\" style=\"background-color:#FFFFC7;\">761</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.3.2.4\"><span class=\"ltx_text\" id=\"S3.T3.4.1.3.2.4.1\" style=\"background-color:#FFFFC7;\">126</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.3.2.5\"><span class=\"ltx_text\" id=\"S3.T3.4.1.3.2.5.1\" 
style=\"background-color:#FFFFC7;\">247</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.3.2.6\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.3.2.6.1\" style=\"background-color:#FFFFC7;\">1134</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.4.3\" style=\"background-color:#FFFFC7;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.1\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.1.1\" style=\"background-color:#FFFFC7;\">Desktop</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.2\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.2.1\" style=\"background-color:#FFFFC7;\">Windows</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.3\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.3.1\" style=\"background-color:#FFFFC7;\">1573</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.4\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.4.1\" style=\"background-color:#FFFFC7;\">216</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.5\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.5.1\" style=\"background-color:#FFFFC7;\">458</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.4.3.6\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.4.3.6.1\" style=\"background-color:#FFFFC7;\">2247</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.5.4\" style=\"background-color:#C3EFAF;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.1\" style=\"background-color:#C3EFAF;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.1.1\" style=\"background-color:#C3EFAF;\">Web</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.2\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.2.1\" style=\"background-color:#C3EFAF;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.3\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.3.1\" style=\"background-color:#C3EFAF;\">1427</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.4\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.4.1\" style=\"background-color:#C3EFAF;\">206</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.5\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.5.1\" style=\"background-color:#C3EFAF;\">530</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.1.5.4.6\" style=\"background-color:#C3EFAF;\"><span class=\"ltx_text\" id=\"S3.T3.4.1.5.4.6.1\" style=\"background-color:#C3EFAF;\">2163</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.4.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.6.5.1.1\">Total</span></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.3\">6789</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.4\">992</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.5\">2,021</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.4.1.6.5.6\">9802</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 138 |
+
"capture": "Table 3: Dataset distribution across splits and platforms."
|
| 139 |
+
},
|
| 140 |
+
"4": {
|
| 141 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S7.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S7.T4.13.4.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S7.T4.6.3\" style=\"font-size:90%;\">Baseline Performance. (A) Prompt-only LLMs, (B) Fine Tuned LLMs, (C) Prompt-only Multimodal Models. The table represents the Sequence score (SS), click penalty (), Key penalty (), Write Penalty (), and Action Score (AS). The best results for the (SS) and (AS) are highlighted.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S7.T4.11\" style=\"width:195.1pt;height:182.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-98.1pt,91.6pt) scale(0.498644507854812,0.498644507854812) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T4.11.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S7.T4.11.5.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.5.6.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S7.T4.7.1.1.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T4.7.1.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S7.T4.7.1.1.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S7.T4.7.1.1.1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.7.1.1.1.1.1.1.1\">SS(</span><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.7.1.1.1.1.1.1.2\">)</span>\n</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S7.T4.8.2.2.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T4.8.2.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S7.T4.8.2.2.2.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S7.T4.8.2.2.2.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S7.T4.9.3.3.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T4.9.3.3.3.1\">\n<tr class=\"ltx_tr\" id=\"S7.T4.9.3.3.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S7.T4.9.3.3.3.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S7.T4.10.4.4.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T4.10.4.4.4.1\">\n<tr class=\"ltx_tr\" id=\"S7.T4.10.4.4.4.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S7.T4.10.4.4.4.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S7.T4.11.5.5.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T4.11.5.5.5.1\">\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.5.5.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S7.T4.11.5.5.5.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.5.5.1.1.1.1\">AS(</span><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.5.5.1.1.1.2\">)</span>\n</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.6.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T4.11.5.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.6.1.1.1\" style=\"font-size:90%;\">Prompt based LLMs</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.6.1.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.6.1.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.6.1.4\"></td>\n<td class=\"ltx_td ltx_border_t\" 
id=\"S7.T4.11.5.6.1.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.6.1.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.7.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.7.2.1\">LLaMA-7B\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib47\" title=\"\">47</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.7.2.2\">4.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.7.2.3\">1.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.7.2.4\">1.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.7.2.5\">0.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.7.2.6\">0.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.8.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.8.3.1\">Vicuna-7B \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib7\" title=\"\">7</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.8.3.2\">3.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.8.3.3\">1.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.8.3.4\">1.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.8.3.5\">0.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.8.3.6\">0.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.9.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.9.4.1\">LLaMA-13B \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib47\" title=\"\">47</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.9.4.2\">4.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.9.4.3\">1.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.9.4.4\">0.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.9.4.5\">0.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.9.4.6\">1.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.10.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.10.5.1\">Vicuna-13B \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib7\" title=\"\">7</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.10.5.2\">5.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.10.5.3\">1.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.10.5.4\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.10.5.5\">1.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.10.5.6\">1.78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.11.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.11.6.1\">Palmyra-Instruct-30B <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib45\" title=\"\">45</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.11.6.2\">7.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.11.6.3\">5.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.11.6.4\">0.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.11.6.5\">0.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.11.6.6\">1.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.12.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"S7.T4.11.5.12.7.1\">CodeLLaMA-34B <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib38\" title=\"\">38</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.12.7.2\">10.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.12.7.3\">2.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.12.7.4\">2.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.12.7.5\">0.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.12.7.6\">3.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.13.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.13.8.1\">Palmyra-X 43B <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib2\" title=\"\">2</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.13.8.2\">11.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.13.8.3\">3.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.13.8.4\">3.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.13.8.5\">2.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.13.8.6\">2.94</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.14.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.14.9.1\">GPT-3.5-turbo-0613\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib33\" title=\"\">33</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.14.9.2\">22.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.14.9.3\">8.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.14.9.4\">4.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.14.9.5\">2.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.14.9.6\">7.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.15.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.15.10.1\">GPT-4 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib32\" title=\"\">32</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.15.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.15.10.2.1\">32.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.15.10.3\">10.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.15.10.4\">6.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.15.10.5\">3.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.15.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.15.10.6.1\">11.60</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.16.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T4.11.5.16.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.16.11.1.1\" style=\"font-size:90%;\">Finetuned LLMs</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.16.11.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.16.11.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.16.11.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.16.11.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.16.11.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.17.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.17.12.1\">LLaMA-13B FT</th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S7.T4.11.5.17.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.17.12.2.1\">8.92</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.17.12.3\">4.61</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.17.12.4\">1.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.17.12.5\">0.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.17.12.6\">2.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.18.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.18.13.1\">Vicuna-13B FT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.18.13.2\">8.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.18.13.3\">4.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.18.13.4\">1.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.18.13.5\">0.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.18.13.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.18.13.6.1\">2.72</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.19.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T4.11.5.19.14.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.19.14.1.1\" style=\"font-size:90%;\">Multimodal LLMs</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.19.14.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.19.14.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.19.14.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.19.14.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T4.11.5.19.14.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.20.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.20.15.1\">LLaVA-v1.5-7B\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib27\" title=\"\">27</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.20.15.2\">13.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.20.15.3\">4.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.20.15.4\">1.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.20.15.5\">1.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.20.15.6\">5.82</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.21.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.21.16.1\">LLaVA-v1.5-13B\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib26\" title=\"\">26</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.21.16.2\">20.56</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.21.16.3\">6.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.21.16.4\">3.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.21.16.5\">2.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.21.16.6\">8.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.22.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.22.17.1\">Gemini-Pro\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib44\" title=\"\">44</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.22.17.2\">30.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.22.17.3\">9.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.22.17.4\">6.81</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S7.T4.11.5.22.17.5\">3.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.22.17.6\">11.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.23.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.11.5.23.18.1\">GPT-4V\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.17553v3#bib.bib26\" title=\"\">26</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.23.18.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.23.18.2.1\">38.72</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.23.18.3\">10.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.23.18.4\">7.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.23.18.5\">4.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.11.5.23.18.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.11.5.23.18.6.1\">17.02</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.11.5.24.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.1\">Human Performance</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.2\">82.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.3\">0.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.4\">0.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.5\">1.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T4.11.5.24.19.6\">80.14</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 142 |
+
"capture": "Table 4: Baseline Performance. (A) Prompt-only LLMs, (B) Fine Tuned LLMs, (C) Prompt-only Multimodal Models. The table represents the Sequence score (SS), click penalty (), Key penalty (), Write Penalty (), and Action Score (AS). The best results for the (SS) and (AS) are highlighted."
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
"image_paths": {
|
| 146 |
+
"2": {
|
| 147 |
+
"figure_path": "2402.17553v3_figure_2.png",
|
| 148 |
+
"caption": "Figure 2: Data Collection Pipeline. (1) We select over 60 applications and websites to ensure diversity, (2) segment the screen through human-annotated bounding boxes, (3) label the bounding boxes based on functionality, (4) ask student volunteers to come up with tasks, given a screen image, and (5) reverse map the textual labels to coordinates and filter the scripts based on execution and syntax.",
|
| 149 |
+
"url": "http://arxiv.org/html/2402.17553v3/x2.png"
|
| 150 |
+
},
|
| 151 |
+
"3": {
|
| 152 |
+
"figure_path": "2402.17553v3_figure_3.png",
|
| 153 |
+
"caption": "Figure 3: DetACT Module. Given an initial image and a natural language task description, we use a pipelined approach to run OCR and SAM on the screen. The outputs from SAM are then used by icon and color-matching modules to obtain an exhaustive set of useful UI elements. The list of elements is passed through LLM based filter to select only the elements related to the given task.",
|
| 154 |
+
"url": "http://arxiv.org/html/2402.17553v3/extracted/5746088/figs/detact_page-0001.jpg"
|
| 155 |
+
},
|
| 156 |
+
"4": {
|
| 157 |
+
"figure_path": "2402.17553v3_figure_4.png",
|
| 158 |
+
"caption": "Figure 4: Baseline Model Architecture. Image and task descriptions are sent to DetACT module, which gives a filtered list of UI elements relevant to feed into the prompt along with the task. We also show the prompt structure used for action script generation. This structure is passed through the LLM (along with the image for multimodal LLM) to generate the automation script.",
|
| 159 |
+
"url": "http://arxiv.org/html/2402.17553v3/x3.png"
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
"validation": true,
|
| 163 |
+
"references": [],
|
| 164 |
+
"url": "http://arxiv.org/html/2402.17553v3"
|
| 165 |
+
}
|
20240721/2402.18919v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2403.00957v2.json
ADDED
|
@@ -0,0 +1,132 @@
| 1 |
+
{
|
| 2 |
+
"title": "Resolution of Simpson\u2019s paradox via the common cause principle",
|
| 3 |
+
"abstract": "Simpson\u2019s paradox is an obstacle to establishing a probabilistic association between two events and , given the third (lurking) random variable . We focus on scenarios when the random variables (which combines , , and their complements) and have a common cause that need not be observed. Alternatively, we can assume that screens out from . For such cases, the correct association between and is to be defined via conditioning over . This setup generalizes the original Simpson\u2019s paradox: now its two contradicting options refer to two particular and different causes . We show that if and are binary and is quaternary (the minimal and the most widespread situation for the Simpson\u2019s paradox), the conditioning over any binary common cause establishes the same direction of association between and as the conditioning over in the original formulation of the paradox. Thus, for the minimal common cause, one should choose the option of Simpson\u2019s paradox that assumes conditioning over and not its marginalization.\nThe same conclusion is reached when Simpson\u2019s paradox is formulated via 3 continuous Gaussian variables: within the minimal formulation of the paradox (3 scalar continuous variables , , and ), one should choose the option with the conditioning over .",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Simpson\u2019s paradox was discovered more than a century ago [1 ###reference_b1###, 2 ###reference_b2###], generated a vast literature, and is well-recognized in several fields including, statistics, epidemiology, psychology, social science, etc.\n[3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. This counter-intuitive effect limits the ability to draw conclusions from probabilistic data. The effect is important because it demands more than simply extracting relative frequencies from data; e.g. it necessitates looking at exchangeability [9 ###reference_b9###] or causality [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 13 ###reference_b13###, 14 ###reference_b14###].\nThe paradox starts with two random variables and . Now contains control variable and the target variable , while is a side random variable that correlates with both and . The meaning of and is clarified via examples presented below. If there is no information on the outcome of , the behavior of can be studied on two levels. The first (aggregated) level is that of marginal probabilities . The second level is finer-grained and is represented by conditional probabilities for all possible values of . Simpson\u2019s paradox amounts to certain relations between those probabilities; see section 2 ###reference_### for details. It states that no decision-making is possible, because conclusions drawn from probabilities on different levels contradict each other. Without Simpson\u2019s paradox, decision-making can proceed at the aggregate level, because looking at the fine-grained level is either redundant or inconclusive. Thus, Simpson\u2019s paradox first and foremost involves decision-making. Moreover, it demonstrates limitations of the sure-thing principle [5 ###reference_b5###], a pillar of traditional decision making [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###]. A recent review of the sure-thing principle (and its limitations other than Simpson\u2019s paradox) can be found in Ref. [28 ###reference_b28###]. Limitations of probabilistic decision-making are important for the modern artificial intelligence\n(probability models, uncertainty estimation, etc).\nIn section 2 ###reference_###, Simpson\u2019s paradox is defined in detail, and previous efforts to resolve it in several specific situations are reviewed and criticized. In particular, we show that while certain previous solutions of the paradox assumed the existence of\n(causally-sufficient) time-ordered directed acyclic graphs (TODAGs) that describe the 3 variables involved in the paradox, several important examples of the paradox need not support this assumption; see sections 2.2.3 ###reference_.SSS3###, 4 ###reference_### and 5 ###reference_###. Based on the previous literature, we also argue in section 2 ###reference_### that\nSimpson\u2019s paradox is sufficiently frequent when the probabilities of the involved variables are generated from the unbiased (non-informative) distribution, modeled via Dirichlet density. 
Hence this is a genuine decision-making paradox and not an artifact due to inappropriate data gathering.\nOur proposal here is to look for the resolution of the paradox by assuming that\u2014given two correlated variables and \u2014there is a random variable that makes and conditionally independent; i.e., screens out from . Examples of Simpson\u2019s paradox show that such a is frequently plausible, though it is normally not observed directly. In particular, is conceivable if the correlations between and are not caused by a direct causal influence of on . Then the existence of is postulated by the common cause principle. (If correlations are caused by a causal influence of on , Simpson\u2019s paradox can formally exist, but factually it is absent because the decision is obviously to be taken according to the aggregated level.)\nIntroducing the screening variable allows us to reformulate and extend Simpson\u2019s paradox: its two options\u2014along with many other options\u2014refer to particular choices of ; see section 3 ###reference_###. Now the paradox seems to be further from resolution than before. However, we show that when the variables , , , and holding the paradox are binary (the minimal set-up of the paradox), the decision-making is to be made according to the fine-grained probabilities, i.e., the paradox is resolved. Such a definite relation is impossible for a tertiary (or larger) : now depending on all options of Simpson\u2019s paradox are possible, e.g. the precise control of can be necessary for decision-making.\nNext, we turn to Simpson\u2019s paradox for continuous variables, which was discussed earlier than the discrete formulation [1 ###reference_b1###]. It holds the main message of the discrete formulation. In addition, it includes the concept of the conditional correlation coefficient (only for Gaussian variables is the random-variable dependence fully explained by the correlation coefficient). The continuous formulation is important because it applies to big data [23 ###reference_b23###, 24 ###reference_b24###, 29 ###reference_b29###], and because (statistically) it is more frequent than the discrete version [30 ###reference_b30###]. The advantage of continuous Gaussian formulation is that the general description of the paradox under the common cause is feasible; see section 6 ###reference_###. For this situation, we show conceptually the same result as for the discrete version: in the minimal (and most widespread) version of the paradox, the very existence of an (unobservable) common cause leads to preferring the fine-grained option of the paradox.\nThe rest of this paper is organized as follows. Section 2 ###reference_### is a short but sufficiently inclusive review of Simpson\u2019s paradox and its resolutions proposed in the literature 111Among the issues not addressed in this paper is the explanation of Simpson\u2019s paradox using counterfactual random variables. This subject is reviewed in [6 ###reference_b6###]. . It also discusses two basic examples for illustrating different aspects of the paradox; see section 2.2.3 ###reference_.SSS3###. In section 2.3 ###reference_###, we review results about how frequent the paradox is and re-estimate its frequency within an unbiased data generation. In section 3 ###reference_### we reformulate Simpson\u2019s paradox by assuming that there is a common cause (or screening variable) behind the three variables. 
Now need not be observable, since we show that it will be sufficient to assume that it exists and (provided that all variables are binary) Simpson\u2019s paradox is resolved by choosing its fine-grained option. A similar conclusion is reached for Gaussian variables; see section 6 ###reference_###. Section 4 ###reference_### considers published data from Ref. [16 ###reference_b16###] on a case of smoking and surviving. This example is not easily treated via the existing methods. Still, we show that the existence of a common cause for this situation is plausible and that Simpson\u2019s paradox can be studied via our method and leads to a reasonable result. Section 5 ###reference_### treats data on COVID-19, which was suggested in Ref. [31 ###reference_b31###]. We demonstrate that an assumption of a plausible common cause points to different conclusions than in Ref. [31 ###reference_b31###]. The last section summarizes and outlines future research directions."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Formulation of Simpson\u2019s paradox and previous works",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Formulation of the paradox for binary variables and its necessary conditions",
|
| 21 |
+
"text": "To formulate the paradox in its simplest form, assume three binary random variables , , . The target event is , and we would like to know how it is influenced by which occurs at an earlier time than the time of : .\nThis can be done by looking at conditional probability. For\nwhich is equivalent to , we would conclude that enables . However, (1 ###reference_###) is compatible with\nwhere also occured in an earlier time: . Examples supporting (1 ###reference_###\u20133 ###reference_###) are studied below (sections 2.2.3 ###reference_.SSS3###, 4 ###reference_### and 5 ###reference_###) and also in Appendix .1 ###reference_###. Since (2 ###reference_###, 3 ###reference_###) hold for each value of we should perhaps conclude that enables in contrast to (1 ###reference_###). Decision-makers would not know whether to apply (1 ###reference_###) or (2 ###reference_###, 3 ###reference_###). This is Simpson\u2019s paradox. Its equivalent formulation is when all inequalities in (1 ###reference_###\u20133 ###reference_###) are inverted 222We leave aside the following pertinent problem; see [19 ###reference_b19###] for details. If probabilities are extracted from finite populations, the more conditioned version (2 ###reference_###, 3 ###reference_###) is less reliable, because it is extracted from a smaller population. For us all probability-providing populations will be sufficiently large. .\nFor Simpson\u2019s paradox (1 ###reference_###\u20133 ###reference_###) to hold, it is necessary to have one of the following two conditions:\nTo find these relations, expand and over the probabilities in (4 ###reference_###, 5 ###reference_###) [cf. (23 ###reference_###, 24 ###reference_###)], and note that e.g. is a weighted mean of and . Given that (4 ###reference_###) or (5 ###reference_###) hold, Simpson\u2019s paradox can be generated via suitable choices of and . For such choices, it is necessary that\ni.e., and must be dependent variables."
|
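Because the inline mathematics was not preserved in this extraction, here is a plausible reconstruction of (1)-(3) in standard notation; the symbols A, B, C and the stratum values below are our assumptions, not recovered from the source:

\begin{aligned}
&\text{(1)}\quad P(B{=}1 \mid A{=}1) \;>\; P(B{=}1 \mid A{=}0),\\
&\text{(2), (3)}\quad P(B{=}1 \mid A{=}1, C{=}c) \;<\; P(B{=}1 \mid A{=}0, C{=}c), \qquad c \in \{0, 1\},\\
&P(B{=}1 \mid A{=}a) \;=\; \sum_{c} P(B{=}1 \mid A{=}a, C{=}c)\, P(C{=}c \mid A{=}a).
\end{aligned}

The last identity is the weighted-mean expansion the text appeals to, and it makes the necessary condition (6) transparent: if A and C were independent, P(C=c|A=a) would not depend on a, so both aggregated probabilities would be the same weighted means of the fine-grained ones, and no sign reversal could occur.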
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Attempts to resolve the paradox",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.2.1",
|
| 31 |
+
"parent_section_id": "2.2",
|
| 32 |
+
"section_name": "2.2.1 Replacing prediction with retrodiction",
|
| 33 |
+
"text": "Over time, several resolutions to the paradox have been proposed. Barigelli and Scozzafava [10 ###reference_b10###, 11 ###reference_b11###] proposed to replace (1 ###reference_###) by\ni.e. to interchanging and in (1 ###reference_###). Then it is easy to see that its inversion under additional conditioning over is impossible. While (1 ###reference_###) stands for prediction, i.e. aiming at (and not at ) will more likely produce (than ), the proposal by Ref. [10 ###reference_b10###, 11 ###reference_b11###] looks for retrodiction. Though retrodicting (in contrast to predicting) does not suffer from Simpson\u2019s paradox, retrodicting and predicting are different things, hold different intuitions, and cannot generally be substituted for each other.\nRudas also sought to change the criterion (1 ###reference_###) so that it does not allow inversion after additional conditioning over , but still has several reasonable features [32 ###reference_b32###]. The proposal is to employ instead of (1 ###reference_###) [32 ###reference_b32###]. Notice the conceptual relation of this with the previous proposal (7 ###reference_###).\nAn unnatural point of both these proposals is that they depend on the ratio ; e.g. for the Example 1 mentioned below this means that if the treatment was applied more, it has better chances to be accepted. This drawback is acknowledged in [32 ###reference_b32###]."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2.2",
|
| 37 |
+
"parent_section_id": "2.2",
|
| 38 |
+
"section_name": "2.2.2 Exchangeability and causality",
|
| 39 |
+
"text": "According to Lindley and Novick, the paradox may be resolved by going beyond probabilistic considerations (as we do below as well) and by employing the notion of exchangeability or causality [9 ###reference_b9###]. Within that proposal, the data generally provides only propensities, and one needs additional assumptions of sample homogeneity (exchangeability) for equating propensities with probabilities even for a large sample size. Exchangeability and the closely related notion of ergodicity remain influential in the current analysis of statistical problems exemplified by Simpson\u2019s paradox [33 ###reference_b33###]. Lindley and Novick studied the following two examples that support Simpson\u2019s paradox (more examples are discussed in sections 4 ###reference_###, 5 ###reference_###, and in Appendix .1 ###reference_###).\nExample 1. Medical treatment [9 ###reference_b9###]. (the target variable) is the recovery rate of medical patients: , . refers to a specific medical treatment: , . is the sex of patients: , . The times to which the random variables , and refer clearly hold .\nExample 2. Plant yield [9 ###reference_b9###]. (the target variable) is the yield of a single plant: , . refers to the variety (color) of the plant: , . refers to the height of the plant: , . The times hold .\nLindley and Novick proposed that assumptions on exchangeability lead to preferring (1 ###reference_###) for Example 2 and (2 ###reference_###, 3 ###reference_###) for Example 1 [9 ###reference_b9###]. They also proposed that the same results can be found by using causality instead of exchangeability [9 ###reference_b9###].\nThe same proposal was made earlier by Cartwright in the context of abstract causality [7 ###reference_b7###, 8 ###reference_b8###]. Pearl elaborated this proposal assuming that the above examples can be represented via time-ordered direct acyclic graphs (TODAG) [13 ###reference_b13###, 14 ###reference_b14###], where an arrow represents the influence of an earlier variable to the later one; see Fig. 1 ###reference_### for details. If we follow this assumption, then\u2014given the time constraints for the examples\u2014each of them can be related to a unique TODAG:\nIn (8 ###reference_###) the suggestion is to condition over [hence using (2 ###reference_###, 3 ###reference_###)] if influences both and [9 ###reference_b9###, 13 ###reference_b13###, 14 ###reference_b14###]. This is because conditioning over the cause reduces spurious correlations. This reasoning was generalized as the back-door criterion [13 ###reference_b13###].\nIn contrast, it is advised to use (1 ###reference_###) in (9 ###reference_###) since is an effect of , but still a cause of [9 ###reference_b9###, 13 ###reference_b13###, 14 ###reference_b14###]. The intuition of this suggestion is seen in the extreme case when screens and from each other, i.e. , and form a Markov chain. Then the conditional probability will not depend on begging the original question in (1 ###reference_###). Thus, for the two examples considered in [9 ###reference_b9###], Refs. [13 ###reference_b13###, 14 ###reference_b14###] make similar recommendations. The basis of these recommendations was criticized in [17 ###reference_b17###]."
|
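The renderings of the two graphs were lost in extraction. Under the standard reading of the surrounding prose (node names A for the control, B for the target, C for the side variable are assumed), (8) and (9) are plausibly the TODAGs

(8)  Example 1:   C -> A,   C -> B,   A -> B    (sex C is a common cause of treatment A and recovery B; condition on C, i.e. use (2, 3));
(9)  Example 2:   A -> C,   C -> B,   A -> B    (variety A influences height C, a mediator on the way to yield B; do not condition, i.e. use (1)).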
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.2.3",
|
| 43 |
+
"parent_section_id": "2.2",
|
| 44 |
+
"section_name": "2.2.3 Criticism",
|
| 45 |
+
"text": "Realistically, Example 1 need not to support any TODAG. In fact, both arrows and are generally questionable: sex need not influence the selection of the treatment, (unless the data was collected in that specific way), and many treatments are sex-indifferent, i.e. . For Example 1 it is more natural to assume that does not causally influence . In such a situation, the common cause principle proposes that there is an unobserved random variable , which is a common cause for and [34 ###reference_b34###, 35 ###reference_b35###]; see section 3 ###reference_###.\nSimilar reservations apply to Example 2: now is perhaps argued on the basis of color () being more directly related to the genotype of the plant, while the height () is a phenotypical feature. First, color-genotype and height-phenotype relations need not hold for all plants. Second (and more importantly), it is more natural to assume that the plant genotype influences both its color and height than that the color influences height. Hence the genotype can be a common cause for and ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.3",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "How frequent is Simpson\u2019s paradox: an estimate based on the non-informative Dirichlet density",
|
| 51 |
+
"text": "To estimate the frequency of Simpson\u2019s paradox under fair data-gathering, we can try to generate the probabilities in (1 ###reference_###\u20133 ###reference_###) randomly in an unbiased way, and calculate the frequency of holding the paradox [36 ###reference_b36###, 30 ###reference_b30###]. The best and widely accepted candidate for an unbiased density of probabilities is the Dirichlet density, which is widely employed in statistics and machine learning [37 ###reference_b37###, 38 ###reference_b38###]; see Ref.[39 ###reference_b39###] for a recent review. The Dirichlet probability density for probabilities reads:\nwhere are the parameters of the Dirichlet density, is the delta-function, and is the Euler\u2019s -function. Since is non-zero only for and , the continuous variables themselves have the meaning of probabilities.\nMany standard prior densities for probabilities are contained in (2.3 ###reference_###); e.g., homogeneous (), Haldane\u2019s (), Jeffreys (). For estimating the frequency of Simpson\u2019s paradox, Ref. [36 ###reference_b36###] employed homogeneous and Jeffreys prior.\nFor modeling a non-informative Dirichlet density we find it natural to take\nThe homogeneity feature, in (12 ###reference_###) is natural for an unbiased density. The factor in (12 ###reference_###) makes an intuitive sense, since become homogeneous (non-informative) probabilities.\nEq. (12 ###reference_###) arises when we assume that the distribution of random probabilities is independent of whether they were generated directly from (2.3 ###reference_###) with components, or alternatively from (2.3 ###reference_###) with components , and then marginalized. This requirement indeed leads to (12 ###reference_###), as can be checked with the following feature of (2.3 ###reference_###):\nThe message of (13 ###reference_###) is that aggregating over two probabilities leads to the same Dirichlet density with the sum of the corresponding weights and .\nWe estimated the frequency of Simpson\u2019s paradox assuming that\n8 probabilities in (1 ###reference_###\u20133 ###reference_###) are generated from (2.3 ###reference_###, 12 ###reference_###) with (binary situation). This amounts to checking two relations (they amount to (1 ###reference_###\u20133 ###reference_###) and its reversal)\nOur numerical result is that the frequency of two inequalities in (14 ###reference_###) is . For this precision it was sufficient to generate samples from (2.3 ###reference_###, 12 ###reference_###) with . This result compares favorably with obtained for (homogeneous prior), and obtained for (Jeffreys prior) [36 ###reference_b36###]. It is seen that the frequency of Simpson\u2019s paradox is a decreasing function of [36 ###reference_b36###].\nRoughly, the above result means that in every 1000 instances of 3 binary variables, 42 instances will show Simpson\u2019s paradox. This number is reassuring: it is not very large meaning that the standard decision-making based on the marginal probabilities in (1 ###reference_###) will frequently be reasonable. But it is also not very small, showing that Simpson\u2019s paradox is generic and has its range of applicability."
|
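The Monte-Carlo estimate described above is straightforward to reproduce. The following sketch is our illustration, not the authors' code; it assumes the prior (12) sets every Dirichlet weight of the 8-component joint p(a, b, c) to 1/8 (so the weights sum to 1) and counts how often the paradox or its mirror image of (14) holds:

import numpy as np

rng = np.random.default_rng(0)

def is_paradoxical(p):
    # p has shape (2, 2, 2), indexed as p[a, b, c]
    agg = p[:, 1, :].sum(axis=1) / p.sum(axis=(1, 2))   # P(B=1 | A=a), aggregated level
    fine = p[:, 1, :] / p.sum(axis=1)                   # P(B=1 | A=a, C=c), fine-grained level
    s = np.sign(agg[1] - agg[0])
    # paradox: the aggregated association is reversed in every stratum of C
    return s != 0 and np.all(np.sign(fine[1] - fine[0]) == -s)

n_samples, hits = 100_000, 0
for _ in range(n_samples):
    p = rng.dirichlet(np.full(8, 1 / 8)).reshape(2, 2, 2)
    hits += is_paradoxical(p)
print(hits / n_samples)   # should land near the frequency 0.042 quoted in the text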
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Common cause principle and reformulation of Simpson\u2019s paradox",
|
| 57 |
+
"text": "###figure_1###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.1",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Common cause and screening",
|
| 63 |
+
"text": "The common cause for and means that there exists a random variable [34 ###reference_b34###, 35 ###reference_b35###]\nwhere (15 ###reference_###) holds for all values assumed by , , and , and where (16 ###reference_###) follows from (15 ###reference_###) 333There are formulations of the common cause principle that look for (15 ###reference_###) holding for certain events only and not for random variables [34 ###reference_b34###, 35 ###reference_b35###]. We do not focus on them. .\nThe same (15 ###reference_###) applies if causes and screens from .\nThese two scenarios are shown in Fig. 1 ###reference_### as (resp.) the third and fourth graphs. Sections 4 ###reference_###, 5 ###reference_### and Appendix .1 ###reference_### provide several examples of a causing (or screening) variable in the context of Simpson\u2019s paradox.\nThe common cause principle was proposed to explain probabilistic correlations [34 ###reference_b34###, 35 ###reference_b35###]. It later found important applications in data science, where approximate relations similar to (15 ###reference_###) are applied to effective data compression (Non-negative matrix factorization, Probabilistic Latent Dirichlet indexing, etc); see [40 ###reference_b40###] for a review.\nNote from (15 ###reference_###) that gets rid of the conditional dependence on in . Thus, a sensible way of looking at the association between and is to check the sign of\nTo support the usage of the common cause C for decision-making, we note that (15 ###reference_###) has an important implication in the context of (1 ###reference_###). (This implication generalizes the argument given in [35 ###reference_b35###].) Assume that for all values of . Note from (15 ###reference_###) that there exists an event such that , and an event such that . Hence, if conditioning over facilitates (hinders) the association between and , then conditioning over () is not worse in this facilitation (hindering) 444To deduce the first relation assume that for all , multiply both parts by , sum over and get contradiction . Likewise for the second relation..\nAfter the above reformulation, Simpson\u2019s paradox seems even less resolvable since is not observed. Indeed, there are common causes that reproduce (1 ###reference_###), those that reproduce (2 ###reference_###, 3 ###reference_###), but there are many other possibilities. Common causes that are close to () imply option (2 ###reference_###, 3 ###reference_###) of the paradox, while leads to (1 ###reference_###). These conclusions are based on the fact that (15 ###reference_###) holds exactly for and . Thus, Simpson\u2019s paradox is not a choice between two options (2 ###reference_###, 3 ###reference_###) and (1 ###reference_###), it is a choice between many options given by different common causes .\nFinally, two remarks about the applicability of (15 ###reference_###\u201317 ###reference_###). First, if is a common cause for both and , the times of these variables naturally hold . When screens from , it holds . In certain applications of (17 ###reference_###), it will suffice to have even a weaker condition .\nSecond, we note that for applying (1 ###reference_###, 2 ###reference_###, 3 ###reference_###) we do not need , i.e. only is needed for connecting (1 ###reference_###) with (2 ###reference_###, 3 ###reference_###). Indeed, does not necessarily need to be a random variable, but can simply be a label describing the situation. 
Now the same holds for (17 ###reference_###): once (15 ###reference_###) is written as\nwe need only to pass from (18 ###reference_###) to quantities involved\nin (1 ###reference_###, 2 ###reference_###, 3 ###reference_###); i.e., is not needed."
|
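To make the decomposition (15)-(17) concrete, here is a small numerical sketch. The cause variable is named G, and every number below is an illustrative assumption, not a value from the paper: a hidden binary G renders the pair (A, B) conditionally independent of C, so that P(B=1 | A=a, C=c) carries spurious C-dependence while the cause-conditioned association of (17) does not involve C at all.

import numpy as np

rng = np.random.default_rng(1)
pg = np.array([0.4, 0.6])                  # P(G=g), hypothetical
pab_g = rng.dirichlet(np.ones(4), 2)       # P(A=a, B=b | G=g); rows g, flat index (a, b)
pc_g = np.array([[0.9, 0.1], [0.2, 0.8]])  # P(C=c | G=g), hypothetical

# joint p[a, b, c] = sum_g P(a, b | g) P(c | g) P(g)  -- the factorization of (15)-(16)
p = np.einsum('g,gk,gc->kc', pg, pab_g, pc_g).reshape(2, 2, 2)

print(p[:, 1, :] / p.sum(axis=1))          # P(B=1 | A=a, C=c): generally varies with c

# the cause-conditioned association of (17): Delta(g) = P(B=1|A=1,G=g) - P(B=1|A=0,G=g)
pab = pab_g.reshape(2, 2, 2)               # indexed [g, a, b]
delta = pab[:, 1, 1] / pab[:, 1].sum(1) - pab[:, 0, 1] / pab[:, 0].sum(1)
print(delta)                               # the decision is to be read off these signs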
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "A common cause (or screening variable) resolves Simpson\u2019s paradox for binary variables",
|
| 69 |
+
"text": "The following theorem shows a definite statement for all binary causes. The message of the theorem is that once we know that is binary, then the correct decision is (2 ###reference_###, 3 ###reference_###).\nTheorem 1: If , , and are binary, and provided that (1 ###reference_###) and (2 ###reference_###, 3 ###reference_###) are valid, all causes hold\ni.e. all holding (15 ###reference_###) predict the same sign of association between and as (2 ###reference_###, 3 ###reference_###).\nThe main idea of proving (19 ###reference_###) is inverting (15 ###reference_###):\nwhere unknown quantities and are represented via known ones (i.e. ) and free parameters . Eqs. (21 ###reference_###, 22 ###reference_###) hold upon changing by and are deduced in Appendix .3 ###reference_###\nvia specific notations that should be useful when dealing with (15 ###reference_###) for a non-binary .\nThe rest of the proof is algebraic but non-trivial. It also works out and employs constraints (4 ###reference_###, 31 ###reference_###) on Simpson\u2019s paradox itself. Expanding both sides of (1 ###reference_###),\nand using there (2 ###reference_###, 3 ###reference_###) we subtract the sides of (1 ###reference_###) from each other and find:\nWe return to (2 ###reference_###, 3 ###reference_###) and note that we can assume without loosing generality\nEqs. (23 ###reference_###, 24 ###reference_###) imply that for the validity of (1 ###reference_###\u20133 ###reference_###, 26 ###reference_###) it is necessary to have\n, which together with (2 ###reference_###, 3 ###reference_###, 26 ###reference_###) revert to (4 ###reference_###).\nNow (1 ###reference_###, 23 ###reference_###, 24 ###reference_###) read\nwhere (27 ###reference_###) and (28 ###reference_###) are equivalent. Eqs. (27 ###reference_###, 28 ###reference_###, 4 ###reference_###) imply\nAs checked directly, Eqs. (3.2 ###reference_0###, 30 ###reference_###) lead to\nNow we return to (22 ###reference_###) and assume there , which leads to from (22 ###reference_###). Writing down from (22 ###reference_###) the formula for and making the same assumption we get . Now look at (21 ###reference_###) and its analog obtained via , and use there these two results together with (30 ###reference_###, 31 ###reference_###) and (4 ###reference_###) to deduce the first inequality in (19 ###reference_###) under assumption . It should be obvious that the second inequality in (19 ###reference_###) holds under the same assumption since we nowhere used any specific feature of compared to .\nFor we need to use instead of (21 ###reference_###) another form of (20 ###reference_###)\nThe rest is similar to the above: we proceed via (3.2 ###reference_0###, 31 ###reference_###) and (4 ###reference_###) and deduce (19 ###reference_###) from (22 ###reference_###), (3.2 ###reference_2###) and the analog of (3.2 ###reference_2###) obtained via ."
|
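Theorem 1 can be spot-checked numerically. The sketch below is purely illustrative (names and parameters are our assumptions): it samples random binary-cause models, keeps those whose induced joint distribution exhibits Simpson's paradox, and asserts that the cause-conditioned association Delta(g) of (19) always carries the fine-grained sign of (2, 3).

import numpy as np

rng = np.random.default_rng(2)

def sample_model():
    pg = rng.dirichlet(np.ones(2))             # P(G=g)
    pab_g = rng.dirichlet(np.ones(4), 2)       # P(A, B | G=g)
    pc_g = rng.dirichlet(np.ones(2), 2)        # P(C | G=g)
    p = np.einsum('g,gk,gc->kc', pg, pab_g, pc_g).reshape(2, 2, 2)
    return p, pab_g.reshape(2, 2, 2)           # joint p[a,b,c] and P(A,B|G) as [g,a,b]

def signs(p):
    agg = p[:, 1, :].sum(1) / p.sum(axis=(1, 2))   # P(B=1 | A=a)
    fine = p[:, 1, :] / p.sum(axis=1)              # P(B=1 | A=a, C=c)
    return np.sign(agg[1] - agg[0]), np.sign(fine[1] - fine[0])

checked = 0
for _ in range(50_000):
    p, pab = sample_model()
    s_agg, s_fine = signs(p)
    if s_agg != 0 and np.all(s_fine == -s_agg):    # Simpson's paradox holds for this joint
        delta = pab[:, 1, 1] / pab[:, 1].sum(1) - pab[:, 0, 1] / pab[:, 0].sum(1)
        assert np.all(np.sign(delta) == -s_agg), "would contradict Theorem 1"
        checked += 1
print(checked, "paradoxical binary-cause models checked; no counterexample found")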
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.3",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "Non-binary causes",
|
| 75 |
+
"text": "Starting from a tertiary , all three options of Simpson\u2019s paradox become possible: there are common causes that support (1 ###reference_###), those which support (2 ###reference_###, 3 ###reference_###), and eventually (random) cause variables for which has different signs for different values of . (Our numerical examples showing these possibilities are available upon request.) Hence, already for the tertiary cause one needs prior information on the common cause to decide on the solution of Simpson\u2019s paradox. Alternatively, we can infer this unknown cause via e.g. one of the methods proposed recently [41 ###reference_b41###, 42 ###reference_b42###].\nIt is not excluded that such inference methods will provide further information on the solution of Simpson\u2019s paradox.\nWe hope to discuss this problem elsewhere."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Example: smoking and surviving",
|
| 81 |
+
"text": "In sections 2.2.2 ###reference_.SSS2### and 2.2.3 ###reference_.SSS3### we discussed two examples studied in the literature and argued that they can be\nalso interpreted via the common cause principle. In the present case, the standard approaches do not seem to apply, but the common cause can still be motivated. This example on survival of smokers versus nonsmokers is taken from Ref. [16 ###reference_b16###]. Its technical details are discussed in Appendix .2 ###reference_###.\nBinary represents the survival in a group of women as determined by two surveys taken 20 years apart:\nwhere , and where and denote age-groups. According to the data of [16 ###reference_b16###], Simpson\u2019s paradox reads\nNote that here influences : the age of a person is a predictor of his/her survival.\nCausal influences from age to smoking can be neglected because the number of people that quit or started smoking is small [16 ###reference_b16###]. We can assume that influences from smoking to age are absent. Then this example is intermediate between two situations considered in [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 13 ###reference_b13###]. Recall that when influenced , these references advised to decide via the fine-grained option of the paradox, while for the case of the inverse influence (from to ) they recommend to employ the coarse-grained version; see Fig. 1 ###reference_###.\nHence, we should expand on the above situation to achieve a workable model. We can assume that and are influenced by a common cause. Genetic factors influence an individual\u2019s age and tendency to smoke. Originally proposed by Fisher [43 ###reference_b43###], this hypothesis was later substantiated in several studies; see Refs. [44 ###reference_b44###, 45 ###reference_b45###] for reviews. Note that this refers to genetics of the smoking behavior itself, and not to health problems that can be caused by smoking plus genetic factors. Several sets of studies that contributed to genetic determinants of smoking behavior are as follows. (i) Children of smoking parents tend to smoke. (ii) Smoking behavior of adopted kids correlates stronger with that of their biological parents. (iii) Monozygotic (genetically identical) twins correlate in their smoking behavior much stronger than heterozygotic twins. Smoking behavior includes both the acquisition and maintenance of smoking. Monozygotic twins show correlations in both these aspects.\nHence as a preliminary hypothesis, we can suggest that genetic factors are the common cause of both smoking and age. If this common cause is binary, then Theorem 1 applies and we conclude\u2014judging from the fine-grained data and consistently with other studies\u2014that smoking is not beneficial for surviving."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Example: COVID-19, Italy versus China",
|
| 87 |
+
"text": "Here the COVID-19 death rates are compared in Italy and China [31 ###reference_b31###, 46 ###reference_b46###]. According to the data, aggregated death rates in Italy are higher than in China, but in each age group, the death rates are higher in China. More precisely,\nwhere is the death rate out of COVID-19, is found from the number of positively tested people in each age group, , and where and . According to the data of [31 ###reference_b31###], Simpson\u2019s paradox reads\nThe authors of [31 ###reference_b31###] proposed that this situation is described by TODAG ; cf. (9 ###reference_###). Then the conclusion from [9 ###reference_b9###, 13 ###reference_b13###] will be that the aggregated version of Simpson\u2019s paradox works, i.e. Italy did worse than China. The authors of Ref. [31 ###reference_b31###] reached the same conclusion.\nWhen applying the common cause set-up from section 3.1 ###reference_###, we can look at (18 ###reference_###), because is better described as a label (avoiding dealing with the probability of country). Hence, from the viewpoint of (18 ###reference_###), we need a common cause that supplements and acts on both and . We propose that the quality of healthcare system can be the common cause here. In particular, a more affordable healthcare system may cause a higher proportion of older people in the country\u2019s society. Indeed, for 2019, Italy had a larger percentage of people aged above 65 than China: 24.05 % versus 12.06 %.\nOn the other hand, the healthcare system will influence death rates in all age groups.\nIf is binary, then our conclusion from Theorem 1 is opposite to that of [31 ###reference_b31###]: China did worse than Italy."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Simpson\u2019s paradox and common cause principle for Gaussian variables",
|
| 93 |
+
"text": ""
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.1",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "Formulation of Simpson\u2019s paradox for continuous variables",
|
| 99 |
+
"text": "Simpson\u2019s paradox is uncovered earlier for continuous variables than for the discrete case [1 ###reference_b1###]. Researching the continuous variable paradox and identifying it in big datasets is currently an active research field [23 ###reference_b23###, 24 ###reference_b24###, 29 ###reference_b29###, 47 ###reference_b47###, 48 ###reference_b48###, 49 ###reference_b49###].\nThe association between continuous variables and \ncan be based on a reasonable definition of correlation coefficient [1 ###reference_b1###, 30 ###reference_b30###]. We focus on Gaussian variables, because this definition is unique for them and amounts to conditional variance. These variables are also important in the context of machine learning (e.g. linear regressions) [50 ###reference_b50###].\nHence the formulation of Simpson\u2019s paradox given reads instead of (1 ###reference_###\u20133 ###reference_###) [1 ###reference_b1###, 30 ###reference_b30###, 23 ###reference_b23###, 24 ###reference_b24###]:\nwhere and are the conditional mean and covariance;\n and are the mean and covariance; is the conditional probability density of .\nThe message of (41 ###reference_###) is that the usual and conditional covariance have different signs, i.e., they predict different types of associations between and . For instance, means correlation, while implies anti-correlation. Note a subtle difference between this formulation of Simpson\u2019s paradox and that presented in section 2.2 ###reference_###.\nIn (41 ###reference_###\u201342 ###reference_###) the formulation is symmetric with respect to and ."
|
| 100 |
+
},
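A hedged numerical sketch of formulation (41): two Gaussian variables whose covariance conditional on a scalar common cause C is negative while their marginal covariance is positive. The loadings and noise correlation below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = rng.normal(size=n)                                    # scalar common cause C
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.5], [-0.5, 1.0]], size=n)
a1 = 2.0 * c + noise[:, 0]                                # both variables loaded on C
a2 = 2.0 * c + noise[:, 1]

print(np.cov(a1, a2)[0, 1])   # marginal covariance: about 4 - 0.5 = 3.5 > 0
# conditioning on C removes the common-cause term, leaving the noise covariance
print(np.cov(a1 - 2.0 * c, a2 - 2.0 * c)[0, 1])   # conditional covariance: about -0.5 < 0
```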
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6.2",
|
| 103 |
+
"parent_section_id": "6",
|
| 104 |
+
"section_name": "General solution for Gaussian variables",
|
| 105 |
+
"text": "For fuller generality, we shall assume that , and are Gaussian column vectors with a number of components (i.e., dimensionality) , and , respectively. We also define\nwhere means transposition: is a number, while is a matrix.\nWe assume that a Gaussian -dimensional variable is the common cause variable for and :\nwhere the common cause feature of is ensured by the block-diagonal structure of the covariance matrix : and are (resp.) covariance matrices for and . In (6.2 ###reference_7###), is matrix that ensures the coupling between and . For simplicity and without loss of generality we assumed that and hence in (6.2 ###reference_7###). We get from (6.2 ###reference_7###) after arranging similar terms (and omitting normalization):\nEmploying (84 ###reference_###) we obtain:\nWe now recall (45 ###reference_###, 49 ###reference_###), introduce the block-diagonal form for , and find\nwhere (LABEL:woot) can be deduced via Appendix .4 ###reference_###. In that formula we need only the upper-left block, so that all other blocks are omitted. Collecting pertinent expressions from (45 ###reference_###, 54 ###reference_###, LABEL:woot, 49 ###reference_###), we deduce finally"
|
| 106 |
+
},
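For reference, conditioning a joint Gaussian on the common cause reduces to a Schur complement, which is the standard identity behind the block computation above; a small helper (the block names S_aa, S_ac, S_cc are our notation):

```python
import numpy as np

def conditional_cov(S_aa, S_ac, S_cc):
    # Cov(A | C) for jointly Gaussian (A, C) with covariance
    # [[S_aa, S_ac], [S_ac^T, S_cc]]: the Schur complement of S_cc.
    return S_aa - S_ac @ np.linalg.inv(S_cc) @ S_ac.T
```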
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6.3",
|
| 109 |
+
"parent_section_id": "6",
|
| 110 |
+
"section_name": "The minimal set-up of Simpson\u2019s paradox: 3 scalar variables + scalar cause",
|
| 111 |
+
"text": "For this simplest situation, is a 3-dimensional vector, is a matrix, is a matrix, while and are positive scalars. Now (57 ###reference_###\u201359 ###reference_###) read:\nNow consider a scenario of Simpson\u2019s paradox, where\nDue to , these two inequalities demand . Likewise,\n and demand . It is seen that under Simpson\u2019s paradox for this minimal situation, the sign of \ncoincides with the sign of . We are thus led to the following:\nTheorem 2: In the minimal situation (6.3 ###reference_2###\u201362 ###reference_###) with the (minimal) common cause, the continuous Simpson\u2019s paradox (41 ###reference_###) is resolved in the sense that the decision on the sign of correlations should proceed according to the fine-grained option: ; see (41 ###reference_###\u201342 ###reference_###).\nFor non-minimal common causes, all possibilities of the paradox can be realized; see Appendix .5 ###reference_###."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "7",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Conclusion",
|
| 117 |
+
"text": "We addressed Simpson\u2019s paradox: the problem of setting up an association between two events , given the lurking variable . This decision-making paradox provides two plausible but opposite suggestions for the same situation; see (1 ###reference_###) and (2 ###reference_###, 3 ###reference_###). Either the first option is correct, the second option is correct, or none of them is correct.\nWe focus on cases when there is a common cause for and (which combines , and their complements). Alternatively, screens out from . These cases include those in which there is no causal influence from to , as well as from to . Hence, correlations between and are to be explained via the common cause , which is a statement of the common cause principle [34 ###reference_b34###, 35 ###reference_b35###]. Now the association between and is to be decided by looking at for various values of . This task is normally difficult given the fact that is frequently not fully known and is not observed. However, provided that , , and are binary, shows the same association as the option (2 ###reference_###, 3 ###reference_###) of Simpson\u2019s paradox. In this sense, Simpson\u2019s paradox is resolved in the binary situation, provided that the situation allows a binary cause or a binary screening variable. The same conclusion on resolving Simpson\u2019s paradox was reached for Gaussian variables in the minimal situation. Several examples can illustrate the plausibility of a minimal .\nThese results lead to several interesting research directions. First, in the present paper, we limited ourselves to results that hold for all (minimal) common causes. For many applications this is too stringent: if the common cause is known to exist, but is not observed directly, then it may be sufficient to infer it e.g. via the (generalized) maximum likelihood [42 ###reference_b42###] or the minimal entropy method [41 ###reference_b41###]. This may provide pertinent information on the real common cause and on the structure of Simpson\u2019s paradox. Second, we insisted on a precise common cause. The screening relation (16 ###reference_###) is also useful, when it does hold approximately, but the support of is relatively small. Such an approximate relation (16 ###reference_###) provides data compression via feature detection, which is the main message of unsupervised methods such as Non-negative Matrix factorization and Probabilistic Latent Dirichlet indexing [40 ###reference_b40###]. The impact of such approximate, but efficient causes on probabilistic reasoning is an interesting research subject that we plan to explore in the future. Third, the general formalism we developed in section 6 ###reference_### for Gaussian variables may find further applications in the causal analysis of Gaussian machine learning algorithms [50 ###reference_b50###]."
|
| 118 |
+
}
|
| 119 |
+
],
|
| 120 |
+
"appendix": [],
|
| 121 |
+
"tables": {},
|
| 122 |
+
"image_paths": {
|
| 123 |
+
"1": {
|
| 124 |
+
"figure_path": "2403.00957v2_figure_1.png",
|
| 125 |
+
"caption": "Figure 1: Directed acyclic graphs between random variables A=(A1,A2)\ud835\udc34subscript\ud835\udc341subscript\ud835\udc342A=(A_{1},A_{2})italic_A = ( italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ), B\ud835\udc35Bitalic_B and C\ud835\udc36Citalic_C involved in discussing Simpson\u2019s paradox. The first and second graphs were studied in Refs. [13, 14]; see (8, 9). The third or fourth graphs are basic assumptions of this work; see (15). In the first graph, B\ud835\udc35Bitalic_B influences A1subscript\ud835\udc341A_{1}italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and A2subscript\ud835\udc342A_{2}italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, but B\ud835\udc35Bitalic_B is not the common cause in the strict sense, because there is an influence from A2subscript\ud835\udc342A_{2}italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT to A1subscript\ud835\udc341A_{1}italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. A similar interpretation applies to the second graph. We emphasize that the joint probability p\u2062(A1,A2,B)\ud835\udc5dsubscript\ud835\udc341subscript\ud835\udc342\ud835\udc35p(A_{1},A_{2},B)italic_p ( italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_B ) for the first and second graphs has the same form, i.e. such graphs are extra constructions employed for interpretation of data. In contrast, the third and fourth graph imply a definite (but the same for both graphs) limitation on the joint probability p\u2062(A1,A2,B,C)\ud835\udc5dsubscript\ud835\udc341subscript\ud835\udc342\ud835\udc35\ud835\udc36p(A_{1},A_{2},B,C)italic_p ( italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_B , italic_C ), which is expressed by (15).",
|
| 126 |
+
"url": "http://arxiv.org/html/2403.00957v2/extracted/5735988/simpson_figure.png"
|
| 127 |
+
}
|
| 128 |
+
},
|
| 129 |
+
"validation": true,
|
| 130 |
+
"references": [],
|
| 131 |
+
"url": "http://arxiv.org/html/2403.00957v2"
|
| 132 |
+
}
|
20240721/2403.01915v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2403.05016v2.json
ADDED
|
@@ -0,0 +1,664 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "DiffClass: Diffusion-Based Class Incremental Learning",
|
| 3 |
+
"abstract": "Class Incremental Learning (CIL) is challenging due to catastrophic forgetting. On top of that, exemplar-free CIL is even more challenging due to forbidden access to data of previous tasks. Recent exemplar-free CIL methods attempt to mitigate catastrophic forgetting by synthesizing previous task data. However, they fail to overcome the catastrophic forgetting due to the inability to deal with the significant domain gap between real and synthetic data.\nTo overcome these issues, we propose a novel exemplar-free CIL method.\nOur method adopts multi-distribution matching (MDM) diffusion models to align quality of synthetic data and bridge domain gaps among all domains of training data. Moreover, our approach integrates selective synthetic image augmentation (SSIA) to expand the distribution of the training data, thereby improving the model\u2019s plasticity and reinforcing the performance of our multi-domain adaptation (MDA) technique. With the proposed integrations, our method then reformulates exemplar-free CIL into a multi-domain adaptation problem to implicitly address the domain gap problem and enhance model stability during incremental training.\nExtensive experiments on benchmark CIL datasets and settings demonstrate that our method excels previous exemplar-free CIL methods with non-marginal improvements and achieves state-of-the-art performance. Our project page is available at https://cr8br0ze.github.io/DiffClass.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Although recent deep learning (DL) models have achieved superior performance even better than humans in various tasks, catastrophic forgetting [9 ###reference_b9###] remains a challenging problem that limits the continual learning capabilities of DL models. Unlike humans, DL models are unable to learn multiple tasks sequentially, which forget the previous learned knowledge after learning new tasks. To address this, Class Incremental Learning (CIL) extensively investigates how to learn the information of new classes without forgetting past knowledge of previous classes. Various CIL works [25 ###reference_b25###, 12 ###reference_b12###, 3 ###reference_b3###, 39 ###reference_b39###, 38 ###reference_b38###] try to untangle catastrophic forgetting through saving a small proportion of previous training data as exemplars in memory and retraining with them in new tasks. However, these methods suffer from privacy and legality issues of utilizing past training data, as well as memory constraints on devices. Different from previous exemplar-based CIL, Exemplar-Free CIL [33 ###reference_b33###, 7 ###reference_b7###, 8 ###reference_b8###] has gained increasing popularity where DL models incrementally learn new knowledge without storing previous data as exemplars.\nTo counteract forgetting knowledge of past tasks, the most recent exemplar-free CIL works [33 ###reference_b33###, 7 ###reference_b7###, 8 ###reference_b8###, 47 ###reference_b47###] propose to synthesize previous data instead of using real data. The synthetic data of previous tasks are generated through either model inversion [45 ###reference_b45###] with knowledge distillation or denoising diffusion models [11 ###reference_b11###].\nHowever, these methods suffer from significant domain gaps between synthetic data and real data especially when the number of incremental tasks is large (i.e. long-term CIL), which inevitably misleads the decision boundaries between new and previous classes. The obtained models favor plasticity over stability, meaning they tend to learn new knowledge but without keeping previous knowledge in mind as demonstrated in Sec. 3 ###reference_###. Therefore, how to exhibit both stability and plasticity in exemplar-free CIL remains a crucial challenge.\nTo address these problems, we propose a novel exemplar-free CIL approach that bridges the crucial domain gaps and balances stability and plasticity.\nOur method incorporates a multi-distribution-matching (MDM) technique to finetune diffusion models resulting in closer distributions between not only synthetic and real data but also among synthetic data through all incremental training phases.\nOur method also reformulates exemplar-free CIL as task-agnostic multi-domain adaptation (MDA) problems to further deal with domain gaps between real and synthetic data, with selective synthetic image augmentation (SSIA) to enhance each incremental task learning with current task synthetic data.\nWe summarize our contributions as follows:\nWe introduce a novel exemplar-free CIL method that explicitly mitigates forgetting and balances stability & plasticity by adopting MDM diffusion models and enhancing the dataset with SSIA\nto address domain gaps in exemplar-free CIL settings.\nWe propose an innovative approach to reformulate exemplar-free CIL as task-agnostic MDA problems. 
This groundbreaking step implicitly manages domain gaps during CIL training, better addressing catastrophic forgetting in exemplar-free CIL.\nExtensive experiments on CIFAR100 [16 ###reference_b16###] and ImageNet100 [28 ###reference_b28###] demonstrate that our method effectively mitigates catastrophic forgetting in different exemplar-free CIL settings, surpassing SOTA methods with significant improvements."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Diagnosis: Domain Gaps in Exemplar-Free CIL",
|
| 21 |
+
"text": "###figure_1### Although recent advancements in generative artificial intelligence can generate realistic images, we notice that the distributions of the generated synthetic images are still different from those of real images with domain gaps, leading to low accuracy in the classes trained with synthetic data in exemplar-free CIL settings. We also further dig into the low accuracy and find that the reason may be the model\u2019s preference for domains over classes after training, i.e. the model classifies whether the image is real or synthetic rather than its true label.\nIn Fig. 1 ###reference_###, a t-SNE visualization is performed to compare real data of class 0 and 1 from ImageNet100 [12 ###reference_b12###] with synthetic data of class 0 generated by the pretrained stable diffusion V1.5 model [26 ###reference_b26###]. The visualization reveals that the distributions of the real classes are more closely aligned, while a significant domain gap is evident between the synthetic data of class 0 and its real counterpart.\nThese domain gaps can potentially effect model\u2019s performance after model training with real and synthetic data, since the decision boundary can be significantly distorted by synthetic data, as it may treat the real class 0 and class 1 (with a smaller distribution discrepancy) as the same class in testing.\nWe also conduct an experiment in a class incremental setting to further verify. In specific, we train a model with only a ResNet [10 ###reference_b10###] backbone and a linear classifier for the first four tasks (each with 5 classes) in a 20-task CIL setting on the ImageNet100 dataset (refer to Sec. 5 ###reference_### for more details).\nFrom the second to the fourth tasks, aside from the real data of the current task, we also train with synthetic data of the previous tasks generated by the pre-trained SD V1.5 model.\nWe additionally train another model with entirely real data for the four tasks as a reference for how well the model can perform with real data.\nIn Tab. 1 ###reference_###, we present the accuracy on the real test dataset at the end of task 4. As observed, the model performs significantly better on the classes of the new task (i.e. class 15-19, trained with real data) than previous tasks (i.e. class 0-14, trained with both real in previous task and synthetic data in the current task), demonstrating the model\u2019s preference for plasticity over stability.\n###figure_2### ###figure_3### In Fig. 3 ###reference_###, we further\nuse t-SNE visualization for the feature embeddings of test data extracted from the incrementally trained ResNet18 backbone.\nAs observed from Fig. 3 ###reference_###,\nmost of misclassified test data from classes of previous tasks are labeled to new classes of the most recent task,\nindicating the model\u2019s labeling preference of domain over class, i.e. the model labels whether it is real or synthetic, rather than its true class.\nInspired by the diagnosis experiments, our method tries to mitigate the domain gaps and balance plasticity & stability."
|
| 22 |
+
},
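A hedged sketch of the diagnosis visualization: embed pre-extracted backbone features of real and synthetic images of the same class with t-SNE and inspect the gap. The feature file names are hypothetical placeholders, not artifacts shipped with the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

real_feats = np.load("real_class0_feats.npy")     # (N, D) backbone features (placeholder)
synth_feats = np.load("synth_class0_feats.npy")   # (M, D) backbone features (placeholder)

emb = TSNE(n_components=2, init="pca").fit_transform(
    np.concatenate([real_feats, synth_feats], axis=0))
n = len(real_feats)
plt.scatter(emb[:n, 0], emb[:n, 1], s=4, label="real class 0")
plt.scatter(emb[n:, 0], emb[n:, 1], s=4, label="synthetic class 0")
plt.legend()
plt.show()
```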
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Methodology",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "Framework",
|
| 33 |
+
"text": "Following previous works [25 ###reference_b25###, 33 ###reference_b33###, 7 ###reference_b7###], CIL contains incremental learning phases or tasks.\nIn the incremental phase (or interchangeably ) , our framework mainly consists of the following three steps.\n###figure_4###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.1.1",
|
| 37 |
+
"parent_section_id": "4.1",
|
| 38 |
+
"section_name": "4.1.1 Finetuning Multi-Distribution Matching Diffusion Model with LoRA.",
|
| 39 |
+
"text": "In the incremental task, the real data of the current task and the synthetic data of the previous tasks (notation 0:i means integers from 0 up to but not including i) generated by fine-tuned diffusion models is available.\nWe use and randomly sampled a small batch of to fine-tune a multi-distribution matching diffusion model using LoRA. The finetuned diffusion model can be used to generate synthetic data. Based on LoRA, the cost to finetune and store diffusion models is relatively small."
|
| 40 |
+
},
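A minimal sketch, assuming the Hugging Face diffusers and peft libraries, of attaching LoRA adapters to a Stable Diffusion v1.5 UNet before MDM finetuning; the MDM training loop itself is elided, and the model ID, rank, and target modules are illustrative assumptions rather than the paper's settings.

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet")
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["to_q", "to_k", "to_v", "to_out.0"])
unet = get_peft_model(unet, lora_cfg)   # only the LoRA weights are trainable
unet.print_trainable_parameters()       # a tiny fraction of the full UNet
```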
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1.2",
|
| 43 |
+
"parent_section_id": "4.1",
|
| 44 |
+
"section_name": "4.1.2 Forming Training Dataset for Current Task.",
|
| 45 |
+
"text": "The training dataset for the current task consists of three parts, (1) the synthetic data of the previous tasks synthesized by fine-tuned diffusion models , (2) the real data of the current task , and (3) the image augmentation data generated from . For , the synthetic data are ignored.\nThe model can then start training by randomly sampling training batches from the newly-formed training dataset."
|
| 46 |
+
},
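A minimal PyTorch sketch of assembling the task training set from the three parts above, using toy stand-in datasets; the (image, class label, domain label) sample format is our assumption, anticipating the multi-domain adaptation branch of Sec. 4.4.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def toy_part(n, domain):
    # toy stand-in for one data source: image, class label, domain label
    # (domain label: 0 = synthetic, 1 = real)
    return TensorDataset(torch.randn(n, 3, 32, 32),
                         torch.randint(0, 10, (n,)),
                         torch.full((n,), domain))

synth_prev = toy_part(50, 0)   # synthetic data of previous tasks
real_cur = toy_part(50, 1)     # real data of the current task
ssia_cur = toy_part(50, 0)     # SSIA synthetic data of the current task

loader = DataLoader(ConcatDataset([synth_prev, real_cur, ssia_cur]),
                    batch_size=128, shuffle=True)
```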
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1.3",
|
| 49 |
+
"parent_section_id": "4.1",
|
| 50 |
+
"section_name": "4.1.3 Training with Multi-Domain Adaptation.",
|
| 51 |
+
"text": "For each batch of training data, we adopt the training method with multi-domain adaptation. Specifically, after feature extraction with a CNN backbone defined as , the extracted features go through two branches: a linear classifier , and a gradient reverse layer (GRL) followed by a linear classifier .\nDuring training, learns to classify representations of new classes in new tasks without forgetting previous classes, while acquires the knowledge of boundaries between diffusion-generated synthetic data and real data.\nThe details and advantages of the three stages in each incremental learning phase are specified below."
|
| 52 |
+
},
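A standard PyTorch gradient reversal layer (GRL) of the kind this second branch relies on in domain-adversarial training; a sketch, not the authors' exact code.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)                 # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None    # flip (and scale) gradients backward

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)
```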
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Finetuning Multi-Distribution Matching Diffusion Model with LoRA",
|
| 57 |
+
"text": "Previous exemplar-free CIL works either use or alter the sampling pre-trained diffusion models to synthesize data of previous tasks [14 ###reference_b14###, 8 ###reference_b8###].\nHowever, these methods fail to generate realistic data with evident domain gaps (or distribution discrepancies) for the classes in the same incremental task or\nkeep consistent generation quality across different incremental tasks. These bottlenecks affect the model\u2019s robustness in stability as shown previously in Sec. 3 ###reference_###."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2.1",
|
| 61 |
+
"parent_section_id": "4.2",
|
| 62 |
+
"section_name": "4.2.1 Multi-Distribution Matching.",
|
| 63 |
+
"text": "To address this significant limitation in exemplar-free CIL, inspired by the recent work on training data synthesis [46 ###reference_b46###] with an additional synthetic-to-real distribution-matching technique to enclose the gap between synthetic and real data distributions, we propose a multi-distribution matching (MDM) technique to fine-tune the diffusion model that best fit our exemplar-free CIL setting.\nIn specific, when finetuning a diffusion model, we not only match the distributions of the synthetic and real data for the current task but also align the distributions of synthetic data in the current task with that in all previous tasks.\nWith MDM, the diffusion models can be finetuned by optimizing the following loss:\nwhere and . Here is a random selection function to incorporate only a small portion of synthetic data of past tasks for multi-distribution matching purposes. is the noise predictor for latent space noised with noise . And denotes it\u2019s in the universal Reproducing Kernel Hilbert\nSpace. The Loss is further constraint by the original stable diffusion loss on only to emphasize while MDM is focused on multi-distribution matching crossing all training phase data, it should not compromise the fundamental denoising or data generation ability of the model of current real task classes.\nWe also provide detailed deduction and proof for this equation in the Appendix.\nIn this way, the synthetic images generated using the diffusion models with the proposed MDM are of uniform quality in different classes and tasks. More importantly, the distribution discrepancies or demain gaps between synthetic and real images become smaller, which fundamentally alleviates the potential domain bias problems and achieves better CIL performance."
|
| 64 |
+
},
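A hedged sketch of one common distribution-matching penalty of the kind the MDM loss uses: an empirical maximum mean discrepancy (MMD) between two latent batches under a Gaussian kernel, a standard universal-RKHS choice. The bandwidth and the way the terms are combined are illustrative assumptions.

```python
import torch

def mmd_gaussian(x, y, sigma=1.0):
    # x: (n, d), y: (m, d) latent feature batches
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# sketch of the total finetuning objective: the stable-diffusion denoising
# loss on real current-task data, plus MMD terms matching current-task
# synthetic latents to real latents and to a small batch of previous-task
# synthetic latents:
#   loss = sd_loss + mmd_gaussian(z_synth_cur, z_real_cur) \
#                  + mmd_gaussian(z_synth_cur, z_synth_prev)
```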
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Forming Current Task Training Dataset",
|
| 69 |
+
"text": "Synthetic Data Augmentation has proven to enhance the model performance on various computer vision tasks due to its ability to enlarge training data distribution [1 ###reference_b1###, 37 ###reference_b37###]. In exemplar-free CIL, various image augmentation techniques [14 ###reference_b14###, 54 ###reference_b54###, 53 ###reference_b53###] are frequently adopted.\nTherefore, when structuring the current task training dataset, aside from synthetic previous-task data generated by diffusion models , and the real data of the current task, we further incorporate data augmentation with synthetic data of current task from .\nHowever, in enhancing and aligning our method, we propose a different data augmentation technique, i.e. selective synthetic image augmentation (SSIA), to obtain . In specific, rather than finetuning and utilizing generative models after each training phase [33 ###reference_b33###, 7 ###reference_b7###, 8 ###reference_b8###], at the beginning phase of each task , we finetune a MDM diffusion model using LoRA as proposed in Sec. 4.2 ###reference_###.\nWe generate twice the number of synthetic data as real data for the current task and filter out the same number (or less) of distributional representative synthetic images as real data. It includes the following key steps.\nCalculate each generated class mean and create covariance matrices.\nwhere denotes all classes in the current task.\nSample the generated images for each current task class\nCalculate a selected threshold for synthetic image selection and construct the image augmentation dataset.\nWith SSIA, our method can benefit for multiple reasons. MDM mitigates the domain gaps between synthetic data in different tasks and the diffusion models can generate more realistic high-quality images for SSIA.\nThis helps to enhance the model\u2019s stability since domain-aligned training data can contribute to preventing feature embedding domain bias problems in exemplar-free CIL settings.\nSSIA can enable the model to better build knowledge for new classes. The model is capable of learning from the classes of current task trained with broader data distributions. The quality of images in SSIA is strong and representative since the synthetic images are selected from clusters around the class mean and span a calculated range with a broader class distribution.\nMoreover, the current task training dataset consists of both real and synthetic domains, which fortifies the multi-domain adaptation capabilities in our framework later discussed in Sec. 4.4 ###reference_###."
|
| 70 |
+
},
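A sketch of the SSIA selection step under stated assumptions: per class, keep generated images whose feature distance to the generated class mean falls under a quantile threshold, retaining at most as many synthetic images as there are real ones. The feature extraction and the exact threshold rule are simplified placeholders.

```python
import numpy as np

def select_ssia(feats, n_real):
    # feats: (n_gen, d) features of the generated images for one class
    mu = feats.mean(axis=0)
    d = np.linalg.norm(feats - mu, axis=1)
    q = min(1.0, n_real / len(feats))
    thresh = np.quantile(d, q)               # keep roughly the closest fraction
    keep = np.where(d <= thresh)[0][:n_real]
    return keep                              # indices of selected synthetic images
```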
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.4",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Training with Multi-Domain Adaptation",
|
| 75 |
+
"text": "Even with the multi-distribution matching technique, we still notice a nontrivial domain gap between synthetic data and real data in the training dataset. This domain gap will inevitably affect the model performance on classifying previous-task images during incremental learning, as shown in Sec. 3 ###reference_###.\nPrevious exemplar-free CIL works mainly adopt knowledge distillation techniques [33 ###reference_b33###, 7 ###reference_b7###] to implicitly avoid the model favoring domains over classes, i.e. aiming to enable the model to classify whether it is real or synthetic rather than its true labels.\nHowever, knowledge distillation still fails to address the domain gap problem with low classification performance in CIL and a high computation complexity."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.4.1",
|
| 79 |
+
"parent_section_id": "4.4",
|
| 80 |
+
"section_name": "4.4.1 Multi-Domain Adaptation.",
|
| 81 |
+
"text": "To deal with these problems, we propose to reformulate exemplar-free CIL as a task-agnostic multi-domain adaption problem. Inspired by domain-adversarial training [6 ###reference_b6###], for each task , after the original CNN backbone, besides the original linear classifier for class label classification, we further construct an additional branch with a gradient reverse layer followed by another linear classifier for domain prediction.\nHence we can formulate our exemplar-free CIL training approach in each task as optimizing the following:\nwhere\nand\nHere represents the ground truth label for class , and represents the ground truth domain label .\nThe model needs to not only learn to classify the image but also distinguish whether it is real or synthetic.\nDifferent from traditional domain-adversarial training with a focus on single target domain (real) data only, in our exemplar-free CIL setting, our model benefits from training both classification and domain branches using both target (real) and source (synthetic) domain data in each incremental task .\nFor learning classification knowledge in , synthetic data is a crucial key for reviewing previous knowledge while real data contributes to gaining new knowledge.\nFor learning multi-domain adaptation knowledge, adopting a mixture of data from both domains can contribute to differentiating and adapting to the distinct characteristics of each domain.\nBy reforming exemplar-free CIL as a straightforward task-agnostic multi-domain adaption problem, our method enjoys the following advantages. (i) Our model framework keeps simple without any cumbersome parts, which benefits incremental training efficiency.\n(ii) More importantly, our model is robust in both stability and plasticity since it is fully capable of learning important feature knowledge from both label classification and domain classification (synthetic vs. real) in each task.\n(iii) Our proposed method can not only perform well on a test dataset consisting of entirely real data but also elaborate to perform well on entirely synthetic test data and combined image groups (see Appendix)\n, which better simulates the continual learning scenarios in real-world settings."
|
| 82 |
+
},
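One hedged training step for the two-branch objective above: cross-entropy for class prediction plus cross-entropy for domain prediction on GRL-reversed features (using the grl helper sketched in Sec. 4.1.3). The equal weighting of the two terms is an assumption.

```python
import torch.nn.functional as F

def mda_step(backbone, cls_head, dom_head, batch, lam=1.0):
    x, y_cls, y_dom = batch                  # images, class and domain labels
    feats = backbone(x)
    loss_cls = F.cross_entropy(cls_head(feats), y_cls)
    loss_dom = F.cross_entropy(dom_head(grl(feats, lam)), y_dom)
    return loss_cls + loss_dom
```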
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Experiment",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.1",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Datasets and Evaluation Protocol",
|
| 93 |
+
"text": "Datasets. To accurately and fairly evaluate our method in comparison with baselines, we use two representative datasets CIFAR100 [16 ###reference_b16###] and ImageNet100\n [12 ###reference_b12###], which are widely adopted in CIL.\nCIFAR100 consists of 100 classes, each containing 500 training and 100 test images with the resolution 32323. ImageNet100 is a randomly sampled subset of ImageNet1000 [28 ###reference_b28###], consisting of 100 classes each with 1300 training and 50 test images of various sizes.\nIncremental Settings. Following prior works [33 ###reference_b33###, 7 ###reference_b7###, 29 ###reference_b29###], for CIFAR100 and ImageNet100 datasets, we split the classes equally into , 10, or 20 tasks (e.g., each task has 5 classes if ). For all approaches, we use the same random seed to randomly shuffle class orders for all datasets.\nFollowing previous works [33 ###reference_b33###, 54 ###reference_b54###, 53 ###reference_b53###, 7 ###reference_b7###, 55 ###reference_b55###, 22 ###reference_b22###, 29 ###reference_b29###], the classification accuracy is defined as\nWe report both the final accuracy from the last task and the average incremental accuracy\naveraged over all incremental tasks .\nImplementation Details.\nFor a fair comparison, for CIFAR100, following previous works [33 ###reference_b33###, 7 ###reference_b7###], we use a modified 32-layer ResNet [10 ###reference_b10###] as the backbone for all approaches. For our model, we train with SGD optimizer for 120 epochs. The learning rate is initially set to 0.1 with a decay factor of 0.1 after 100 epochs. The weight decay is set to 0.0002 and batch size of 128. For ImageNet100, we use ResNet18 [10 ###reference_b10###] as the backbone for all methods. For our training, the SGD optimizer is adopted to train 40 epochs. The learning rate is initially set to 0.1 with a decay factor of 0.1 after 30 epochs. The weight decay is set to 0.0001 and batch size of 128. We train and report all methods from scratch with original implementations.\n###figure_5### ###figure_6### ###figure_7###"
|
| 94 |
+
},
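A small sketch of the two reported metrics, given the accuracy over all seen classes measured at the end of each task:

```python
def incremental_metrics(task_end_accs):
    # task_end_accs[i]: accuracy on all classes seen so far, after task i
    final_acc = task_end_accs[-1]                          # last-task accuracy
    avg_inc_acc = sum(task_end_accs) / len(task_end_accs)  # average incremental accuracy
    return final_acc, avg_inc_acc
```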
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Results and Analysis",
|
| 99 |
+
"text": ""
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.2.1",
|
| 103 |
+
"parent_section_id": "5.2",
|
| 104 |
+
"section_name": "5.2.1 CIFAR100.",
|
| 105 |
+
"text": "We report the results of our method and SOTA exemplar-free CIL methods on CIFAR100 in Tab. 2 ###reference_###. As observed, our method achieves the highest average and final accuracy among all approaches with non-marginal improvements. Moreover, as CIL becomes more difficult with a larger (such as 20), the baselines suffer from significant accuracy drop (such as from 51.42% to 42.87% for SEED [29 ###reference_b29###] when increasing from 10 to 20), while our method still maintains high accuracy close to that of smaller (such as our final accuracy from 58.4% to 57.11%) with larger improvements over baselines. Notably, compared with SOTA exemplar-free CIL method SEED(ICLR 2024) [29 ###reference_b29###], when , our method is 9.68 percent more accurate for the average incremental accuracy and 14.24 percent more accurate for the final accuracy .\nWe further present the detailed incremental accuracy of various learning phases for , 10, and 20 on CIFAR100 in Fig. 6 ###reference_###. We observe that our curve drops significantly slower than all baseline methods with the highest accuracy at various phases, demonstrating our superior performance to mitigate the forgetting of previously learned knowledge over baseline methods.\n###figure_8### ###figure_9### ###figure_10###"
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.2.2",
|
| 109 |
+
"parent_section_id": "5.2",
|
| 110 |
+
"section_name": "5.2.2 ImageNet100.",
|
| 111 |
+
"text": "In Tab. 3 ###reference_###, we present the results of our method and SOTA exemplar-free CIL methods on ImageNet100. Similarly, our method outperforms all baselines in terms of the average accuracy and final accuracy with non-marginal improvements. As CIL becomes more difficult with a larger , the advantages or improvements of our method become more significant.\nCompared with SOTA exemplar-free CIL method seed [29 ###reference_b29###], for , our method is 10.25 percent more accurate for and 22.91 percent more accurate for .\nThe detailed incremental accuracy of various learning phases for , 10, and 20 on ImageNet100 are presented in Fig. 8 ###reference_###. As observed, our method keeps the highest accuracy at almost all of the learning phases or stages. As it goes through more learning phases, our method can maintain almost consistent accuracy, outperforming baselines (which suffer from significant accuracy drops) with larger improvements. The results demonstrate that our method performs much better to mitigate the catastrophic forgetting problem in CIL."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "5.3",
|
| 115 |
+
"parent_section_id": "5",
|
| 116 |
+
"section_name": "Ablation Studies",
|
| 117 |
+
"text": "We ablate the three major components in our method on ImageNet100 with .\nIn each phase, new classes are learned. We present our ablation results in Tab. 4 ###reference_###.\nThe results demonstrate that all proposed components contribute greatly. We further show that all three components are crucial to achieving better plasticity vs. stability balance through an ablation study\nin Fig. 9 ###reference_###."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "5.3.1",
|
| 121 |
+
"parent_section_id": "5.3",
|
| 122 |
+
"section_name": "5.3.1 Multi-Distribution Matching(MDM).",
|
| 123 |
+
"text": "Without finetuning diffusion models with a multi-distribution matching technique, the average accuracy drops by 15.14 percent (74.85% vs. 59.71%), and the final classification accuracy drops by 16.09 percent (67.26% vs. 51.17%). From Fig. 9 ###reference_###, we also observe that MDM serves a crucial role in reviewing previous knowledges (i.e. stability)."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5.3.2",
|
| 127 |
+
"parent_section_id": "5.3",
|
| 128 |
+
"section_name": "5.3.2 Multi-Domain Adaptation (MDA).",
|
| 129 |
+
"text": "Without reforming exemplar-free CIL into a multi-domain adaptation problem, the average accuracy drops by 9.56 percent, and the final accuracy drops by 12.04 percent. MDA also contributes to building model stability as shown in Fig. 9 ###reference_###."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "5.3.3",
|
| 133 |
+
"parent_section_id": "5.3",
|
| 134 |
+
"section_name": "5.3.3 Selective Synthetic Image Augmentation (SSIA).",
|
| 135 |
+
"text": "Without further enhancement from selective synthetic image augmentation, the average accuracy drops by 12.48 percent, and the final accuracy drops by 14.97 percent. Furthermore, Fig. 9 ###reference_### shows that SSIA helps the model not only learn new knowledge (i.e. plasticity) but also remember the knowledge from previous tasks.\n###figure_11###"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "6",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "Conclusion",
|
| 141 |
+
"text": "In this paper, we introduce a novel exemplar-free CIL approach to address catastrophic forgetting and stability and\nplasticity imbalance caused by the domain gap between synthetic and real data. Specifically, our method generates synthetic data using multi-distribution matching (MDM) diffusion models to explicitly bridge the domain gap and unify quality among all training data. Selective synthetic image augmentation (SSIA) is also applied to enlarge training data distribution, enhancing the model\u2019s plasticity and bolstering the efficacy of our method\u2019s final component, multi-domain adaptation (MDA). With the proposed integrations, our method then reforms exemplar-free CIL to a multi-domain adaptation problem to implicitly address the domain gap problem during incremental training.\nOur method achieves state-of-the-art performance in various exemplar-free CIL settings on CIFAR100 and ImageNet100 benchmarks. In the ablation study, we\nproved that each component of our method is significant to best perform in exemplar-free CIL."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [],
|
| 145 |
+
"tables": {
|
| 146 |
+
"1": {
|
| 147 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.4.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.5.2\" style=\"font-size:90%;\">Diagnosis experiment accuracy result (in %) of incremental training the model with synthetic previous task data and real data of current task <em class=\"ltx_emph ltx_font_italic\" id=\"S3.T1.5.2.1\">vs</em>.<span class=\"ltx_text\" id=\"S3.T1.5.2.2\"></span> training model with all real data for first four tasks of twenty-task incremental setting on ImageNet100. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.6.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.6.1.1.1\" style=\"padding:1.25pt 4.3pt;\">Training Data Domain</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.6.1.1.2\" style=\"padding:1.25pt 4.3pt;\">CLS 0-4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.6.1.1.3\" style=\"padding:1.25pt 4.3pt;\">CLS 5-9</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.6.1.1.4\" style=\"padding:1.25pt 4.3pt;\">CLS 10-14</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.6.1.1.5\" style=\"padding:1.25pt 4.3pt;\">CLS 15-19</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.6.1.1.6\" style=\"padding:1.25pt 4.3pt;\">Total Classes</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.6.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1.1\" style=\"padding:1.25pt 4.3pt;\">Synthetic & Real</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1.2\" style=\"padding:1.25pt 4.3pt;\">47.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1.3\" style=\"padding:1.25pt 4.3pt;\">48.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1.4\" style=\"padding:1.25pt 4.3pt;\">51.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1.5\" style=\"padding:1.25pt 4.3pt;\">89.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.6.2.1.6\" style=\"padding:1.25pt 4.3pt;\">59.37</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T1.6.3.2.1\" style=\"padding:1.25pt 4.3pt;\">Real Data Only</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.6.3.2.2\" style=\"padding:1.25pt 4.3pt;\">85.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.6.3.2.3\" style=\"padding:1.25pt 4.3pt;\">80.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.6.3.2.4\" style=\"padding:1.25pt 4.3pt;\">83.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.6.3.2.5\" style=\"padding:1.25pt 4.3pt;\">81.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.6.3.2.6\" style=\"padding:1.25pt 4.3pt;\">82.72</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 148 |
+
"capture": "Table 1: Diagnosis experiment accuracy result (in %) of incremental training the model with synthetic previous task data and real data of current task vs. training model with all real data for first four tasks of twenty-task incremental setting on ImageNet100. "
|
| 149 |
+
},
|
| 150 |
+
"2": {
|
| 151 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.13.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.2.1\" style=\"font-size:90%;\">Evaluation results on CIFAR100 with protocol that equally split 100 classes into tasks. The best results are in bold.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T2.5.3.4\" rowspan=\"2\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text\" id=\"S5.T2.5.3.4.1\">Approach</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T2.3.1.1\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T2.4.2.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T2.5.3.3\" style=\"padding:1pt 5.7pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.4.1\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.7.5.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.3\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.9.7.4\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.10.8.5\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.11.9.6\" style=\"padding:1pt 5.7pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T2.11.10.1.1\" style=\"padding:1pt 5.7pt;\">Upper Bound</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.11.10.1.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.11.10.1.3\" style=\"padding:1pt 5.7pt;\">70.67</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.11.10.1.4\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.11.10.1.5\" style=\"padding:1pt 5.7pt;\">70.67</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.11.10.1.6\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.11.10.1.7\" style=\"padding:1pt 5.7pt;\">70.67</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.11.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.11.1.1\" style=\"padding:1pt 5.7pt;\">ABD\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib33\" title=\"\">33</a>]</cite> (ICCV 2021)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.11.1.2\" style=\"padding:1pt 5.7pt;\">60.78</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.11.11.1.3\" style=\"padding:1pt 5.7pt;\">44.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.11.1.4\" style=\"padding:1pt 5.7pt;\">54.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.11.11.1.5\" style=\"padding:1pt 5.7pt;\">34.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.11.1.6\" style=\"padding:1pt 5.7pt;\">43.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.11.1.7\" style=\"padding:1pt 5.7pt;\">21.18</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.12.2.1\" style=\"padding:1pt 5.7pt;\">PASS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib54\" title=\"\">54</a>]</cite> (CVPR 2021)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.12.2.2\" style=\"padding:1pt 5.7pt;\">63.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.12.2.3\" style=\"padding:1pt 5.7pt;\">49.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.12.2.4\" style=\"padding:1pt 5.7pt;\">52.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.12.2.5\" style=\"padding:1pt 5.7pt;\">36.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.12.2.6\" style=\"padding:1pt 5.7pt;\">41.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.12.2.7\" style=\"padding:1pt 5.7pt;\">27.45</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.13.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.13.3.1\" style=\"padding:1pt 5.7pt;\">IL2A\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib53\" title=\"\">53</a>]</cite> (NeurIPS 2021)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.13.3.2\" style=\"padding:1pt 5.7pt;\">58.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.13.3.3\" style=\"padding:1pt 5.7pt;\">45.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.13.3.4\" style=\"padding:1pt 5.7pt;\">43.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.13.3.5\" style=\"padding:1pt 5.7pt;\">24.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.13.3.6\" style=\"padding:1pt 5.7pt;\">40.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.13.3.7\" style=\"padding:1pt 5.7pt;\">21.15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.14.4.1\" style=\"padding:1pt 5.7pt;\">R-DFCIL\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib7\" title=\"\">7</a>]</cite> (ECCV 2022)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.14.4.2\" style=\"padding:1pt 5.7pt;\">64.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.14.4.3\" style=\"padding:1pt 5.7pt;\">50.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.14.4.4\" style=\"padding:1pt 5.7pt;\">59.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.14.4.5\" style=\"padding:1pt 5.7pt;\">42.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.14.4.6\" style=\"padding:1pt 5.7pt;\">49.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.14.4.7\" style=\"padding:1pt 5.7pt;\">31.46</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T2.11.15.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.15.5.1\" style=\"padding:1pt 5.7pt;\">SSRE\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib55\" title=\"\">55</a>]</cite> (CVPR 2022)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.15.5.2\" style=\"padding:1pt 5.7pt;\">56.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.15.5.3\" style=\"padding:1pt 5.7pt;\">43.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.15.5.4\" style=\"padding:1pt 5.7pt;\">43.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.15.5.5\" style=\"padding:1pt 5.7pt;\">29.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.15.5.6\" style=\"padding:1pt 5.7pt;\">31.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.15.5.7\" style=\"padding:1pt 5.7pt;\">16.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.16.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.16.6.1\" style=\"padding:1pt 5.7pt;\">FeTril\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib22\" title=\"\">22</a>]</cite> (WACV 2023)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.16.6.2\" style=\"padding:1pt 5.7pt;\">58.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.16.6.3\" style=\"padding:1pt 5.7pt;\">42.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.16.6.4\" style=\"padding:1pt 5.7pt;\">47.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.16.6.5\" style=\"padding:1pt 5.7pt;\">30.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.16.6.6\" style=\"padding:1pt 5.7pt;\">37.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.16.6.7\" style=\"padding:1pt 5.7pt;\">20.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.17.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.11.17.7.1\" style=\"padding:1pt 5.7pt;\">SEED\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib29\" title=\"\">29</a>]</cite> (ICLR 2024)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.17.7.2\" style=\"padding:1pt 5.7pt;\">63.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.17.7.3\" style=\"padding:1pt 5.7pt;\">52.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.17.7.4\" style=\"padding:1pt 5.7pt;\">62.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.11.17.7.5\" style=\"padding:1pt 5.7pt;\">51.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.17.7.6\" style=\"padding:1pt 5.7pt;\">57.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.17.7.7\" style=\"padding:1pt 5.7pt;\">42.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.18.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T2.11.18.8.1\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.1.1\">Ours</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.11.18.8.2\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.2.1\">69.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.11.18.8.3\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.3.1\">62.21</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.11.18.8.4\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.4.1\">68.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.11.18.8.5\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.5.1\">58.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.11.18.8.6\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.6.1\">67.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.11.18.8.7\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.11.18.8.7.1\">57.11</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 152 |
+
"capture": "Table 2: Evaluation results on CIFAR100 with protocol that equally split 100 classes into tasks. The best results are in bold."
|
| 153 |
+
},
|
| 154 |
+
"3": {
|
| 155 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.13.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.2.1\" style=\"font-size:90%;\">Evaluation on ImageNet100 with protocol that equally split 100 classes into tasks.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T3.5.3.4\" rowspan=\"2\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text\" id=\"S5.T3.5.3.4.1\">Approach</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T3.3.1.1\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T3.4.2.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S5.T3.5.3.3\" style=\"padding:1pt 5.7pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.6.4.1\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.7.5.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.8.6.3\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.9.7.4\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.10.8.5\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.11.9.6\" style=\"padding:1pt 5.7pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T3.11.10.1.1\" style=\"padding:1pt 5.7pt;\">Upper Bound</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.11.10.1.2\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.11.10.1.3\" style=\"padding:1pt 5.7pt;\">80.41</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.11.10.1.4\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.11.10.1.5\" style=\"padding:1pt 5.7pt;\">80.41</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.11.10.1.6\" style=\"padding:1pt 5.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T3.11.10.1.7\" style=\"padding:1pt 5.7pt;\">80.41</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.11.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.11.1.1\" style=\"padding:1pt 5.7pt;\">ABD\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib33\" title=\"\">33</a>]</cite> (ICCV 2021)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.11.11.1.2\" style=\"padding:1pt 5.7pt;\">67.12</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S5.T3.11.11.1.3\" style=\"padding:1pt 5.7pt;\">52.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.11.11.1.4\" style=\"padding:1pt 5.7pt;\">57.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.11.11.1.5\" style=\"padding:1pt 5.7pt;\">35.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.11.11.1.6\" style=\"padding:1pt 5.7pt;\">45.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.11.11.1.7\" style=\"padding:1pt 5.7pt;\">22.10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.12.2.1\" style=\"padding:1pt 5.7pt;\">PASS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib54\" title=\"\">54</a>]</cite> (CVPR 2021)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.12.2.2\" style=\"padding:1pt 5.7pt;\">55.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.12.2.3\" style=\"padding:1pt 5.7pt;\">39.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.12.2.4\" style=\"padding:1pt 5.7pt;\">33.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.12.2.5\" style=\"padding:1pt 5.7pt;\">16.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.12.2.6\" style=\"padding:1pt 5.7pt;\">27.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.12.2.7\" style=\"padding:1pt 5.7pt;\">18.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.13.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.13.3.1\" style=\"padding:1pt 5.7pt;\">IL2A\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib53\" title=\"\">53</a>]</cite> (NeurIPS 2021)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.13.3.2\" style=\"padding:1pt 5.7pt;\">62.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.13.3.3\" style=\"padding:1pt 5.7pt;\">48.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.13.3.4\" style=\"padding:1pt 5.7pt;\">43.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.13.3.5\" style=\"padding:1pt 5.7pt;\">26.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.13.3.6\" style=\"padding:1pt 5.7pt;\">35.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.13.3.7\" style=\"padding:1pt 5.7pt;\">20.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.14.4.1\" style=\"padding:1pt 5.7pt;\">R-DFCIL\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib7\" title=\"\">7</a>]</cite> (ECCV 2022)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.14.4.2\" style=\"padding:1pt 5.7pt;\">68.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.14.4.3\" style=\"padding:1pt 5.7pt;\">53.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.14.4.4\" style=\"padding:1pt 5.7pt;\">59.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.14.4.5\" style=\"padding:1pt 5.7pt;\">42.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.14.4.6\" style=\"padding:1pt 5.7pt;\">49.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.14.4.7\" style=\"padding:1pt 5.7pt;\">30.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.15.5\">\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.15.5.1\" style=\"padding:1pt 5.7pt;\">SSRE\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib55\" title=\"\">55</a>]</cite> (CVPR 2022)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.15.5.2\" style=\"padding:1pt 5.7pt;\">52.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.15.5.3\" style=\"padding:1pt 5.7pt;\">37.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.15.5.4\" style=\"padding:1pt 5.7pt;\">46.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.15.5.5\" style=\"padding:1pt 5.7pt;\">29.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.15.5.6\" style=\"padding:1pt 5.7pt;\">34.96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.15.5.7\" style=\"padding:1pt 5.7pt;\">18.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.16.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.16.6.1\" style=\"padding:1pt 5.7pt;\">FeTril\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib22\" title=\"\">22</a>]</cite> (WACV 2023)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.16.6.2\" style=\"padding:1pt 5.7pt;\">58.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.16.6.3\" style=\"padding:1pt 5.7pt;\">41.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.16.6.4\" style=\"padding:1pt 5.7pt;\">46.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.16.6.5\" style=\"padding:1pt 5.7pt;\">27.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.16.6.6\" style=\"padding:1pt 5.7pt;\">37.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.16.6.7\" style=\"padding:1pt 5.7pt;\">20.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.17.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.11.17.7.1\" style=\"padding:1pt 5.7pt;\">SEED\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.05016v2#bib.bib29\" title=\"\">29</a>]</cite> (ICLR 2024)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.17.7.2\" style=\"padding:1pt 5.7pt;\">69.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.17.7.3\" style=\"padding:1pt 5.7pt;\">58.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.17.7.4\" style=\"padding:1pt 5.7pt;\">67.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.11.17.7.5\" style=\"padding:1pt 5.7pt;\">55.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.17.7.6\" style=\"padding:1pt 5.7pt;\">62.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.17.7.7\" style=\"padding:1pt 5.7pt;\">45.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.11.18.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T3.11.18.8.1\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.1.1\">Ours</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.11.18.8.2\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.2.1\">74.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.11.18.8.3\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.3.1\">67.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b 
ltx_border_t\" id=\"S5.T3.11.18.8.4\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.4.1\">73.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.11.18.8.5\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.5.1\">67.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.11.18.8.6\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.6.1\">72.51</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.11.18.8.7\" style=\"padding:1pt 5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.11.18.8.7.1\">68.68</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 156 |
+
"capture": "Table 3: Evaluation on ImageNet100 with protocol that equally split 100 classes into tasks."
|
| 157 |
+
},
|
| 158 |
+
"4": {
|
| 159 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.6.2.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.2.1\" style=\"font-size:90%;\">Abalation Study results of comparison between our method with all components and without the multi-distribution-matching diffusion model (MDM), without multi-domain adaptation reformation (MDA), and without selective synthetic image augmentation (SSIA). The ablation study is conducted on ImageNet100 with . </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.3\" style=\"padding:1.25pt 14.4pt;\">MDM</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.4\" style=\"padding:1.25pt 14.4pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.5\" style=\"padding:1.25pt 14.4pt;\">MDA</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.6\" style=\"padding:1.25pt 14.4pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.7\" style=\"padding:1.25pt 14.4pt;\">SSIA</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.8\" style=\"padding:1.25pt 14.4pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.3.1.1\" style=\"padding:1.25pt 14.4pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T4.4.2.2\" style=\"padding:1.25pt 14.4pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.3.1.1\" style=\"padding:1.25pt 14.4pt;\">\u2717</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T4.4.3.1.2\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.3.1.3\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T4.4.3.1.4\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.3.1.5\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T4.4.3.1.6\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.4.3.1.7\" style=\"padding:1.25pt 14.4pt;\">59.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.3.1.8\" style=\"padding:1.25pt 14.4pt;\">51.17</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.4.2.1\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.4.2.2\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.4.2.3\" style=\"padding:1.25pt 14.4pt;\">\u2717</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.4.2.4\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.4.2.5\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.4.2.6\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.4.4.2.7\" style=\"padding:1.25pt 14.4pt;\">65.29</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T4.4.4.2.8\" style=\"padding:1.25pt 14.4pt;\">55.22</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.5.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.4.5.3.1\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.5.3.2\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.5.3.3\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.5.3.4\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.5.3.5\" style=\"padding:1.25pt 14.4pt;\">\u2717</td>\n<td class=\"ltx_td\" id=\"S5.T4.4.5.3.6\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.4.5.3.7\" style=\"padding:1.25pt 14.4pt;\">62.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.5.3.8\" style=\"padding:1.25pt 14.4pt;\">52.94</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.6.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.4.6.4.1\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_border_b\" id=\"S5.T4.4.6.4.2\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T4.4.6.4.3\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_border_b\" id=\"S5.T4.4.6.4.4\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T4.4.6.4.5\" style=\"padding:1.25pt 14.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_border_b\" id=\"S5.T4.4.6.4.6\" style=\"padding:1.25pt 14.4pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.4.6.4.7\" style=\"padding:1.25pt 14.4pt;\">74.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T4.4.6.4.8\" style=\"padding:1.25pt 14.4pt;\">67.26</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 160 |
+
"capture": "Table 4: Abalation Study results of comparison between our method with all components and without the multi-distribution-matching diffusion model (MDM), without multi-domain adaptation reformation (MDA), and without selective synthetic image augmentation (SSIA). The ablation study is conducted on ImageNet100 with . "
|
| 161 |
+
}
|
| 162 |
+
},
|
| 163 |
+
"image_paths": {
|
| 164 |
+
"1": {
|
| 165 |
+
"figure_path": "2403.05016v2_figure_1.png",
|
| 166 |
+
"caption": "Figure 1: Domain Gaps in Exemplar-Free CIL. The distribution of real classes is closer to each other while domain gaps exist between real class 0 and synthetic class 0.",
|
| 167 |
+
"url": "http://arxiv.org/html/2403.05016v2/x1.png"
|
| 168 |
+
},
|
| 169 |
+
"2(a)": {
|
| 170 |
+
"figure_path": "2403.05016v2_figure_2(a).png",
|
| 171 |
+
"caption": "(a) Feature Embedding with Ground Truth Label\nFigure 3: t-SNE Visualization of Test Data\u2019s Feature Embedding. Most of the previous task test data in incremental task 3 are misclassified as one of the task 3 classes.",
|
| 172 |
+
"url": "http://arxiv.org/html/2403.05016v2/x2.png"
|
| 173 |
+
},
|
| 174 |
+
"2(b)": {
|
| 175 |
+
"figure_path": "2403.05016v2_figure_2(b).png",
|
| 176 |
+
"caption": "(b) Feature Embedding with Predition Label\nFigure 3: t-SNE Visualization of Test Data\u2019s Feature Embedding. Most of the previous task test data in incremental task 3 are misclassified as one of the task 3 classes.",
|
| 177 |
+
"url": "http://arxiv.org/html/2403.05016v2/x3.png"
|
| 178 |
+
},
|
| 179 |
+
"3": {
|
| 180 |
+
"figure_path": "2403.05016v2_figure_3.png",
|
| 181 |
+
"caption": "Figure 4: Model Framework Overview learning on currect task \ud835\udcafi+1subscript\ud835\udcaf\ud835\udc561\\mathcal{T}_{i+1}caligraphic_T start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT. previous MDM diffusion models J0:isubscript\ud835\udc3d:0\ud835\udc56J_{0:i}italic_J start_POSTSUBSCRIPT 0 : italic_i end_POSTSUBSCRIPT are used to generated Synthetic Data of previous tasks \ud835\udc9f0:isynsuperscriptsubscript\ud835\udc9f:0\ud835\udc56syn\\mathcal{D}_{0:i}^{\\text{syn}}caligraphic_D start_POSTSUBSCRIPT 0 : italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT syn end_POSTSUPERSCRIPT. MDM diffusion model of current task is then finetuned using MDM technique using Real current task Data \ud835\udc9firealsuperscriptsubscript\ud835\udc9f\ud835\udc56real\\mathcal{D}_{i}^{\\text{real}}caligraphic_D start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT real end_POSTSUPERSCRIPT and randomly sampled small batch of \ud835\udc9f0:isynsuperscriptsubscript\ud835\udc9f:0\ud835\udc56syn\\mathcal{D}_{0:i}^{\\text{syn}}caligraphic_D start_POSTSUBSCRIPT 0 : italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT syn end_POSTSUPERSCRIPT. J0:isubscript\ud835\udc3d:0\ud835\udc56J_{0:i}italic_J start_POSTSUBSCRIPT 0 : italic_i end_POSTSUBSCRIPT is subsequently used to obtain \ud835\udc9fiaugsuperscriptsubscript\ud835\udc9f\ud835\udc56aug\\mathcal{D}_{i}^{\\text{aug}}caligraphic_D start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT aug end_POSTSUPERSCRIPT by SSIA. The model trains with MDA on the combined dataset.",
|
| 182 |
+
"url": "http://arxiv.org/html/2403.05016v2/x4.png"
|
| 183 |
+
},
|
| 184 |
+
"4(a)": {
|
| 185 |
+
"figure_path": "2403.05016v2_figure_4(a).png",
|
| 186 |
+
"caption": "(a) 5 tasks, 20 classes/task\nFigure 6: Classification Accuracy of Each Incremental Task on CIFAR100. Our method greatly outperforms all data-free CIL baselines in all incremental settings.",
|
| 187 |
+
"url": "http://arxiv.org/html/2403.05016v2/x5.png"
|
| 188 |
+
},
|
| 189 |
+
"4(b)": {
|
| 190 |
+
"figure_path": "2403.05016v2_figure_4(b).png",
|
| 191 |
+
"caption": "(b) 10 tasks, 10 classes/task\nFigure 6: Classification Accuracy of Each Incremental Task on CIFAR100. Our method greatly outperforms all data-free CIL baselines in all incremental settings.",
|
| 192 |
+
"url": "http://arxiv.org/html/2403.05016v2/x6.png"
|
| 193 |
+
},
|
| 194 |
+
"4(c)": {
|
| 195 |
+
"figure_path": "2403.05016v2_figure_4(c).png",
|
| 196 |
+
"caption": "(c) 20 tasks, 5 classes/task\nFigure 6: Classification Accuracy of Each Incremental Task on CIFAR100. Our method greatly outperforms all data-free CIL baselines in all incremental settings.",
|
| 197 |
+
"url": "http://arxiv.org/html/2403.05016v2/x7.png"
|
| 198 |
+
},
|
| 199 |
+
"5(a)": {
|
| 200 |
+
"figure_path": "2403.05016v2_figure_5(a).png",
|
| 201 |
+
"caption": "(a) 5 tasks, 20 classes/task\nFigure 8: Incremental Accuracy on ImageNet100. Our method greatly outperforms all baseline methods in all incremental settings. Our method achieves more significant improvements in more incremental task settings (e.g. increase N\ud835\udc41Nitalic_N from 5 to 10 or to 20)",
|
| 202 |
+
"url": "http://arxiv.org/html/2403.05016v2/x8.png"
|
| 203 |
+
},
|
| 204 |
+
"5(b)": {
|
| 205 |
+
"figure_path": "2403.05016v2_figure_5(b).png",
|
| 206 |
+
"caption": "(b) 10 tasks, 10 classes/task\nFigure 8: Incremental Accuracy on ImageNet100. Our method greatly outperforms all baseline methods in all incremental settings. Our method achieves more significant improvements in more incremental task settings (e.g. increase N\ud835\udc41Nitalic_N from 5 to 10 or to 20)",
|
| 207 |
+
"url": "http://arxiv.org/html/2403.05016v2/x9.png"
|
| 208 |
+
},
|
| 209 |
+
"5(c)": {
|
| 210 |
+
"figure_path": "2403.05016v2_figure_5(c).png",
|
| 211 |
+
"caption": "(c) 20 tasks, 20 classes/task\nFigure 8: Incremental Accuracy on ImageNet100. Our method greatly outperforms all baseline methods in all incremental settings. Our method achieves more significant improvements in more incremental task settings (e.g. increase N\ud835\udc41Nitalic_N from 5 to 10 or to 20)",
|
| 212 |
+
"url": "http://arxiv.org/html/2403.05016v2/x10.png"
|
| 213 |
+
},
|
| 214 |
+
"6": {
|
| 215 |
+
"figure_path": "2403.05016v2_figure_6.png",
|
| 216 |
+
"caption": "Figure 9: Ablation Study about Stability-Plasticity Balance. Our method with all three components shows a better balance vs. w/o each of the three components.",
|
| 217 |
+
"url": "http://arxiv.org/html/2403.05016v2/x11.png"
|
| 218 |
+
}
|
| 219 |
+
},
|
| 220 |
+
"validation": true,
|
| 221 |
+
"references": [
|
| 222 |
+
{
|
| 223 |
+
"1": {
|
| 224 |
+
"title": "Synthetic data from diffusion models improves imagenet classification.",
|
| 225 |
+
"author": "Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J Fleet.",
|
| 226 |
+
"venue": "arXiv preprint arXiv:2304.08466, 2023.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"2": {
|
| 232 |
+
"title": "Gan memory with no forgetting.",
|
| 233 |
+
"author": "Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, and Lawrence Carin.",
|
| 234 |
+
"venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 16481\u201316494. Curran Associates, Inc., 2020.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"3": {
|
| 240 |
+
"title": "Podnet: Pooled outputs distillation for small-tasks incremental learning.",
|
| 241 |
+
"author": "Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle.",
|
| 242 |
+
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV), 2020.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"4": {
|
| 248 |
+
"title": "An image is worth one word: Personalizing text-to-image generation using textual inversion.",
|
| 249 |
+
"author": "Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or.",
|
| 250 |
+
"venue": "arXiv preprint arXiv:2208.01618, 2022.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"5": {
|
| 256 |
+
"title": "Personalizing text-to-image generation via aesthetic gradients.",
|
| 257 |
+
"author": "Victor Gallego.",
|
| 258 |
+
"venue": "arXiv preprint arXiv:2209.12330, 2022.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"6": {
|
| 264 |
+
"title": "Domain-adversarial training of neural networks.",
|
| 265 |
+
"author": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky.",
|
| 266 |
+
"venue": "J. Mach. Learn. Res., 17(1):2096\u20132030, 2016.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"7": {
|
| 272 |
+
"title": "R-dfcil: Relation-guided representation learning for data-free class incremental learning.",
|
| 273 |
+
"author": "Qiankun Gao, Chen Zhao, Bernard Ghanem, and Jian Zhang.",
|
| 274 |
+
"venue": "In European Conference on Computer Vision, pages 423\u2013439. Springer, 2022.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"8": {
|
| 280 |
+
"title": "Ddgr: continual learning with deep diffusion-based generative replay.",
|
| 281 |
+
"author": "Rui Gao and Weiwei Liu.",
|
| 282 |
+
"venue": "In Proceedings of the 40th International Conference on Machine Learning, ICML\u201923. JMLR.org, 2023.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"9": {
|
| 288 |
+
"title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks.",
|
| 289 |
+
"author": "Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio.",
|
| 290 |
+
"venue": "arXiv preprint arXiv:1312.6211, 2013.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"10": {
|
| 296 |
+
"title": "Deep residual learning for image recognition.",
|
| 297 |
+
"author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
|
| 298 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"11": {
|
| 304 |
+
"title": "Denoising diffusion probabilistic models.",
|
| 305 |
+
"author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.",
|
| 306 |
+
"venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"12": {
|
| 312 |
+
"title": "Learning a unified classifier incrementally via rebalancing.",
|
| 313 |
+
"author": "Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin.",
|
| 314 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"13": {
|
| 320 |
+
"title": "Lora: Low-rank adaptation of large language models.",
|
| 321 |
+
"author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.",
|
| 322 |
+
"venue": "arXiv preprint arXiv:2106.09685, 2021.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"14": {
|
| 328 |
+
"title": "Class-incremental learning using diffusion model for distillation and replay.",
|
| 329 |
+
"author": "Q. Jodelet, X. Liu, Y. Phua, and T. Murata.",
|
| 330 |
+
"venue": "In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 3417\u20133425, Los Alamitos, CA, USA, 2023. IEEE Computer Society.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"15": {
|
| 336 |
+
"title": "Fearnet: Brain-inspired model for incremental learning.",
|
| 337 |
+
"author": "Ronald Kemker and Christopher Kanan.",
|
| 338 |
+
"venue": "arXiv preprint arXiv:1711.10563, 2017.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"16": {
|
| 344 |
+
"title": "Learning multiple layers of features from tiny images.",
|
| 345 |
+
"author": "Alex Krizhevsky, Geoffrey Hinton, et al.",
|
| 346 |
+
"venue": "Technical Report, 2009.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"17": {
|
| 352 |
+
"title": "Learning without forgetting.",
|
| 353 |
+
"author": "Zhizhong Li and Derek Hoiem.",
|
| 354 |
+
"venue": "IEEE transactions on pattern analysis and machine intelligence, 40(12):2935\u20132947, 2017.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"18": {
|
| 360 |
+
"title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps.",
|
| 361 |
+
"author": "Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.",
|
| 362 |
+
"venue": "Advances in Neural Information Processing Systems, 35:5775\u20135787, 2022.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"19": {
|
| 368 |
+
"title": "Instructgie: Towards generalizable image editing.",
|
| 369 |
+
"author": "Zichong Meng, Changdi Yang, Jun Liu, Hao Tang, Pu Zhao, and Yanzhi Wang.",
|
| 370 |
+
"venue": "arXiv preprint arXiv:2403.05018, 2024.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"20": {
|
| 376 |
+
"title": "Improved denoising diffusion probabilistic models.",
|
| 377 |
+
"author": "Alexander Quinn Nichol and Prafulla Dhariwal.",
|
| 378 |
+
"venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8162\u20138171. PMLR, 18\u201324 Jul 2021.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"21": {
|
| 384 |
+
"title": "VAEs meet diffusion models: Efficient and high-fidelity generation.",
|
| 385 |
+
"author": "Kushagra Pandey, Avideep Mukherjee, Piyush Rai, and Abhishek Kumar.",
|
| 386 |
+
"venue": "In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"22": {
|
| 392 |
+
"title": "Fetril: Feature translation for exemplar-free class-incremental learning.",
|
| 393 |
+
"author": "Gr\u00e9goire Petit, Adrian Popescu, Hugo Schindler, David Picard, and Bertrand Delezoide.",
|
| 394 |
+
"venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 3911\u20133920, January 2023.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"23": {
|
| 400 |
+
"title": "Dreamfusion: Text-to-3d using 2d diffusion.",
|
| 401 |
+
"author": "Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall.",
|
| 402 |
+
"venue": "arXiv preprint arXiv:2209.14988, 2022.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"24": {
|
| 408 |
+
"title": "Hierarchical text-conditional image generation with clip latents.",
|
| 409 |
+
"author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
|
| 410 |
+
"venue": "arXiv preprint arXiv:2204.06125, 1(2):3, 2022.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"25": {
|
| 416 |
+
"title": "icarl: Incremental classifier and representation learning.",
|
| 417 |
+
"author": "Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert.",
|
| 418 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"26": {
|
| 424 |
+
"title": "High-resolution image synthesis with latent diffusion models.",
|
| 425 |
+
"author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
|
| 426 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684\u201310695, June 2022.",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"27": {
|
| 432 |
+
"title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation.",
|
| 433 |
+
"author": "Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.",
|
| 434 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22500\u201322510, 2023.",
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"28": {
|
| 440 |
+
"title": "Imagenet large scale visual recognition challenge.",
|
| 441 |
+
"author": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al.",
|
| 442 |
+
"venue": "International Journal of Computer Vision (IJCV), 2015.",
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
},
|
| 446 |
+
{
|
| 447 |
+
"29": {
|
| 448 |
+
"title": "Divide and not forget: Ensemble of selectively trained experts in continual learning.",
|
| 449 |
+
"author": "Grzegorz Rype\u015b\u0107, Sebastian Cygert, Valeriya Khan, Tomasz Trzcinski, Bartosz Micha\u0142 Zieli\u0144ski, and Bart\u0142omiej Twardowski.",
|
| 450 |
+
"venue": "In The Twelfth International Conference on Learning Representations, 2024.",
|
| 451 |
+
"url": null
|
| 452 |
+
}
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"30": {
|
| 456 |
+
"title": "Photorealistic text-to-image diffusion models with deep language understanding.",
|
| 457 |
+
"author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.",
|
| 458 |
+
"venue": "Advances in Neural Information Processing Systems, 35:36479\u201336494, 2022.",
|
| 459 |
+
"url": null
|
| 460 |
+
}
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"31": {
|
| 464 |
+
"title": "Archisound: Audio generation with diffusion.",
|
| 465 |
+
"author": "Flavio Schneider.",
|
| 466 |
+
"venue": "arXiv preprint arXiv:2301.13267, 2023.",
|
| 467 |
+
"url": null
|
| 468 |
+
}
|
| 469 |
+
},
|
| 470 |
+
{
|
| 471 |
+
"32": {
|
| 472 |
+
"title": "Continual learning with deep generative replay.",
|
| 473 |
+
"author": "Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim.",
|
| 474 |
+
"venue": "Advances in neural information processing systems, 30, 2017.",
|
| 475 |
+
"url": null
|
| 476 |
+
}
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"33": {
|
| 480 |
+
"title": "Always be dreaming: A new approach for data-free class-incremental learning.",
|
| 481 |
+
"author": "James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, and Zsolt Kira.",
|
| 482 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.",
|
| 483 |
+
"url": null
|
| 484 |
+
}
|
| 485 |
+
},
|
| 486 |
+
{
|
| 487 |
+
"34": {
|
| 488 |
+
"title": "Deep unsupervised learning using nonequilibrium thermodynamics.",
|
| 489 |
+
"author": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.",
|
| 490 |
+
"venue": "In International conference on machine learning, pages 2256\u20132265. PMLR, 2015.",
|
| 491 |
+
"url": null
|
| 492 |
+
}
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"35": {
|
| 496 |
+
"title": "Denoising diffusion implicit models.",
|
| 497 |
+
"author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.",
|
| 498 |
+
"venue": "arXiv preprint arXiv:2010.02502, 2020.",
|
| 499 |
+
"url": null
|
| 500 |
+
}
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"36": {
|
| 504 |
+
"title": "Generative modeling by estimating gradients of the data distribution.",
|
| 505 |
+
"author": "Yang Song and Stefano Ermon.",
|
| 506 |
+
"venue": "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.",
|
| 507 |
+
"url": null
|
| 508 |
+
}
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"37": {
|
| 512 |
+
"title": "Effective data augmentation with diffusion models.",
|
| 513 |
+
"author": "Brandon Trabucco, Kyle Doherty, Max A Gurinas, and Ruslan Salakhutdinov.",
|
| 514 |
+
"venue": "In The Twelfth International Conference on Learning Representations, 2024.",
|
| 515 |
+
"url": null
|
| 516 |
+
}
|
| 517 |
+
},
|
| 518 |
+
{
|
| 519 |
+
"38": {
|
| 520 |
+
"title": "BEEF: Bi-compatible class-incremental learning via energy-based expansion and fusion.",
|
| 521 |
+
"author": "Fu-Yun Wang, Da-Wei Zhou, Liu Liu, Han-Jia Ye, Yatao Bian, De-Chuan Zhan, and Peilin Zhao.",
|
| 522 |
+
"venue": "In The Eleventh International Conference on Learning Representations, 2023.",
|
| 523 |
+
"url": null
|
| 524 |
+
}
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"39": {
|
| 528 |
+
"title": "Foster: Feature boosting and compression for class-incremental learning.",
|
| 529 |
+
"author": "Fu-Yun Wang, Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan.",
|
| 530 |
+
"venue": "In European conference on computer vision, pages 398\u2013414. Springer, 2022.",
|
| 531 |
+
"url": null
|
| 532 |
+
}
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"40": {
|
| 536 |
+
"title": "In-context learning unlocked for diffusion models.",
|
| 537 |
+
"author": "Zhendong Wang, Yifan Jiang, Yadong Lu, Pengcheng He, Weizhu Chen, Zhangyang Wang, Mingyuan Zhou, et al.",
|
| 538 |
+
"venue": "Advances in Neural Information Processing Systems, 36, 2024.",
|
| 539 |
+
"url": null
|
| 540 |
+
}
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"41": {
|
| 544 |
+
"title": "Dualhsic: Hsic-bottleneck and alignment for continual learning.",
|
| 545 |
+
"author": "Zifeng Wang, Zheng Zhan, Yifan Gong, Yucai Shao, Stratis Ioannidis, Yanzhi Wang, and Jennifer Dy.",
|
| 546 |
+
"venue": "In International Conference on Machine Learning, pages 36578\u201336592. PMLR, 2023.",
|
| 547 |
+
"url": null
|
| 548 |
+
}
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"42": {
|
| 552 |
+
"title": "SparCL: Sparse continual learning on the edge.",
|
| 553 |
+
"author": "Zifeng Wang, Zheng Zhan, Yifan Gong, Geng Yuan, Wei Niu, Tong Jian, Bin Ren, Stratis Ioannidis, Yanzhi Wang, and Jennifer Dy.",
|
| 554 |
+
"venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.",
|
| 555 |
+
"url": null
|
| 556 |
+
}
|
| 557 |
+
},
|
| 558 |
+
{
|
| 559 |
+
"43": {
|
| 560 |
+
"title": "Memory replay gans: Learning to generate new categories without forgetting.",
|
| 561 |
+
"author": "Chenshen Wu, Luis Herranz, Xialei Liu, yaxing wang, Joost van de Weijer, and Bogdan Raducanu.",
|
| 562 |
+
"venue": "In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.",
|
| 563 |
+
"url": null
|
| 564 |
+
}
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"44": {
|
| 568 |
+
"title": "Learning latent representations across multiple data domains using lifelong vaegan.",
|
| 569 |
+
"author": "Fei Ye and Adrian G Bors.",
|
| 570 |
+
"venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XX 16, pages 777\u2013795. Springer, 2020.",
|
| 571 |
+
"url": null
|
| 572 |
+
}
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"45": {
|
| 576 |
+
"title": "Dreaming to distill: Data-free knowledge transfer via deepinversion.",
|
| 577 |
+
"author": "Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz.",
|
| 578 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8715\u20138724, 2020.",
|
| 579 |
+
"url": null
|
| 580 |
+
}
|
| 581 |
+
},
|
| 582 |
+
{
|
| 583 |
+
"46": {
|
| 584 |
+
"title": "Real-fake: Effective training data synthesis through distribution matching.",
|
| 585 |
+
"author": "Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, and Bo Zhao.",
|
| 586 |
+
"venue": "In The Twelfth International Conference on Learning Representations, 2024.",
|
| 587 |
+
"url": null
|
| 588 |
+
}
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"47": {
|
| 592 |
+
"title": "Target: Federated class-continual learning via exemplar-free distillation.",
|
| 593 |
+
"author": "Jie Zhang, Chen Chen, Weiming Zhuang, and Lingjuan Lyu.",
|
| 594 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4782\u20134793, 2023.",
|
| 595 |
+
"url": null
|
| 596 |
+
}
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"48": {
|
| 600 |
+
"title": "Federated generative learning with foundation models.",
|
| 601 |
+
"author": "Jie Zhang, Xiaohua Qi, and Bo Zhao.",
|
| 602 |
+
"venue": "arXiv preprint arXiv:2306.16064, 2023.",
|
| 603 |
+
"url": null
|
| 604 |
+
}
|
| 605 |
+
},
|
| 606 |
+
{
|
| 607 |
+
"49": {
|
| 608 |
+
"title": "Generalized universal domain adaptation with generative flow networks.",
|
| 609 |
+
"author": "Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, and Chao Wu.",
|
| 610 |
+
"venue": "In ACM International Conference on Multimedia (MM) 2023, 2023.",
|
| 611 |
+
"url": null
|
| 612 |
+
}
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"50": {
|
| 616 |
+
"title": "Universal domain adaptation via compressive attention matching.",
|
| 617 |
+
"author": "Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, and Chao Wu.",
|
| 618 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6974\u20136985, 2023.",
|
| 619 |
+
"url": null
|
| 620 |
+
}
|
| 621 |
+
},
|
| 622 |
+
{
|
| 623 |
+
"51": {
|
| 624 |
+
"title": "Bridging the gap: neural collapse inspired prompt tuning for generalization under class imbalance.",
|
| 625 |
+
"author": "Didi Zhu, Yinchuan Li, Min Zhang, Junkun Yuan, Jiashuo Liu, Kun Kuang, and Chao Wu.",
|
| 626 |
+
"venue": "arXiv preprint arXiv:2306.15955, 2023.",
|
| 627 |
+
"url": null
|
| 628 |
+
}
|
| 629 |
+
},
|
| 630 |
+
{
|
| 631 |
+
"52": {
|
| 632 |
+
"title": "Model tailor: Mitigating catastrophic forgetting in multi-modal large language models.",
|
| 633 |
+
"author": "Didi Zhu, Zhongyi Sun, Zexi Li, Tao Shen, Ke Yan, Shouhong Ding, Kun Kuang, and Chao Wu.",
|
| 634 |
+
"venue": "arXiv preprint arXiv:2402.12048, 2024.",
|
| 635 |
+
"url": null
|
| 636 |
+
}
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"53": {
|
| 640 |
+
"title": "Class-incremental learning via dual augmentation.",
|
| 641 |
+
"author": "Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-lin Liu.",
|
| 642 |
+
"venue": "Advances in Neural Information Processing Systems, 34:14306\u201314318, 2021.",
|
| 643 |
+
"url": null
|
| 644 |
+
}
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"54": {
|
| 648 |
+
"title": "Prototype augmentation and self-supervision for incremental learning.",
|
| 649 |
+
"author": "Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu.",
|
| 650 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5871\u20135880, June 2021.",
|
| 651 |
+
"url": null
|
| 652 |
+
}
|
| 653 |
+
},
|
| 654 |
+
{
|
| 655 |
+
"55": {
|
| 656 |
+
"title": "Self-sustaining representation expansion for non-exemplar class-incremental learning.",
|
| 657 |
+
"author": "Kai Zhu, Wei Zhai, Yang Cao, Jiebo Luo, and Zheng-Jun Zha.",
|
| 658 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9296\u20139305, 2022.",
|
| 659 |
+
"url": null
|
| 660 |
+
}
|
| 661 |
+
}
|
| 662 |
+
],
|
| 663 |
+
"url": "http://arxiv.org/html/2403.05016v2"
|
| 664 |
+
}
|
20240721/2403.05018v2.json
ADDED
|
@@ -0,0 +1,159 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "InstructGIE: Towards Generalizable Image Editing",
|
| 3 |
+
"abstract": "Recent advances in image editing have been driven by the development of denoising diffusion models, marking a significant leap forward in this field. Despite these advances, the generalization capabilities of recent image editing approaches remain constrained. In response to this challenge, our study introduces a novel image editing framework with enhanced generalization robustness by boosting in-context learning capability and unifying language instruction.\nThis framework incorporates a module specifically optimized for image editing tasks, leveraging the VMamba block and an editing-shift matching strategy to augment in-context learning. Furthermore, we unveil a selective area-matching technique specifically engineered to address and rectify corrupted details in generated images, such as human facial features, to further improve the quality. Another key innovation of our approach is the integration of a language unification technique, which aligns language embeddings with editing semantics to elevate the quality of image editing.\nMoreover, we compile the first dataset for image editing with visual prompts and editing instructions that could be used to enhance in-context capability.\nTrained on this dataset, our methodology not only achieves superior synthesis quality for trained tasks, but also demonstrates robust generalization capability across unseen vision tasks through tailored prompts. Our project page is available at https://cr8br0ze.github.io/InstructGIE.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "As a crucial task in computer vision, image editing has witnessed significant improvements enhanced with the increasingly popular denoising stable diffusion techniques in recent years[38 ###reference_b38###, 40 ###reference_b40###, 41 ###reference_b41###, 25 ###reference_b25###, 22 ###reference_b22###]. Given a set of text or image prompts as generation constraints or instructions, diffusion-based image editing can follow the instructions and synthesize an edited image. However, since the model does not have the capability to accurately model all possible samples in the conditional distribution space [17 ###reference_b17###], if specific instructions are not included in the training dataset, current diffusion-based image editing methods can hardly generate satisfactory results. Thus, editing performance largely depends on the training dataset without superior generalization capabilities.\nOn the other hand, large language models (LLMs) have proven extraordinary abilities to learn from contexts, referred to as in-context learning, which allows LLMs to perform unseen tasks by providing a combination of input-output examples and a query input. Inspired by the potential to enhance the generalization of the model with LLMs, [38 ###reference_b38###, 41 ###reference_b41###] explore in-context learning for computer vision tasks, allowing them to perform unseen tasks with novel vision-language prompt designs. However, these methods are not tailored for image editing applications, leading to unsatisfying synthetic qualities with inaccurate or incorrect output and lack of detail.\nTo improve the generalization of image editing with improved synthetic image quality, it is crucial to effectively understand the text & image prompts and specifically control image editing details, which is challenging in the current literature.\nIn this work, we propose InstructGIE, an image editing framework with enhanced generalizability. We improve image editing performance from both visual and text aspects. (i) For the visual information, we incorporate a VMamba-based module to specifically enhance the image editing outputs. As VMamba[26 ###reference_b26###] has proven its better performance in capturing in-context information from inputs with larger receptive fields[26 ###reference_b26###], we leverage VMamba and propose an editing-shift matching strategy to augment in-context learning. Furthermore, since current image editing works do not perform well in generating correct features with accurate details, we unveil a selective area-matching technique specifically engineered to address and rectify corrupted details in generated images, such as human facial features, to further improve the quality.\n(ii) Another key innovation of our approach is the integration of a language unification technique, which aligns language embeddings with editing semantics to elevate the quality of image editing. Our framework not only achieves superior in-context generation for trained tasks but also demonstrates robust generalization across unseen vision tasks. Moreover, we compile a publicly available image editing dataset with plenty of visual prompts and editing instructions for better generalization evaluation of image editing. 
Our contributions are summarized as follows:\nWe propose an image editing framework, including in-context learning enhancement and language unification strategies, specifically designed to enhance generalization ability from both visual and text domains.\nWe compile the first dataset for image editing with visual prompts and editing instructions that could be used to enhance in-context capability.\nWe conduct extensive experiments and achieve great generalization ability in the multiple unseen image editing task, both quantitatively and qualitatively."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Works",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Denoising Stable Diffusion Based Image Editing",
|
| 21 |
+
"text": "Denoising Stable Diffusion[16 ###reference_b16###, 35 ###reference_b35###, 36 ###reference_b36###] based image editing could follow guidance from text or image prompts. With the foundation of text-guided models offering rich generative capabilities, there has been a surge in research aimed at adapting these models for image manipulation tasks from textual descriptions. To steer the image editing process in the desired direction, the use of models like CLIP to fine-tune diffusion models has become a common practice. Although these methods[3 ###reference_b3###, 41 ###reference_b41###, 22 ###reference_b22###] have shown impressive results, they often involve costly fine-tuning processes. Recent innovations[14 ###reference_b14###] have introduced techniques that inject cross-attention into the models to more effectively edit specific semantic areas within the spatial feature maps. Further advancements[24 ###reference_b24###] have enhanced these techniques by adding semantic loss or applying attention loss to refine the integration of plugged features, improving the precision and quality of the editing outcomes. [38 ###reference_b38###] proposes a framework that could learn instructions from visual image pairs for more accurate editing and firstly formulate this task as an image inpainting problem."
|
| 22 |
+
},
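To make the cross-attention injection idea above concrete, the following is a minimal sketch in the style of Prompt-to-Prompt [14], assuming both prompts are padded to the same token length; the function name and the timestep threshold tau are illustrative assumptions, not the cited paper's API.

import torch

def injected_cross_attention(q, k_src, k_edit, v_edit, t, tau):
    # Attention with the source prompt keeps the original spatial layout;
    # attention with the edited prompt realizes the requested change.
    d = q.shape[-1]
    attn_src = torch.softmax(q @ k_src.transpose(-2, -1) / d ** 0.5, dim=-1)
    attn_edit = torch.softmax(q @ k_edit.transpose(-2, -1) / d ** 0.5, dim=-1)
    # Early (high-noise) timesteps reuse the source-prompt maps so the layout
    # is preserved; later timesteps switch to the edited prompt's attention.
    attn = attn_src if t > tau else attn_edit
    return attn @ v_edit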
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Generalization Capability in Visual Tasks",
|
| 27 |
+
"text": "In-context learning is widely applied in the field of natural language processing (NLP), enabling models to adapt to new tasks such as translation, question answering, and complex reasoning. NLP models utilize in-context examples, comprising text and corresponding labels, to tackle tasks they haven\u2019t seen before. However, applying in-context learning to the visual domain introduces more challenges and remains less explored.\nA significant hurdle is the nature of fixed-size input requirements for vision models, as opposed to variable-length text inputs that can be managed by language models. Vision models generally struggle with processing inputs of varying sizes, making it impractical to process multiple image prompts in one-shot for global understanding.\nMoreover, in intricate visual understanding, specific instructions are often implied from a limited set of image examples rather than explicitly stated, which poses additional difficulties for vision models in identifying and understanding high-level visual relationships.\nRecent strides in applying masked image modeling have marked a step forward in improving in-context learning for vision models. The method proposed by [41 ###reference_b41###], employing a masked autoencoder-based technique, predicts a missing image within a two-by-two grid, using two images as in-context examples and another as the query. This concept was later expanded by [38 ###reference_b38###] with a multitask framework. Despite their progress, such inpainting methods are limited by the necessity of a fixed number of in-context examples and increased memory demands. Painter, highlighted in [40 ###reference_b40###], exemplifies an inpainting approach tailored for versatility across various vision tasks.\nIn contrast, inspired by ControlNet [44 ###reference_b44###], [41 ###reference_b41###] refines the framework by adding an additional pair of example images and employing a multitask supervised finetuning method.\nPrompt diffusion excels in visual in-context learning.\nHowever, it faces certain limitations or challenges in its practical applications."
|
| 28 |
+
},
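As a concrete illustration of the two-by-two grid formulation described above, here is a small sketch of how an inpainting-style visual prompt can be assembled; the array layout is an assumption for illustration, not the exact format used by the cited methods.

import numpy as np

def make_visual_prompt(example_src, example_tgt, query_src):
    # Tile (example input, example output, query input) into a 2x2 grid;
    # the bottom-right cell is masked out and left for the model to inpaint.
    # All inputs are assumed to be HxWx3 uint8 arrays of the same size.
    h, w, _ = example_src.shape
    grid = np.zeros((2 * h, 2 * w, 3), dtype=np.uint8)
    grid[:h, :w] = example_src   # top-left: in-context example input
    grid[:h, w:] = example_tgt   # top-right: in-context example output
    grid[h:, :w] = query_src     # bottom-left: query input
    mask = np.zeros((2 * h, 2 * w), dtype=bool)
    mask[h:, w:] = True          # bottom-right: cell to be predicted
    return grid, mask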
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Dataset for Diffusion-based Image Editing",
|
| 33 |
+
"text": "Currently, various types of datasets are used for training in diffusion-based image editing. There are datasets that concentrate on specific domains like CelebA [27 ###reference_b27###] and FFHQ [19 ###reference_b19###] for human face image manipulation, AFHQ [7 ###reference_b7###] for animal face image editing, LSUN [42 ###reference_b42###] for object modification, and WikiArt [29 ###reference_b29###] for style transfer. In-the-wild video datasets could also be leveraged to train image editing tasks. The Scannet dataset [9 ###reference_b9###] encompasses a vast array of more than 1,500 indoor scenes from various settings, such as apartments, offices, and hotels, providing extensive annotations. The LRW dataset [8 ###reference_b8###], tailored for lip reading tasks, includes more than 1000 video utterances of 500 distinct words. The UBC-Fashion dataset [43 ###reference_b43###] features 600 videos spanning various clothing categories, with 500 videos allocated for training and 100 for testing, guaranteeing no repetition of individuals in the training set. The DAVIS dataset[43 ###reference_b43###] (Densely Annotated VIdeo Segmentation), a widely recognized benchmark for video object segmentation, contains 150 videos in total. There are also image editing works proposing to generate datasets with editing instructions. InstructPix2pix [3 ###reference_b3###] collects over 450,000 training image pairs. For each pair, given an image with its caption, it first uses a finetuned GPT-3 [4 ###reference_b4###] to generate an editing instruction and an edited image caption. Then it employs Stable Diffusion and the Prompt-to-Prompt algorithm [14 ###reference_b14###] to generate edited image following the caption.\nHowever, currently there are no datasets with multiple image pairs under one editing instruction, which is crucial to enhance the generalization ability of image editing."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Preliminary",
|
| 39 |
+
"text": "Recent advances in generative models have been significantly driven by the emergence of diffusion models, which have set new benchmarks in image creation[10 ###reference_b10###, 21 ###reference_b21###, 37 ###reference_b37###]. These models have found applications across a broad spectrum of areas[5 ###reference_b5###, 45 ###reference_b45###, 39 ###reference_b39###, 2 ###reference_b2###, 20 ###reference_b20###], demonstrating their versatility and effectiveness. The fundamental concept behind diffusion models involves starting with an image that is initially just random noise and progressively refining this image step by step until it becomes a high-quality, realistic image . This refinement process involves generating intermediate samples (for ), where each sample is defined as:\nwhere the parameter sets the pace of the diffusion process, ranging from , and represents the added noise. The model refines the image by applying a neural network to each sample , followed by the addition of Gaussian noise to produce the next sample . This neural network is optimized to achieve a denoising goal, striving for , resulting in a generative process that closely mimics the desired image distribution.\nExpanding this framework to conditional generative modeling, the process involves conditioning the neural network on an additional input , enabling the generation of images from a distribution conditioned on . This conditional input could be anything from a low-resolution image, a category label, or a descriptive text sequence. Leveraging the advancements in LLMs [33 ###reference_b33###] and hybrid vision-language models [31 ###reference_b31###], text-to-image diffusion models are developed. These models allow for the creation of detailed, high-resolution images from mere text descriptions, starting with a low-resolution image generated through the diffusion process, which is subsequently refined into a high-resolution image using additional models."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "The Proposed Method",
|
| 45 |
+
"text": "###figure_1### We present our framework pipeline in Fig. 2 ###reference_###. For efficient training and better controllability, we adopt a line of techniques with popular architectures such as ControlNet [44 ###reference_b44###] and Stable Diffusion [34 ###reference_b34###] to design a generalizable image editing tool with accurate high-quality outputs. Specifically, we introduce enhanced in-context learning both at the architecture level and training level\nto improve the image quality. Furthermore, language instruction unification is adopted to maximize the generalization ability for unseen editing tasks. Moreover, selective area matching is proposed to further improve the output quality with full details."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "Enhanced In-Context Learning",
|
| 51 |
+
"text": "Visual prompting based on inpainting is an effective visual in-context learning method in various computer vision tasks [1 ###reference_b1###, 40 ###reference_b40###], which is applied in image editing tasks [38 ###reference_b38###] recently. However, the methods perform poorly in quality when dealing with unseen image manipulation tasks. Therefore, we propose the enhanced in-context learning specifically tailored for generalizable image editing.\n###figure_2###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1.1",
|
| 55 |
+
"parent_section_id": "4.1",
|
| 56 |
+
"section_name": "4.1.1 Reforming Conditioned Latent Diffusion Model.",
|
| 57 |
+
"text": "To improve the generalization ability of image editing, it is crucial for the framework to explicitly capture low-level visual editing contexts. Current diffusion-based image editing methods [38 ###reference_b38###, 30 ###reference_b30###] that involve visual prompting either adopt ConvNet [23 ###reference_b23###] or ViT [11 ###reference_b11###] as the vision encoder for visual prompts. However, these methods fail to generalize well as they are not able to capture enough visual editing contexts (see Fig. 3 ###reference_###). To address this, we formulate the visual prompted condition as a single image = { Grey}, as shown in Fig 2 ###reference_###, with a global effective receptive field (ERF).\nMoreover, we further propose to reform the conditioned latent diffusion model. Inspired by the recent visual state space model VMamba [26 ###reference_b26###] which exhibits a better global ERF and also emphasizes shifting boundaries of input\u2019s four quadrants, we propose to adopt a vision encoder based on Zero-VMamba to fit our structure. Specifically, the vision encoder comprehends the visual prompted condition in latent space as follows,\nwhere is the processed embedding of the visual prompted condition, and is the model parameters initialized to zeros.\nTo further improve the performance, after each ControlNet trainable copied modules with parameters , we propose to link and inject the processed visual prompted condition information to the frozen Stable Diffusion model with parameters through zero-VMamba layer .\nWe use two instances of VMamba with parameters and \nrespectively. The complete model then computes the following,\nwhere is the output of our conditioned diffusion model block.\nOur conditioned latent diffusion model can process all four quadrants in our visual prompted conditions with a global receptive field, while it does not generate random noises during initial training steps."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1.2",
|
| 61 |
+
"parent_section_id": "4.1",
|
| 62 |
+
"section_name": "4.1.2 Editing-Shift Matching.",
|
| 63 |
+
"text": "Besides the architecture innovation, we incorporate an editing-shift-matching technique to enhance in-context learning ability in image editing with more accurate detailed outputs.\nIn specific, for each training ground truth = { }, we calculate a implicit editing shift value using CLIP [32 ###reference_b32###] image embedding:\nDuring the training process, after predicting the noise, we use it to reverse the noised input and obtain a pseudo output image = { }. Our framework then calculates the editing transfer value of the pseudo output image and deduces a editing shift loss to optimize during our training via the cosine similarity of the two design transfer values:\nThrough editing-shift matching, our model can better comprehend how editing should be done within a visual context level through an implicit editing shift value. Furthermore, this implicit editing shift value can further guide the sampling process creating a controllable editing."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Language Instruction Unification",
|
| 69 |
+
"text": "Previous works that utilize visual prompting in image editing tend to focus more on visual instructions and believe language instructions can be vague and unstable [38 ###reference_b38###].\nWe believe that the potential of text prompts is not fully explored in image editing.\nLanguage instructions have significant impacts on diffusion model outputs. Language instructions with the same meaning can still result in entirely different outputs due to different processed language embeddings.\nTo improve image editing and explore language instructions, we propose a novel approach, language instruction unification.\nDuring the training process, for each batch of training data, we randomly sample 50% of training data, collect their language editing instructions , and process them through a frozen lightweight LLM, Open Llama 3b V2 Quant 4 [13 ###reference_b13###] denoted as . The LLM is fixed prompted with a fixed random seed to uniformly reformulate the language instruction better for machine-level understanding. The LLM will output a unified language editing instruction .\nWe then augment the training data with unified language editing instructions.\nDuring the inference, each language editing instruction is passed through the frozen LLM for language instruction unification and then sent to our conditioned diffusion model.\nBy adopting language instruction unification for training data augmentation during the training, our conditioned diffusion model can learn diverse non-uniformed editing instructions to build up the model\u2019s knowledge distribution with unified language prompts.\nAdopting language instruction unification during inference aligns with the training, therefore, greatly minimizing the possibility of diverse quality in edited outputs and maximizing the ability to generalize to unseen editing tasks."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.3",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Selective Area Matching",
|
| 75 |
+
"text": "Diffusion-based image editing models usually suffer from low quality in specific details and this bottleneck appears to be more crucial in generalizable image editing methods [38 ###reference_b38###, 30 ###reference_b30###]. The details of human and animal images are easily distorted in these methods.\nA naive solution might be utilizing negative prompts [34 ###reference_b34###] like \u2018do not draw hands with 6 fingers\u2019 in general text-to-image tasks. However, in image editing, it is challenging to apply negative prompts. Users typically can not foresee the specific details after-edited outputs, therefore they are not able to construct appropriate negative prompts. Besides, negative prompts may limit the artistic potential of image editing models.\nTo properly address this issue for generalizable image editings, we propose an optimization method, namely selective area matching, that targets the difference in the chosen detailed editing area between the original training ground truth and the reversed pseudo output .\nIn particular, during the training process, we incorporate a frozen Mask2Former model[6 ###reference_b6###] to obtain panoptic segmented training image information including segmented masks and class information .\nAfter that, our framework processes the class information using the same lightweight LLM described in Sec. 4.2 ###reference_### to filter out pre-defined classes including living creatures and humans requiring special attention for addressing the details.\nBased on selected class labels, the framework then deduces a segmented binary mask for the selected editing area.\nDuring the training process, our framework calculates and optimizes the selective-area matching loss by\nwhere as the total pixel number in the image.\nWith selective area matching, image editing does not need negative prompts to deal with distorted details in images, which can make the most of the model\u2019s artistic editing capacity to generate high-quality outputs with great details. It is only incorporated during training, which does not change the inference, greatly saving inference efforts compared with negative prompts."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Visual Prompt Dataset For Image Editing",
|
| 81 |
+
"text": "###figure_3### Traditional image editing datasets only contain editing image pairs with a small amount of similar editing instructions. To the best of our knowledge, there is no open-sourced image editing dataset that is explicitly designed for image editing with visual prompting, which utilizes multiple different image pairs for each editing instruction to provide a general demonstration in various cases.\nTherefore, we introduce and open source a new dataset that is designed specifically for image editing tasks utilizing visual prompts. Our dataset generation pipeline is shown in Fig. 4 ###reference_### with 2 phases: Data Generation and Data Processing. In Data Generation phase, we first fine-tune GPT-3 [4 ###reference_b4###] for 2 epochs with 700 groups of human-written edits, each consisting of 5 different pairs of input and edited captions with one editing instruction. Then as shown in step 1, we generate around 3500 groups of editing prompts using the fine-tuned GPT-3 model. In each group, there is one instruction and five pairs of caption and edited caption . In step 2, similar to InstructPix2Pix, we then also adopt Prompt-to-Prompt for image generation. For each input and edited caption pair, we generated 50 times each with random noise and followed InstructPix2Pix to keep the best one image pair using CLIP-based metric. In addition, we also make sure for each editing instruction, there are at least two pairs of images. In step 3, we generate more image pair sets using in the same way. With filtering, we obtained around 12,000 images with around 2,000 editing instructions. In data preparing, we randomly choose 2 pairs of images under the same editing instruction and concatenate them for training."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Experiments",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6.1",
|
| 91 |
+
"parent_section_id": "6",
|
| 92 |
+
"section_name": "Experimental Settings",
|
| 93 |
+
"text": "###figure_4### Datasets.\nTo fairly compare our methods with baseline methods, we conduct both qualitative and quantitative experiments on our proposed synthetic dataset that involves over 12,000 filtered images and 2,000 filtered editing instructions. All single input images have a resolution of and are resized to for training and testing purposes.\nImplementation Details.\nIn our approach, we split the dataset with 80% for training and 20% for testing.\nAs demonstrated in Fig. 2 ###reference_###, two image pairs are concatenated into one image with the same editing instructions as and mask the fourth quadrant with a grey color as .\nWe prepare the in-domain test dataset in the same format. For out-of-domain testings, we ensure that both the visual instruction pairs and the text instructions are not used during training, to best simulate how models perform in real-life image editing generalization scenarios.\nFor baselines, we use their original configurations to train their model. In our method, we only fine-tune the additional ControlNet for 5000 steps with a batch size of 1024 and a learning rate at . During\ntraining, we set the classifier-free scale the same as the original ControlNet. And we randomly drop 15% of language or visual editing instructions to further enhance our model\u2019s generalization ability. Our implementation utilizes PyTorch and is trained on 4 Tesla A100-40G GPUs with AdamW optimizer.\nComparison Methods.\nTo evaluate the effectiveness of our work, we compare with other state-of-the-art image editing frameworks, including SDEdit[28 ###reference_b28###], Instruct-Pix2pix[3 ###reference_b3###] and PromptDiffusion[41 ###reference_b41###]. We adopt two quantitative metrics Fr\u2019echet Inception Distance (FID) and CLIP directional Similarity (CLIP DirSim) proposed by Gal et al. [12 ###reference_b12###]. We utilize the FID score to quantitatively represent the image quality of generated editing outputs, and CLIP DirSim to evaluate how well the models follow editing instructions to produce the output."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.2",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "State-of-the-Art Comparisons",
|
| 99 |
+
"text": "Qualitative Evaluation.\nIn Fig. 5 ###reference_###, we present our qualitative results in the testing set (in domain). The comparison shows that our method surpass previous baseline methods. Our method understands and follows both visual and language editing instructions better, and produces a far more detailed manipulated output especially in human figures.\n###figure_5### In Fig. 6 ###reference_###, we present our qualitative results tested in out-of-domain settings. We include five editing instructions that are considered extremely hard for diffusion-based image editing model [18 ###reference_b18###], including object add, object remove, structure location change, structure action change, and object size change. It is important to note that due to how we generate our training dataset utilizing Prompt-to-Prompt [15 ###reference_b15###], editing images pairs with these types of editing instructions is not feasible to generate our training data. This generalization comparison shows that our method excels baseline methods by a significant margin. Our method shows a great capability to carry out well-detailed quality outputs following these hard editing instructions in diffusion-based image editing models, while other baseline methods all fail to understand the editing instructions well or perform manipulations close to the editing instructions.\n###figure_6### Quantitative Evaluation.\nIn quantitative evaluation, we score 7.57 in FID, better than SDEdit (E), InstructPix2Pix and PromptDiffusion which scores 21.67, 17.87 and 13.75.\nWe achieve 0.27 in CLIP DirSim, better than baselines with 0.11/0.17/0.21 of CLIP DirSim scores. These quantitative findings show that our method generates higher-quality images with better detailed qualities and also exactly follows both language and visual editing instructions.\nAblation study.\nWe conduct an ablation study on each of the four components of our proposed method. Namely, the Reformed Conditioned Latent Diffusion Model (RCLDM), Editing Shift Matching (ESM), Language Instruction Unification (LIU) and Selective Area Matching (SAM).\nWe present the qualitative results in Fig. 7 ###reference_###. From the qualitative results, we can see that without RCLDM and ESM, the model understands the visual editing instructions much weaker, especially in out-of-domain editing. Without LIU, for two language editing instructions with the same meaning, the model produces two output edited images in different detail and quality. This difference in quality tends to increase in out-of-domain settings. Without SAM, the details of the human face are distorted making the model more vulnerable to producing outputs with worse detailed qualities.\nWe also include the qualitative ablation results on the testing dataset in Tab. 2 ###reference_###. From observation, our method performs the best when incorporating all four components. Without SAM or LIU, the FID score increases, meaning those modules enhance the detail quality of the output generated. Without CLDM or ESM, the CLIP DirSim score decreases, showing that those two modules contribute to a better understanding in both language and visual level."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "7",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusion",
|
| 105 |
+
"text": "In this work, we propose InstructGIE, an image editing framework with enhanced generalization ability, improving performance in both visual and text aspects. We incorporate a VMamba-based module to enhance visual outputs and introduce an editing-shift matching strategy to augment in-context learning. Our selective area-matching technique addresses and rectifies corrupted details, while a language unification technique aligns language embeddings with editing semantics. Additionally, we compile a publicly available dataset for better generalization evaluation. Extensive experiments demonstrate our framework\u2019s superior in-context generation performance and robust generalization capability across unseen vision tasks, both quantative and qualitively."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T1.4.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S6.T1.5.2\" style=\"font-size:90%;\">Quantitative results comparison between our method and baseline methods. Quantitative results shows our method excels the baseline methods in both FID and CLIP directional Similarity score in a great margin.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.2.3.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T1.2.3.1.1\" style=\"padding:1.25pt 8.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T1.2.3.1.2\" style=\"padding:1.25pt 8.5pt;\">SDEdit (E)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T1.2.3.1.3\" style=\"padding:1.25pt 8.5pt;\">InstructPix2Pix</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T1.2.3.1.4\" style=\"padding:1.25pt 8.5pt;\">PromptDiffusion</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.2.3.1.5\" style=\"padding:1.25pt 8.5pt;\">Ours</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.1\" style=\"padding:1.25pt 8.5pt;\">FID \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.2\" style=\"padding:1.25pt 8.5pt;\">21.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.3\" style=\"padding:1.25pt 8.5pt;\">17.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.4\" style=\"padding:1.25pt 8.5pt;\">13.75</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S6.T1.1.1.5\" style=\"padding:1.25pt 8.5pt;\">7.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.1\" style=\"padding:1.25pt 8.5pt;\">CLIP DirSim \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2\" style=\"padding:1.25pt 8.5pt;\">0.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.3\" style=\"padding:1.25pt 8.5pt;\">0.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.4\" style=\"padding:1.25pt 8.5pt;\">0.21</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S6.T1.2.2.5\" style=\"padding:1.25pt 8.5pt;\">0.27</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 112 |
+
"capture": "Table 1: Quantitative results comparison between our method and baseline methods. Quantitative results shows our method excels the baseline methods in both FID and CLIP directional Similarity score in a great margin."
|
| 113 |
+
},
|
| 114 |
+
"2": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T2.4.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S6.T2.5.2\" style=\"font-size:90%;\">Abalation Study results of comparison between our method with all components and without the Reformed Conditioned Latent Diffusion Model (RCLDM), Editing Shift Matching (ESM), Language Instruction Unification (LIU) and Selective Area Matching (SAM). The ablation study is conducted on the entire test dataset. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T2.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T2.2.3.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T2.2.3.1.1\" style=\"padding:1.25pt 8.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T2.2.3.1.2\" style=\"padding:1.25pt 8.0pt;\">w/o RCLDM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T2.2.3.1.3\" style=\"padding:1.25pt 8.0pt;\">w/o ESM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T2.2.3.1.4\" style=\"padding:1.25pt 8.0pt;\">w/o LIU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T2.2.3.1.5\" style=\"padding:1.25pt 8.0pt;\">w/o SAM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T2.2.3.1.6\" style=\"padding:1.25pt 8.0pt;\">Ours</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.1\" style=\"padding:1.25pt 8.0pt;\">FID \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.2\" style=\"padding:1.25pt 8.0pt;\">10.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.3\" style=\"padding:1.25pt 8.0pt;\">9.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.4\" style=\"padding:1.25pt 8.0pt;\">10.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.5\" style=\"padding:1.25pt 8.0pt;\">11.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.1.1.6\" style=\"padding:1.25pt 8.0pt;\">7.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.1\" style=\"padding:1.25pt 8.0pt;\">CLIP DirSim \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.2\" style=\"padding:1.25pt 8.0pt;\">0.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.3\" style=\"padding:1.25pt 8.0pt;\">0.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.4\" style=\"padding:1.25pt 8.0pt;\">0.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.5\" style=\"padding:1.25pt 8.0pt;\">0.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.2.2.6\" style=\"padding:1.25pt 8.0pt;\">0.27</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 116 |
+
"capture": "Table 2: Abalation Study results of comparison between our method with all components and without the Reformed Conditioned Latent Diffusion Model (RCLDM), Editing Shift Matching (ESM), Language Instruction Unification (LIU) and Selective Area Matching (SAM). The ablation study is conducted on the entire test dataset. "
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
"image_paths": {
|
| 120 |
+
"1": {
|
| 121 |
+
"figure_path": "2403.05018v2_figure_1.png",
|
| 122 |
+
"caption": "Figure 1: Demo results of the proposed InstructGIE framework on various image manipulation tasks to both humans and scenes. By our proposed method, our model can generalize to generate the desired output with great detail qualities.",
|
| 123 |
+
"url": "http://arxiv.org/html/2403.05018v2/x1.png"
|
| 124 |
+
},
|
| 125 |
+
"2": {
|
| 126 |
+
"figure_path": "2403.05018v2_figure_2.png",
|
| 127 |
+
"caption": "Figure 2: Overall architecture of InstructGIE. The lower pipeline is for both training and inference processes where the model obtains unified editing instructions outputted by Instruction Unification Module \ud835\udcb0\ud835\udcb0\\mathcal{U}caligraphic_U and combines with visual prompted input ImgVPconsuperscriptImgVPcon\\textbf{Img}^{\\text{VPcon}}Img start_POSTSUPERSCRIPT VPcon end_POSTSUPERSCRIPT to pass through Zero-VMamba integrated Stable Diffusion model with ControlNet for output image. The upper pipeline is for training only which compares output image and training ground truth ImgtrainsuperscriptImgtrain\\textbf{Img}^{\\text{train}}Img start_POSTSUPERSCRIPT train end_POSTSUPERSCRIPT and computes editing shift loss \u2112e\u2062ssubscript\u2112\ud835\udc52\ud835\udc60\\mathcal{L}_{es}caligraphic_L start_POSTSUBSCRIPT italic_e italic_s end_POSTSUBSCRIPT with Editing Shift Module and selective area matching loss \u2112s\u2062a\u2062msubscript\u2112\ud835\udc60\ud835\udc4e\ud835\udc5a\\mathcal{L}_{sam}caligraphic_L start_POSTSUBSCRIPT italic_s italic_a italic_m end_POSTSUBSCRIPT with Selective Area Matching Module.",
|
| 128 |
+
"url": "http://arxiv.org/html/2403.05018v2/x2.png"
|
| 129 |
+
},
|
| 130 |
+
"3": {
|
| 131 |
+
"figure_path": "2403.05018v2_figure_3.png",
|
| 132 |
+
"caption": "Figure 3: Effective Reception Field (ERF) of ConvNet, ViT, VMamba based model architectures.",
|
| 133 |
+
"url": "http://arxiv.org/html/2403.05018v2/x3.png"
|
| 134 |
+
},
|
| 135 |
+
"4": {
|
| 136 |
+
"figure_path": "2403.05018v2_figure_4.png",
|
| 137 |
+
"caption": "Figure 4: Dataset generation process Our dataset generation consists of two phases. Data Generation is to generate sets of image pairs under one editing caption. Data Processing is randomly pick image pairs under the same editing instruction and concatenate them together as one input for training.",
|
| 138 |
+
"url": "http://arxiv.org/html/2403.05018v2/x4.png"
|
| 139 |
+
},
|
| 140 |
+
"5": {
|
| 141 |
+
"figure_path": "2403.05018v2_figure_5.png",
|
| 142 |
+
"caption": "Figure 5: Qualitative Comparison on our Test Dataset. We conducted experiments on various scenarios, including human, architecture and landscape.",
|
| 143 |
+
"url": "http://arxiv.org/html/2403.05018v2/x5.png"
|
| 144 |
+
},
|
| 145 |
+
"6": {
|
| 146 |
+
"figure_path": "2403.05018v2_figure_6.png",
|
| 147 |
+
"caption": "Figure 6: Qualitative Comparison on Out-of-Domain Images. We conducted experiments on images and instruct that are not in training dataset.",
|
| 148 |
+
"url": "http://arxiv.org/html/2403.05018v2/x6.png"
|
| 149 |
+
},
|
| 150 |
+
"7": {
|
| 151 |
+
"figure_path": "2403.05018v2_figure_7.png",
|
| 152 |
+
"caption": "Figure 7: Ablation Study Results for both in-domain (first two rows), and out-of-domain first two rows) image manipulations",
|
| 153 |
+
"url": "http://arxiv.org/html/2403.05018v2/x7.png"
|
| 154 |
+
}
|
| 155 |
+
},
|
| 156 |
+
"validation": true,
|
| 157 |
+
"references": [],
|
| 158 |
+
"url": "http://arxiv.org/html/2403.05018v2"
|
| 159 |
+
}
|
20240721/2403.08495v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240721/2403.11437v3.json
ADDED
|
@@ -0,0 +1,144 @@
| 1 |
+
{
|
| 2 |
+
"title": "Formalization of Complexity Analysis of the First-order Algorithms for Convex Optimization",
|
| 3 |
+
"abstract": "The convergence rate of various first-order optimization algorithms is a pivotal concern within the\nnumerical optimization community, as it directly reflects the efficiency of these algorithms across\ndifferent optimization problems. Our goal is to make a significant step forward in the formal mathematical representation\nof optimization techniques using the Lean4 theorem prover. We first formalize the gradient for smooth functions and the subgradient\nfor convex functions on a Hilbert space, laying the groundwork for the accurate formalization\nof algorithmic structures. Then, we extend our contribution by proving several properties\nof differentiable convex functions that have not yet been formalized in Mathlib. Finally, a comprehensive formalization of these algorithms is presented. These developments\nare not only noteworthy on their own but also serve as essential precursors to the formalization of\na broader spectrum of numerical algorithms and their applications in machine learning as well as many other areas.111Our implementation of formalization of complexity analysis of the first-order algorithms for convex optimization can be found in https://github.com/optsuite/optlib",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Within the expansive domain of optimization and operational research, the analysis and application of first-order optimization algorithms are fundamental, crucial for addressing diverse challenges in fields such as machine learning [1 ###reference_b1###], data science, and engineering.\nNevertheless, the theoretical foundations that ensure their efficacy, especially from the perspective of convergence analysis, are complex and require rigorous formalization. This paper is dedicated to navigating the complexities of formalizing the analysis of first-order optimization algorithms.\nThese algorithms are not merely tools for immediate problem-solving but also form the groundwork for developing more sophisticated optimization techniques.\nTo the best of our knowledge, few works relate to the formalization of convex optimization and numerical algorithms. However, the formalization of analysis has been extensively pursued by many researchers [2 ###reference_b2###]\nusing various formalization languages, including Coq, Isabelle [3 ###reference_b3###] and Lean [4 ###reference_b4###]. For instance, Kudryashov formalized the divergence theorem and the Cauchy integral formula in Lean [5 ###reference_b5###]. Gou\u00ebzel extensively studied the formalization of the change of variables formula for integrals [6 ###reference_b6###]. The application of formal methods in machine learning was explored by Tassarotti [7 ###reference_b7###]. In the area of convex analysis and optimization, Grechuk presented a formalization of lower semicontinuous functions in Isabelle, including some related properties [8 ###reference_b8###]. Allamigeon provided a formalization of convex polyhedra based on the simplex method in Coq [9 ###reference_b9###]. Verified reductions for optimization problems have also been explored [10 ###reference_b10###].\nIn this paper, building on Lean4 language and the corresponding mathlib 4 library [11 ###reference_b11###], we formalize the complexity analysis of first-order algorithms for convex and strongly convex functions, including the gradient descent method, the subgradient method, the proximal gradient method, and the Nesterov acceleration method [12 ###reference_b12###]. The theoretical properties of these numerical algorithms are discussed in various sources, including [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]\n. The main contributions of this paper are listed as follows.\n1) To address derivative calculations in optimization, we propose formalizing the definition of the gradient.\nIn mathlib, the differentiability of a function is formalized using the fderiv construct, which represents the function\u2019s derivative as a continuous linear map at differentiable points. However, the type-checking mechanisms inherent in Lean pose challenges for direct computation. This limitation underscores the need for a more computationally friendly representation of the gradient within mathlib. By utilizing the Riesz Representation Theorem in a Hilbert space, we can transform the continuous linear map into a vector in the Hilbert space, thereby simplifying calculations with elements in this space.\n2) We explore the formalization of the properties of convex functions and subgradients. The formalization of complexity analysis for first-order optimization algorithms fundamentally draws on the properties of convex functions. 
Currently, mathlib\u2019s treatment of convex functions primarily encompasses their zero-order characteristics. This focus results in a notable absence of properties that leverage the function\u2019s gradient. Thus, we formalize properties such as the first-order conditions in this paper. Additionally, to address challenges associated with non-smooth optimization, we have extended the library by introducing the definitions of the subgradient and the proximal operator, alongside proofs of their relevant properties.\n3) Whereas the majority of current formalization efforts concentrate on theoretical mathematics, our work seeks to extend formalization into the realm of applied mathematics by formalizing numerical algorithms. This approach opens up broader possibilities for formalization across a wider range of fields.\nTo broaden the applicability of our algorithm definitions to concrete optimization problems, we employ the class structure to formalize the definitions of first-order algorithms, which facilitates a more generic representation. For implementing specific algorithm examples, the instance structure [19 ###reference_b19###] allows for the straightforward application of these algorithms, enabling users to instantiate specific cases and subsequently prove the requisite properties associated with them. We also build a blueprint for the whole project222The whole blueprint can be found in https://chenyili0818.github.io/optlib-blueprint/dep_graph_document.html ###reference_print/dep_graph_document.html###, which gives a brief introduction and contains the correlation between the definitions, theorems and proofs. Part of the blueprint, which focuses on the properties of convex functions and the convergence rate of the gradient descent method, is illustrated in Figure 1 ###reference_###.\n###figure_1### The rest of the paper is organized as follows: In section 2 ###reference_###, we briefly review the basic mathematical definitions and a general introduction to four types of first-order optimization algorithms. In section 3 ###reference_###, we introduce relevant definitions already formalized by pioneers in the mathlib community. The formalization of the definition and basic properties of the gradient and subgradient is presented in section 4 ###reference_###. In sections 5 ###reference_### and 6 ###reference_###, we formalize the properties of convex functions and L-smooth functions in Lean, respectively. The proximal operator is formally introduced in section 7 ###reference_###. Finally, in section 8 ###reference_###, we build the class for different first-order algorithms and prove the convergence rate of these algorithms."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Mathematical preliminaries",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "The subgradient and the proximal operator",
|
| 21 |
+
"text": "Differentiability of a function within an Euclidean space is often characterized using little-o notation. When dealing with functions defined on general normed spaces, the complexity increases. To address this issue, we utilize the concept of the Fr\u00e9chet derivative.\nLet and be normed vector spaces, with representing an open subset.\nA function is called Fr\u00e9chet differentiable at a point if there exists\na bounded linear operator satisfying the condition:\nThe concept of a subgradient is introduced to address points where the function may not be differentiable, yet still possesses certain advantageous properties.\nFor a function mapping from a Hilbert space to and with in the domain of ,\na vector is called a subgradient of at if for all ,\nDefine the collection of all subgradients at a point\n as the subderiv at that point, denoted\nas . It is critical to note that for convex functions, the subdgradient is guaranteed to be well-defined and nonempty at every point within the domain. Notably, at points where the function is smooth, the subderiv reduces to a singleton set containing only the gradient of the function at that point, i.e. . Building upon this conceptual framework, we next introduce the proximal operator.\nFor a function mapping from a Hilbert space to , the proximal operator is defined as:\nFor a convex function , the addition of the term transforms the optimization problem into a strongly convex one, simplifying the original problem. Due to the characteristics of convex and strongly convex functions, the proximal operator is well-defined across all points, providing a means to minimize the function within a vicinity of the current point ."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "First order algorithms solving optimization problems",
|
| 27 |
+
"text": "In this subsection, we give a brief review of the general first order algorithms solving optimization problem.\nWe mainly focus on unconstrained optimization problems\nwhere is a convex function. Moreover,\ncomposite optimization problems are also considered:\nwhere and are convex, and is smooth, while may not be differentiable. The proximal gradient and Nesterov acceleration methods are particularly prevalent for these composite optimization problems. The efficiency of these algorithms, often measured by their convergence rates, is a key focus within the field of numerical optimization, making a detailed analysis of these rates essential.\nGradient Descent Method\nThis foundational algorithm targets smooth functions in problem (2 ###reference_###) and is notable for its simplicity and effectiveness.\nThe update mechanism is defined as\nwhere represents the stepsize for the -th iteration, and denotes the gradient at the point .\nIts convergence is characterized by for convex functions and for strongly convex functions, where indicates the condition number of the target function.\nSubgradient Descent Method\nIn cases where the target function in problem (2 ###reference_###) is nonsmooth and a gradient may not exist at every point,\nthe subgradient is utilized instead. The update formula is as follows\nwhere is the subgradient at . The convergence rate for convex functions follows a pattern. More concrete results can be found in [20 ###reference_b20###].\nProximal Gradient Method\nThe proximal gradient method is widely used in optimization problems with the form (3 ###reference_###).\nThe update date scheme of this algorithm is given as\nwhere denotes the proximal operator of the function at the point . This\nmethod can be viewed as an implicit version of subgradient method. The convergence rate of this algorithm is under the assumptions stated above. More concrete results are referred to [21 ###reference_b21###].\nNesterov Acceleration Method\nAs an enhancement of the proximal gradient method, the Nesterov acceleration approach improves the convergence speed.\nNesterov acceleration method utilizes two sequences of points, and , to update the point.\nThe algorithm updates as following\nAssuming the hyperparameters satisfy , the algorithm achieves the convergence rate of . This method is also known as FISTA [22 ###reference_b22###] which is widely used in compressive sensing. There is also another version of Nesterov acceleration scheme known as the second version of Nesterov acceleration, which is given as\nThe same convergence rate holds, if the hyperparameters satisfy\n."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Lean preliminaries",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "The differentiable structure of a normed space",
|
| 39 |
+
"text": "The mathlib library, a comprehensive mathematical library for the Lean theorem prover, offers a robust framework for formalizing various concepts in calculus and analysis. Central to its calculus library is the concept of the Fr\u00e9chet derivative, or fderiv, which facilitates the rigorous definition of the derivative for smooth functions between normed spaces.\nIn Lean, the fderiv structure is pivotal in defining the derivative of a smooth function between normed spaces. It encapsulates the derivative as a continuous linear map, adhering to the rigorous mathematical foundation for which Lean is renowned. The fderiv structure is defined as follows:\nThe utilization of a continuous linear map to define the derivative in Lean\u2019s\nmathlib library enhances both generality and mathematical precision. Spaces E and F\nare not limited to Euclidean spaces but can be any normed spaces over a nontrivially normed field . This broad applicability supports a wide range of mathematical and analytical discussions within the Lean environment. However, this generality introduces certain challenges in the context of numerical optimization. The abstract nature of continuous linear maps may lead to complications when devising update schemes for optimization algorithms. Precise type checks, a cornerstone of Lean\u2019s system, necessitate a reevaluation of the fderiv type when applied to numerical methods.\nMoreover, the mathlib introduce the definition of deriv to deal with the special case that E is merely a NontriviallyNormedField . In this way, the continuous linear map becomes a single element in the space F.\nTo address these challenges, we pivot towards the gradient in vector form within E. This approach aligns more closely with the practical requirements of numerical optimization, allowing for a more straightforward computation of update schemes. The transition from the Fr\u00e9chet derivative to the gradient, along with the implications for numerical optimization, will be explored in detail in section 4.1 ###reference_###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "The convexity of a function",
|
| 45 |
+
"text": "The concept of convexity plays a pivotal role in optimization, underpinning many algorithms and theoretical results.\nIn the mathlib library, the definition of a convex function is articulated through below:\nIt is worth noting that the conditions on the input and output spaces are mild, which may not even require normed spaces. However, in this paper, we primarily focus on convex functions from a Hilbert space to , which\nis a special case of this definition as ConvexOn \u211d s f.\nThe formalization of convexity within mathlib provides a solid foundation for discussing and proving various properties of convex functions, particularly those that are differentiable.\nWhile mathlib\u2019s current formalization encompasses the core concept of convexity and some differentiable properties concerning only single-variable convex functions, there is ongoing work to enrich the library with additional properties related to differentiable convex functions of multiple variables, or more generally, on normed or Hilbert spaces. These properties are crucial for analyzing the behavior of optimization algorithms, especially in proving their convergence. The discussion of these extensions and their implications for algorithmic analysis will be elaborated upon in section 5 ###reference_###."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Gradient and Subgradient in Lean",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Gradient",
|
| 57 |
+
"text": "The earlier discussion highlights that while fderiv broadly defines derivatives within normed spaces, our interest in numerical optimization primarily lies with Hilbert spaces, which offer more intricate structures compared to normed spaces. Specifically, for functions mapping from a Hilbert space to the fields or \u2014collectively referred to as \u2014the formal type of their Frechet derivative (fderiv) is denoted as E \u2192L[K] K. In the process of formalizing the gradient descent algorithm, the objective is to compute the update step, which involves applying the formula . This computation requires additive and scalar multiplicative operations between the point and its derivative .\nHowever, using type of a continuous linear map from a Hilbert space to does not directly support these operations. Consequently, converting the continuous linear map into a vector in the Hilbert space becomes crucial. This is precisely where the definition of the gradient becomes relevant and useful, as it is inherently designed to facilitate such operations by converting the abstract derivative into a tangible vector in the Hilbert space, thereby enabling the additive and scalar multiplicative operations necessary for the gradient descent update formula.\nLet be a Hilbert space with inner product , while representing an open subset.\nA function owns a gradient at a point if there exists\na vector satisfying the condition:\nLeveraging the definition of the Fr\u00e9chet derivative, and utilizing the Riesz Representation Theorem on a Hilbert space, it becomes evident that the continuous linear operator , integral to the formulation of the Fr\u00e9chet derivative (see 1 ###reference_###), can be represented as\nIn Lean, we can define the gradient as follows:\nThe segment toDual K F f\u2019 is to convert an element from the space F into an element within the dual space F \u2192L[K] K. This conversion is facilitated by a canonical mapping that links a Hilbert space to its corresponding dual space. Based on this definition, it enables the extension to more nuanced definitions such as HasGradientWithinAt\nand HasGradientAt, which are more frequently used in the formalization of optimization algorithms.\nIt is crucial to distinguish between the spaces within which the gradient and the Fr\u00e9chet derivative are defined. Specifically, the gradient is defined within a complete inner product space, and this specification is necessary to leverage the Riesz Representation Theorem. In contrast, the Fr\u00e9chet derivative is applicable to a broader category of normed spaces. It is evident that for a general normed space, we cannot define the gradient as mentioned above."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Subgradient",
|
| 63 |
+
"text": "To the best of our current knowledge, there has not yet been a formalization of the subgradient definition within the mathlib library. Serving as an extension of the gradient, the subgradient concept accommodates non-smooth functions. The precise definition of the subgradient for a convex function is articulated as follows:\nA core theorem related to the subgradient is the existence of the subgradient at the interior point of the domain. For simplicity, we only consider the case when the function is convex.\nIn this theorem, we assume that the function is continuous within the interior of the domain s. This is a technical assumption, as only mild conditions are imposed on the space E. However, if the input space E is finite-dimensional, it is established that the convex function is continuous within the interior of the domain, or equivalently, any possible discontinuity of the convex function occurs only at the boundary points. In the proof of the theorem, a crucial element is a lemma stating the supporting hyperplane theorem. Viewed as a geometric version of the Hahn-Banach theorem, we utilize the theorem geometric_hahn_banach_open in mathlib, which asserts that given disjoint convex sets s, t, where s is open, there exists a continuous linear functional which separates them.\nAnother important aspect is the equivalence of the subgradient and the gradient at points where the function is smooth. This highlights that the subgradient is a more general definition of a gradient for non-smooth convex functions.\nFurthermore, the computation of the subgradient for two convex functions holds significant importance. In this context, we refer to the Moreau-Rockafellar theorem, which is instrumental for subsequent proofs involving the proximal operator. The underlying intuition behind this theorem is direct, but needs a novel construction in the proof.\nAssume and are two convex functions define on , then we have for any\nThe theorem is formalized as:\nWe focus on proving a more stringent variant of the original Moreau-Rockafellar theorem, imposing stricter conditions on the convex function\u2019s domain. To simplify our analysis and avoid the complexities associated with the interior points of the function\u2019s domain, we assume that the function is convex across the entire space. A more comprehensive formulation of the theorem would necessitate exploring the continuity of convex functions within the interior of the domain, an endeavor we reserve for future investigation. Additionally, for general nonconvex functions, we can also define Fr\u00e9chet differentiability as outlined in [23 ###reference_b23###]."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Properties of Convex Functions in Lean",
|
| 69 |
+
"text": "Throughout the discussion from this section to the concluding section, we uniformly assume, except in specific cases, that the input space E constitutes a Hilbert space, and represents a function mapping from E to E. Consequently, the gradient of functions as E \u2192 \u211d. In certain scenarios, we will consider the domain of the function as a subset within E, designated as s. These parameters are specified as follows:"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.1",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "General Properties",
|
| 75 |
+
"text": "For convex functions, certain properties are crucial for establishing the convergence of algorithms. These properties are encapsulated in the following theorem:\nLet be a smooth function defined on a convex set . The statements below are equivalent:\nis convex on .\nFor all , the function satisfies the first-order condition: .\nFor all , the gradient of is monotonic: .\nThis collection of theorems has been formalized in the Convex_Function.lean file.\nIn these theorems, it is important to note that the use of the gradient definition is not strictly necessary, as the term is interpreted as the continuous linear map at , evaluated at , producing a real number. To provide a comprehensive formalization, we present statements for each theorem in both fderiv and gradient forms. For simplicity, we have shown only the version utilizing the gradient above. These theorems introduce a practical method for assessing the convexity of a function through gradient information. More automated ways of checking the convexity of a function can be explored in future work."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.2",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Strongly Convex Functions",
|
| 81 |
+
"text": "While gradient descent exhibits a sublinear convergence rate for convex functions, it can achieve a linear convergence rate for strongly convex functions. The formalization of strongly convex functions represents a pivotal advancement in accurately formalizing the convergence rates of gradient descent across various function types. The definitions for uniform convexity and strong convexity are delineated as follows:\nIt is essential to clarify that the concept of uniform convexity can be applied within the framework of general normed spaces. However, strong convexity necessitates a definition within a Hilbert space, primarily due to the need to utilize the inner product to decompose the expression . Following the establishment of this definition, it is imperative to elucidate the properties of strongly convex functions, leveraging derivative information. Consequently, we can formalize the following theorem concerning strongly convex functions\nLet be a function defined on a convex set . The following statements are equivalent:\nexhibits -strong convexity on .\nThe function is convex on .\nFor differentiable , for all , it holds that .\nFor differentiable , for all , it holds that .\nWe only list the most important part of the formalization of the theorem here, while more detailed descriptions can be found in the Lean file Strong_Convex.lean."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Properties of Lipschitz Smooth Functions in Lean",
|
| 87 |
+
"text": "Another significant class of function is the Lipschitz smooth function. The concept of Lipschitz smoothness serves to quantify a function\u2019s degree of smoothness. This property is formalized through the notion of Lipschitz continuity for a function over a specific set, which is defined in the mathlib library as follows:\nA central theorem regarding Lipschitz smooth functions pertains to their upper bound. The lemma is articulated as follows:\nLet be a -Lipschitz smooth function defined on a set , then it holds\nWithin the formalized framework, we provide both the Frechet derivative and the gradient formulations of this theorem. For the sake of brevity, here we present only the fderiv formulation as:\nIn this proof, we use the auxiliary function as a function from to\n. Using this function, we can transform the original problem to a one-variable problem,\nand then utilize the mean-value theorem image_le_of_deriv_right_le_deriv_boundary to get the result.\nWhen it comes to convex Lipschitz smooth function, we can derive more properties of the function considering the\nconvexity of the function. We state the theorem as:\nLet be a differentiable convex function defined on , then the following statement is equivalent\nis - Lipschitz smooth on .\nis convex .\n.\nNote that sometimes the natural language statement would hide some of the assumptions which human\nwould think as trivial, but in formalization, such assumptions need to be stated explicitly. We can state the\nformalization of the above theorem as :\nFor functions that are both strongly convex and have a Lipschitz continuous gradient, we can propose an enhanced estimation, specifically formulated in the following theorem:\nLet be a -Lipschitz smooth and -strongly convex function defined on , then the following\ninequality holds,\nThe formalized theorem is stated as:"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "7",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Proximal Operator in Lean",
|
| 93 |
+
"text": "In this section, we need to introduce an additional assumption on the space E, specifically [CompleteSpace E]. The rationale behind this will be explained later. To define the proximal operator in Lean, we must take a few steps to circumvent the direct use of as commonly described in natural language. Since the operator must be clarified as to whether the target function can reach the minima at a finite point, defining it directly is not straightforward. Instead, we can define the proximal property and then identify the set of points that satisfy this property. If we can demonstrate that this set is non-empty, we can then select one of these points as the proximal point.\nFirstly, we can define the proximal property as:\nWe define the proximal set as all the points that satisfy the proximal property. This set is unique when the function possesses certain desirable properties, and it may be empty when is neither continuous nor convex.\nFor the proximal point, assuming that the proximal set is nonempty, we simply need to select one of its members. Here, we use the function Classical.choose to select one element from this nonempty set.\nAfter defining the proximal operator, we need to prove the wellposedness of the proximal operator. Generally speaking, we have the following theorem.\nThe proximal set of each point is nonempty and compact, when satisfies one of the following conditions:\nis lower semicontinuous and has lower bound over the whole space.\nis a continuous convex function, in this case, the proximal set is unique.\nIn this theorem, we evaluate whether the function\u2019s minima can be achieved. For this reason, we need to consider the relationship between closed and compact sets, specifically, the requirement that any bounded closed set be compact. This condition is straightforward in Euclidean space but does not generally hold in infinite-dimensional Hilbert spaces. This equivalence implies that the Hilbert space is finite-dimensional. In mathlib, we utilize the definition of a \u201dproper space,\u201d which is characterized as a space in which every bounded closed set is compact. It is evident that some Hilbert spaces, such as , are not proper spaces, whereas Euclidean space is an example of a proper space.\nWhen formalizing, it becomes necessary to relax the conditions required by the theorem, as we are not working within but in a more abstract space . This adjustment allows us to appreciate the properties inherently possesses. We can then articulate the following formalized theorem\nWe can derive additional properties of the proximal operator, particularly its connection to the subgradient. This relationship is readily apparent from the optimality conditions of the unconstrained optimization problem.\nIf is a closed and convex function, then we have\nIn Lean, we state the theorem as:"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "8",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Convergence of First Order Algorithms in Lean",
|
| 99 |
+
"text": "In this section, we give the formalization of first order algorithms in Lean using class structure.\nFrom the perspective that the class structure in Lean is easy to generalize for different target functions.\nFor specialized problem, such as the LASSO problem in compressive sensing with target function\n and , we can use the instance structure to formalize\nthe algorithm for this specific problem. For each algorithm, under different assumptions on the stepsize,\nwe will get the convergence theorem. In this section, we assume that is a Hilbert space and is a\nfunction defined on . is a point in and denotes for the minima of the function,\nand denotes the initial point we put into the algorithm. Generally speaking, an algorithm contains the following parts.\nUpdate scheme: we will take track of the update points in the algorithm.\nInformation on the target function: we need information for the target function, like the gradient,\nand the Lipschitz continuous information on the gradient.\nStep size constraint: only suitable stepsize choice is admittable for the corresponding algorithm."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "8.1",
|
| 103 |
+
"parent_section_id": "8",
|
| 104 |
+
"section_name": "Gradient Descent Method",
|
| 105 |
+
"text": "For general definition of gradient descent method in Lean, we use the class type to define what\na numerical optimization method is in Lean. In this class we have the function , the gradient ,\nand the initial point as the input, and contains the necessary information with the optimization problem.\nWe can also define the gradient descent with fixed step size as a special instance of the general gradient\ndescent method. In this paper, we mainly focus on the fixed step size version of the gradient descent method,\nbut more general version can be added easily based on this work.\nIt is straightforward to see that the gradient descent method with fixed stepsize is a special case of\nthe gradient descent method, hence we can get the instance structure as above.\nThe convergence rate of the fixed step size gradient method is give by the following theorem:\nFor unconstrained optimization problem (2 ###reference_###), is L-smooth. Let be the optimal value.\nIf is convex, then for any step size satisfying ,\nthe gradient descent algorithm (4 ###reference_###) generates a sequence of points whose function values satisfy the inequality\nIf is -strongly convex, then for any step size satisfying ,\nthe gradient descent algorithm (4 ###reference_###) generates a sequence of points whose function values satisfy the inequality\nTo prove the convergence rate of the fixed step size gradient descent method, we need to prepare for a bunch of\ntheorems ahead, including the one iteration property of the method and the sum up property of the monotonic sequences.\nFinally we can prove the convergence rate of the gradient descent method for Lipschitz smooth function.\nIt is interesting to find out from the proof that there is no assumptions on xm here. In general setting,\nwe use the case which xm is the minima, but in the proof, we can see that the proof is valid for any point xm.\nSo doing formalized proof can let us know the direct connection between the assumptions and the theorem."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "8.2",
|
| 109 |
+
"parent_section_id": "8",
|
| 110 |
+
"section_name": "Subgradient Descent Method",
|
| 111 |
+
"text": "In this subsection, we focus on the subgradient descent method. For subgradient descent method, our assumption is given as:\nConsidering the unconstrained optimization problem (2 ###reference_###), we assume\nis convex on .\nthere exists at least one minima and .\nis Lipschitz continuous, i.e. for all with .\nNote that the assumption (c) is equivalent with assuming that the subgradient of the target function \nis bounded by . The subgradient descent method is defined as follows:\nMany different results can be derived with different kinds of step sizes. For simplicity, we only show theorem for the diminishing step size in this paper, while more relevant results such as the convergence rate of fixed step size can be found in the code.\nSuppose that Assumption 1 ###reference_umption1### is satisfied, and the step size sequence for all , and , then the sequence generated by subgradient method (5 ###reference_###) converges to the optimal solution , and for all with the rate\nwhere , and is the minimum value of up to the iteration of \u2019s values, i.e., .\nThen we have the formalized version of theorem (8 ###reference_orem8###) as:\nMoreover we can have the convergence result as:"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "8.3",
|
| 115 |
+
"parent_section_id": "8",
|
| 116 |
+
"section_name": "Proximal Gradient Method",
|
| 117 |
+
"text": "From this subsection to the end of the paper, for the usage of the proximal operator, we need to require the space E satisfying [ProperSpace E]. Considering the composite optimization problem (3 ###reference_###). Using the proximal property we defined and formalized in section 7 ###reference_###,\nwe can give the formalization of the proximal gradient method (6 ###reference_###) in Lean. In this method, we use the definition\nprox_prop rather prox_point since for general function, the proximal set is not unique.\nWe have that any point in the proximal set satisfying the proximal property is admittable for proximal\ngradient method. Similar to the first order algorithms above, we also define a class for proximal\ngradient method.\nFirst we need to give the basic assumptions for this problem.\nFor composite optimization problem (3 ###reference_###), we have assumptions below:\nis a differentiable convex function with -Lipschitz continuous gradient.\nThe function is continuous convex function (which means the proximal operator is well defined here);\nThe minima of function is attainable at the finite point , with the minimal value .\nWe can get the convergence rate for proximal gradient as the theorem below:\nSuppose that Assumption 2 ###reference_umption2### is satisfied and the fixed step size\n, then the sequence generated by (6 ###reference_###)\nsatisfies\nThe formalized convergence rate is given as:"
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "8.4",
|
| 121 |
+
"parent_section_id": "8",
|
| 122 |
+
"section_name": "Nesterov Acceleration Method",
|
| 123 |
+
"text": "In this section, we mainly focus on the formalization of the Nesterov\u2019s acceleration method used on composite optimization. Since there are a few forms of the Nesterov\u2019s acceleration method, we only choose two of them which\nare formalized in two relevant files Nesterov_Acceleration_first.lean and Nesterov_Acceleration_second.lean. Although having differences in the update scheme, they enjoy the same acceleration convergence rate.\nIn this paper, we also exploit the instance structure to define the algorithms. Firstly, we define the general method with abstract hyperparameter and stepsize , and then use the instance structure to connect the definition with the fixed stepsize ones. For the first form of the Nesterov\u2019s acceleration method, which is also known as FISTA method, we can formalize the fix stepsize version of the algorithm as:\nThe convergence theorem is stated as:\nSuppose that Assumption 2 ###reference_umption2### is satisfied, the fixed step size ,\nand the hyperparameters , then the sequence generated by (7 ###reference_###) satisfies\nWe can prove the convergence rate for the Nesterov acceleration method stated by the theorem above by complicated calculation in Lean4. The formalized version of the convergence rate for the fixed stepsize is given as:\nFor the second version of Nesterov\u2019s acceleration algorithm (8 ###reference_###), we can formalize as:\nFor this method, we also have the rate as:\nSuppose that Assumption 2 ###reference_umption2### is satisfied, the fixed step size ,\nand the hyperparameters , then the sequence generated by (7 ###reference_###) satisfies\nThe formalize version of the theorem is given as:"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "8.5",
|
| 127 |
+
"parent_section_id": "8",
|
| 128 |
+
"section_name": "Application: Convergence Rate for Lasso Problem",
|
| 129 |
+
"text": "In this subsection, we apply the formalization of the convergence rate of different algorithms for a concrete optimization problem, \u201cLasso\u201d, from compressive sensing and sparse optimization. It is widely used in image processing, statistics and many other areas. Theoretical properties have been extensively studied in [24 ###reference_b24###]. We demonstrate that the convergence can be easily formalized based on what we have done. The Lasso optimization problem is given as\nwhere , , and denotes the -norm for the vector in . The corresponding and in composite optimization problem are given as and . From basic analysis, we can get the explicit form of and . With the explicit form of the update rule, the class of proximal gradient method for the Lasso problem can be defined as\nThis definition contains the information of the target function, the derivative function, the Lipschitz constant and relevant update scheme. We can directly prove this update scheme for Lasso problem is exactly a special form of the proximal gradient descent method using the instance below.\nBy the result we have in section 8.3 ###reference_###, we can easily get the convergence rate for proximal gradient method as\nWe can easily get a similar formulation of the formalization of Nesterov\u2019s acceleration method for Lasso problem and its convergence rate with the same technique, where most of the code is the same with the class LASSO_prox except for the particular update rules."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "9",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Conclusion and Future Work",
|
| 135 |
+
"text": "In this paper, we primarily discuss the formalization of first-order algorithms in convex optimization. First, to conveniently demonstrate the derivative and first-order information of convex functions, we define the gradient and subgradient in Lean. Leveraging these definitions allows us to delve into the properties of convex functions and Lipschitz smooth functions. We then define the proximal operator, which is widely used in non-smooth optimization. By integrating these tools, we describe the class of first-order algorithms and prove the convergence rate for four widely used algorithms. These foundations provide a base and offer experience for proving more complex algorithms, such as ADMM [25 ###reference_b25###] and BCD [26 ###reference_b26###], in the near future. Future work will include defining the Fr\u00e9chet sub-differentiability of general functions and the KL property, respectively. Additionally, discussing the optimality conditions of constrained optimization problems is of vital importance. We hope that, based on this work, we can progressively train the large language model to perform formalization automatically."
|
| 136 |
+
}
|
| 137 |
+
],
|
| 138 |
+
"appendix": [],
|
| 139 |
+
"tables": {},
|
| 140 |
+
"image_paths": {},
|
| 141 |
+
"validation": true,
|
| 142 |
+
"references": [],
|
| 143 |
+
"url": "http://arxiv.org/html/2403.11437v3"
|
| 144 |
+
}
20240721/2403.12422v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2403.17222v2.json
ADDED
|
@@ -0,0 +1,65 @@
| 1 |
+
{
|
| 2 |
+
"title": "Physics-compliant diagonal representation of beyond-diagonal RIS",
|
| 3 |
+
"abstract": "Physics-compliant models of RIS-parametrized channels assign a load-terminated port to each RIS element. For conventional diagonal RIS (D-RIS), each auxiliary port is terminated by its own independent and individually tunable load (i.e., independent of the other auxiliary ports). For beyond-diagonal RIS (BD-RIS), the auxiliary ports are terminated by a tunable load circuit which couples the auxiliary ports to each other.\nHere, we point out that a physics-compliant model of the load circuit of a BD-RIS takes the same form as a physics-compliant model of a D-RIS-parametrized radio environment: a multi-port network with a subset of ports terminated by individually tunable loads (independent of each other).\nConsequently, we recognize that a BD-RIS-parametrized radio environment can be understood as a multi-port cascade network (i.e., the cascade of radio environment with load circuit) terminated by individually tunable loads (independent of each other). Hence, the BD-RIS problem can be mapped into the original D-RIS problem by replacing the radio environment with the cascade of radio environment and load circuit.\nThe insight that BD-RIS can be physics-compliantly analyzed with the conventional D-RIS formalism implies that (i) the same optimization protocols as for D-RIS can be used for the BD-RIS case, and (ii) it is unclear if existing comparisons between BD-RIS and D-RIS are fair because for a fixed number of RIS elements, a BD-RIS has usually more tunable lumped elements.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": ""
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Background on conventional D-RIS",
|
| 15 |
+
"text": "The parametrization of wireless channels with reconfigurable intelligent surfaces (RISs) is at the core of the emerging paradigm shift toward smart radio environments. Conventionally, an RIS is an array of elements (oftentimes backscatter patch antennas) that each contain a tunable lumped element. Consider a scenario with transmitting antennas, receiving antennas and RIS elements. The system can be described as an multi-port network, where and , because we can model the tunable lumped elements as auxiliary ports terminated by tunable load impedances. The multi-port network can be characterized by its scattering matrix or impedance matrix which are related to each other via , where is the characteristic impedance of the single-mode transmission lines (e.g., coaxial cables) connected to the ports and is the identity matrix. The impedance matrix that can be measured at the antenna ports is\nwhere and denote the sets of port indices associated with antennas and RIS elements, respectively, and is the load impedance matrix terminating the auxiliary ports [1 ###reference_b1###].111The applicability of Eq. (1 ###reference_###) to an arbitrarily complex linear passive time-invariant radio environment as well as antennas and RIS elements with arbitrary structural scattering was first noted and leveraged in Ref. [1 ###reference_b1###], to the best of our knowledge. References to earlier works that were limited to free-space radio environments and/or antennas and RIS elements without structural scattering can be found in Ref. [1 ###reference_b1###]. The notation denotes the selection of the block of whose row and column indices are the entries of the sets and , respectively.\nConventionally, the load impedance network only connects each auxiliary RIS-element port to its own tunable load but not to the other auxiliary RIS-element ports, implying that is diagonal. In the following, we refer to such a conventional RIS as D-RIS (diagonal RIS).\nRemark 1: The number of parameters of the above physics-compliant model does not depend on the radio environment\u2019s complexity and there is no need to explicitly describe the radio environment or structural antenna scattering [1 ###reference_b1###]. All parameters can be estimated with a single full-wave simulation [1 ###reference_b1###]. Experimentally, the parameters can be estimated in closed-form or via gradient descent [2 ###reference_b2###, 3 ###reference_b3###], and are usually ambiguous (which facilitates the parameter estimation [2 ###reference_b2###]) unless there are at least three distinct known load impedances for each RIS element [3 ###reference_b3###].\nThe wireless channel matrix is an off-diagonal block of the measurable scattering matrix , i.e.,\nwhere and denote the sets of port indices associated with receiving antennas and transmitting antennas, respectively, and\nRemark 2: Eqs. (1 ###reference_###-3 ###reference_###) define the complete physics-compliant end-to-end model of a RIS-parametrized channel for an arbitrarily complex radio environment without any approximations. Many authors make simplifying assumptions to reduce the mathematical complexity; however, throughout this paper, no simplifying assumptions will be made.\nRemark 3: Alternative physics-compliant models with lower mathematical complexity can be formulated in terms of coupled dipoles characterized by their polarizabilities [2 ###reference_b2###]. The number of parameters is the same as for the load-impedance-based formulation used in the present paper. 
Because the polarizabilities are local quantities, the polarizability-based formulation offers some unique physical insights, e.g., about the decomposition of the wireless channel into multi-bounce paths [4 ###reference_b4###] as well as about the effect of moving wireless entities [5 ###reference_b5###, 6 ###reference_b6###]. Throughout the present paper we use the more widespread load-impedance-based formulation to help readers connect our insights to prior literature on BD-RIS.\nRemark 4: The theory developed in terms of impedance parameters in the present paper can be equivalently expressed in terms of scattering parameters or admittance parameters."
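The displayed equations did not survive extraction. A plausible reconstruction of Eqs. (1)-(3) from standard load-terminated multi-port network theory is given below; the symbols follow the prose, but this notation is an assumption rather than a verified copy of the source:

```latex
% Plausible reconstruction; notation (Z, S, z_0, index sets A, S, R, T) is assumed.
\begin{align}
\mathbf{Z}_{\mathcal{A}\mathcal{A}}^{\mathrm{(meas)}} &=
  \mathbf{Z}_{\mathcal{A}\mathcal{A}}
  - \mathbf{Z}_{\mathcal{A}\mathcal{S}}
    \left(\mathbf{Z}_{\mathcal{S}\mathcal{S}} + \mathbf{Z}_{\mathcal{L}}\right)^{-1}
    \mathbf{Z}_{\mathcal{S}\mathcal{A}}, \tag{1}\\
\mathbf{H} &= \mathbf{S}^{\mathrm{(meas)}}_{\mathcal{R}\mathcal{T}}, \tag{2}\\
\mathbf{S}^{\mathrm{(meas)}} &=
  \left(\mathbf{Z}^{\mathrm{(meas)}} + z_0\mathbf{I}\right)^{-1}
  \left(\mathbf{Z}^{\mathrm{(meas)}} - z_0\mathbf{I}\right). \tag{3}
\end{align}
```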
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "1.2",
|
| 19 |
+
"parent_section_id": "1",
|
| 20 |
+
"section_name": "The concept of BD-RIS",
|
| 21 |
+
"text": "Recently, Ref. [7 ###reference_b7###] proposed to consider a beyond-diagonal load impedance circuit for which is potentially a fully populated matrix. We refer to such a device as BD-RIS in this paper. Following up on Ref. [7 ###reference_b7###], various studies have claimed that BD-RISs outperform D-RISs, for instance, in terms of achieving more wave control with a fixed number of RIS elements [8 ###reference_b8###]. However, except for Ref. [9 ###reference_b9###], these studies were not based on physics-compliant models and even Ref. [9 ###reference_b9###] was limited to a free-space radio environment and made multiple simplifying assumptions about wave propagation.\nTheoretical papers on BD-RIS (experimental papers do not exist so far) devise new optimization algorithms for BD-RIS that essentially declare all entries of as optimizable parameters (up to some constraints like passivity and reciprocity). However, the optimized is usually not rigorously mapped to a concrete realistic circuit that could implement the optimized in practice. Thereby, the optimization is somewhat detached from the physical reality, seemingly obscuring the fundamental insights presented in the present paper."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "1.3",
|
| 25 |
+
"parent_section_id": "1",
|
| 26 |
+
"section_name": "Contributions",
|
| 27 |
+
"text": "The two main theoretical insights of the present paper are as follows:\nWe recognize that the BD-RIS load impedance circuit is itself a multi-port network for which a subset of ports are terminated with individually tunable loads (without connections to other ports).\nWe recognize that a BD-RIS-parametrized wireless channel is the cascade of two multi-port networks (the radio environment and the BD-RIS load impedance circuit) terminated by individually tunable loads. An illustration of this insight is provided in the lower part of Fig. 2 ###reference_###. In other words, we can map the BD-RIS problem into the conventional D-RIS problem by replacing the radio environment in the conventional D-RIS case with the cascade of the radio environment and the load impedance circuit in the BD-RIS case.\nThe implications of these insights are as follows:\nThere is no need to develop BD-RIS-specific optimization algorithms. In fact, considering the cascade of radio environment and load circuit enforces by construction the consideration of a concrete load circuit, guaranteeing automatically that the obtained results can be mapped to a realistic circuit.\nIt is unclear how to make a fair comparison between the performances of D-RIS and BD-RIS. Existing papers fix but allow such that they consider a BD-RIS that has many more tunable load impedances (and hence a much larger hardware complexity) than the benchmark D-RIS."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "II The multi-port network cascade \nunderlying the BD-RIS concept",
|
| 33 |
+
"text": "Assuming that the load circuit attached to the auxiliary RIS ports is linear, irrespective of its detailed implementation (e.g., \u201cfully connected\u201d, \u201cgroup-connected\u201d, randomly connected), it can be understood as a multi-port network with ports, where is the number of tunable load impedances in the load circuit. Hence, the load circuit can be characterized by its impedance matrix . of the load circuit\u2019s ports are terminated with individual (i.e., not interconnected) load impedances; the set contains the corresponding port indices. The set contains the indices of the remaining ports of the load circuit that are connected to the ports of the radio environment whose indices are contained in the set defined earlier.\nRemark 5: A port is defined as a \u201ctwo-terminal pair\u201d, as highlighted in Fig. 1 ###reference_### and also seen in Fig. 2 ###reference_###, and this definition allows but does not require that one of the two terminals of the port is grounded \u2013 see Fig. 1 ###reference_###.\n###figure_1### ###figure_2### To start, let us determine the load impedance matrix that terminates the auxiliary RIS ports given a load circuit characterized by an impedance matrix as defined in the previous paragraph. This problem is analogous to that of a radio environment parametrized by a D-RIS, and hence the answer resembles Eq. (1 ###reference_###):\nwhere\nwhere is the load impedance of the th load-terminated port of the load circuit.\nRemark 6: Identifying a configuration of load impedances that yields a desired load impedance matrix is in general a non-trivial inverse-design problem (analogous to the optimization of the configuration of a D-RIS to achieve a desired property of the wireless channel).\nGiven , one can insert into Eq. (1 ###reference_###) and determine the physics-compliant channel matrix with Eq. (2 ###reference_###). This corresponds to the conventional interpretation of BD-RIS-parametrized channels which explains the terminology \u201cbeyond diagonal\u201d: is not a diagonal but a \u201cbeyond diagonal\u201d matrix (e.g., block diagonal or fully populated). This usual BD-RIS interpretation is summarized in the upper part of Fig. 2 ###reference_###. Nonetheless, to the best of our knowledge, the fact that a BD-RIS load circuit\u2019s impedance matrix takes the form of Eq. (4 ###reference_###) has to date not been recognized in the literature.\nBesides this usual BD-RIS interpretation, an equivalent alternative interpretation of the BD-RIS-parametrized end-to-end channel matrix exists that has, to date, not been recognized. As illustrated in the lower part of Fig. 2 ###reference_###, one can consider the cascade of radio environment and load circuit. This cascade is itself an multi-port network characterized by its impedance matrix , where ; the ports whose indices are in the set are then terminated by the diagonal load impedance matrix .\n is related to and as follows [10 ###reference_b10###, 11 ###reference_b11###]:\nwhere\nand\nRemark 7: Equivalent expressions to Eqs. (6 ###reference_###-8 ###reference_###) in terms of the corresponding scattering parameters are known as the \u201cRedheffer star product\u201d and can be found, for instance, in Refs. [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 11 ###reference_b11###].\nGiven and , we obtain an alternative expression for the measurable impedance matrix :\nfrom which we can obtain the wireless end-to-end channel matrix as before using Eqs. 
(2 ###reference_###-3 ###reference_###). Importantly, recall that is a diagonal load impedance matrix.\nThe key result of the present paper is Eq. (9 ###reference_###). Comparing Eq. (1 ###reference_###) and Eq. (9 ###reference_###) reveals that the BD-RIS problem can be mapped into the conventional D-RIS problem using the following analogies:\nOf course, under the assumption of a trivial load circuit for which each auxiliary RIS port is terminated with an individual load impedance, the generic formulation from Eq. (9 ###reference_###) would collapse to that of Eq. (1 ###reference_###) because would simply equal ."
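The displayed equations here were likewise lost in extraction. Under the D-RIS/BD-RIS analogy described in the prose, a plausible reconstruction reads as follows; the block labels $\mathcal{B}$, $\mathcal{C}$ (environment-facing and load-terminated ports of the load circuit), $\tilde{\mathbf{Z}}$ (load-circuit impedance matrix) and $\mathbf{Z}'$ (cascade impedance matrix) are illustrative assumptions rather than the source's notation:

```latex
% Illustrative reconstruction; all primed/tilded symbols and block labels are assumptions.
\begin{align}
\mathbf{Z}_{\mathcal{L}} &= \tilde{\mathbf{Z}}_{\mathcal{B}\mathcal{B}}
  - \tilde{\mathbf{Z}}_{\mathcal{B}\mathcal{C}}
    \big(\tilde{\mathbf{Z}}_{\mathcal{C}\mathcal{C}}
      + \mathrm{diag}(\tilde{z}_1,\dots,\tilde{z}_{N_\mathrm{L}})\big)^{-1}
    \tilde{\mathbf{Z}}_{\mathcal{C}\mathcal{B}}, \tag{cf.\ (4)--(5)}\\
\mathbf{Z}_{\mathcal{A}\mathcal{A}}^{\mathrm{(meas)}} &=
  \mathbf{Z}'_{\mathcal{A}\mathcal{A}}
  - \mathbf{Z}'_{\mathcal{A}\mathcal{C}}
    \big(\mathbf{Z}'_{\mathcal{C}\mathcal{C}}
      + \mathrm{diag}(\tilde{z}_1,\dots,\tilde{z}_{N_\mathrm{L}})\big)^{-1}
    \mathbf{Z}'_{\mathcal{C}\mathcal{A}}. \tag{cf.\ (9)}
\end{align}
```

Comparing the second line with Eq. (1) makes the analogies explicit: the radio environment $\mathbf{Z}$ is replaced by the cascade $\mathbf{Z}'$, and the beyond-diagonal $\mathbf{Z}_{\mathcal{L}}$ by the diagonal matrix of individual loads.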
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "III Implications",
|
| 39 |
+
"text": "The first and most obvious implication of the insights derived in the present paper is that there is no need to develop new optimization algorithms for BD-RIS. For any realistic BD-RIS implementation, the load circuit (and hence its characterization via ) must be known such that one can always determine and use Eq. (10b ###reference_.2###) to map the BD-RIS problem into the original D-RIS formulation.\nThe second implication is that the insights derived in the present paper raise questions about the fairness (or practical relevance) of existing comparisons between BD-RIS and D-RIS. Leaving aside the fact that existing comparisons are not or only partially compliant with physics, a fundamental question is whether the comparison should be for a fixed number of RIS elements or for a fixed number of tunable load impedances . For D-RIS, whereas for the BD-RIS types considered to date, . Existing comparisons are for fixed such that a BD-RIS benefits from having drastically more tunable load impedances than a D-RIS. However, arguably the number of tunable load impedances is a limiting critical hardware aspect that is at least as important as the number of RIS elements."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "IV Conclusion",
|
| 45 |
+
"text": "The consideration of BD-RIS has enriched the RIS literature by generalizing the termination of the auxiliary RIS ports to arbitrarily complex tunable load circuits. However, prior to the present paper, the implications of the BD-RIS concept in terms of multi-port network theory were not fully appreciated. Here, we have shown that the BD-RIS problem constitutes a multi-port network cascade (the radio environment and load circuit are cascaded) that can always be mapped into the original D-RIS framework using Eq. (10b ###reference_.2###). Our results imply that BD-RIS do not require the development of dedicated optimization algorithms and challenge the basis on which BD-RIS and D-RIS are compared in existing literature."
|
| 46 |
+
}
|
| 47 |
+
],
|
| 48 |
+
"appendix": [],
|
| 49 |
+
"tables": {},
|
| 50 |
+
"image_paths": {
|
| 51 |
+
"1": {
|
| 52 |
+
"figure_path": "2403.17222v2_figure_1.png",
|
| 53 |
+
"caption": "Figure 1: Clarification of the notion of a port being a \u201ctwo-terminal pair\u201d for a simple 2-port \u03c0\ud835\udf0b\\piitalic_\u03c0-network. (a) Schematic circuit topology. (b) Detailed circuit topology clearly showing both conductors and both terminals for each port. (c) Replacement of the three impedances in (b) with three auxiliary ports that are to be terminated by individual independent load impedances. The two terminals of each port and of each auxiliary port are clearly shown. The \u03c0\ud835\udf0b\\piitalic_\u03c0-network involves one series and two parallel impedances (or auxiliary load-terminated ports).",
|
| 54 |
+
"url": "http://arxiv.org/html/2403.17222v2/x1.png"
|
| 55 |
+
},
|
| 56 |
+
"2": {
|
| 57 |
+
"figure_path": "2403.17222v2_figure_2.png",
|
| 58 |
+
"caption": "Figure 2: This figure summarizes the key insight of the present paper.",
|
| 59 |
+
"url": "http://arxiv.org/html/2403.17222v2/x2.png"
|
| 60 |
+
}
|
| 61 |
+
},
|
| 62 |
+
"validation": true,
|
| 63 |
+
"references": [],
|
| 64 |
+
"url": "http://arxiv.org/html/2403.17222v2"
|
| 65 |
+
}
20240721/2404.00801v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2404.02059v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2404.07988v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2404.12228v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240721/2404.13903v3.json
ADDED
The diff for this file is too large to render.
See raw diff