Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- 20241001/1811.03019v2.json +49 -0
- 20241001/2106.07718v4.json +571 -0
- 20241001/2109.04993v4.json +0 -0
- 20241001/2112.05193v2.json +342 -0
- 20241001/2206.09885v2.json +0 -0
- 20241001/2210.16928v2.json +0 -0
- 20241001/2211.02032v3.json +423 -0
- 20241001/2211.12371v3.json +0 -0
- 20241001/2212.04223v3.json +0 -0
- 20241001/2301.04907v3.json +0 -0
- 20241001/2301.11301v4.json +151 -0
- 20241001/2304.02730v4.json +499 -0
- 20241001/2305.06888v3.json +0 -0
- 20241001/2305.13214v2.json +0 -0
- 20241001/2307.07635v3.json +0 -0
- 20241001/2307.15586v4.json +0 -0
- 20241001/2308.03547v2.json +166 -0
- 20241001/2308.07766v2.json +128 -0
- 20241001/2308.16697v3.json +352 -0
- 20241001/2309.04109v2.json +0 -0
- 20241001/2309.10103v2.json +193 -0
- 20241001/2310.03394v3.json +205 -0
- 20241001/2310.04922v4.json +213 -0
- 20241001/2310.06000v3.json +398 -0
- 20241001/2310.06341v2.json +0 -0
- 20241001/2310.07867v6.json +460 -0
- 20241001/2310.12239v2.json +411 -0
- 20241001/2310.12831v3.json +0 -0
- 20241001/2311.02262v2.json +0 -0
- 20241001/2311.08369v4.json +0 -0
- 20241001/2311.09356v3.json +448 -0
- 20241001/2311.10122v3.json +0 -0
- 20241001/2312.01314v2.json +0 -0
- 20241001/2312.05492v6.json +0 -0
- 20241001/2312.06908v3.json +0 -0
- 20241001/2312.07783v3.json +0 -0
- 20241001/2312.08255v4.json +559 -0
- 20241001/2312.08367v4.json +185 -0
- 20241001/2312.08887v4.json +224 -0
- 20241001/2312.17397v2.json +195 -0
- 20241001/2401.00416v2.json +0 -0
- 20241001/2401.01643v3.json +220 -0
- 20241001/2401.04978v2.json +549 -0
- 20241001/2401.09108v2.json +215 -0
- 20241001/2401.10226v2.json +0 -0
- 20241001/2401.10229v2.json +0 -0
- 20241001/2401.12261v4.json +186 -0
- 20241001/2401.15497v5.json +0 -0
- 20241001/2401.17985v2.json +0 -0
- 20241001/2402.01107v3.json +0 -0
20241001/1811.03019v2.json
ADDED
@@ -0,0 +1,49 @@
{
"title": "On the Maximum Distance Sublattice Problem and Closest Vector Problem",
"abstract": "In this paper, we introduce the Maximum Distance Sublattice Problem (). We observe that solving an instance of the Closest Vector Problem () in a lattice is the same as solving an instance of in the dual lattice of . We give an alternate reduction between the and . This alternate reduction does not use the concept of the dual lattice.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "For any set of linearly independent vectors , a lattice is defined to be the set of vectors consisting of the integer linear combinations of vectors from . Formally, it is defined as follows. Here, we call the rank of the lattice and the ambient dimension. We call the set a basis of the lattice. Note that a lattice can have infinitely many bases. Lattices have an enormous number of applications in number theory [1, 2, 3] and cryptanalysis [4, 5]. In the last two decades, lattices have received special attention due to their applications in cryptography. Lattice-based cryptosystems are considered the most prominent candidates for post-quantum cryptography [6, 7, 8, 9]. The Shortest Vector Problem () and the Closest Vector Problem () are two well-known and widely studied lattice problems. Given a basis of the lattice , the shortest vector problem is to find a shortest (in some norm, usually the Euclidean norm) non-zero vector in the lattice. In the closest vector problem, we are also given a target vector in the vector space of the lattice, and the goal is to find the lattice vector closest (usually in the Euclidean norm) to the target . is known to be NP-hard for approximation factors less than [10, 11, 12]. is shown to be NP-hard to approximate within a constant approximation factor only via a randomized reduction (it is a long-standing open problem to show NP-hardness for SVP via a deterministic reduction) [13, 14, 15]. It is also known to be poly-time hard for approximation factor under some complexity-theoretic assumptions [16, 17]. Recently, there has also been a series of works on the fine-grained hardness of [18, 19, 20] and [21]. It is also known that is at least as hard as , as there is an approximation-factor-, rank- and dimension-preserving reduction from to [22]. All known algorithms for and require at least exponential time. Kannan [2] gave an enumeration-based algorithm for which takes time and polynomial space. There have also been improvements on the running time of Kannan\u2019s algorithm [23, 24]. In 2001, Ajtai, Kumar and Sivakumar gave the first time and space sieving algorithm for [25] and [26]. There has been extensive work on improving the sieving algorithms for and [27, 28, 29, 30, 31, 32]. The fastest known classical algorithms for and take time and space, based on Discrete Gaussian Sampling [33, 34]. Recently, Aggarwal, Chen, Kumar and Shen gave a faster quantum algorithm for that requires time, exponential-size QRAM and classical space [35]. In 1982, Lenstra, Lenstra and Lovasz [1] gave a polynomial-time algorithm (known as LLL) for finding an exponential approximation of the shortest vector in a lattice. Applications of LLL are found in factoring polynomials over the rationals, finding linear Diophantine approximations, and cryptanalysis of RSA and other cryptosystems [36, 4, 37]. Babai [38] gave a polynomial-time algorithm, which uses LLL, for approximating with an exponential approximation factor. Schnorr has given improvements over the LLL algorithm [39, 40]."
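As a toy illustration of the CVP definition above (not of any algorithm from the paper, all of which require far more sophisticated enumeration or sieving), the following sketch finds the closest lattice vector by exhaustive search over a bounded range of integer coefficients; the basis, target, and coefficient bound are hypothetical.

```python
import itertools
import math

def brute_force_cvp(basis, target, bound=3):
    # basis: list of lattice basis vectors; search all integer combinations
    # with coefficients in [-bound, bound]. Feasible only for tiny ranks --
    # real CVP solvers need exponential-time enumeration or sieving.
    n = len(basis)
    dim = len(target)
    best, best_dist = None, math.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        d = math.dist(v, target)
        if d < best_dist:
            best, best_dist = v, d
    return best, best_dist

# Hypothetical rank-2 lattice generated by (2, 0) and (0, 3).
v, d = brute_force_cvp([[2, 0], [0, 3]], [1.2, 2.9])
print(v, round(d, 3))  # -> [2, 3] 0.806
```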
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Our Contributions:",
"text": "In this paper, we introduce the Maximum Distance Sublattice Problem (). Given a lattice vector , the goal is to find a sublattice of rank whose distance from the lattice vector is maximum. We first observe that the problem reduces to on the dual lattice. The main technical contribution of our work is a reduction between and that does not use the notion of the dual lattice. The reduction employs novel geometric results that might be of independent interest. Our reduction preserves the dimension and rank of the lattice (we say a reduction is dimension-preserving and rank-preserving as long as the rank and dimension increase, or decrease, by at most 1). There exists a polynomial-time, rank-preserving, dimension-preserving many-one (Karp) reduction between and . The proof of the theorem is presented in Section III. We state our reduction only for the exact problem; it is easy to extend it to any approximation factor."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Organisation:",
"text": "The rest of the paper is organised as follows. In Section 2, we provide definitions and the trivial reduction between and . Section 3 contains our new reduction between and ."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Preliminaries",
"text": "In this paper, , and will denote the sets of integers, reals and rationals respectively. Vectors will be denoted by lowercase letters, and matrices and basis sets will be denoted by capital letters. We will use to denote the identity matrix. Let be a set of vectors in . The subspace of spanned by will be denoted by . In this paper, we will work with the vector space . For any vectors , we use the notation to denote the dot product of the two vectors, i.e., , and denotes the norm of , i.e., . For a subspace , is also a subspace and is called the orthogonal subspace of . Given a set of linearly independent vectors in a vector space , the lattice spanned by is the set . In other words, a lattice is the integral span of . The set is referred to as a basis of the lattice. The rank of the lattice is the number of linearly independent vectors in , and the dimension of a lattice is the dimension of the ambient vector space containing the lattice. In this paper, we denote by a matrix whose column vectors are the vectors of the generating set. In this representation, the rank of a lattice is the same as the rank of the matrix . Similar to a vector space, a lattice has infinitely many bases. We will need the concept of unimodular matrices to characterize the bases of a given lattice. A matrix whose determinant is equal to or is called a unimodular matrix. Notice that the inverse and the transpose of a unimodular matrix are also unimodular. The following theorem states that two bases generate the same lattice if and only if they are related by a unimodular matrix. and (in matrix form) are bases of the same rank- lattice in if and only if there exists a unimodular matrix such that . An important concept in lattice theory is the dual of a lattice, which is defined as follows. Let be a lattice in . Then the dual lattice of , denoted by , is . Let be an invertible matrix. Then it can be easily shown that if is a basis of , then is a basis for the dual lattice . is called the dual basis of . Observe that, from the definition of the dual basis, we have . If is the dual basis of , then for a basis , where is a unimodular matrix, the dual basis is . We now proceed to define certain computationally hard problems in lattice theory. Given a basis , find a shortest non-zero vector in the lattice , i.e., . Given a basis and a vector , find a vector in the lattice which is closest to , i.e., . In this paper, we assume the vector in a instance is linearly independent of the basis . In the case where is not independent, we can increase the dimension of the vector space and obtain linear independence as follows: we work with and such that . This one-dimensional increase affects the running time of our (and existing) algorithms by at most a constant factor. Given a basis of a subspace in , the subspace has an orthogonal basis given by , where . This transformation of the basis is called Gram-Schmidt orthogonalization. Using a Gram-Schmidt orthogonalization of a basis of a subspace , it is easy to compute the projection of a vector onto the subspace as follows. Let be a basis of a -dimensional subspace of and be a vector in . The projection of on the subspace is its component in . If is an orthogonal basis of (such as the one computed by Gram-Schmidt orthogonalization), then the projection of on is . The component of perpendicular to is . It is equal to the projection of on , i.e., . The distance of the point from the subspace is the length of this vector, so . We now proceed to define the Maximum Distance Sublattice Problem. Given a basis for an -dimensional lattice , find such that is also a basis for and the distance is maximum. Here, we call the fixed vector. The following theorem shows that a solution to can be obtained from by adding integral multiples of to the vectors in . Let be a basis of an -dimensional lattice in . Then for any basis of the lattice of the form , there exist integers such that is also a lattice basis and , where . We have included a proof of the above theorem in the appendix (Proof of Theorem 3), as we were unable to find a reference for it. The following theorem shows a trivial reduction between and . There exist polynomial-time rank- and dimension-preserving many-one (Karp) reductions between and . We will show that is equivalent to on basis and target , where is the dual basis of . We will first show the reduction from to ; since all the computations in the reduction are invertible, the other direction is trivial. Let the input to be and let its dual basis be . From Theorem 3, we know that a solution to can be written as , i.e., , where is an integer vector. From 1, we know that the dual basis of is , where . Therefore, . Also, from the definition of the dual basis, we have ; therefore, . Using the fact that , where is the angle between and , we get , where is the angle between and . Using the definition of the dual basis, we know that is perpendicular to all , because is the dual of . Therefore, is perpendicular to . This implies that is the angle between and . Hence, is the perpendicular distance between and . Recall that is the solution to the instance, which means that the perpendicular distance between and is maximized. In other words, is maximized. Therefore, is minimized, by Equation 2. But this is exactly computing the shortest vector in the shifted lattice , which is precisely with basis and target .\n\u220e"
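The dual-basis construction and the distance-from-subspace computation described above can be checked numerically. The sketch below is illustrative only (a hypothetical rank-2 basis in R^3, with the projection computed by least squares rather than explicit Gram-Schmidt); it verifies the identity that the dual basis D = B (B^T B)^{-1} satisfies D^T B = I.

```python
import numpy as np

def dual_basis(B):
    # Columns of B are the lattice basis vectors (full column rank assumed).
    # The dual basis is B (B^T B)^{-1}, satisfying dual_basis(B).T @ B == I.
    return B @ np.linalg.inv(B.T @ B)

def distance_from_subspace(B, t):
    # Orthogonal projection of t onto span(B); the residual's length is the
    # distance from t to the subspace, as in the Gram-Schmidt discussion.
    proj = B @ np.linalg.lstsq(B, t, rcond=None)[0]
    return np.linalg.norm(t - proj)

B = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # rank-2 lattice in R^3
print(np.allclose(dual_basis(B).T @ B, np.eye(2)))           # -> True
print(distance_from_subspace(B, np.array([3.0, 4.0, 5.0])))  # -> 5.0
```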
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III New Reduction between and",
"text": "In this section, we prove our main theorem, i.e., Theorem 1, which is a reduction between and that does not utilize the concept of dual lattices. Let be an input to . Keeping Theorem 3 in consideration, the maximum distance sublattice problem can be stated as follows: given an -dimensional lattice with basis , compute an alternative basis such that the distance of the point from the subspace spanned by is maximum, where for all . Let denote the subspace spanned by the vectors for . The following result determines the distance of the point from for the special case when is an orthonormal basis. Let be an orthonormal basis. Then the distance of the point from is for any . Let be the projection of the vector on . Then is the perpendicular drop from the point to the plane. This implies that for all , . By expanding the term and crucially using the fact that the vectors are orthonormal, we get . By equating the last equation to , we get , where . This gives us . The square of the distance of from the plane is . We now focus on expressing in terms of \u2019s. We have . Plugging this into the expression for , we get .\n\u220e\nThe distance of a vector from a plane is equal to the length of the vector\u2019s projection on the orthogonal plane, and the projection is directly proportional to the length of the vector. Hence we have a trivial consequence. Let be an orthogonal basis in which all but are unit vectors. Then the distance of the point from is for any . In this case is no longer a unit vector. The basis of is . It is the same as , where the additive vector is a unit vector, as required in Lemma 5, and . From the lemma, the distance of the point from is . Hence the distance from is .\n\u220e\nWe now focus on the general case, in which the vectors are not necessarily orthogonal to the vector . Let be perpendicular to for each , where . So , and the plane spanned by is perpendicular to . Note that need not be an integer. Note that a lattice vector can now be represented as in the new reference frame. Consider the plane which is spanned by . In the new basis, we have . Let us now transform the basis of the -dimensional subspace into an orthonormal basis. Let denote the matrix whose column vectors are . Let be a linear transformation such that the column vectors of form an orthonormal basis. Denote the column vectors of by , which are unit vectors and mutually orthogonal. Therefore, . Note that the new basis spans the same subspace which is spanned by . Now forms an orthogonal basis such that all but are unit vectors. The plane is spanned by . We will now focus on expressing this plane in terms of the unit vectors . If we extend a line parallel to from the point (where and are perpendicular to each other, for all ), then it must intersect this plane at one point, say . Then the plane spanned by is itself. Using Equation 5, we have . By the choice of , belongs to . From Equation 4, we know that the vector also belongs to the plane for each . But does not belong to the plane, because it is linearly independent of the set of vectors . Thus, from linear independence, we can conclude that . This implies that . The plane is spanned by , where is an orthonormal basis and is perpendicular to each vector of the set. From Corollary 6, the square of the distance of from the plane is . Recall that our goal is to find a sublattice plane , where , such that the distance from is maximized. Equivalently, we want to find a sublattice plane such that is minimized, i.e., to minimize the length of the vector . Let ; then the corresponding . We now proceed to construct a instance that solves the instance. We start by defining a lattice with basis , i.e., the row vectors of form a basis of . We denote the rows of by . Let . Then the length of the vector is equal to the distance between the fixed point and the lattice point of . Thus the problem reduces to finding a lattice point of closest to the point . Therefore, we have reduced to an instance of where is the lattice basis and is the fixed point. The following lemma summarises the computations needed to convert a instance to a instance. Given a basis of an -dimensional lattice as an instance of , let for all , where . Let be a linear transformation such that is an orthonormal basis. Equivalently, is an orthonormal basis, where . Let denote the -th row of . Then the sublattice plane has maximum distance from the point if is a closest lattice vector for the instance in which the lattice basis is and the fixed point is . The entire transformation involves only invertible steps; hence the converse of the above claim also holds. Let the basis and the fixed point be an instance of . Let be the matrix whose -th row is for all . Let . Pick an arbitrary orthonormal basis for . Let be the matrix with column vectors . Let . Let denote the -th column of . Let . If the instance has an optimum solution sublattice plane formed by , then is the solution of the given instance. Finally, Theorem 1 is obtained by combining Lemma 7 and Lemma 8."
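To make Theorem 3's parameterization concrete, here is a toy brute-force search (emphatically not the paper's polynomial-time reduction) that varies the integer shifts a_i in the candidate bases {w_i + a_i * w_n} over a small range and keeps the sublattice plane farthest from w_n; the example basis and the shift bound are hypothetical.

```python
import itertools
import numpy as np

def dist_to_span(vectors, t):
    # Distance of t from the span of the given vectors (least-squares residual).
    A = np.column_stack(vectors)
    residual = t - A @ np.linalg.lstsq(A, t, rcond=None)[0]
    return np.linalg.norm(residual)

def max_distance_sublattice(W, bound=2):
    # W: columns are w_1, ..., w_n; per Theorem 3, candidate solutions have
    # the form {w_i + a_i * w_n}; we brute-force a_i in [-bound, bound].
    cols = [W[:, i] for i in range(W.shape[1])]
    ws, wn = cols[:-1], cols[-1]
    best = max(
        itertools.product(range(-bound, bound + 1), repeat=len(ws)),
        key=lambda a: dist_to_span([w + ai * wn for w, ai in zip(ws, a)], wn),
    )
    return best, dist_to_span([w + ai * wn for w, ai in zip(ws, best)], wn)

W = np.array([[1.0, 0.0], [0.0, 1.0]])  # w_1 = (1, 0), w_2 = (0, 1)
shifts, dist = max_distance_sublattice(W)
print(shifts, dist)  # here the unshifted basis already maximizes the distance
```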
}
],
"appendix": [
{
"section_id": "Appendix x1",
"parent_section_id": null,
"section_name": "Proof of Theorem 3",
"text": "In this section, we provide a proof of Theorem 3. Since and generate the same lattice, there exists a unimodular matrix (see Theorem 2) such that , where . The determinant , so . Observe that , so , and is unimodular. So exists and is also unimodular. Let us denote by . Then , where . The left-hand side of the above equation is equal to . So . The matrix is unimodular, so and span the same sublattice and .\n\u220e"
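The unimodularity facts the proof relies on (determinant equal to +1 or -1, integral inverse, closure under inversion) can be sanity-checked numerically; the matrix below is a hypothetical example, not one from the paper.

```python
import numpy as np

def is_unimodular(U):
    # An integer matrix is unimodular iff its determinant is +1 or -1.
    return abs(round(float(np.linalg.det(U)))) == 1

U = np.array([[2, 1], [1, 1]])  # det = 1
U_inv = np.linalg.inv(U)
# The inverse of a unimodular matrix is again integral and unimodular, so U
# and its inverse carry a basis of a lattice to other bases of the same lattice.
print(is_unimodular(U))                      # -> True
print(np.allclose(U_inv, np.round(U_inv)))   # -> True (integer inverse)
```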
}
],
"tables": {},
"image_paths": {},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/1811.03019v2"
}
20241001/2106.07718v4.json
ADDED
@@ -0,0 +1,571 @@
{
"title": "HUMAP: Hierarchical Uniform Manifold Approximation and Projection",
"abstract": "Dimensionality reduction (DR) techniques help analysts to understand patterns in high-dimensional spaces. These techniques, often represented by scatter plots, are employed in diverse science domains and facilitate similarity analysis among clusters and data samples. For datasets containing many granularities or when analysis follows the information visualization mantra, hierarchical DR techniques are the most suitable approach since they present major structures beforehand and details on demand. This work presents HUMAP, a novel hierarchical dimensionality reduction technique designed to be flexible in preserving local and global structures and to preserve the mental map throughout hierarchical exploration. We provide empirical evidence of our technique\u2019s superiority compared with current hierarchical approaches and show a case study applying HUMAP to dataset labelling.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Related Work",
"text": "By reducing the number of dimensions while maintaining structural and similarity relations in the low-dimensional representation, DR techniques aid in the analysis of high-dimensional datasets [33]. Users typically search for patterns in the data when using a DR technique, such as clusters, shapes, and outliers. To ensure a successful data exploration process, it is essential to understand the data and the available DR techniques. For instance, linear DR techniques are well known for quickly revealing variability [19, 35, 18]. While generally more difficult and time-consuming to apply, non-linear DR techniques [26, 36, 28, 30] can reveal more complex structures present in high-dimensional space. The scatter plot metaphor is frequently employed to represent DR results. Despite being widely used for exploratory data analysis [29], scatter plots lose effectiveness due to marker overlap [42, 41]. When exploring large datasets as a whole, traditional DR techniques may obscure crucial details within and between clusters. For instance, some datasets exhibit inherently multilevel structures [25], which call for a more adaptable DR technique. Knowledge discovery is thus facilitated by strategies that make exploration simpler and guide users through the exploratory process. One such strategy is the use of hierarchical dimensionality reduction (HDR) techniques, which offer exploration mechanisms based on the mantra of overview first & details-on-demand [43], concentrating on essential information as needed by the user. There are many traditional DR strategies (see [33, 9] for helpful surveys) but very few hierarchical DR (HDR) methods [33, 14]. These HDR methods share the feature of first establishing the hierarchical structure before enabling multilevel exploration. The challenge is conveying different levels while preserving context and neighborhood structures [33]. Therefore, current studies concentrate on defining the hierarchical structure while reusing the projection engines of traditional DR approaches. For example, to address the computational complexity of MDS, MDSteer [48] incrementally computes a multidimensional scaling layout in response to user demand. Glimmer [16] also addresses MDS complexity by performing projection using a multilevel GPU scheme. Simply put, the authors interpolate high hierarchy levels to create a hierarchy. However, MDS-based approaches do not adequately capture the complex structures (such as manifolds) found in the majority of real-world and practical datasets (e.g., deep learning features, image collections, or biological data). Hierarchical PCA variants lack strategies for communicating non-linear structure [47, 17, 1]. Data organization in HiPP [34] involves landmarks and hierarchical clustering. At finer levels, the data points are represented and influenced by the landmarks of coarser levels. HiPP uses a force algorithm to deal with overlaps when positioning points heuristically, and relies on the landmarks of LSP [35] for context preservation and to communicate hierarchy levels. One of the most reliable methods is HSNE [36]. The HSNE algorithm builds a hierarchy to preserve global and local relationships at high hierarchy levels using random walks on a transition matrix. However, when mental map preservation is crucial, the analysis is hampered because HSNE\u2019s embeddings lose, during hierarchical exploration, the structural relationships presented at higher levels. To provide interactive analysis in a reasonable amount of time, HSNE also needs a GPU. Using diffusion condensation [5], a recent method known as Multiscale PHATE [22] computes a manifold-intrinsic diffusion space on the input data and condenses data points towards centroids to produce groups of multiple granularities. Multiscale PHATE can handle massive datasets, but only with a few dimensions. Additionally, its embedding engine appears to have issues when the input dataset does not contain continuous phenomena [31]. All of these methods share the use of representative samples or landmarks at the hierarchy levels. It is interesting to note that any single-level-of-detail dimensionality reduction method that projects data points onto landmarks can have a hierarchical version [33]. Finally, users work with embeddings of increasing size during hierarchical exploration. Thus, it is essential to maintain the mental map between successive projections in order to prevent users from being misled by geometrical transformations during exploratory analysis. This task, often referred to as projection alignment, can be addressed in different ways. A time-varying dataset is projected using dynamic t-SNE [38] to reduce temporal variability that is unimportant to the final layout. This is accomplished by including a term in the t-SNE cost equation that regulates the trade-off between alignment and conventional t-SNE optimization. A trade-off parameter is also used in VFF [13] to regulate the alignment of projections in feature fusion tasks. VFF, however, is ineffective at preserving local structures [6]. Finally, Cantareira and Paulovich [6] proposed a general model for projection alignment that incorporates the original cost function of the DR technique and a penalty term applied for alignment. These techniques also address streaming applications, where the use of DR techniques for conveying information has grown [MOHEDANOMUNOZ2023120252]. The preservation of mental maps is critical in such scenarios, as users would otherwise have to expend a significant amount of cognitive effort tracking the position of data points in the visual space. Our approach to maintaining the mental map entails directing the movement of data samples that have already been projected at high hierarchy levels and influencing the placement of new data samples. Fig. HUMAP: Hierarchical Uniform Manifold Approximation and Projection illustrates this: the highest hierarchy level (top-level) of HUMAP resembles the lowest hierarchy level and maintains the structures of chosen subsets (B). In contrast to single-level techniques (A), it summarizes the data while providing information on the connections between and within clusters."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Hierarchical Uniform Manifold Approximation and Projection (HUMAP)",
"text": "Two main components of our HDR technique are Hierarchy construction and Projection (Fig. 1 ###reference_###). In the former, we use a similarity measure between landmarks to create a tree-like structure on the high-dimensional dataset. In the latter, we incorporate the hierarchy levels in response to the user\u2019s demand for more specific data. All the steps (A\u2013G) in this process are shown in Fig. 1 ###reference_###.\nThe first step in building a hierarchy from bottom to top is to use a kernel function to determine the connection strengths (local affinities) of a -nearest neighbor graph of data points in the high-dimensional space (step A). Then, similar to previous research [36 ###reference_b36###], we employ the Finite Markov Chain (FMC) to identify the most visited nodes, which consist of the landmarks for the higher level (step C). Steps (D) and (E) of the FMC process are used to encode local and global neighborhood information for each landmark as well as to build a neighborhood structure for high hierarchy levels (). In order to define a new hierarchy level, a new neighborhood graph is created in step (F) using the computed similarity (step G).\nFor projecting hierarchy levels, the neighborhood graph is first symmetrized (step H), so each edge\u2019s strength helps in finding a suitable position in the low-dimensional representation. For the purpose of maintaining mental maps, the projection of lower levels, with the exception of the top hierarchy level, is influenced by the low-dimensional positions (I) of data points in higher levels, which we discuss in Section 2.3.2 ###reference_.SSS2###.\nWe use a few strategies from UMAP [28 ###reference_b28###] to build our approach, including the kernel function to calculate the connection strengths among data samples and the embedding strategy and concepts from HSNE [36 ###reference_b36###] for landmark selection. 
We concentrate on the HUMAP components for hierarchical dimensionality reduction in the following section."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Landmark selection",
"text": "DR methods frequently use -nearest neighbor graphs to capture manifolds in high-dimensional spaces. The path formed by the vertices in the -nearest neighbor graph is used to represent the distance between two arbitrary points in the high-dimensional space in this process, which approximates global distances by adding up local relations [30 ###reference_b30###]. Therefore, the first step in many DR techniques that can find manifolds in data is to fit a -nearest neighbor graph [26 ###reference_b26###, 28 ###reference_b28###, 30 ###reference_b30###].\nAfter finding the -nearest neighbor graph, a kernel function defines the strength (or probability) of every edge in the graph. Here, we adopted UMAP\u2019s [28 ###reference_b28###] kernel function for two data points (the high-dimensional dataset):\nwhere is the euclidean distance between the data points and , is the euclidean distance of to its closest neighbor, and is a kernel value that depends on the number of neighbors , according to the following equation [28 ###reference_b28###],\nNotice that is unknown. However, by fixing a value for , we can use Equation 2 ###reference_### to compute it using a binary search [28 ###reference_b28###]. That is, for each data point (matrix row), Equation 1 ###reference_### is plugged into Equation 2 ###reference_### and a binary search is used to find the best value given the number of nearest neighbors (). Thus, the value for that produces a closest to the actual is selected. This procedure ensures a different kernel value () for each data point and its neighborhood. For a fixed , higher values encode dense neighborhoods while lower values encode sparse neighborhoods. In other words, encodes how close the data points are in the neighborhood of . 
Lastly, the parameter gives a locally adaptive kernel for each data point and ensures a good topological representation of high-dimensional data [28 ###reference_b28###].\nThe connection strength between each neighbor is determined by computing for each pair of data points in accordance with the neighborhood density (see Equation 1 ###reference_###). The dataset is summarized at higher hierarchical levels using good representative data points (or landmarks) based on the strengths of these connections. For this task, we use random walks on a Finite Markov Chain (FMC), as in Pezzotti et al.\u2019s work [36 ###reference_b36###]. A random walk is the path from an initial state to a final state after steps (or hops) on the neighborhood graph, and it is defined by its length. As a result, we compute the following equation to generate the transition probabilities for the FMC from :\nwhere data points and are in the neighborhood of (). Then, the landmarks are sampled from by a simple Markov Chain Monte Carlo technique [12 ###reference_b12###]. For each data point , we start random walks with length . We empirically evaluated that results in embeddings that faithfully represent the underlying data structure. This whole process, from kernel smoothing to landmark selection, is represented by the steps from (A) to (C) in Fig. 1 ###reference_###.\nUnlike previous approaches [36 ###reference_b36###, 22 ###reference_b22###], users have control over the number of data points in each hierarchy level, making it easier to experiment with datasets with varying characteristics (e.g., cluster density)\u2014though other threshold-based approaches can be used [36 ###reference_b36###]. By specifying the number of landmarks on level (), users control the number of data points in a hierarchy level. We store the random walk endpoints and choose the most frequented data points as landmarks during the Monte Carlo simulation."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Dissimilarity definition",
"text": "When building the hierarchy, the hierarchical level encodes information of the level below (). This ensures that the information at the top-level accurately represents the information in the whole dataset. Therefore, for each subsequent hierarchy level, we transfer the manifold and structure relationships from bottom to top. For the first hierarchical level, the similarity between two data points and is computed using Equation 1 ###reference_### based on euclidean distance (), as in Fig. 1 ###reference_### (A). For higher levels, the similarities among landmarks in follow the data organization of . Such similarities are computed based on the -nearest neighbor graph of .\nOn level , for two landmarks, and , we compute their similarity using two components: the intersection of the global and local neighborhoods. Fig. 1 ###reference_### (D-F) illustrates these two components for two landmarks in green. The local neighborhood (in pink) consists of the k-nearest neighbors as usual, while the global neighborhood (computed only for ) is found through random walks (in blue). Two landmarks that share more local neighbors are very close in the high-dimensional space since there are single data points that connect them (for landmarks and , there is a data point in the neighborhood of and ). Landmarks sharing global neighbors have global relationships since there is a path between these landmarks. In Fig. 1 ###reference_###, the landmarks in green only share a global relationship, highlighted in gray between steps (E) and (F).\nWe use random walks to compute the global neighborhood (the blue relationship in Fig. 1 ###reference_###) similarly to the HSNE [36 ###reference_b36###] approach. In this case, we start random walks of length from each non-landmark () on the -nearest neighbor graph of . Then, when a random walk reaches a landmark, , we add the non-landmark () to the \u201crepresentation neighborhood\u201d of landmark . 
Notice that such a representation neighborhood (RNH) is created only for similarity computation.\nThe local neighborhood component adapts our approach to preserve more of the local structures (see the pink relationship in Fig. 1 ###reference_###). To this end, we use the -nearest neighbors (NH) of each landmark to augment their representation neighborhoods with the first nearest neighbors of , where is a threshold between and . Thus, the greater , the more important the -nearest neighbors, and the similarity measure will encode more local relations. HUMAP supports parameter tuning to perform better on neighborhood preservation. The similarity between two landmarks consists of the intersection between their global and local neighborhoods. The local neighborhood for a landmark is in the set of its nearest neighbors, so one can increase it by adjusting the parameter that adds neighbors to the local neighborhood\u2014by default, is set to . We evaluated HUMAP with different values of and found a clear relationship between local neighborhoods and neighborhood preservation.\nFinding these two components results in the representation neighborhood (RNH) for each landmark on level . Such a neighborhood consists of a sparse matrix of dimensions where is if RNH() or , otherwise. The similarity among landmarks is computed based on the intersection of representation neighborhoods, that is, the shaded area of Fig. 1 ###reference_### (E-F). We compute the intersection with the matrices as follows:\nwhere is a normalization factor that ensures similarity between and \u2014notice that since is sparse, computing the above matrix multiplication is fast in practice. Note that the intersection between neighborhoods is also used in HSNE to compute the similarity between two landmarks. However, unlike in HSNE, we do not directly employ such a similarity measure to compute the embeddings. 
Instead, HUMAP extends the neighborhood to encode global information and uses UMAP\u2019s kernel to maintain its properties throughout the hierarchy.\nThe output of the above operation is a similarity measure, meaning that a value equal to 1 corresponds to landmarks sharing the same representation neighborhood, as illustrated by the operation (F) in Fig. 1 ###reference_###. Thus, we transform it into a dissimilarity measure by subtracting the similarity value from 1. Besides letting us benefit from UMAP\u2019s kernel function (Equation 1 ###reference_###), this process of creating the neighborhood using the Markov Chain eliminates the necessity of performing another kNN step: using the dissimilarity, we can simply apply the same kernel function on every hierarchical level."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Generating the embedding",
"text": "Having the dissimilarity values (), where is a matrix with as rows, we are able to use them in Equation 1 ###reference_### to generate the matrix for each hierarchical level. Then, HUMAP, as in UMAP [28 ###reference_b28###] technique, applies the following matrix symmetrization to produce an undirected weighted graph:\nThe matrix is used to produce an embedding that converges to positions in a low-dimensional space. While we refer to McInnes et al. [28 ###reference_b28###] for a detailed description of the UMAP algorithm, it is important to mention that an initial low-dimensional representation of the dataset is created using Spectral Embedding [32 ###reference_b32###]. Then, is used to reposition the data points using a force-directed graph layout whose convergence is performed by decreasing attractive and repulsive forces (given by ) using Stochastic Gradient Descent [39 ###reference_b39###]."
},
{
"section_id": "2.3.1",
"parent_section_id": "2.3",
"section_name": "2.3.1 Subset projection",
"text": "Each landmark represents a set of data points on the level below when hierarchical levels are defined. It is crucial to specify the functionality of interaction with hierarchical levels and request more detail (more data points) of a data subset in order to keep track of such representation.\nEach non-landmark data point on level is linked up with a landmark on level in two steps. The first step entails going through the landmarks iteratively and designating which of their neighbors will be influenced by them. A landmark will eventually attempt to \u201drepresent\u201d a data point that has already been represented. As a result, we designate the data point as being a part of the area surrounding the closest landmark. The majority of data points are assigned to a landmark by this process, but it is still necessary to look for instances where data points are not in any landmark\u2019s neighborhood.\nIn this case, we iterate over each data point on to make sure that it is not associated with a landmark in case we want to search its neighborhood. This procedure is described in detail in Algorithm 1 ###reference_###. We examine the neighborhood of each non-landmark (Lines 2 to 5). If a neighbor, , is a landmark (Line 6), we set to represent the current data point (Lines 7 and 8). If is not a landmark but is associated with one (Line 9), the landmark that represents will also represent the current data point (Lines 10 and 11).\nFinally, we begin a depth-first search in the -nearest neighbor graph and associate the first landmark found with the current data point if neither of these scenarios holds true for all neighborhoods.\nWith the data points assigned to each landmark, users can select a list of indices on level to embed the associated data points on . Thus, supposing the list of indices in is denoted by , we only have to create the matrix , where returns the landmark that represents the data point ."
},
{
"section_id": "2.3.2",
"parent_section_id": "2.3",
"section_name": "2.3.2 Mental map preservation",
"text": "When updates are made on the current dataset at time , the concept of mental map preservation entails maintaining the same overall layout shape and point positioning [2 ###reference_b2###, 11 ###reference_b11###]\u2014these updates may involve new data points or modifications to the features of existing data points. In the case of interacting with hierarchical dimensionality reduction results, the updates correspond to new data points from lower hierarchy levels being projected.\nThe subsets of data chosen for fine-grained exploration on level during hierarchical exploration must resemble the organization of level , especially for unlabeled datasets. Rauber et al. [38 ###reference_b38###] and Fujiwara et al. [38 ###reference_b38###] also present strategies to deal with mental map preservation in the context of dimensionality reduction, even though the majority of the work on mental map preservation is limited to dynamic graph drawing [50 ###reference_b50###, 24 ###reference_b24###]. In both situations, the goal is to ease the analysis burden during exploration and facilitate smoother embedding layout changes. Fig. 2 ###reference_### shows successive hierarchical projections using HUMAP, whether or not the mental map is preserved. Note that the embedding generated with no mental-map preservation (i.e., ) might deceive the user in thinking that the overall organization of the layout was preserved.\n###figure_2### HUMAP uses the coordinates of higher\u2014and already projected\u2014data points on level to guide the positioning of level -1. When initializing the low-dimensional representation before Stochastic Gradient Descent (SGD) optimization, the coordinates for the projected data points in are used as starting points together with the remaining data points initialized with Spectral Embedding\u2014these coordinates only move a fraction during SGD optimization. 
Thus, we also provide this fraction hyperparameter to be tuned according to one\u2019s needs, although we empirically find that yields embeddings that best preserve the mental map throughout the hierarchy expansion.\n###figure_3###"
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Case study",
"text": "This section aims to investigate the relationship of tweets about COVID-19 symptoms in the S\u00e3o Paulo state, Brazil [27 ###reference_b27###]. We use HUMAP to explore it using two hierarchical levels, aiming at discovering dominant structures and detailed information about these dominant structures through interaction. The dataset was scraped from Twitter\u00ae by querying COVID-19 symptoms (fever, high fever, cough, dry cough, difficulty breathing, and shortness of breath) in the territory of S\u00e3o Paulo state (Brazil) from March 2020 to August 2020. The authors classified the tweets according to their relevancy (relevant or not relevant) to COVID-19 infection. For this case study, we set HUMAP to freely find the embeddings as we drill-down the hierarchy by not projecting the low-levels bases on the higher levels.\nTo explore the dataset, we manually defined clusters in the visual space using lasso selection, as shown in Fig. 3 ###reference_###(A-B). After associating each data point to a cluster using this procedure, we compute topics (F) and proceed to the second and lowest level of the hierarchy, choosing the desired cluster (e.g., cluster ). Then, we also compute the topics for the manually defined cluster of the new hierarchical level.\n###figure_4### Fig. 3 ###reference_###(B) shows three clusters with different characteristics. First, cluster is very cohesive and dissimilar from the other two dominant structures. Second, due to visual proximity, cluster and share some information but contain various substructures that need further investigation. The topics for these three major clusters further explain their organization in the visual space (F). That is, cluster presents important terms related to respiratory problems, such as stuffy nose, sneeze, or splashing. 
Other important terms in this cluster can indicate tweets about individuals waiting (wait) for COVID-19 exams or are associated with a desire to sneeze, which supports a hypothesis that this cluster corresponds to individuals worrying about symptoms. Cluster shows three terms in the topic associated with fever: fever, burning, and cold. Other terms such as think, woke, and night might be associated with phrases describing experiences with COVID-19 symptoms, such as: \u201cI think I have fever\u201d, \u201cToday I woke-up in the middle of the night burning in fever\u201d. Lastly, cluster is associated with dry cough and shortness of breath. The terms for this cluster in Fig. 3 ###reference_###(F) show that the tweets talk about these symptoms while the term anxiety could be the cause of shortness of breath [44 ###reference_b44###].\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### We proceed to cluster to investigate its substructures and retrieve more information about the tweets associated with it. Then, we define the clusters and compute the topics, as shown in Fig. 3 ###reference_### (C) and (F). Here, there are a few interesting patterns. The first one is that the previous hierarchy level sufficiently gives an overview of the data organization since the topic for cluster encodes most of the information expressed in these sub-clusters. The second and most interesting aspect is the global relationship among these structures apparent in the embedding. The topics retrieved from each local structure explain this aspect. The leftmost cluster () shows terms related to dry cough, throat pain, and a few other important terms. Cluster has a relationship with cluster , which also adds difficulty breathing, body pain, and headache (\u201cpain in the head\u201d, using a direct translation from Portuguese). In cluster , shortness of breath, anxiety, crisis, and breast become more important. 
These symptoms might easily be confused with an anxiety crisis, a common problem during the COVID-19 pandemic [7 ###reference_b7###]. The last cluster () is the most related to COVID-19 symptoms, showing the majority of terms: shortness of breath, fever, and cough.\n###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### Analyzing the substructures of cluster , we define three clusters as shown in Fig. 3 ###reference_### (E). As indicated by the higher-level cluster, all data points refer mainly to the COVID-19 symptom of fever. However, there are a few characteristics that might explain the differentiation of these clusters. For example, cluster presents terms related to cough and shortness of breath, which do not appear in the other clusters. Lastly, cluster has an interesting characteristic since its topic suggests that individuals are commenting on the period in which they experienced fever: fever, days, was, today, and home. The most cohesive cluster of the top level () and its two subclusters ( and ), revealed by drilling down the hierarchy, show tweets that seem to talk more about daily aspects than about true COVID-19 symptoms.\n###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### Now, to better inform readers about the differences between traditional and hierarchical approaches, we present an analysis of the same dataset using UMAP."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Comparison to UMAP",
"text": "Fig. 4 ###reference_### shows and annotated UMAP projection of the same dataset in an intent to showcase the differences between analysis using hierarchical and one-level dimensionality reduction techniques.\n###figure_24### The first thing to notice is that the layout generated by UMAP is somewhat similar to HUMAP\u2019s highest level in Fig. 3 ###reference_###. Second, the overall structure of topics shortness of breath, cough, and fever is also seen and dictates the relationship among clusters\u2014from the top (blue cluster) to the bottom (gray cluster).\nHierarchical approaches allows to discover different relationship that would require more expertize and time to tune hyperparameters if traditional approaches were employed. Comparing the two types of analysis, hierarchical exploration enables identifying fine grained information containing in datasets.\n###figure_25###"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Numerical evaluation",
"text": "In this section, we numerically evaluate HUMAP222https://github.com/wilsonjr/humap and compare it against existing HDR techniques: Hierarchical Stochastic Neighbor Embedding (HSNE)333https://github.com/biovault/nptsne [36 ###reference_b36###], Multiscale PHATE444https://github.com/KrishnaswamyLab/Multiscale_PHATE [22 ###reference_b22###], and UMAP555https://github.com/lmcinnes/umap [28 ###reference_b28###]. We cannot control the number of data points in each hierarchy level in HSNE and Multiscale PHATE. Thus, we generated HSNE with three levels and fit a HUMAP hierarchy with three levels based on the number of data points in each HSNE level. Multiscale PHATE does not accept a parameter to specify the number of levels. After fitting the Multiscale PHATE hierarchy, we searched for a level with a size similar to the top-level produced by HUMAP and HSNE techniques. Finally, we evaluate UMAP to demonstrate its differences from HUMAP regarding mental map and structure preservation by projecting the same data points generated in the HUMAP hierarchy.\nWe use the following datasets for evaluation. MNIST [23 ###reference_b23###] is a dataset composed of pixel grayscale images of handwritten digits classified into ten classes (from to ), where each flattened image results in a dimensional vector. Fashion MNIST [49 ###reference_b49###] (FMNIST) is a dataset composed of pixel grayscale images of fashion items (clothing, footwear, and bags) divided into ten classes; each flattened image results in a dimensional vector. Mammals is a synthetic dataset designed to have four well-separated classes, and it consists of data points described by dimensions. Embryoid Body (EB) is a single-cell RNA sequencing dataset for embryoid body data generated over days [31 ###reference_b31###] divided into five periods. For this dataset, we aim to visualize the development of the cells after preprocessing them and using the first 50 principal components to perform the projections. 
Note that the FMNIST, MNIST, and Mammals datasets are meant to produce cohesive clusters after embedding, while the ideal result for Embryoid Body is to provide an understanding of its continuous structures\u2014this is an attempt to understand how HUMAP compares to different techniques regarding their best-known characteristics, that is, clustering data (HSNE, derived from SNE) and emphasizing continuity (Multiscale PHATE, derived from PHATE). The experiments were performed on a computer with the following configuration: Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz, 32 GB RAM, Ubuntu 64 bits, NVIDIA GeForce GTX 1660 Ti 22 GB.\nFig. 5 ###reference_### depicts the embeddings for the hierarchy levels. HSNE shows a good relationship between clusters in the top-level embedding but fails to maintain that structure in consecutive embeddings (levels 1 and 0). The continuous nature of the biological data is not shown for the EB dataset. In sum, one significant problem is that the mental map cannot be maintained throughout the hierarchy.\nMultiscale PHATE was unable to produce embeddings for the FMNIST and MNIST datasets due to their high dimensionality. Despite revealing the continuous nature of the EB dataset, it produced embeddings for the mammals dataset that made analysis challenging because the data points for each cluster were too close to one another. On the other hand, HUMAP maintains the mental map through all levels of the hierarchy by encoding the data from lower levels at higher levels. The main challenge for Multiscale PHATE is scaling with respect to the number of dimensions.\nFinally, UMAP only reveals the correct overview for the simpler datasets on level 2 (e.g., mammals and MNIST). UMAP is unable to uncover the structures displayed at the lowest level for the EB and FMNIST datasets. This outcome was anticipated because UMAP\u2019s primary objective is not to transfer the relationship between data points to the projection sample. 
HUMAP achieves the ability to successfully encode datasets with fewer data points by projecting higher levels based on relationships at lower levels.\n###figure_26### For the numerical evaluation, we compare the techniques using well-known and traditional quality metrics for embedding evaluation (e.g., Trustworthiness and Continuity in the Supplementary File) as well as the metrics that appear in the related works, such as DeMaP [30 ###reference_b30###] and Neighborhood Preservation [35 ###reference_b35###], which aim to evaluate how well the low-dimensional embeddings represent the structures in higher dimensions."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Running-time and Scalability",
"text": "Fig. 6 ###reference_### displays boxen-plots in log scale for runtime execution in seconds in order to fit the hierarchy and embed levels 2 and 0. HUMAP offers comparable runtime execution to HSNE in GPU, with the exception of level 0, making it a promising strategy for users with limited resources when reasonable runtime execution is required. When fitting the hierarchy, HUMAP takes slightly longer than HSNE to run on the CPU, but it is quicker when embedding the hierarchy levels. When many data points are embedded, Multiscale PHATE seems unreasonable for interactive applications (for example, level 0 of mammals and EB datasets). Finally, the difference between HUMAP and UMAP narrows as we approach to the size of the whole dataset."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Neighborhood Preservation",
"text": "Fig. 7 ###reference_### depicts the neighborhood preservation (NP) for a variable number of neighbors, (). Such a metric calculates the mean ratio of neighbors preserved in the projection.\nThe mammals dataset is intended to have clearly defined classes, whereas the EB dataset is intended for continuity analysis. Because it does not just concentrate on local structures when projecting such datasets, HUMAP achieves a lower NP. The characteristics of SNE are inherited by HSNE, which exhibits a higher NP for cluster separation (see the EB projections, for example).\n###figure_27### We evaluated HUMAP using various values of (which controls the local neighbors\u2019 contribution to the similarity among landmarks), and the results demonstrated the link between local neighborhoods and neighborhood preservation. In addition, for the MNIST dataset at the top hierarchy levels, HUMAP outperforms HSNE by a considerable margin with the addition of only of the current neighborhood (100 neighbors). Similar data points are clustered by HUMAP as a result of the parameter; interested readers can visualize the analyses and projections for this scenario in the Suppl. File (Figs. 3 and 4)."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "DEMaP",
"text": "We employ the DEMaP metric [30 ###reference_b30###], which calculates the Spearman correlation between geodesic distances on high-dimensional and euclidean distances on low-dimensions, to assess how well the techniques convey manifolds, clusters, and other high-dimensional space structures. On level 2, HUMAP outperformed HSNE, UMAP, and GPU-based HSNE for the MNIST and FMNIST datasets, as shown in Fig. 8 ###reference_###. The pair of distributions (HUMAP, UMAP) for the EB, FMNIST datasets on level 2 and for the EB dataset on level 0 are not statistically different after a t-test, but the others are with a p-value of at least 0.00001 (see Supplementary File - Table 2 for the details). When the entire dataset is embedded for the mammals dataset, HUMAP displays higher values, providing proof that our method is stable across hierarchical levels. While HSNE and Multiscale PHATE produce better results at the top of the hierarchy, they omit key details as one descends hierarchy\u2014the relationship among clusters is lost on the HSNE side. Finally, HUMAP conveys the continuous structures present in the EB dataset even for the top-level embedding while M. PHATE and HSNE CPU show higher DEMaP values at level 0.\n###figure_28### Fig. 5 ###reference_### demonstrates the mental map preservation across hierarchical levels. In particular for unlabelled datasets, the preservation of the mental map is crucial to avoid misleading users and increasing cognitive load during exploration; this characteristic also holds for subsets of data points, which we address in the following section. According to the DEMaP metric, the fixing term for maintaining the mental map lowers the quality of the entire projection (level 0). We restrict the flexibility of the optimization algorithm when positioning the data points when we set the embedding of lower hierarchy levels to follow the pattern of higher hierarchy levels. 
In the Supplementary File (Section 5), we provide the same analysis with the optimization algorithm unconstrained (), which results in higher DEMaP values."
},
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "Projecting subsets of data",
"text": "In this section, the methods are compared according to how well they can project subsets of data points. Since there is no way to drill down a hierarchy based on classes of data points, we do not analyze Multiscale PHATE. Fig. 9a ###reference_sf1### illustrates the same pattern as Fig. 5 ###reference_###, in which HUMAP conveys top-level structures and maintains the mental map throughout the hierarchy. In this case, we project a four-level hierarchy, adding complexity to the analysis. In Fig. 9a ###reference_sf1###, the first level\u2019s subset was lasso-selected, and the subsequent levels\u2019 subsets were chosen according to specific classes.\n###figure_29### ###figure_30### In all scenarios, as shown in Fig. 9b ###reference_sf2###, HUMAP has a higher DEMaP than HSNE, and it only falls short of UMAP on the fourth level for the MNIST dataset\u2014notice that UMAP by itself does not take the relationship between higher levels into account for projection."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.5",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "Other quantitative evaluations",
|
| 93 |
+
"text": "After dimensionality reduction, additional quality metrics can be used to assess the final embedding. In order to broaden the analysis performed in this paper, another set of comparisons are presented in the Supplementary File. In the Runtime analysis, we carefully examine how each HUMAP step is impacted by the dataset size and dimensionality as well as how it compares against the alternatives. We demonstrate in the Reproducibility analysis that HUMAP can be just as effective as UMAP in terms of the reproducibility of subsequent runs. Finally, we also performed a parameter analysis to comprehend HUMAP\u2019s behavior for various parameter configurations."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Discussion",
|
| 99 |
+
"text": "The experiments demonstrate that HUMAP is competitive with GPU-based methods in terms of running time execution while outperforming current approaches in their ability to represent structures found in high-dimensional spaces. It also preserves the relationships among clusters and other structures at various levels of the hierarchy. Since HUMAP preserves the overall structures of higher hierarchy levels at lower hierarchy levels, it is valuable ideal tool for progressive analysis [10 ###reference_b10###]. Here, we go over a few of the features of our method."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusion and Future Work",
|
| 105 |
+
"text": "For the analysis of high-dimensional data, DR techniques are excellent tools. Traditional methods, however, are unable to reveal substructures while giving a dataset\u2019s overview. Hierarchical DR approaches offer analysis that adheres to the mantra of visualization, in which analysts concentrate on crucial information and retain details as needed. However, the hierarchical approaches described in the literature either cannot be applied to a wide range of dataset types or do not preserve the mental map throughout the hierarchy levels.\nWe introduced HUMAP, a novel hierarchical DR technique that provides a viable alternative to high-dimensional data analysis and includes tunable parameters that make it simple to focus on global or local neighborhood preservation while also maintaining the mental map.\nFuture research will expand on our technique to examine distance distance metrics between landmarks and data points. Additionally, we intend to implement GPU versions to enable users to delve deeper into HUMAP\u2019s capabilities. Finally, we intend to conduct user experiments on the significance of mental map preservation for hierarchical approaches and investigate novel strategies to evaluate hierarchical DR techniques."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.2\">Size</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.3\">Dimensions</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.1\">Mammals</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.1.2.1.2\">20000</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" id=\"S4.T1.1.2.1.3\">72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.3.2.1\">Embryoid Body (EB)</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.1.3.2.2\">31000</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right\" id=\"S4.T1.1.3.2.3\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.4.3.1\">MNIST</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.1.4.3.2\">70000</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right\" id=\"S4.T1.1.4.3.3\">784</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.1.5.4.1\">FMNIST</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.1.5.4.2\">70000</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_bb\" id=\"S4.T1.1.5.4.3\">784</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Datasets used for 
experimentation.</figcaption>\n</figure>",
|
| 112 |
+
"capture": "Table 1: Datasets used for experimentation."
|
| 113 |
+
}
|
| 114 |
+
},
|
| 115 |
+
"image_paths": {
|
| 116 |
+
"1": {
|
| 117 |
+
"figure_path": "2106.07718v4_figure_1.png",
|
| 118 |
+
"caption": "Figure 1: The hierarchy is built from the bottom up. First, the connection strength between data points in the high-dimensional space is determined using a k-nearest neighbor graph and a kernel function (A). After several random walk steps, the graph\u2019s structure and the strength of the connections are used to determine which nodes have been visited the most (B). The landmarks correspond to the points in the higher hierarchy level (C). To repeat the same procedure for high hierarchy levels, we calculate the intersection of representation neighborhoods (E), which were formed by joining local and global neighborhoods (D). With the exception of the first hierarchy level (whole dataset), we compute the k-nearest neighbors using a sorting algorithm (F). We employ a modified UMAP [28] optimization for projecting hierarchy levels (or subsets of them). Finally, the graph is symmetrized (H) and coordinates of projected points influence the positioning subsequent levels (I).",
|
| 119 |
+
"url": "http://arxiv.org/html/2106.07718v4/x2.png"
|
| 120 |
+
},
|
| 121 |
+
"2": {
|
| 122 |
+
"figure_path": "2106.07718v4_figure_2.png",
|
| 123 |
+
"caption": "Figure 2: Hierarchical exploration with HUMAP using mental map.",
|
| 124 |
+
"url": "http://arxiv.org/html/2106.07718v4/extracted/5884757/figures/mental-map-2.png"
|
| 125 |
+
},
|
| 126 |
+
"3": {
|
| 127 |
+
"figure_path": "2106.07718v4_figure_3.png",
|
| 128 |
+
"caption": "Figure 3: HUMAP exploration and annotation of a document collection of COVID-19 tweets. The top-level hierarchy level shows unlabeled data points and three major structures (A). We annotate these three clusters (B) and compute their topics computed (F). For each cluster in (B), we also project their corresponding level (and final) hierarchy to look for other patterns, annotating the dataset (C, D, E) and computing their topics.",
|
| 129 |
+
"url": "http://arxiv.org/html/2106.07718v4/x3.png"
|
| 130 |
+
},
|
| 131 |
+
"4": {
|
| 132 |
+
"figure_path": "2106.07718v4_figure_4.png",
|
| 133 |
+
"caption": "Figure 4: Manually annotated UMAP projection with cluster topics.",
|
| 134 |
+
"url": "http://arxiv.org/html/2106.07718v4/x24.png"
|
| 135 |
+
},
|
| 136 |
+
"5": {
|
| 137 |
+
"figure_path": "2106.07718v4_figure_5.png",
|
| 138 |
+
"caption": "Figure 5: Visual analysis of the embeddings generated for top and lowest hierarchical levels using a three-level hierarchy. For each dataset, top-level embedding appears on the left, and the lowest level (whole dataset) appears on the right.",
|
| 139 |
+
"url": "http://arxiv.org/html/2106.07718v4/x25.png"
|
| 140 |
+
},
|
| 141 |
+
"6": {
|
| 142 |
+
"figure_path": "2106.07718v4_figure_6.png",
|
| 143 |
+
"caption": "Figure 6: Runtime execution in seconds using a log10 scale. The boxen-plots show the runtime for 20202020 runs of each technique to fit the hierarchy (Hierarchy fit), to embed the top level (Level 2), and to embed the whole dataset (Level 0).",
|
| 144 |
+
"url": "http://arxiv.org/html/2106.07718v4/x26.png"
|
| 145 |
+
},
|
| 146 |
+
"7": {
|
| 147 |
+
"figure_path": "2106.07718v4_figure_7.png",
|
| 148 |
+
"caption": "Figure 7: Neighborhood Preservation after embedding on \u211d2superscript\u211d2\\mathbb{R}^{2}blackboard_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. HSNE outperforms HUMAP by a great margin only for EB dataset.",
|
| 149 |
+
"url": "http://arxiv.org/html/2106.07718v4/x27.png"
|
| 150 |
+
},
|
| 151 |
+
"8": {
|
| 152 |
+
"figure_path": "2106.07718v4_figure_8.png",
|
| 153 |
+
"caption": "Figure 8: Evaluation of techniques\u2019 ability to represent complex structures such as clusters, manifolds, and other relationships.",
|
| 154 |
+
"url": "http://arxiv.org/html/2106.07718v4/x28.png"
|
| 155 |
+
},
|
| 156 |
+
"9(a)": {
|
| 157 |
+
"figure_path": "2106.07718v4_figure_9(a).png",
|
| 158 |
+
"caption": "(a)\nFigure 9: Using HUMAP, UMAP, and HSNE to project a subset of data.",
|
| 159 |
+
"url": "http://arxiv.org/html/2106.07718v4/extracted/5884757/figures_revision2/revised-drill-down.png"
|
| 160 |
+
},
|
| 161 |
+
"9(b)": {
|
| 162 |
+
"figure_path": "2106.07718v4_figure_9(b).png",
|
| 163 |
+
"caption": "(b)\nFigure 9: Using HUMAP, UMAP, and HSNE to project a subset of data.",
|
| 164 |
+
"url": "http://arxiv.org/html/2106.07718v4/x29.png"
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
"validation": true,
|
| 168 |
+
"references": [
|
| 169 |
+
{
|
| 170 |
+
"1": {
|
| 171 |
+
"title": "Efficient hierarchical-pca dimension reduction for hyperspectral\nimagery.",
|
| 172 |
+
"author": "A. Agarwal, T. El-Ghazawi, H. El-Askary, and J. Le-Moigne.",
|
| 173 |
+
"venue": "In 2007 IEEE International Symposium on Signal Processing and\nInformation Technology, pp. 353\u2013356, 2007. doi: 10\u2006.\u20061109/ISSPIT\u2006.\u20062007\u2006.\u20064458191",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"2": {
|
| 179 |
+
"title": "Animation, small multiples, and the effect of mental map preservation\nin dynamic graphs.",
|
| 180 |
+
"author": "D. W. Archambault, H. C. Purchase, and B. Pinaud.",
|
| 181 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n17:539\u2013552, 2011.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"3": {
|
| 187 |
+
"title": "Dimensionality reduction for visualizing single-cell data using umap.",
|
| 188 |
+
"author": "E. Becht, L. McInnes, J. Healy, C.-A. Dutertre, I. Kwok, L. G. Ng, F. Ginhoux,\nand E. W. Newell.",
|
| 189 |
+
"venue": "Nature Biotechnology, 37:38\u201344, 2018.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"4": {
|
| 195 |
+
"title": "Semi-supervised learning with interactive label propagation guided by\nfeature space projections.",
|
| 196 |
+
"author": "B. C. Benato, A. C. Telea, and A. X. Falc\u00e3o.",
|
| 197 |
+
"venue": "In 2018 31st SIBGRAPI Conference on Graphics, Patterns and\nImages (SIBGRAPI), pp. 392\u2013399, 2018. doi: 10\u2006.\u20061109/SIBGRAPI\u2006.\u20062018\u2006.\u200600057",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"5": {
|
| 203 |
+
"title": "Coarse graining of data via inhomogeneous diffusion condensation.",
|
| 204 |
+
"author": "N. Brugnone, A. Gonopolskiy, M. W. Moyle, M. Kuchroo, D. V. Dijk, K. R. Moon,\nD. A. Col\u00f3n-Ramos, G. Wolf, M. J. Hirn, and S. Krishnaswamy.",
|
| 205 |
+
"venue": "2019 IEEE International Conference on Big Data (Big Data), pp.\n2624\u20132633, 2019.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"6": {
|
| 211 |
+
"title": "A Generic Model for Projection Alignment Applied to Neural Network\nVisualization.",
|
| 212 |
+
"author": "G. D. Cantareira and F. V. Paulovich.",
|
| 213 |
+
"venue": "In C. Turkay and K. Vrotsou, eds., EuroVis Workshop on Visual\nAnalytics (EuroVA). The Eurographics Association, 2020. doi: 10\u2006.\u20062312/eurova\u2006.\u200620201089",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"7": {
|
| 219 |
+
"title": "Depression, anxiety, and lifestyle among essential workers: A web\nsurvey from brazil and spain during the covid-19 pandemic.",
|
| 220 |
+
"author": "R. B. De Boni, V. Balanz\u00e1-Mart\u00ednez, J. C. Mota, T. D. A. Cardoso,\nP. Ballester, B. Atienza-Carbonell, F. I. Bastos, and F. Kapczinski.",
|
| 221 |
+
"venue": "J Med Internet Res, 22(10), Oct 2020. doi: 10\u2006.\u20062196/22835",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"8": {
|
| 227 |
+
"title": "Topic-based coordination for visual analysis of evolving document\ncollections.",
|
| 228 |
+
"author": "D. M. Eler, F. V. Paulovich, M. C. F. d. Oliveira, and R. Minghim.",
|
| 229 |
+
"venue": "In 2009 13th International Conference Information\nVisualisation, pp. 149\u2013155, 2009. doi: 10\u2006.\u20061109/IV\u2006.\u20062009\u2006.\u200631",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"9": {
|
| 235 |
+
"title": "Towards a quantitative survey of dimension reduction techniques.",
|
| 236 |
+
"author": "M. Espadoto, R. M. Martins, A. Kerren, N. S. T. Hirata, and A. C.\nTelea.",
|
| 237 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, pp.\n1\u20131, 2019. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062019\u2006.\u20062944182",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"10": {
|
| 243 |
+
"title": "An incremental dimensionality reduction method for visualizing\nstreaming multidimensional data.",
|
| 244 |
+
"author": "T. Fujiwara, J.-K. Chou, Shilpika, P. Xu, L. Ren, and K.-L. Ma.",
|
| 245 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n26:418\u2013428, 2020.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"11": {
|
| 251 |
+
"title": "Supporting analysis of dimensionality reduction results with\ncontrastive learning.",
|
| 252 |
+
"author": "T. Fujiwara, O.-H. Kwon, and K.-L. Ma.",
|
| 253 |
+
"venue": "IEEE Trans. Vis. and Comp. Graph., 26:45\u201355, 2019.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"12": {
|
| 259 |
+
"title": "Introduction to markov chain monte carlo.",
|
| 260 |
+
"author": "C. Geyer.",
|
| 261 |
+
"venue": "2011.",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"13": {
|
| 267 |
+
"title": "Visual feature fusion and its application to support unsupervised\nclustering tasks.",
|
| 268 |
+
"author": "G. M. Hilasaca and F. V. Paulovich.",
|
| 269 |
+
"venue": "Information Visualization, 19(2):163\u2013179, 2020.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"14": {
|
| 275 |
+
"title": "Focus+context exploration of hierarchical embeddings.",
|
| 276 |
+
"author": "T. H\u00f6llt, A. Vilanova, N. Pezzotti, B. Lelieveldt, and H. Hauser.",
|
| 277 |
+
"venue": "Computer Graphics Forum (Proceedings of EuroVis 2019), (3),\n2019. doi: 10\u2006.\u20061111/cgf\u2006.\u200613711",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"15": {
|
| 283 |
+
"title": "Cyteguide: Visual guidance for hierarchical single-cell analysis.",
|
| 284 |
+
"author": "T. H\u00f6llt, N. Pezzotti, V. van Unen, F. Koning, B. P. F. Lelieveldt,\nand A. Vilanova.",
|
| 285 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n24(1):739\u2013748, 2018.",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"16": {
|
| 291 |
+
"title": "Glimmer: Multilevel mds on the gpu.",
|
| 292 |
+
"author": "S. Ingram, T. Munzner, and M. Olano.",
|
| 293 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n15:249\u2013261, 2009.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"17": {
|
| 299 |
+
"title": "Hierarchical principal component analysis (pca) and projection to\nlatent structure (pls) technique on spectroscopic data as a data pretreatment\nfor calibration.",
|
| 300 |
+
"author": "K. Jann\u00e9, J. Pettersen, N.-O. Lindberg, and T. Lundstedt.",
|
| 301 |
+
"venue": "Journal of Chemometrics, 15(4):203\u2013213, 2001.",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"18": {
|
| 307 |
+
"title": "Local affine multidimensional projection.",
|
| 308 |
+
"author": "P. Joia, D. Coimbra, J. A. Cuminato, F. V. Paulovich, and L. G.\nNonato.",
|
| 309 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n17(12):2563\u20132571, 2011. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062011\u2006.\u2006220",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"19": {
|
| 315 |
+
"title": "Principal Component Analysis.",
|
| 316 |
+
"author": "I. Jolliffe.",
|
| 317 |
+
"venue": "Springer Verlag, 1986.",
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"20": {
|
| 323 |
+
"title": "Initialization is critical for preserving global data structure in\nboth t-sne and umap.",
|
| 324 |
+
"author": "D. Kobak and G. C. Linderman.",
|
| 325 |
+
"venue": "Nature biotechnology, 2021.",
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"21": {
|
| 331 |
+
"title": "Facetto: Combining unsupervised and supervised learning for\nhierarchical phenotype analysis in multi-channel image data.",
|
| 332 |
+
"author": "R. Krueger, J. Beyer, W. Jang, N. W. Kim, A. Sokolov, P. K. Sorger,\nand H. Pfister.",
|
| 333 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n26(1):227\u2013237, 2020.",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"22": {
|
| 339 |
+
"title": "Multiscale phate exploration of sars-cov-2 data reveals multimodal\nsignatures of disease.",
|
| 340 |
+
"author": "M. e. a. Kuchroo.",
|
| 341 |
+
"venue": "bioRxiv, 2020. doi: 10\u2006.\u20061101/2020\u2006.\u200611\u2006.\u200615\u2006.\u2006383661",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"23": {
|
| 347 |
+
"title": "MNIST handwritten digit database.",
|
| 348 |
+
"author": "Y. LeCun and C. Cortes.",
|
| 349 |
+
"venue": "2010.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"24": {
|
| 355 |
+
"title": "Dynamic animations of journal maps: Indicators of structural changes\nand interdisciplinary developments.",
|
| 356 |
+
"author": "L. Leydesdorff and T. Schank.",
|
| 357 |
+
"venue": "59(11):1810\u20131818, sep 2008.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"25": {
|
| 363 |
+
"title": "Eleven grand challenges in single-cell data science.",
|
| 364 |
+
"author": "D. L\u00e4hnemann, J. K\u00f6ster, and E. e. a. Szczurek.",
|
| 365 |
+
"venue": "Genome Biol, 31, 2020.",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"26": {
|
| 371 |
+
"title": "Visualizing data using t-sne.",
|
| 372 |
+
"author": "L. V. D. Maaten and G. E. Hinton.",
|
| 373 |
+
"venue": "Journal of Machine Learning Research, 9:2579\u20132605, 2008.",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"27": {
|
| 379 |
+
"title": "Contrastive analysis for scatter plot-based representations of\ndimensionality reduction.",
|
| 380 |
+
"author": "W. E. Marc\u00edlio-Jr, D. M. Eler, and R. E. Garcia.",
|
| 381 |
+
"venue": "arXiv e-prints, p. arXiv:2101.12044, Jan. 2021.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"28": {
|
| 387 |
+
"title": "Umap: Uniform manifold approximation and projection for dimension\nreduction.",
|
| 388 |
+
"author": "L. McInnes and J. Healy.",
|
| 389 |
+
"venue": "ArXiv, abs/1802.03426, 2018.",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"29": {
|
| 395 |
+
"title": "Towards perceptual optimization of the visual design of scatterplots.",
|
| 396 |
+
"author": "L. Micallef, G. Palmas, A. Oulasvirta, and T. Weinkauf.",
|
| 397 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n23(6):1588\u20131599, 2017. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062017\u2006.\u20062674978",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"30": {
|
| 403 |
+
"title": "Visualizing structure and transitions in high-dimensional biological\ndata.",
|
| 404 |
+
"author": "K. Moon, D. van Dijk, and Z. e. a. Wang.",
|
| 405 |
+
"venue": "Nat Biotechnol, pp. 1482\u20131492, 2019. doi: 10\u2006.\u20061038/s41587-019-0336-3",
|
| 406 |
+
"url": null
|
| 407 |
+
}
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"31": {
|
| 411 |
+
"title": "Embryoid body data for phate.",
|
| 412 |
+
"author": "Moon, Keving.",
|
| 413 |
+
"venue": "https://data.mendeley.com/datasets/v6n743h5ng/1, 2018.",
|
| 414 |
+
"url": null
|
| 415 |
+
}
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"32": {
|
| 419 |
+
"title": "On spectral clustering: Analysis and an algorithm.",
|
| 420 |
+
"author": "A. Y. Ng, M. I. Jordan, and Y. Weiss.",
|
| 421 |
+
"venue": "In ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, pp.\n849\u2013856. MIT Press, 2001.",
|
| 422 |
+
"url": null
|
| 423 |
+
}
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"33": {
|
| 427 |
+
"title": "Multidimensional projection for visual analytics: Linking techniques\nwith distortions, tasks, and layout enrichment.",
|
| 428 |
+
"author": "L. G. Nonato and M. Aupetit.",
|
| 429 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n25(8):2650\u20132673, 2019. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062018\u2006.\u20062846735",
|
| 430 |
+
"url": null
|
| 431 |
+
}
|
| 432 |
+
},
|
| 433 |
+
{
|
| 434 |
+
"34": {
|
| 435 |
+
"title": "Hipp: A novel hierarchical point placement strategy and its\napplication to the exploration of document collections.",
|
| 436 |
+
"author": "F. V. Paulovich and R. Minghim.",
|
| 437 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n14(6):1229\u20131236, 2008. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062008\u2006.\u2006138",
|
| 438 |
+
"url": null
|
| 439 |
+
}
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"35": {
|
| 443 |
+
"title": "Least square projection: A fast high-precision multidimensional\nprojection technique and its application to document mapping.",
|
| 444 |
+
"author": "F. V. Paulovich, L. G. Nonato, R. Minghim, and H. Levkowitz.",
|
| 445 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n14(3):564\u2013575, 2008. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062007\u2006.\u200670443",
|
| 446 |
+
"url": null
|
| 447 |
+
}
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"36": {
|
| 451 |
+
"title": "Hierarchical stochastic neighbor embedding.",
|
| 452 |
+
"author": "N. Pezzotti, T. H\u00f6llt, B. Lelieveldt, E. Eisemann, and A. Vilanova.",
|
| 453 |
+
"venue": "Computer Graphics Forum, 35(3):21\u201330, 2016. doi: 10\u2006.\u20061111/cgf\u2006.\u200612878",
|
| 454 |
+
"url": null
|
| 455 |
+
}
|
| 456 |
+
},
|
| 457 |
+
{
|
| 458 |
+
"37": {
|
| 459 |
+
"title": "Visualizing the hidden activity of artificial neural networks.",
|
| 460 |
+
"author": "P. E. Rauber, S. G. Fadel, A. X. Falc\u00e3o, and A. C. Telea.",
|
| 461 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n23(1):101\u2013110, 2017. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062016\u2006.\u20062598838",
|
| 462 |
+
"url": null
|
| 463 |
+
}
|
| 464 |
+
},
|
| 465 |
+
{
|
| 466 |
+
"38": {
|
| 467 |
+
"title": "Visualizing time-dependent data using dynamic t-sne.",
|
| 468 |
+
"author": "P. E. Rauber, A. X. Falc\u00e3o, and A. C. Telea.",
|
| 469 |
+
"venue": "EuroVis \u201916, p. 73\u201377. Eurographics Association, Goslar, DEU, 2016.",
|
| 470 |
+
"url": null
|
| 471 |
+
}
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"39": {
|
| 475 |
+
"title": "An overview of gradient descent optimization algorithms.",
|
| 476 |
+
"author": "S. Ruder.",
|
| 477 |
+
"venue": "ArXiv, abs/1609.04747, 2016.",
|
| 478 |
+
"url": null
|
| 479 |
+
}
|
| 480 |
+
},
|
| 481 |
+
{
|
| 482 |
+
"40": {
|
| 483 |
+
"title": "Network visualization and analysis of spatially aware gene expression\ndata with insitunet.",
|
| 484 |
+
"author": "J. Salamon, X. Qian, M. Nilsson, and D. J. Lynn.",
|
| 485 |
+
"venue": "Cell systems, 6 5:626\u2013630.e3, 2018.",
|
| 486 |
+
"url": null
|
| 487 |
+
}
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"41": {
|
| 491 |
+
"title": "Scatterplots: Tasks, data, and designs.",
|
| 492 |
+
"author": "A. Sarikaya and M. Gleicher.",
|
| 493 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n24(1):402\u2013412, 2018. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062017\u2006.\u20062744184",
|
| 494 |
+
"url": null
|
| 495 |
+
}
|
| 496 |
+
},
|
| 497 |
+
{
|
| 498 |
+
"42": {
|
| 499 |
+
"title": "Empirical guidance on scatterplot and dimension reduction technique\nchoices.",
|
| 500 |
+
"author": "M. Sedlmair, T. Munzner, and M. Tory.",
|
| 501 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics,\n19(12):2634\u20132643, 2013. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062013\u2006.\u2006153",
|
| 502 |
+
"url": null
|
| 503 |
+
}
|
| 504 |
+
},
|
| 505 |
+
{
|
| 506 |
+
"43": {
|
| 507 |
+
"title": "The eyes have it: a task by data type taxonomy for information\nvisualizations.",
|
| 508 |
+
"author": "B. Shneiderman.",
|
| 509 |
+
"venue": "In Proceedings 1996 IEEE Symposium on Visual Languages, pp.\n336\u2013343, 1996. doi: 10\u2006.\u20061109/VL\u2006.\u20061996\u2006.\u2006545307",
|
| 510 |
+
"url": null
|
| 511 |
+
}
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"44": {
|
| 515 |
+
"title": "Anxiety in older adolescents at the time of covid-19.",
|
| 516 |
+
"author": "P. Smirni, G. Lavanco, and D. Smirni.",
|
| 517 |
+
"venue": "Journal of Clinical Medicine, 9(10), 2020. doi: 10\u2006.\u20063390/jcm9103064",
|
| 518 |
+
"url": null
|
| 519 |
+
}
|
| 520 |
+
},
|
| 521 |
+
{
|
| 522 |
+
"45": {
|
| 523 |
+
"title": "Imacyte: Visual exploration of cellular microenvironments for imaging\nmass cytometry data.",
|
| 524 |
+
"author": "A. Somarakis, V. van Unen, F. Koning, B. P. F. Lelieveldt, and T. Hollt.",
|
| 525 |
+
"venue": "IEEE transactions on visualization and computer graphics, 2019.",
|
| 526 |
+
"url": null
|
| 527 |
+
}
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"46": {
|
| 531 |
+
"title": "Visual analysis of mass cytometry data by hierarchical stochastic\nneighbor embedding reveals rare cell types.",
|
| 532 |
+
"author": "V. van Unen, T. H\u00f6llt, N. Pezzotti, N. Li, M. Reinders, E. Eisemann,\nA. Vilanova, F. Koning, and B. Lelieveldt.",
|
| 533 |
+
"venue": "Nature Communications, 8(1740):1 \u2013 10, 2017. doi: 10\u2006.\u20061038/s41467-017-01689-9",
|
| 534 |
+
"url": null
|
| 535 |
+
}
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"47": {
|
| 539 |
+
"title": "Analysis of multiblock and hierarchical pca and pls models.",
|
| 540 |
+
"author": "J. A. Westerhuis, T. Kourti, and J. F. MacGregor.",
|
| 541 |
+
"venue": "Journal of Chemometrics, 12(5):301\u2013321, 1998.",
|
| 542 |
+
"url": null
|
| 543 |
+
}
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"48": {
|
| 547 |
+
"title": "Steerable, progressive multidimensional scaling.",
|
| 548 |
+
"author": "M. Williams and T. Munzner.",
|
| 549 |
+
"venue": "In IEEE Symposium on Information Visualization, pp. 57\u201364,\n2004. doi: 10\u2006.\u20061109/INFVIS\u2006.\u20062004\u2006.\u200660",
|
| 550 |
+
"url": null
|
| 551 |
+
}
|
| 552 |
+
},
|
| 553 |
+
{
|
| 554 |
+
"49": {
|
| 555 |
+
"title": "Fashion-mnist: a novel image dataset for benchmarking machine\nlearning algorithms.",
|
| 556 |
+
"author": "H. Xiao, K. Rasul, and R. Vollgraf.",
|
| 557 |
+
"venue": "ArXiv, abs/1708.07747, 2017.",
|
| 558 |
+
"url": null
|
| 559 |
+
}
|
| 560 |
+
},
|
| 561 |
+
{
|
| 562 |
+
"50": {
|
| 563 |
+
"title": "A regularized graph layout framework for dynamic network\nvisualization.",
|
| 564 |
+
"author": "K. S. Xu, M. Kliger, and A. O. Hero.",
|
| 565 |
+
"venue": "Data Mining and Knowledge Discovery, 27:84\u2013116, 2012.",
|
| 566 |
+
"url": null
|
| 567 |
+
}
|
| 568 |
+
}
|
| 569 |
+
],
|
| 570 |
+
"url": "http://arxiv.org/html/2106.07718v4"
|
| 571 |
+
}
|
20241001/2109.04993v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2112.05193v2.json
ADDED
|
@@ -0,0 +1,342 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Individual Representation in Approval-Based Committee Voting",
|
| 3 |
+
"abstract": "When selecting multiple candidates based on approval preferences of voters, the proportional representation of voters\u2019 opinions is an important and well-studied desideratum. Existing criteria for evaluating the representativeness of outcomes focus on groups of voters and demand that sufficiently large and cohesive groups are \u201crepresented\u201d in the sense that candidates approved by some group members are selected. Crucially, these criteria say nothing about the representation of individual voters, even if these voters are members of groups that deserve representation.\nIn this paper, we formalize the concept of individual representation (IR) and explore to which extent, and under which circumstances, it can be achieved.\nWe show that checking whether an IR outcome exists is computationally intractable, and we verify that all common approval-based voting rules may fail to provide IR even in cases where this is possible. We then focus on domain restrictions and establish an interesting contrast between \u201cvoter interval\u201d and \u201ccandidate interval\u201d preferences. This contrast can also be observed in our experimental results, where we analyze the attainability of IR for realistic preference profiles.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "We consider the problem of selecting a fixed-size subset of candidates (a so-called committee) based on the approval preferences of voters. This problem has been extensively studied in recent years (Lackner and Skowron, 2022) and has a wide variety of applications, including political elections (Brill et al., 2024b), recommender systems (Skowron et al., 2017), medical diagnostic decision-making (Gangl et al., 2019), blockchain consensus protocols (Boehmer et al., 2024), and participatory budgeting (Peters et al., 2021b).
A central concern in committee voting is the principle of proportional representation, which states that the voters\u2019 interests and opinions should be reflected proportionately in the committee. While proportional representation is intuitive to understand in scenarios such as apportioning parliamentary seats based on vote shares (Balinski and Young, 1982; Pukelsheim, 2014), it is less straightforward to formalize in the context of approval-based committee elections. Indeed, the literature has defined a number of different concepts aiming to capture proportional representation (Aziz et al., 2017; S\u00e1nchez-Fern\u00e1ndez et al., 2017; Peters and Skowron, 2020; Skowron, 2021; Peters et al., 2021b; Brill and Peters, 2023).
Most (if not all) of these approval-based proportionality notions focus on the representation of groups of voters. Specifically, it is usually required that each sufficiently large group of voters is \u201crepresented\u201d in the committee, where the interpretation of \u201crepresentation\u201d differs across different notions. (Often, there is also a condition on the \u201ccohesiveness\u201d of the group, stating that the approval preferences of group members need to be sufficiently aligned; this requirement is extensively discussed by Brill and Peters (2023).) For example, extended justified representation (Aziz et al., 2017) prescribes that there exists at least one voter in the group approving a certain number of committee members, whereas proportional justified representation (S\u00e1nchez-Fern\u00e1ndez et al., 2017) demands that there are sufficiently many committee members that are each approved by at least one voter in the group. Notably, neither definition comprises any representation requirements for individual voters in a group. Thus, a group may count as \u201crepresented\u201d even though some voters in the group do not approve a single committee member. (Axioms like extended justified representation offer significant lower bounds on the average satisfaction of a voter group, e.g., a high proportionality degree (Skowron, 2021); however, this still does not ensure representation of voters on the individual level.)
In this paper, we adopt an individualistic point of view: our goal is to provide all members of a voter group equal guarantees. Intuitively, when a population consists of n voters and a committee of k representatives is elected, we expect every cohesive voter group of size ℓ · n/k to be represented by ℓ representatives in the committee; thus, each individual group member might reasonably hope that at least ℓ candidates represent her in the committee. This notion, which we call individual representation, is aligned with the notion of \u201cindividual fairness\u201d that was recently introduced in clustering (and in particular in facility location problems) by Jung et al. (2020): there, each individual expects to be served by a facility in distance proportional to the radius of the ball that captures its n/k closest neighbors, where n is the number of individuals and k is the number of facilities.
Individual representation, as defined in this paper, is a strengthening of a notion called semi-strong justified representation by Aziz et al. (2017). The latter property requires that all members of a group are represented in the committee at least once, given that the group is large and cohesive enough. Individual representation strengthens this requirement by demanding that all members of cohesive groups are represented multiple times (in proportion to the group size). Aziz et al. (2017) observed that semi-strong JR cannot be provided in all instances; this immediately implies that our stronger requirement is not universally attainable either.
In this paper, we systematically study individual representation (IR). Notwithstanding the observation that IR demands cannot always be met, we clarify how IR relates to existing axioms and we show that a large range of common approval-based committee voting rules can fail to provide IR even in cases where IR is achievable. We observe that even committees approximating IR may fail to exist. Moreover, we answer a question by Aziz et al. (2017) by showing that it is computationally intractable to decide whether a given instance admits a committee providing semi-strong JR or individual representation.
We then turn our attention to restricted domains of preferences (Elkind and Lackner, 2015; Yang, 2019) and demonstrate that positive results can be obtained. Doing so, we uncover a striking difference between the candidate interval and voter interval domains: whereas the former restriction does not admit any non-trivial approximation of IR, we devise an efficient algorithm for selecting committees approximating IR for the latter. This is surprising insofar as these two domain restrictions often exhibit similar behavior (Pierczy\u0144ski and Skowron, 2022; Terzopoulou et al., 2021). (A notable exception is the work by Peters (2018), who derives polynomial-time algorithms for the candidate interval domain, but not for the voter interval domain.)
Finally, we experimentally study how often IR is achievable for a wide variety of generated preference data, and how often established voting rules select IR outcomes."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminaries",
"text": "For t ∈ ℕ, we let [t] denote the set {1, …, t}.
Let N = {1, …, n} be a set of voters and C = {c_1, …, c_m} be a set of candidates.
Each voter i ∈ N approves a subset A_i ⊆ C of candidates. An (approval) profile A = (A_1, …, A_n) contains the approval set A_i of each voter i.
We often illustrate approval profiles graphically, see Figures 1, 2, and 4.
Given a committee size k ≤ m, we want to select a subset W ⊆ C of size k, referred to as a committee.
We call (A, k) an approval-based committee (ABC) election. An ABC voting rule takes as input an ABC election and outputs one or more committees of size k.
As is standard in the ABC election literature, we assume that voters only care about the number of approved candidates in the committee, i.e., voter i evaluates a committee W by |A_i ∩ W|.
Given a subset P ⊆ C of candidates, we let N(P) denote the set of voters who approve all candidates in P, i.e., N(P) = {i ∈ N : P ⊆ A_i}.
Given an ABC election (A, k) and ℓ ∈ ℕ, we call a group of voters N′ ⊆ N ℓ-cohesive if |N′| ≥ ℓ · n/k and |⋂_{i ∈ N′} A_i| ≥ ℓ.
The following representation notions are due to Aziz et al. (2017) and S\u00e1nchez-Fern\u00e1ndez et al. (2017).
Consider an ABC election (A, k). A committee W of size k provides
justified representation (JR) if for each 1-cohesive group N′, there is a voter i ∈ N′ with |A_i ∩ W| ≥ 1;
proportional justified representation (PJR) if for each ℓ ∈ [k] and each ℓ-cohesive group N′, it holds that |W ∩ ⋃_{i ∈ N′} A_i| ≥ ℓ;
extended justified representation (EJR) if for each ℓ ∈ [k] and each ℓ-cohesive group N′, there is a voter i ∈ N′ with |A_i ∩ W| ≥ ℓ;
core stability if for each group N′ ⊆ N (independent of N′ being ℓ-cohesive) and each T ⊆ C with |N′| ≥ |T| · n/k, there is a voter i ∈ N′ with |A_i ∩ W| ≥ |A_i ∩ T|.
It is well-known that core stability implies EJR, which in turn implies PJR, which implies JR (Aziz et al., 2017; S\u00e1nchez-Fern\u00e1ndez et al., 2017).
All of these notions have in common that they consider a group of voters \u201crepresented\u201d as long as at least one voter in the group is sufficiently represented. This point of view might be hard to justify in many contexts. In the following section, we present our approach to representation that takes into account every voter in a group individually."
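To make the cohesiveness condition and the JR requirement concrete, here is a minimal self-contained sketch (plain Python; the function and variable names are ours, not from the paper). It uses the standard equivalence that JR is violated exactly when some candidate is approved by at least n/k voters, none of whom approves any committee member.

```python
def is_cohesive(group, approvals, n, k, ell):
    """Check whether `group` (an iterable of voter indices) is ell-cohesive:
    it has at least ell * n / k members and its members jointly approve
    at least ell common candidates."""
    group = list(group)
    if len(group) * k < ell * n:  # integer form of |group| >= ell * n / k
        return False
    common = set.intersection(*(approvals[i] for i in group))
    return len(common) >= ell

def provides_jr(committee, approvals, n, k):
    """JR check: a committee violates JR iff some candidate is approved by
    at least n/k voters, none of whom approves a committee member."""
    unrepresented = [i for i in range(n) if not (approvals[i] & committee)]
    for c in set().union(*(approvals[i] for i in unrepresented)):
        supporters = sum(1 for i in unrepresented if c in approvals[i])
        if supporters * k >= n:  # a 1-cohesive, entirely unrepresented group
            return False
    return True
```

For instance, with four voters split evenly between two candidates and k = 2, the committee containing both candidates provides JR, while a committee ignoring one half does not.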
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Individual Representation",
"text": "In this section, we define the main concept of this paper: individual representation. This notion builds on the idea of (semi-)strong justified representation as defined by Aziz et al. (2017) and the notion of individual fairness in clustering as defined by Jung et al. (2020).
Similarly to the proportionality notions defined in Section 2, we assume that a voter deserves some representation in an ABC election if she can find enough other voters who all approve a subset of candidates in common. This follows the rationale that every member of a group of voters that (i) makes up a sizable part of the electorate and (ii) can come to an agreement on how (part of) the committee ought to be filled, should be represented accordingly.
Given an ABC election (A, k), we determine the number of seats that voter i can justifiably demand as
κ_i = max{ℓ ∈ {0, 1, …, k} : there exists an ℓ-cohesive group N′ with i ∈ N′}.
In words, κ_i is the largest value ℓ such that voter i can find enough like-minded voters to form an ℓ-cohesive group. In particular, κ_i = 0 for all voters who are not contained in any cohesive group of size at least n/k.
Given an ABC election (A, k), a committee W of size k provides individual representation (IR) if |A_i ∩ W| ≥ κ_i for all voters i ∈ N.
When only requiring |A_i ∩ W| ≥ 1 for every voter i with κ_i ≥ 1, we get semi-strong justified representation (semi-strong JR) as defined by Aziz et al. (2017). The authors of that paper provide an example showing that semi-strong JR committees do not always exist (see Figure 1). Since individual representation clearly is a more demanding property, it immediately follows that IR committees (i.e., committees providing IR) need not exist either.
There exist instances of ABC elections that do not admit an IR committee.
One immediate follow-up question is whether we can guarantee IR in an approximate sense. To study this question, we introduce the notion of (α, β)-individual representation, which uses a multiplicative approximation parameter α and an additive approximation parameter β.
Given an ABC election (A, k), a committee W of size at most k provides (α, β)-individual representation ((α, β)-IR) if for every voter i ∈ N it holds that |A_i ∩ W| ≥ κ_i/α − β, with α ≥ 1 and β ≥ 0.
Unfortunately, non-trivial approximation guarantees are impossible to obtain without restricting the set of profiles.
For every , there exists an instance that does not admit an (α, β)-IR committee for , and any .
Fix and let . Note that . Consider the profile in which for each voter , we have and all remaining voters approve all candidates. That is, the approval sets of the first voters are pairwise disjoint and contain candidates each.
For every voter we get that . Since , this implies that for all . Further, for all distinct voters it holds that . However, since , for each with there is a voter with . Thus, for any and , this instance does not admit an (α, β)-IR committee.
\u220e
To see that this bound on is the worst possible, note that if for some voter , this means that all voters have a set of at least jointly approved candidates (and a committee consisting of such candidates would provide IR). On the other hand, every committee trivially provides (α, β)-IR whenever for all .
We study approximation bounds for (α, β)-IR on restricted domains in Section 4."
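As a concrete (if naive) reference point, κ_i and the IR condition can be computed by brute force; the enumeration over voter subsets is exponential in n, which is consistent with the hardness results in Section 3.3. All names below are ours.

```python
from itertools import combinations
from math import ceil

def kappa(i, approvals, n, k):
    """Brute-force kappa_i: the largest ell such that voter i belongs to an
    ell-cohesive group (>= ell*n/k voters jointly approving >= ell common
    candidates). It suffices to check groups of exactly the minimum size,
    since subsets of a cohesive group containing i stay cohesive."""
    best = 0
    others = [j for j in range(n) if j != i]
    for ell in range(1, k + 1):
        need = ceil(ell * n / k)  # minimum group size for ell-cohesion
        if need > n:
            break
        if not any(
            len(set.intersection(*(approvals[j] for j in (i,) + extra))) >= ell
            for extra in combinations(others, need - 1)
        ):
            break  # (ell+1)-cohesive would imply ell-cohesive, so stop here
        best = ell
    return best

def provides_ir(committee, approvals, n, k):
    """W provides IR iff |A_i ∩ W| >= kappa_i for every voter i."""
    return all(len(approvals[i] & committee) >= kappa(i, approvals, n, k)
               for i in range(n))
```

This sketch is only practical for small instances, but it is handy for cross-checking the examples in this section.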
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Relation to other Proportionality Axioms",
"text": "We have already observed that IR is a strengthening of semi-strong JR. Furthermore, it is easy to see that every IR committee also provides EJR (and thus PJR and JR). On the other hand, there exist profiles where semi-strong JR committees exist that do not provide PJR.
To build intuition on how IR differs from the other notions, and on how it leads to the election of committees that might be considered \u201cfair\u201d from an individual voter perspective, consider the following two examples, illustrated in Figure 2.
The first part of Figure 2 shows an approval profile with voters and candidates. Assuming , every voter has . Thus, the only committee providing IR is , which represents every voter once and, moreover, satisfies core stability. However, both and are core stable as well (and in fact would be selected when choosing a committee maximizing the total number of approvals). Many common ABC voting rules would select either or (see Proposition 5). One can argue that committee is a \u201cfairer\u201d or \u201cmore representative\u201d choice in this example.
The second part of Figure 2 shows an approval profile with voters and candidates. For , we have for and for . Here, the only committee providing IR is , representing each of the first four voters once, while representing all other voters at least twice. This committee is not core stable, because the group consisting of voters to would prefer to .
In order to appreciate the IR committee , consider voters to and observe that these voters are completely \u201csymmetric.\u201d Hence, from an \u201cequal treatment of equals\u201d perspective, if one of them is represented by an approved candidate in the committee, the same should hold for the others. In fact, the only core-stable committees that provide this kind of symmetry are , in which one third of the electorate is not represented at all, or committees containing only two candidates among to . In the latter case, by noticing that voters to are \u201csymmetric\u201d as well, we can argue similarly as above that they are not treated equally. Thus, the committee that uniquely provides IR might be considered the \u201cfairest\u201d choice under an individualistic point of view.
The instance in Example 2 shows that core stability and individual representation are incompatible in the strong sense that for this instance, the (nonempty) set of IR committees and the (nonempty) set of core-stable committees are disjoint.
IR is incompatible with core stability.
Next, we show further incompatibility results for semi-strong JR.
Semi-strong JR is incompatible with PJR, EJR, and core stability.
Consider an ABC election with , and the following approval profile: , , , , and . For an illustration of this instance, see Figure 4.
As , every committee that provides semi-strong JR must satisfy . But then we have , even though the first four voters form a -cohesive group. As a consequence, semi-strong JR is incompatible with PJR and EJR. Moreover, as in this instance the core is nonempty (e.g., the committee is core stable), we can also deduce that semi-strong JR is incompatible with core stability.
\u220e
In Appendix B we also establish the relation between perfect representation (PR) as defined by S\u00e1nchez-Fern\u00e1ndez et al. (2017) and the two axioms we are interested in. A graphical representation of the results of this section can be found in Figure 3."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "ABC Rules Violating IR",
"text": "Next, we consider the question whether we can find ABC rules that select IR committees whenever they exist. This question was already raised by Aziz et al. (2017) in the context of semi-strong JR, but remained open. In other words, we look for rules that are \u201cconsistent\u201d with individual representation.
An ABC rule is consistent with individual representation, or IR-consistent for short, if it outputs at least one IR committee for every ABC election that admits one. Consistency with semi-strong JR can be defined analogously.
We show that all common ABC voting rules fail consistency with respect to both IR and semi-strong JR. (Since neither IR nor semi-strong JR is always achievable, and semi-strong JR may be achievable in instances where IR is not, we can, in general, not deduce consistency regarding one of the notions from consistency regarding the other. However, all our examples in this section satisfy κ_i ≤ 1 for all voters i, so that semi-strong JR and IR coincide.)
Example 1 already rules out any rule that always selects one of the candidates with the highest numbers of approvals, so-called approval winners. In particular, this class of rules includes all common committee-monotonic ABC rules as well as other \u201csequential\u201d rules like the Method of Equal Shares (MES), as these rules select one of the approval winners in the very first round. (For definitions of ABC rules not defined in this paper, we refer the reader to the survey by Lackner and Skowron (2022); for a formal definition of sequentiality, see Brill et al. (2023).)
No ABC voting rule that always selects one of the approval winners is IR-consistent.
Moreover, the rules PAV, Satisfaction-AV, and reverse-seqPAV select only committees including in Example 1, and thus fail IR-consistency as well.
In Appendix C we provide additional examples showing that all remaining ABC rules mentioned in Table 4.1 of the survey by Lackner and Skowron (2022) fail IR-consistency as well."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Computational Complexity",
"text": "Another open problem stated by Aziz et al. (2017) concerns the computational complexity of deciding whether a given ABC election admits a committee providing semi-strong JR. We settle this question and the analogous one for individual representation by showing that both problems are NP-hard.
It is NP-hard to decide whether an ABC election admits an IR committee or a semi-strong JR committee.
We reduce from exact cover by 3-sets. Here, we are given a set X of 3q elements and a collection 𝒮 of 3-element subsets of X. The goal is to find a partition of X into sets from 𝒮. The problem is NP-hard even if each element appears in exactly three sets (Garey and Johnson, 1979).
We construct an ABC instance by setting N = X and C = {c_S : S ∈ 𝒮}, i.e., for each S ∈ 𝒮 we have a candidate c_S.
Further, for each set S = {x, y, z} ∈ 𝒮, the candidate c_S is approved exactly by voters x, y, and z. We set k = q. Hence, only groups of voters corresponding to sets in 𝒮 are 1-cohesive, and we get κ_i = 1 for each i ∈ N.
Every exact cover by 3-sets corresponds to a committee of size q where every voter is represented exactly once and thus provides IR in this instance. Conversely, every IR committee of the constructed ABC instance corresponds to a selection of sets from 𝒮 such that every element in X is covered exactly once.
Since κ_i = 1 for every voter, the same argument holds for semi-strong JR as well.
\u220e
Moreover, it is hard to compute a voter\u2019s κ-value.
Given an ABC instance, a voter i, and ℓ ∈ ℕ, it is NP-complete to decide whether κ_i ≥ ℓ holds.
It is easy to see that this problem is in NP, since any group of at least ℓ · n/k voters including voter i, together with a set of ℓ candidates approved by all selected voters, serves as a witness.
We reduce from balanced complete bipartite subgraph. Here, we are given a bipartite graph G = (V_1 ∪ V_2, E) and an integer q, and the goal is to decide whether G contains K_{q,q} as a subgraph, i.e., a subgraph consisting of q vertices from V_1 and q vertices from V_2 forming a bipartite clique. The problem is known to be NP-hard (Garey and Johnson, 1979).
We construct an ABC instance by setting , and . Thus, . Each approves exactly its neighbors in , as well as , while approves all candidates. It follows that if and only if there is a set of voters different from approving at least a common set of candidates. Since all voters approve , this is equivalent to these voters all approving candidates different from and therefore by definition all being connected to these vertices in . Thus, they form a if and only if .
\u220e"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Domain Restrictions",
"text": "We have seen (Theorem 2) that non-trivial approximations of individual representation are impossible to obtain in general.
In this section, we explore whether this negative result can be circumvented by considering restricted domains of preferences. Domain restrictions for dichotomous (i.e., approval) preferences have been studied by Elkind and Lackner (2015) and Yang (2019).
Restricting attention to a well-structured domain often allows for axiomatic and algorithmic results that are not achievable otherwise (Elkind et al., 2017).
In the ABC setting, for example, it has recently been shown that a core-stable committee always exists in certain restricted domains (Pierczy\u0144ski and Skowron, 2022), whereas the existence of such committees is an open problem for the unrestricted domain.
We start by recalling the definitions of two classic restricted domains of dichotomous preferences: candidate interval and voter interval (Elkind and Lackner, 2015).
An approval profile satisfies candidate interval (CI) if there is a linear order over the candidates such that for every voter i, the approval set A_i forms an interval of that order.
An approval profile satisfies voter interval (VI) if there is a linear order over the voters such that for every candidate c, the set of voters approving c forms an interval of that order.
The profile in Example 1 satisfies both candidate interval and voter interval. In fact, a voter order witnessing VI is given in Figure 2. To see that the profile satisfies CI as well, consider the order . The profile in Example 2, on the other hand, satisfies neither CI nor VI.
Elkind and Lackner (2015) have shown that it can be checked in polynomial time whether a profile satisfies CI or VI. (If the answer is yes, a linear order over candidates/voters can be found efficiently as well.)
Our first observation is that the candidate interval domain is not helpful for our purposes: Indeed, the approval profile used to establish Theorem 2 can easily be seen to satisfy CI. Thus, restricting preferences in this way does not yield any improved bounds.
For every , there exists a CI profile that does not admit an (α, β)-IR committee with and any .
Now, we turn our attention to the voter interval domain. Due to the similarity between VI and CI, one might expect a similar result here. Surprisingly, however, we can prove a positive result for VI: We provide an algorithm that finds a -IR committee in polynomial time for any VI profile.
Before describing the high-level idea of our algorithm, we state a useful property of VI profiles.
Without loss of generality, we assume that the linear order witnessing VI is given by the natural order 1, 2, …, n of the voters. Moreover, for a, b ∈ N with a ≤ b, we let [a, b] denote the integer interval {a, a + 1, …, b}.
Let P ⊆ C such that N(P) is nonempty. For any voters a ≤ j ≤ b, if a ∈ N(P) and b ∈ N(P), then j ∈ N(P).
Let P_i denote a largest subset of candidates approved by sufficiently many voters to validate the κ_i-value of voter i. (If multiple such sets exist, we pick one of them arbitrarily.) From Observation 9 we know that N(P_i) forms an interval of the order of voters that includes voter i.
Further, we split N(P_i) into the set of voters in N(P_i) that are ordered before voter i and the set of voters in N(P_i) that are ordered after voter i (including i itself).
For each voter , there exist and such that .
Using this observation and the fact that , Algorithm 1 returns a -IR committee for any VI profile as follows. In the first round, iterating from voter 1 to voter n, it selects at least candidates that are approved by voter . In the second round, iterating from voter n to voter 1, it selects at least candidates that are approved by voter (excluding the candidates that are selected in the first round). Together, this ensures , where is the set of selected candidates.
For every instance (A, k) such that A satisfies voter interval, Algorithm 1 returns a -IR committee in polynomial time.
Let be the committee returned by Algorithm 1, and let and . In the first round we ensure that , as at iteration if , we include candidates into that are not already included. Similarly, in the second round we ensure that , as at iteration if , we include candidates into that are not already included.
As , for each we have that
and therefore . Thus, we conclude that provides -IR.
Now we show that and . We first consider .
for all .
We prove the lemma using induction.
For , and the statement holds.
Assume that for all , we have
.
We show that the statement holds for .
Note that
as at iteration , the algorithm adds candidates to .
Let .
In words, denotes the leftmost voter in the linear order such that approves . Note that .
First, assume that . This means that does not approve any for . From this we get that , since is an interval that contains agents to the right side of (including ), but since is not part of this interval, no agent to the right of can be part of it either. Then, from Equation 1 we have that
Now assume that .
First, using induction, we show that
for every . Intuitively, this follows from the fact that all the candidates that are added during iterations from up to are approved by voter as well, since for all . For , the claim immediately follows from Equation 1. Assume that for all it holds that .
We have
as at iteration we add candidates from to , and as , these candidates are approved by , too. Then,
where the second transition follows since at iteration , candidates are added to , and the third transition follows from Equation 2.
Now, we distinguish two cases.
Case 1: .
Here, we have
Case 2: .
Here, we have
where the third inequality follows from the fact that , as is not in .
\u220e
As Rounds 1 and 2 of Algorithm 1 are symmetric, with similar arguments, we can show the following lemma. The proof can be found in Section A.1.
for all .
From Lemma 12, for , we have , and hence
.
From Lemma 13, for , we have , and hence
.
Thus, .
Lastly, we show that , and thus , can be computed in polynomial time.
For this, we employ Observation 10, i.e., the fact that N(P_i) forms an interval of voters that includes i. We consider all such intervals and for each of them calculate the maximum subset of candidates that the voters in this interval deserve due to their size.
For any voter , , and can be computed in polynomial time.
We show that Algorithm 2 correctly computes both and .
For each interval where and , the algorithm finds the maximum number of candidates that this interval is eligible to elect, denoted by . Moreover, denotes the set of candidates that is approved by all the voters in the interval. The algorithm calculates the maximum subset of that can be elected by the voters in the interval as . Then, and are updated properly by assigning them the biggest subset and the size of it, respectively, that an interval of voters can elect.
Assume for contradiction that the algorithm returns a subset with . This means that there is an interval of voters that can elect . Thus, when the algorithm considers this interval, it would return a subset of size at least , a contradiction.
\u220e
This concludes the proof of Theorem 11.
\u220e
Further, we can show that the bound provided by Theorem 11 is almost tight up to the additive part of .
For every , there exists a VI profile that does not admit an -IR committee with .
Fix and consider the following instance with and .
All voters approve all the candidates, while and .
Notice that this profile is VI. Indeed, if we order the voters as , then the voters that approve each candidate form an interval of the ordering. Now, we see that , but for each with , either or .
\u220e
In Appendix D, we show that all common ABC rules may fail to return a committee that provides -IR for VI preferences.
Beyond VI and CI, many other domain restrictions have been studied in the literature. In Appendix E, we provide lower and upper bounds for -IR for all domain restrictions introduced by Elkind and Lackner (2015) and Yang (2019).
Any domain that is more restrictive than VI inherits the guarantee of a -IR committee from VI, but we show that in some cases we can achieve better approximation guarantees. On the other hand, any domain that is more general than CI inherits the inapproximability from CI. In fact, we show that the same lower bound applies even in a slightly more restricted domain introduced by Yang (2019).
Moreover, we show that committees satisfying IR (without approximation) always exist and can be found in polynomial time for a subclass of VI profiles.
We also determined for which of the considered domain restrictions a semi-strong JR committee is guaranteed to exist.
For a summary of our results, see Table 1 in Appendix E."
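The interval-enumeration idea behind this computation can be sketched as follows (a simplified reimplementation, not the paper's Algorithm 2; it assumes voters 0, …, n−1 are already indexed in the VI order and returns only the value κ_i, not a witnessing candidate set):

```python
def kappa_vi(i, approvals, n, k):
    """kappa_i in polynomial time for a voter-interval profile whose voters
    are indexed 0..n-1 in the VI order. Any cohesive group containing i can
    be widened to a voter interval containing i without losing common
    approvals, so enumerating the O(n^2) intervals suffices."""
    best = 0
    for a in range(i + 1):
        common = set(approvals[a])
        for b in range(a, n):
            common &= approvals[b]  # candidates approved by all of [a, b]
            if b < i:
                continue            # the interval must contain voter i
            size = b - a + 1
            # [a, b] is ell-cohesive for every ell <= min(|common|, size*k/n)
            best = max(best, min(len(common), size * k // n))
    return best
```

The widening step is sound because each candidate's supporter set is an interval of voters, so filling the gaps of a cohesive group cannot shrink its set of commonly approved candidates.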
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experimental Results",
"text": "To complement our theoretical results, we performed experiments on generated approval profiles in order to check how often IR committees exist and how often they are selected by common ABC rules."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Setup",
"text": "Inspired by Peters et al. (2021a) and Szufa et al. (2022), we used five models to generate approval profiles: a voter-interval model (VI), a candidate-interval model (CI), an impartial culture model (IC), the truncated urn model (Truncated), and the resampling model (Resampling).
All generated approval profiles have voters and candidates. For each of the five models, we generated 1,000 profiles, using a variety of parameters. For each generated profile, we created 50 ABC elections, one for each of 50 committee sizes k. Thus, the total number of generated ABC elections is 250,000.
Our first model is the voter interval Euclidean model (VI). Here, we choose a location in [0, 1] uniformly at random for each voter and candidate. Further, for each candidate we choose a radius according to for a parameter . A candidate is approved by all voters in its radius. We select instances for each .
Our second model is the candidate interval Euclidean model (CI). Here, we again choose a location in [0, 1] uniformly at random for each voter and candidate, as well as a radius according to for a parameter for each voter. A voter approves all candidates in its radius. We select instances for each .
In the impartial culture model (IC), for all voters and each candidate, the candidate is approved by the voter with probability . We select instances for each .
Finally, for the truncated urn model and the resampling model, we follow the approach of Szufa et al. (2022), who use these (and other) models to draw \u201cmaps of elections.\u201d To cover a wide variety of locations on those maps, we pick several different parameter combinations for those models. (For the truncated urn model, which uses parameters , we use the following 10 combinations of parameters: , , , , , and . For the resampling model with parameters , we use the same set of parameter combinations.)
We consider the ABC rules AV, PAV, seq-PAV, Greedy Monroe, MES, seq-Phragm\u00e9n, and sequential Chamberlin\u2013Courant (seq-CC)."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Results",
"text": "First, we studied how often the generated approval profiles admit an IR committee. The results are shown in Figure 5 ###reference_###. We found that IR committees exist quite often, especially for larger values of . In particular, profiles generated by the VI model or by the truncated urn model admit IR committees in more than 80% of instances, for all values of . On the other hand, profiles generated by the CI model rarely admit IR committees. This striking contrast between VI and CI, which is reminiscent of our theoretical results in Section 4 ###reference_###, can be explained with a feature of the preference generation model:\nDue to the way we generate CI preferences,\nmany voters tend to have rather large approval sets.\nThese voters approving many candidates are then part of multiple cohesive groups, not all of which can be represented in an IR manner. (A similar situation can be observed in the profile constructed in the proof of Theorem 2 ###reference_orem2###.)\n###figure_2### Second, we studied how often different ABC rules select a committee providing IR (or semi-strong JR). In order not to dilute our results, we restricted to the \u201cinteresting\u201d range between and . The results are shown in Figure 6 ###reference_###. Of course, the fraction of profiles for which a rule selects an IR (or semi-strong JR) committee is upper-bounded by the fraction of profiles that admit such a committee. 
For each model, the latter fraction is depicted in the graph as a solid black line for IR, and a dashed gray line for semi-strong JR.\nWhile no rule manages to find an IR committee every time one exists, the rules PAV, sequential PAV, MES, and sequential Phragm\u00e9n select IR committees often.\nFor the small fraction of CI profiles that admit an IR committee, all considered rules do a good job in finding one.\nSince seq-CC greedily optimizes the amount of voters that are represented at least once, it finds a committee providing semi-strong JR in almost all profiles that admit one. But as the rule does not aim at representing voters more than once, it rarely produces IR committees.\nIn the profiles generated by the IC model, IR often coincides with semi-strong JR (for ) because almost all non-zero -values are . This is in line with the effect noticed by Bredereck et al. (2019 ###reference_b5###), whose experiments showed that EJR and JR are very likely to coincide under IC."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Discussion",
"text": "Based on the observations that\ncommon axioms in approval-based committee voting do not address the representation of individual voters, and that\ncommon voting rules sometimes unfairly distinguish between such voters, we formalize individual representation (IR) as a requirement for committees.\nWe find that all common voting rules fail to select IR committees, even when these exist.\nNevertheless, for some restricted domains\u2009\u2014\u2009most prominently, voter interval preferences\u2009\u2014\u2009we provide polynomial-time algorithms for finding committees that provide a good approximation to IR.\nOur experimental results suggest that IR is achievable in many instances that follow somewhat realistic preferences.\nIt remains an open problem to find intuitive voting rules that provide (approximate) IR whenever possible."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Omitted Proofs",
"text": "See 13 ###reference_orem13###\nWe prove the lemma using induction.\nFor , we have that and the statement holds.\nAssume that, for all , we have\n.\nWe show that the statement holds for .\nNote that\nas at iteration , the algorithm adds candidates to .\nLet .\nIf , from Equation 3 ###reference_### we get that\nwhere the second inequality follows from the fact that , as . Now assume that .\nFirst, using induction, we show that\nfor any .\nFor , the claim immediately follows from Equation 3 ###reference_###. Assume that for all it holds that .\nNow, we have\nas at iteration we add candidates from to , and as , these candidates are approved by , too. Then,\nwhere the second transition follows since at iteration , candidates are added to , and\nthe third transition follows from Equation 4 ###reference_###.\nNow, we distinguish two cases.\nCase 1: \nHere, we have\nCase 2: \nHere, we have\nwhere the third inequality follows from the fact that , as is not in .\n\u220e"
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Relation to Perfect Representation",
"text": "Consider an approval profile and a committee size such that divides the number of voters . A committee of size provides perfect representation (PR) [S\u00e1nchez-Fern\u00e1ndez et al., 2017 ###reference_b22###] if it is possible to partition the electorate into pairwise disjoint subsets of size each, and assign a distinct candidate from to each of the subsets in such a way that for each subset all the voters in the subset approve of the assigned candidate.\nIt is known that not all ABC voting instances where divides admit a committee providing PR S\u00e1nchez-Fern\u00e1ndez et al. [2017 ###reference_b22###]. Thus, in the following, we will call an ABC election a PR-instance if it admits a PR committee.\nOn PR-instances, every committee providing perfect representation also provides semi-strong JR, but not the other way around.\nAssume a PR-instance is given together with a committee that provides perfect representation. By definition, for every voter . Thus, provides semi-strong JR.\nAs a counterexample for the other direction, consider the following instance with and .\nThe committee provides semi-strong JR but not perfect representation (whereas provides both).\n\u220e\nNote that S\u00e1nchez-Fern\u00e1ndez et al. [2017 ###reference_b22###, Theorem 4] establish that on PR-instances, perfect representation also implies PJR. It follows that, whereas semi-strong JR and PJR are incompatible in general (as we proved in Section 3 ###reference_###), every PR-instance admits a committee that satisfies semi-strong JR, PJR, and perfect representation.\nThere are PR-instances that do not admit an IR committee. 
Moreover, there are PR-instances where an IR committee exists but does not provide perfect representation.\nRegarding the first claim consider the following instance with and .\nHere provides perfect representation (and semi-strong JR) but not IR (which is not achievable in this instance).\nRegarding the second claim, again consider the instance from the proof of Proposition 16 ###reference_orem16### with and . Here, the committee provides IR, but not perfect representation.\n\u220e"
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Counterexamples for IR-Consistency",
"text": "Here we provide further examples showing that all common ABC rules violate IR-consistency. Note that all committee monotone rules (including AV, SAV, sequential PAV, and sequential Phragm\u00e9n) as well as MES and PAV were already ruled out to satisfy IR-consistency in Section 3.2 ###reference_###.\nConsider the following profile with voters and assume :\nHere we have for all voters and thus the only committee providing individual representation is . Chamberlin-Courant-AV (CCAV), Monroe-AV and PAV with the weight-vector ,\nwhich provide an IR committee in the example of Proposition 5 ###reference_orem5###, choose and thus fail individual representation.\nThe only two remaining rules from Table 4.1 in the survey by Lackner and Skowron [2022 ###reference_b15###] are leximin-Phragm\u00e9n [Brill et al., 2024a ###reference_b8###] (referred to as \u201cleximax-Phragm\u00e9n\u201d by Lackner and Skowron [2022 ###reference_b15###]) and Minimax-AV [Brams et al., 2007 ###reference_b4###].\nConsider the following profile with voters and let :\nHere we have for all voters and thus and are the only committees providing individual representation (or semi-strong JR). It turns out that leximin-Phragm\u00e9n does not select any of these two. It is easy to see that the load distribution with the minimum maximal load for either of the two IR committees is . The committee , however, induces a load distribution of which is lower both in terms of the maximum as well as lexicographic ordering.\nConsider the following profile with voters and let :\nMinimax-AV (which minimises the maximum Hamming-distance among all voters to the winning committee) selects any two candidates from and none that is supported by the 99 voters. This clearly violates individual representation and semi-strong JR."
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Common ABC Rules do not Guarantee -IR for VI Preferences",
"text": "The two examples below show that many common ABC rules are not guaranteed to return a committee that provides -IR for VI preferences.\nConsider the following profile with voters and let :\nNotice that and for .\nHere, MES, AV, PAV, seq-PAV, rev-seq-PAV, seq-Phragm\u00e9n, SAV, and Greedy Monroe (depending on the tie breaking) all return committees where and thus only one of the candidates among is included. Hence, they all fail to select a -IR committee.\nConsider the following profile with voters and let :\nAgain, we have . CC and seq-CC return committees where only one candidate among is included. Thus, they fail to select a -IR committee.\nA simple adaption of Example 5 ###reference_mple5### shows that Minimax-AV fails to provide any good approximation for IR even on -PART instances. From Table 4.1 of the recent survey by Lackner and Skowron [2022 ###reference_b15###], the only rules missing are leximin-Phragm\u00e9n and Monroe-AV for which we experimentally found profiles with voters and candidates where these two rules also fail to select a -IR committee.\nThus, we can conclude that Algorithm 1 ###reference_### outperforms all common ABC voting rules in terms of approximating IR on VI profiles."
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Further Domain Restrictions",
"text": "We consider all the restricted domains that are discussed by Elkind and Lackner [2015 ###reference_b10###] and Yang [2019 ###reference_b27###]. An overview of how these domain restrictions are related to each other (which is adapted from Yang [2019 ###reference_b27###]) can be found in Figure 7 ###reference_###. An overview of the results of this section can be found in Table 1 ###reference_###.\nAn approval profile satisfies -partition (-PART) if there is a partition of such that for every voter there exists such that .\nUnder -PART approval profiles an IR committee always exists.\nNote that for each voter with for some , . Now consider the committee that contains candidates from each . Clearly, provides IR and also\nwhich completes the proof.\n\u220e\nAn approval profile satisfies Candidate Extremal Interval (CEI) if there is a linear order of such that for every voter , the approval set forms a prefix or a suffix of that order.\nAn approval profile satisfies Voter Extremal Interval (VEI) if there is a linear order of such that for every candidate , the set of voters approving forms a prefix or a suffix of that order.\nAn approval profile satisfies Weakly Single-Crossing (WSC) if there is a linear order of such that\nfor each pair of candidates , in it holds that each\nof the voter sets , and forms an interval of this ordering, and \nappears between and .\nFor any CEI and VEI profile, there exists a -IR and a semi-strong JR committee, and both can be found in polynomial time.\nAssume we are given a CEI profile and the order over the candidates such that each is either a prefix or a suffix of that order. If for some voter, then this means that there are at least candidates that are approved by all the voters and these candidates can form the winning committee which clearly is IR and semi-strong JR. Otherwise, if for all voters , we set\ni.e., consists of the first candidates and the last candidates of the order. 
Hence, for each , it holds that , and as , we get that provides -IR. Now notice that as consists of the first and the last candidate in the order. Hence, provides semi-strong-JR as each voter approves one of these two candidates.\nNow, assume we are given a VEI profile and the order over the voters. We order the candidates as follows. Let and be the set of candidates such that the voters that approve a candidate in and form a prefix and a suffix of , respectively. Without loss of generality, a candidate such that is assigned to . A candidate in is ordered before in if the last voter that approves is ordered after the last voter that approves (break ties arbitrarily). A candidate in is ordered before in if the first voter that approves is ordered after the first voter that approves (break ties arbitrarily). All the candidates in are ordered before the candidates in . As above, if for some voter, then this means that there are at least candidates that are approved by all the voters and these candidates form the winning committee which clearly is IR and semi-strong JR. Otherwise, if consists of the first candidates and the last candidates of the order that we describe above, then for each , it holds that , and as , we again get that provides -IR. Now similarly as above, as consists of the first and the last candidate of the order, is semi-strong-JR as if for some , then approves some candidate between these two candidates.\n\u220e\nFor any WSC profile, there exists a semi-strong JR committee and it can be found in polynomial time.\nConsider the order of the voters. Without loss of generality assume that for each , (otherwise we can just exclude them) and that . Now, let be a candidate that is approved by voter and let be the first voter in the order that does not approve . This means that approves a candidate different from , and the profile is WSC if all the voters from to approve . 
Hence, any committee with provides semi-strong JR, as all the voters are represented by at least one candidate.\n\u220e\nThere exists an instance that is VEI, CEI and WSC, and does not admit a -IR committee.\nConsider the instance defined in the proof of Theorem 15 ###reference_orem15###.\nWe now show that the profile is CEI, VEI and WSC, from which the lemma follows. Indeed, for CEI, if we order the candidates as , , then each forms a prefix or a suffix of the ordering. For VEI, if we order the voters as , , then the voters that approve each candidate form a prefix or a suffix of the ordering. Under the same ordering of the voters, note that for each candidate we have for either all or for all . Thus, the profile is also WSC.\n\u220e\nAn approval profile satisfies Dichotomous uniformly Euclidean (DUE) if there is a mapping of voters and candidates into the real line and a radius such that every voter approves the candidates that are at most far from her.\nThere exists a DUE profile which does not admit a semi-strong JR committee.\nConsider the following instance with voters and let :\nTo see that this is a DUE profile, consider the following mapping of the instance onto the real line. Each voter is mapped to the point and candidates to are mapped to 1.5, 2.5, 4.5 and 5.5, respectively. From this mapping we obtain the above profile by using an approval radius of .\nWe observe for all , but for any committee of size 3 it holds that for some .\n\u220e\nAn approval profile satisfies -tree representation (-TR)777We use instead of in order to avoid confusion with the use of for denoting an approximation of IR. Same with -VTR, -VPTR and -PTR. if there exists a rooted tree with vertices and root such that for every voter there is a candidate such that equals the set of vertices on the path from to (excluding but including ).\nUnder -TR preferences, the following algorithm always selects a committee that provides individual representation. 
Here we use to denote the (edge-)distance between the root and a candidate .\nNote that this algorithm basically resembles HareAV as defined by Aziz et al. [2017 ###reference_b1###] with a specific tie-breaking mechanism that depends on the tree representation. Due to this tie-breaking, it is clearly polynomial time computable.\nUnder -TR preferences, a committee providing individual representation always exists and can be found in polynomial time.\nLet be the subgraph of induced by the candidate set chosen by Algorithm 3 ###reference_###.\nFirst, note that is a subtree of with root . To see this, let and let be the direct ancestor of (i.e., is the candidate immediately before on the path from to ). By the -TR property we have and by the definition of the distance function it holds that . Thus and is also in .\nWe now show that the committee is of size . Then, we show that it indeed provides individual representation.\nBy 8 ###reference_8### and 9 ###reference_9### of Algorithm 3 ###reference_### it holds that . Now consider ; we will assign voters to each edge of this tree. In order to do this, for a leaf of the tree choose (yet unassigned) voters from and assign them to the edge incident to . (If is not an integer we assign one voter only partially.) Then we delete and that edge from the tree, starting this process again with a leaf of the smaller tree.\nSince and for all candidates on the path from to , there always exist such voters. Since there are only voters in total we have .\nTo show that satisfies IR, consider a group of voters such that and for some . By the definition of -TR there is a path of (edge-)length from to some such that the candidates on that path (including ) are a subset of of size . We call the set of these candidates .\nEvery candidate in is approved by all of , i.e., by at least voters, and is at distance from the root of . 
Thus Algorithm 3 ###reference_### chooses all candidates in and therefore we have for all .\n\u220e\nAn approval profile satisfies -vertex tree representation (-VTR) if there exists a rooted tree with vertices and root , such that for every candidate there exists a voter approving such that the set of voters approving forms a path from to that voter (excluding but including the voter).\nThere exists an -VTR instance that does not admit an -IR committee for and any .\nWe note that the instance in Theorem 2 ###reference_orem2### is also a -VTR instance, from which the statement follows. To see this, consider a rooted tree with root and vertices as follows: the voters form a simple path starting from . All the remaining voters are incident to the vertex farthest away from and make up the leaves of the tree. The set of voters approving a common candidate in the above instance now form a path from to one of the leaves and thus this is an -VTR representation of that instance.\n\u220e\nNote that the instance discussed in the proof of Proposition 24 ###reference_orem24### does not admit a committee providing semi-strong JR."
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"A4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A4.T1.18.18\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T1.18.18.19.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"A4.T1.18.18.19.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"A4.T1.18.18.19.1.2\">Individual Representation</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T1.18.18.19.1.3\">semi-strong-JR</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.18.18.20.2\">\n<td class=\"ltx_td\" id=\"A4.T1.18.18.20.2.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A4.T1.18.18.20.2.2\">Lower Bound</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A4.T1.18.18.20.2.3\">Upper Bound</th>\n<td class=\"ltx_td\" id=\"A4.T1.18.18.20.2.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T1.2.2.2.3\">PART</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T1.2.2.2.4\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.5.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.3.3.3.1\">\n-TR</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.4.4.4.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.5.5.5.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.5.5.5.4\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.7.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.7.7.7.3\">VEI</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.6.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.7.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.7.7.7.4\">\u2713</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"A4.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.9.9.9.3\">CEI</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.8.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.9.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.9.9.9.4\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.12.12.12.4\">DUE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.11.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.12.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.15.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.15.15.15.4\">VI</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T1.15.15.15.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T1.18.18.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T1.18.18.18.4\">CI</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T1.16.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T1.17.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T1.18.18.18.3\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A4.T1.24.3.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"A4.T1.22.22.2\" style=\"font-size:90%;\">Individual representation guarantees in structured profiles. The column Lower Bound indicates that,\nwhile the column Upper Bound gives us values such that an -IR committee always exists (and can be computed efficiently). The last column refers to the existence of semi-strong JR committees in every instance.</span></figcaption>\n</figure>",
"capture": "Table 1: Individual representation guarantees in structured profiles. The column Lower Bound indicates that,\nwhile the column Upper Bound gives us values such that an -IR committee always exists (and can be computed efficiently). The last column refers to the existence of semi-strong JR committees in every instance."
}
},
"image_paths": {
"5": {
"figure_path": "2112.05193v2_figure_5.png",
"caption": "Figure 5: The ratio of generated profiles that admit an IR committee.",
"url": "http://arxiv.org/html/2112.05193v2/x1.png"
},
"6": {
"figure_path": "2112.05193v2_figure_6.png",
"caption": "Figure 6: For each model and each voting rule, the bold colored part of the bar indicates the ratio of instances the rule returned an IR committee, while the pale-colored part indicates the same ratio for semi-strong JR, averaged over all values k with 2\u2264k\u226420. For each model, the black line indicates the fraction of instances admitting an IR committee, while the gray dashed line indicates the ratio of instances admitting a semi-strong JR committee.",
"url": "http://arxiv.org/html/2112.05193v2/x2.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Justified representation in approval-based committee voting.",
"author": "H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh.",
"venue": "Social Choice and Welfare, 48(2):461\u2013485, 2017.",
"url": null
}
},
{
"2": {
"title": "Fair Representation: Meeting the Ideal of One Man, One Vote.",
"author": "M. L. Balinski and H. P. Young.",
"venue": "Yale University Press, 1982.",
"url": null
}
},
{
"3": {
"title": "Approval-based committee voting in practice: A case study of (over-)representation in the Polkadot blockchain.",
"author": "N. Boehmer, M. Brill, A. Cevallos, J. Gehrlein, L. S\u00e1nchez-Fern\u00e1ndez, and U. Schmidt-Kraepelin.",
"venue": "In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), pages 9519\u20139527, 2024.",
"url": null
}
},
{
"4": {
"title": "A minimax procedure for electing committees.",
"author": "S. J. Brams, D. M. Kilgour, and M. R. Sanver.",
"venue": "Public Choice, 132:401\u2013420, 2007.",
"url": null
}
},
{
"5": {
"title": "An experimental view on committees providing justified representation.",
"author": "R. Bredereck, P. Faliszewski, A. Kaczmarczyk, and R. Niedermeier.",
"venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), pages 109\u2013115, 2019.",
"url": null
}
},
{
"6": {
"title": "Robust and verifiable proportionality axioms for multiwinner voting.",
"author": "M. Brill and J. Peters.",
"venue": "In Proceedings of the 24th ACM Conference on Economics and Computation (ACM-EC), page 301. ACM, 2023.",
"url": null
}
},
{
"7": {
"title": "Multiwinner voting with possibly unavailable candidates.",
"author": "M. Brill, H. Dindar, J. Israel, J. Lang, J. Peters, and U. Schmidt-Kraepelin.",
"venue": "In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI), pages 5532\u20135539, 2023.",
"url": null
}
},
{
"8": {
"title": "Phragm\u00e9n\u2019s voting methods and justified representation.",
"author": "M. Brill, R. Freeman, S. Janson, and M. Lackner.",
"venue": "Mathematical Programming, 203(1\u20132):47\u201376, 2024a.",
"url": null
}
},
{
"9": {
"title": "Approval-based apportionment.",
"author": "M. Brill, P. G\u00f6lz, D. Peters, U. Schmidt-Kraepelin, and K. Wilker.",
"venue": "Mathematical Programming, 203(1\u20132):77\u2013105, 2024b.",
"url": null
}
},
{
"10": {
"title": "Structure in dichotomous preferences.",
"author": "E. Elkind and M. Lackner.",
"venue": "In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 2019\u20132025. AAAI Press, 2015.",
"url": null
}
},
{
"11": {
"title": "Structured preferences.",
"author": "E. Elkind, M. Lackner, and D. Peters.",
"venue": "In U. Endriss, editor, Trends in Computational Social Choice, chapter 10. 2017.",
"url": null
}
},
{
"12": {
"title": "Aggregating expert opinions in support of medical diagnostic decision-making.",
"author": "C. Gangl, J. Maly, M. Lackner, and S. Woltran.",
"venue": "In Proceedings of the 11th International Workshop on Knowledge Representation for Health Care (KR4HC-2019), 2019.",
"url": null
}
},
{
"13": {
"title": "Computers and Intractability: A Guide to the Theory of NP-Completeness.",
"author": "M. R. Garey and D. S. Johnson.",
"venue": "W. H. Freeman, 1979.",
"url": null
}
},
{
"14": {
"title": "A center in your neighborhood: Fairness in facility location.",
"author": "C. Jung, S. Kannan, and N. Lutz.",
"venue": "In Proceedings of the Symposium on Foundations of Responsible Computing (FORC), page 5:1\u20135:15, 2020.",
"url": null
}
},
{
"15": {
"title": "Multi-Winner Voting with Approval Preferences.",
"author": "M. Lackner and P. Skowron.",
"venue": "Springer, 2022.",
"url": null
}
},
{
"16": {
"title": "Single-peakedness and total unimodularity: New polynomial-time algorithms for multi-winner elections.",
"author": "D. Peters.",
"venue": "In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 1169\u20131176, 2018.",
"url": null
}
},
{
"17": {
|
| 254 |
+
"title": "Proportionality and the limits of welfarism.",
|
| 255 |
+
"author": "D. Peters and P. Skowron.",
|
| 256 |
+
"venue": "In Proceedings of the 21st ACM Conference on Economics and Computation (ACM-EC), pages 793\u2013794. ACM, 2020.",
|
| 257 |
+
"url": null
|
| 258 |
+
}
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"18": {
|
| 262 |
+
"title": "Market-based explanations of collective decisions.",
|
| 263 |
+
"author": "D. Peters, G. Pierczy\u0144ski, N. Shah, and P. Skowron.",
|
| 264 |
+
"venue": "In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), pages 5656\u20135663, 2021a.",
|
| 265 |
+
"url": null
|
| 266 |
+
}
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"19": {
|
| 270 |
+
"title": "Proportional participatory budgeting with additive utilities.",
"author": "D. Peters, G. Pierczy\u0144ski, and P. Skowron.",
"venue": "In Advances in Neural Information Processing Systems, volume 34, pages 12726\u201312737, 2021b.",
"url": null
}
},
{
"20": {
"title": "Core-stable committees under restricted domains.",
"author": "G. Pierczy\u0144ski and P. Skowron.",
"venue": "In Proceedings of the 18th International Workshop on Internet and Network Economics (WINE), pages 311\u2013329, 2022.",
"url": null
}
},
{
"21": {
"title": "Proportional Representation: Apportionment Methods and Their Applications.",
"author": "F. Pukelsheim.",
"venue": "Springer, 2014.",
"url": null
}
},
{
"22": {
"title": "Proportional justified representation.",
"author": "L. S\u00e1nchez-Fern\u00e1ndez, E. Elkind, M. Lackner, N. Fern\u00e1ndez, J. A. Fisteus, P. Basanta Val, and P. Skowron.",
"venue": "In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 670\u2013676. AAAI Press, 2017.",
"url": null
}
},
{
"23": {
"title": "Proportionality degree of multiwinner rules.",
"author": "P. Skowron.",
"venue": "In Proceedings of the 22nd ACM Conference on Economics and Computation (ACM-EC), pages 820\u2013840. ACM, 2021.",
"url": null
}
},
{
"24": {
"title": "Proportional rankings.",
"author": "P. Skowron, M. Lackner, M. Brill, D. Peters, and E. Elkind.",
"venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 409\u2013415. IJCAI, 2017.",
"url": null
}
},
{
"25": {
"title": "How to sample approval elections?",
"author": "S. Szufa, P. Faliszewski, \u0141. Janeczko, M. Lackner, A. Slinko, K. Sornat, and N. Talmon.",
"venue": "In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI), pages 496\u2013502, 2022.",
"url": null
}
},
{
"26": {
"title": "Restricted domains of dichotomous preferences with possibly incomplete information.",
"author": "Z. Terzopoulou, A. Karpov, and S. Obraztsova.",
"venue": "In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), pages 5726\u20135733. AAAI Press, 2021.",
"url": null
}
},
{
"27": {
"title": "On the tree representations of dichotomous preferences.",
"author": "Y. Yang.",
"venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), pages 644\u2013650. IJCAI, 2019.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2112.05193v2"
}
20241001/2206.09885v2.json
ADDED
The diff for this file is too large to render. See raw diff
20241001/2210.16928v2.json
ADDED
The diff for this file is too large to render. See raw diff
20241001/2211.02032v3.json
ADDED
@@ -0,0 +1,423 @@
{
"title": "To spike or not to spike: the whims of the Wonham filter in the strong noise regime",
"abstract": "We study the celebrated Shiryaev-Wonham filter [Won64] in its historical setup where the hidden Markov jump process has two states. We are interested in the weak noise regime for the observation equation. Interestingly, this becomes a strong noise regime for the filtering equations.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "1. Introduction",
"text": "Filtering Theory addresses the problem of estimating a hidden process which cannot be directly observed. At hand, one has access to an observation process which is naturally correlated to . The simplest setup, called the \u201csignal plus noise\u201d model, is the one where the observation process is of the form\nwhere is a standard Wiener process and . Moreover it is natural to assume that the noise is intrinsic to the observation system, so that the Brownian motion has no reason to be the same for different values of . See Figure 1.1 ###reference_### for an illustration which visually highlights the difficulty of recognizing a drift despite Brownian motion fluctuations. In this paper we shall focus on the case where is a pure jump Markov process on with c\u00e0dl\u00e0g trajectories. We denote (resp. ) the jump rate between and (resp. between and ), with and . This is the historical setting of the celebrated Wonham filter [Won64 ###reference_bx27###, Eq. (19)].\nIn the mean square sense, the best estimator taking value in at time of , given the observation , is equal to\nwhere is the conditional probability\nOur interest lies in the situation where the intensity of the observation noise is small, i.e. is large. At first glance, one could argue that weak noise limits for the observation process are not that interesting because we are dealing with extremely reliable systems since they are subject to very little noise. As such, one would naively expect that observing allows an optimal recovery of as , in a straightforward and stable manner. This paper aims at demonstrating that this regime is more surprising and interesting from both a theoretical and a practical point of view.\n###figure_1### A motivating example. Let us describe a simple situation that falls into that scope and motivates our study. Consider for example a single classical bit \u2013 say, inside of a DRAM chip. 
The value of the bit is subject to changes, some of which are caused by CPU instructions and computations, some of which are due to errors. The literature points to spontaneous errors due to radiation, heat and various conditions [SPW09 ###reference_bx24###]. The value of that process is modeled by the Markov process as defined above. Here, the process is the electric current received by a sensor on the chip, which monitors any changes. Any retroaction, for example code correction in ECC memory [KLK+14 ###reference_bx15###, PKHM19 ###reference_bx20###], requires the observation during a finite window . And the reaction is at best instantaneous. For anything meaningful to happen, everything depends thus on the behavior of:\nand instead to consider the estimator given by Eq. (1.2 ###reference_###), we are left with the estimator\nFrom an engineering point of view, it is the interplay between different time scales which is important in order to design a system with high performance: if the noise is weak, how fast can a feed-back response be? For a given process with values in we denote the hitting time of by . Assume for example that initially . For a given time , a natural problem is to estimate, as , the probability to predict a false value of the bit given its value remains equal to during the time interval , i.e."
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "1.1. Informal statement of the result.",
"text": "A consequence of the results of this paper is the precise identification of the regimes for which the probability in (1.5 ###reference_###) vanishes or not as :\nIf , i.e. is too small, the retroaction/control system can be surprised by a so\u2013called spike, causing a misfire in detecting the regime change and the limiting error probability in Eq. (1.5 ###reference_###) is equal to ;\nIf , i.e. is sufficiently large, the estimator will be very good at detecting jumps of the Markov process , the limiting error probability in Eq. (1.5 ###reference_###) vanishing. However the reaction time will deteriorate.\nIn the previous statement the presence of the number is due to a technical estimate and it is almost clear that it could be replaced by . We refer the reader to Section 5.7 ###reference_### and Figure 1.3 ###reference_###, even if the numerical simulations are not totally convincing. In this remark lies the only estimate which limits the extension of the claim to the case . Hence, there is no doubt that the transition occurs for .\nWhile the literature usually focuses on considerations for filtering processes, we focus in this article on pathwise properties of the filtering process under investigation when . Indeed, it is clear that the question addressed just above cannot be answered in an framework only.\nLet us now present in an informal way the reasons for which we have this difference of behavior. As will be recalled later, the process satisfies in law\nwhere is a Brownian motion with a now strong parameter in front of it. This is the so-called Shiryaev-Wonham filtering theory [Won64 ###reference_bx27###, Lip01 ###reference_bx17###, VH07 ###reference_bx26###]. As shown in [BCC+22 ###reference_bx9###], when goes to infinity the process converges in law to an unusual and singular process in a suitable topology (see Figure 1.2 ###reference_###). 
Indeed as exhibited in the figure, the limiting process is the Markov jump process but decorated with vertical lines, called spikes, whose extremities are distributed according to an inhomogeneous Poisson point process. As we can observe on Figure 1.3 ###reference_###, if is sufficiently large, the spikes in the process are suppressed while if is sufficiently small they survive. The spikes are responsible for the non-vanishing error probability in Eq. (1.5 ###reference_###) since they are interpreted by the estimator as a jump from to of the process . The fact that the transition between the two regimes is precisely is more complicated to explain without going into computational details. Building on our earlier results, we hence examine in this paper the effect of smoothing and the relevance of various time scales required for filtering, smoothing and control in the design of a system with feedback.\n###figure_2### ###figure_3### Notice that the observation equation (1.1 ###reference_###) has a factor , while the filtering equation (1.6 ###reference_###) has a factor . This is a well-known duality between the weak noise limit in the observation process and the strong noise limit in the filtered state.\nIn fact, when analyzing the derivation of the Wonham-Shiryaev filter, this is simply due to writing:\nand using the Girsanov transform to construct a new measure , for the Kallianpur-Streibel formula, under which is a Brownian motion \u2013 [VH07 ###reference_bx26###, Chapter 7]."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "1.2. Literature review of filtering theory in the regime.",
"text": "The understanding of the behavior of the classical filter for jump Markov processes with small Brownian observation noise has attracted some attention in the 90\u2019s. Most of the work there is focused on the long time regime [Won64 ###reference_bx27###, KL92 ###reference_bx14###, KZ96 ###reference_bx16###, AZ97b ###reference_bx4###, AZ97a ###reference_bx3###, Ass97 ###reference_bx1###], by studying for example stationary measures, asymptotic stability or transmission rates. In the case where the jump Markov process is replaced by a diffusion process with a signal noise, possibly small, [Pic86 ###reference_bx19###, AZ98 ###reference_bx5###] study the efficiency (in the sense and at fixed time) of some asymptotically optimal filters. In [PZ05 ###reference_bx21###] are obtained quenched large deviations principles for the distribution of the optimal filter at a fixed time for one dimensional nonlinear filtering in the small observation noise regime \u2013 see also [RBA22 ###reference_bx22###]. In a similar context Atar obtains in [Ata98 ###reference_bx2###] some non-optimal upper bounds for the asymptotic rate of stability of the filter.\nGoing through the aforementioned literature one can observe that the term already appears in those references. Indeed the quantities of interest include the (average) long time error rate [Ass97 ###reference_bx1###, Eq. (1.4)]\nor the probability of error in long time ([Won64 ###reference_bx27###] and [KZ96 ###reference_bx16###, Theorem 1\u2019])\nor the long time mean squared error [Gol00 ###reference_bx12###]\nHere denotes the natural filtration of . These quantities are shown to be of order up to a constant which is related to the invariant measure of and some relative entropy but which is definitively not \u2013 see [Gol00 ###reference_bx12###, Eq. (3)]. Note that all these quantities are of asymptotic nature and their analysis goes through the invariant measure. 
Beyond the appearance of the quantity , which is fortuitous, our results are of a completely different nature since we want to obtain a sharp result on a fixed finite time interval. Also, due to the spiking phenomenon and the singularity of the involved processes, there is no chance that the limits can be exchanged.\nTo the best of the authors\u2019 knowledge, this paper is the first of its kind to aim for a trajectorial description of the limit, in the context of classical filtering theory. However, the spiking phenomenon has first been identified in the context of quantum filtering [Mab09 ###reference_bx18###, Fig. 2] and more specifically, for the control and error correction of qubits. The spiking phenomenon is already seen as a possible source of error where correction can be made while no error has occurred. To quote [Mab09 ###reference_bx18###, Section 4], when discussing the relevance of the optimal Wonham filter in the strong noise regime, it \u201cis not a good measure of the information\ncontent of the system, as it is very sensitive to the whims of the filter\u201d.\nThen, in the studies of quantum trajectories111Mathematically speaking quantum trajectories are (multi)-dimensional diffusion processes with a special form of the drift and volatility. with strong measurement, a flurry of developments have recently taken place, following the pioneering works of Bauer, Bernard and Tilloy [TBB15 ###reference_bx25###, BBT16 ###reference_bx8###]. Strong interaction with the environment, which is natural in the quantum setting, corresponds to a strong noise in the quantum trajectories.\nNote that, the SDEs are the same when comparing classical to quantum filtering. Nevertheless, the noise has a fundamentally different nature. And there is no hidden process in the quantum setting. See [BBC+21 ###reference_bx7###, BCC+22 ###reference_bx9###] for a recent account and more references on the quantum literature."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "2. Statement of the problem and Main Theorem",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "2.1. The Shiryaev-Wonham filter",
"text": "Let us start by presenting the Shiryaev-Wonham filter and refer to [Won64 ###reference_bx27###, Lip01 ###reference_bx17###, VH07 ###reference_bx26###] for more extensive material."
},
{
"section_id": "2.1.1",
"parent_section_id": "2.1",
"section_name": "2.1.1. General setup",
"text": "In this paragraph only, we present the Shiryaev-Wonham filter on states, which will allow us to highlight the structural aspects of Eq. (1.6 ###reference_###). In general, one considers a Markov process on a finite state space , of cardinal , and a continuous observation process of the usual additive form \u201csignal plus noise\u201d:\nHere is a function taking distinct values for identifiability purposes. The filtered state is given by:\nThe generator of is denoted by . The claim of the Shiryaev-Wonham filter is that the filtering equation becomes:\nHere is a -standard Brownian motion called in the filtering literature the innovation process. The quantity denotes the expectation of with respect to the probability measure . Throughout the paper, we only consider , i.e. the two state regime."
},
{
"section_id": "2.1.2",
"parent_section_id": "2.1",
"section_name": "2.1.2. Two states",
"text": "In this case, w.l.o.g , and all the information is contained in\nMaking explicit in this case Eq. (2.1 ###reference_###) we observe that it has exactly the same type of dynamic as the one studied in the authors\u2019 previous paper [BCC+22 ###reference_bx9###]. Using the notation\nwe have indeed that Eq. (2.1 ###reference_###) can be rewritten as\nwhere\nWithout loss of generality, we shall assume in the rest of the paper. Also . In the end, our setup is indeed given by Eq. (1.1 ###reference_###) and (1.6 ###reference_###), which we repeat for convenience:\nThe invariant probability measure of the Markov process solves\nWithout any computation, this is intuitively clear, as setting yields an extremely strong observation noise and no noise in the filtering equation:\nwhose asymptotic value is . Informally, this says that, in the absence of information, the best estimation of the law in long time is the invariant measure. This is essentially the content of [Chi06 ###reference_bx11###, Theorem 4], which holds for a Shiryaev-Wonham filter with any finite number of states."
},
{
"section_id": "2.1.3",
"parent_section_id": "2.1",
"section_name": "2.1.3. Innovation process",
"text": "The innovation appearing in the SDE is the -Brownian motion obtained as:\nWith the simplifying assumption that , we obtain:"
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "2.2. Trajectorial strong noise limits and the question",
"text": "Eq. (1.6 ###reference_###) falls in the scope of [BCC+22 ###reference_bx9###] which treats the strong noise limits of a large class of one-dimensional SDEs. There the authors give a general result for SDEs not necessarily related to filtering theory. More precisely, the result is two-fold. On the one hand, the process converges in a weak \u201cLebesgue-type\u201d topology to a Markov jump process. On the other hand, if one considers a strong \u201cuniform-type\u201d topology it is possible to capture the convergence to a spike process.\nTopology: Fixing an arbitrary horizon time . The weaker topology uses the distance:\ninducing the Lebesgue topology on the compact set . This distance is the usual distance that turns the convergence in probability into a metric convergence. Notice that the previous paper [BCC+22 ###reference_bx9###] deals with an infinite time horizon. Of course, the restricted topology there is then the same as here.\nThe stronger topology is defined by using the Hausdorff distance for graphs. In this paper, a graph is nothing but a closed (hence compact) subset of , where denotes the slice of the graph at time . The Hausdorff distance between two graphs and is then defined by:\nwhere is the unit ball of and, for and , . A straightforward consequence that will be used many times in the sequel is the equivalence\nIn particular dealing with the Hausdorff distance requires treating those two conditions. This distance is the appropriate one which allows to capture the spiking process. Indeed, when interpreting in terms of processes, this distance corresponds to the distance associated to the convergence of the graph of the processes. Spikes are then understood as vertical lines for the limit of . Those lines are of Lebesgue measure zero and cannot be enlightened by smoothing measure of type . Note that the topologies usually used for the convergence of stochastic processes, such as the Skorohod topology are useless in this context. 
This is due to the singularity of the limiting processes, as it has been pointed out in [BCC+22 ###reference_bx9###].\n###figure_4### Limiting processes: For the sake of completeness, we recall the construction of the spiking process which is described in [BCC+22 ###reference_bx9###].\nFirst at hand we have the process, which is a pure jump Markov process on with c\u00e0dl\u00e0g trajectories. Recall that (resp. ) are the jump rates between and (resp. between and ), with and . The initial position is sampled according to\nSecondly, we shall define the spike process as a set-valued random path , where is the power set of the segment . For a comprehensive sketch, see Figure 2.1 ###reference_###. It is formally obtained as follows:\nSample a random initial segment as\nSample following a Poisson point process on with intensity\nThen, by progressively rescaling time for by\nwe obtain a Poisson point process with random intensity which we denote by .\nFinally\nNotice that by virtue of being a Poisson point process with finite intensity away from zero, there are no points with the same abscissa and only countably many with . If there is no point with abscissa , then it is natural to set and thus . This convention is natural in the sense that morally, there is always by default a point because of the infinite measure at zero.\nIn the sequel, we call \u201cjump\u201d the set when corresponds to a jump of the process . We also call \u201cspike\u201d a non-trivial slice at a given time which is not a jump. The \u201csize\u201d of a spike is then given by the Lebesgue measure of .\nA mathematical statement: The convergences were established thanks to a convenient (but fictitious) coupling of the processes for different values of . In contrast, the filtering problem has a natural coupling for different which is given by the observation equation (1.1 ###reference_###). In this context, let us state a small adaptation of an already established result. 
The precise notion of graph is given in Section 4.2 ###reference_###.\nThere is a two-faceted convergence.\nIn probability, for the topology, we have the following convergence in probability:\nEquivalently, that is to say\nHere is Bernoulli distributed with parameter , the initial condition222We assume independent of . of .\nIn law, for the Hausdorff topology for graphs, we have that the graph of of converges in law to a spike process described by Fig. 2.1 ###reference_###.\nIn law, for the Hausdorff topology for graphs, we have that the graph of , defined by Eq. (1.2 ###reference_###), converges in law to another singular random closed set where\nNotice that the first convergence is in the weaker Lebesgue-type topology and holds in probability, i.e. on the same probability space. The second and third convergences are in the stronger uniform-type topology, however they only hold in law, hence not necessarily on the same probability space.\nThe second point is indeed a direct corollary of [BCC+22 ###reference_bx9###] since almost sure convergence after a coupling implies convergence in law, regardless of the coupling. Although this coupling will be used in the paper further down the road, the reader should not give it much thought for the moment.\nThe third point is also immediate modulo certain subtleties. Recalling that and that the graph of converges to the random closed set , it suffices to apply the Mapping Theorem [Bil13 ###reference_bx10###, Theorem 2.7]. Indeed, a spike is mapped to either , or when examining the range of the indicator on . However, when invoking the Mapping Theorem, one needs to check that discontinuity points of the map have measure zero for the law of . This is indeed true since there are no spikes of height equal to almost surely \u2013 recall that the spike process is described in terms of Poisson processes [BCC+22 ###reference_bx9###].\nThe first point, although simpler and intuitive, does not come from [BCC+22 ###reference_bx9###]. 
In the case of filtering, the process is intrinsically defined, and we require the use of the specific coupling given by the additive model (1.1 ###reference_###). Let us show how the result is reduced to a single claim. The result is readily obtained from Markov inequality and the convergence:\nThe above convergence itself only requires the definition of in Eq. (2.6 ###reference_###), Lebesgue dominated convergence theorem and the claim\nIn order to prove Claim (2.11 ###reference_###), recall that by definition is a conditional expectation:\nAt this stage, let and let us introduce the process defined for all by\nThis process is clearly adapted, so for all , by definition of\nNote that we have used that for\nTaking then proves Claim (2.11 ###reference_###).\n\u220e\nWe can now formally state the question of interest:\nFor different regimes of and , how do the spikes behave in the stochastic process (1.4 ###reference_###)? Basically, we need an understanding of the tradeoff between spiking and smoothing. The intuition is that there are two regimes:\nThe slow feedback regime: the smoothing window is large enough so that the optimal estimator correctly estimates the hidden process .\nThe fast feedback regime: the smoothing window is too small so that does not correctly estimate the hidden process . One does observe the effect of spikes."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "2.3. Main Theorem",
"text": "Our finding is that there is sharp transition between the slow feedback regime and the fast feedback regime:\nAs long as , we have the convergence in probability, for the topology, as in the first item of Theorem 2.2 ###reference_thm2###:\nHowever, in the stronger topologies, there exists a sharp transition when writing:\nThe following convergences hold in the Hausdorff topology on graphs in .\n(Fast feedback regime) If , smoothing does not occur and we have convergence in law to the spike process:\n(Slow feedback regime) If , smoothing occurs and we have convergence:\nObserve that since we are dealing in this case with processes with c\u00e0dl\u00e0g paths, the convergence holds equivalently for the usual -Skorohod topology and for the Hausdorff topology on graphs.\nThe proof given in Theorem 2.2 ###reference_thm2### carries verbatim to proving (2.12 ###reference_###). We will not repeat it.\nFor the rest of the paper, since we only need to establish convergences in law, for the Hausdorff topology, it is more convenient to prove almost sure convergence for any coupling of the Wiener process in Eq. (1.1 ###reference_###). Equivalently, we can choose a coupling of , which we take as the Dambis-Dubins-Schwarz coupling of [BCC+22 ###reference_bx9###]. In that setting, we know that almost surely, for the Hausdorff topology.\nIn Section 3 ###reference_###, we give in Proposition 3.1 ###reference_thm1### a derivation of in terms of the process . This will allow for an informal discussion explaining the phenomenon via a certain damping factor which is denoted in the sequel.\nBefore the core of the proof, we do some preparatory work in Section 4 ###reference_###, where we prove that only the damping term needs to be analysed.\nThe core of the proof is in Section 5 ###reference_###. We start with a trajectorial decomposition of the process . 
The proof of the first statement of Theorem 2.4 ###reference_thm4### is in Subsection 5.5 ###reference_###, while the proof of the second statement is in Subsection 5.6 ###reference_###.\n\u220e"
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "2.4. Further remarks",
"text": "On the transition: Without much change in the proof, one can consider depending on . In that setting, the fast feed-back regime and the slow feed-back regime correspond respectively to\nFurthermore, one could ask the question if there exists a threshold point . See Section 5.7 ###reference_### for a discussion on this point. As discussed there we strongly believe that the transition is sharp i.e. the fast feed-back regime and the slow feed-back regime correspond respectively to\nWe can also ask what happens at exactly the transition and if there is possible zooming around the constant . This matter is beyond the scope of the paper.\nAway from the transition: Because of the monotonicity of the damping, as a positive integral, one can easily deduce what is happening if remains away from the threshold interval constant .\nIs the convergence to the spike process only in law as ? Not in probability or almost surely?\nThis point is rather subtle and we mainly choose to sweep it under the rug. Nevertheless, let us make the following comment. In the context of filtering, the spikes correspond to exceptionally fast points of the Brownian motion appearing in the noise . Let us assume that for some (unphysical) reason, remains the same, i.e. one can perfectly tune the strength of the noise at will. For different , the spikes appear as functionals of the Brownian motion at different scales. Therefore, we argue that there is no hope for obtaining a natural trajectorial limit to the spike process as .\nOn the general Wonham-Shiryaev filter: It is a natural question to generalise our main theorem to the Wonham-Shiryaev filter with states from Eq. (2.1 ###reference_###). However, the mathematical technology dealing with the spiking phenomenon in a multi-dimensional setting is an open problem still under investigation.\nNotations: The notation denotes a deterministic (resp. random) quantity negligible (resp. 
almost surely negligible) with respect to the dependent deterministic (resp. random) function , as goes to infinity, when the extra variable is fixed. Moreover, when we are considering random quantities, and to consider convergence in probability, we denote a random variable such that goes to zero in probability as goes to infinity, when the extra variable is fixed. Similar usual notations are used with replaced by . Finally we use the notation (resp. ) to say that there exists a finite constant (depending a priori on ) such that (resp. ). When the dependence in is obvious, we omit the subscript .\nWhen the dependence on is universal (i.e. does not depend on the parameter ) we omit the dependence on in the notation defined above.\nGiven a process and two times we denote the increment of between and ."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Smoothing transform",
"text": "We shall express the equation satisfied by (1.4 ###reference_###) in the -states context of Section 2.1.2 ###reference_.SSS2###. The general theory is given in [Lip01 ###reference_bx17###, Chapter 9]. For we write:\nFor any we have that\nwhere the instantaneous damping term is given by\nTo simplify notation, during the proof, we forget the dependence in and denote, for all ,\nThanks to [Lip01 ###reference_bx17###, Theorem 9.5], we have:\nwhich we will specialize to the point . Note that:\nResuming the computation:\nOne recognizes an ordinary differential equation in the variable , with . Upon solving, we have:\nThis is exactly the result when .\nRecall Eq. (3.3 ###reference_###). The exact derivative:\ngives the dual expression when , and then Eq. (3.2 ###reference_###).\n\u220e"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Reduction to the control of the damping term",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "4.1. Informal discussion",
"text": "Recall that in the context of Proposition 3.1 ###reference_thm1### we are interested in the case where with\nFor we define thus the damping term associated to the instantaneous damping term defined by Eq. (3.3 ###reference_###)\nand the process (recall Eq. (3.1 ###reference_###))\nAssume there is no jumping times for in the time interval . Spikes, by definition, are of size strictly smaller than one. If is collapsing on , i.e. is close to zero, then for , hence:\nand reciprocally when the collapse is on . From the previous proposition, we thus have:\nAssuming that the damping term converges to some limiting process in the large limit we expect that\nAbove, the limiting graph is defined by its slice at time , which is given by translated by and then rescaled by a factor , and then translated by again.\nInformally, there are three cases:\nSlow feedback: and therefore\nTransitory regime: is non-trivial and therefore\nwith having a statistic which needs to be analyzed. This analysis is beyond the scope of this paper as mentioned in Subsection 2.4 ###reference_###.\nFast feedback: and therefore\nIn this section we prove a useful intermediary step following the previous discussion, which informally says that:\nThis is the combination of two simplifying facts:\nDuring jumps, Hausdorff proximity is guaranteed. Indeed, the graph of the spike process and the graph of are very close in the Hausdorff sense since their slices are exactly at the jump times of . Thus no matter where is, the Hausdorff distance will be small.\nIf , away from jumps, the remainder benefits from smoothing.\nOnce this is established, we only need to control the damping term outside from small spikes.\nLet us now make these informal statements rigorous."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "4.2. Formal statement",
"text": "Let us start with a few notations and conventions. If , , is a function defined on a closed subset of , we denote by its graph:\nIf is a continuous function continuous then we define its graph by\nIf is a c\u00e0dl\u00e0g function then, by denoting by the set of discontinuous points of , we define its graph by\nEven if this definition appears to be awkward at first sight, we shall only use it for the Markov jump process which stands to connect the graph at jumping times\nWe recall that the slice at time of a graph is denoted by . In order to simplify the notations, we write\nfor the graph induced by the process of interest (which has continuous trajectories). We also denote the candidate for the limiting graph, either the completed graphs (if ) or (if ). By the convention above, in the definition of , the graph induced by the process , we add a vertical bar when there is a jump.\nWe define also the graph whose slice at time is given by\nand the set if . Observe in particular that contains the vertical bar when there is a jump of .\nThe following formalises the informal statement of Eq. (4.3 ###reference_###):\nConsider a coupling such that almost surely\nThen, almost surely:\nLet be the successive jump times of and let us denote the number of jumps in the time interval . It is easy to prove that\nWe define then on , for , the compact sets:\nBy [Bar06 ###reference_bx6###, Theorem 1.12.15] we have that for any compact subsets of ,\nSince (and similarly for replaced by ) it follows that\nHence, to prove Eq. (4.9 ###reference_###) we only have to prove that, a.s., on each event :\nand\nWe divide the proof of the proposition in three steps: we first prove Eq. (4.12 ###reference_###), then Eq. (4.8 ###reference_###) and then Eq. (4.13 ###reference_###).\nStep 1: Hausdorff proximity away from the jump times: proof of Eq. 
(4.12 ###reference_###).\nStep 1.1: Spikes are of size less than with high probability.\nLet be the largest length of a spike:\nFrom the explicit description of the law of , is the maximum decoration of a Poisson process on with intensity\nUpon conditioning on the process , and considering the definition of a Poisson process [Kin92 ###reference_bx13###, \u00a72.1], notice that the number of points falling inside is a Poisson random variable with parameter\nAs such the event corresponds to having this Poisson random variable being zero, so that:\nAs such, it is clear from the last inequality of Eq. (4.15 ###reference_###) that\nStep 1.2: End of the proof of Eq. (4.12 ###reference_###).\nWe observe now, by definition of Hausdorff distance, that for any and there exists and such that\nFrom the definition of , it implies that\nand\nRecalling Eq. (4.10 ###reference_###) and Eq. (4.16 ###reference_###), let us then denote the event\nwhich satisfies , and on which we have\nwhere\nNotice in particular that:\nRecall Eq. (3.2 ###reference_###). For , on the event (defined by Eq. (4.19 ###reference_###)), we have thanks to Eq. (4.20 ###reference_###), that, for sufficiently large,\nThe step marked with (*) holds because there is no jump during for as soon as . Taking limits and using Eq. (4.21 ###reference_###), we conclude that:\nThis limit holds on for all . We have thus proven that away from jumps the Hausdorff distance tends to zero. This concludes the proof of Eq. (4.12 ###reference_###).\nStep 2: Proof of Eq. (4.8 ###reference_###): .\nFrom the definition of the distance\nNow, notice that because and the estimate from Eq. (4.22 ###reference_###), we have:\nfor all . Since almost surely, this concludes the proof of Eq. (4.8 ###reference_###).\nStep 3: Hausdorff proximity around jump times: proof of Eq. (4.13 ###reference_###).\nThanks to the triangle inequality, to prove Eq. (4.13 ###reference_###), it is sufficient to prove that\nand\nFor Eq. 
(4.23 ###reference_###), it is then sufficient to show that on :\nThe first inequality (4.25 ###reference_###) is readily obtained by noticing that contains vertical lines at the moment of jumps:\nAs such, we simply need to pick and , where is such that .\nFor the second inequality (4.26 ###reference_###) we notice that we can assume that is a jump time of since any is at distance at most from a jump time. Hence we assume for some . Observe now that\nwhich goes to as goes to infinity by Eq. (4.8 ###reference_###). Observe that the process takes different values in and in . The previous bound implies that\nwhere . Because is continuous, the Intermediate Value Theorem states that Eq. (4.26 ###reference_###) is satisfied for large enough. Hence we have proved Eq. (4.23 ###reference_###).\nTo prove Eq. (4.24 ###reference_###), we recall that (defined by Eq. (4.5 ###reference_###)) contains the vertical bar when there is a jump of . It follows then immediately that\nand Eq. (4.24 ###reference_###) then follows.\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "5. Proof of Main Theorem: Study of the damping term",
"text": "Thanks to Proposition 4.1 ###reference_thm1### the establishment of Theorem 2.4 ###reference_thm4### is reduced to the proof of\nAbove, as mentioned above, the convergence is understood almost surely since we can always assume the existence of a coupling between the processes involved in this limit. We recall that is defined via Eq. (4.5 ###reference_###). In Proposition 5.2 ###reference_thm2### we will show that this can be done by showing the two following facts:\nIf then\nfor the relevant times , which correspond to a spike.\nIf then\nagain, for the relevant times corresponding to a spike.\nIn order to prove Proposition 5.2 ###reference_thm2### we need to introduce several definitions."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "5.1. Decomposition of trajectory",
"text": "Recall that without loss of generality, we may assume the almost sure convergence of to the spike process \u2013 see the discussion in the sketch of proof of the Main Theorem 2.4 ###reference_thm4###.\nLet be a sufficiently small positive number, which will go to zero at the end of the proof, but after the large limit. We define a sequence of stopping times by and, via induction on , by (see Fig. 5.1 ###reference_###),\nTo enlighten the notation, the dependence in and of these stopping times is in the sequel usually omitted. The \u2019s have to be understood as the starting times of spikes (or jumps), and the \u2019s have to be understood as the terminating times of spikes (or jumps). Observe there exists a finite random variable such that , i.e. almost surely there are intervals completely included in . Indeed, we know from our previous work [BCC+22 ###reference_bx9###], that converges a.s. to for the Hausdorff topology on graphs as goes to infinity. Also, for any , there are finitely many spikes of size larger than (for ). Therefore is necessary a.s. bounded independently of .\n###figure_5### Let us start with a useful lemma, which will permit, in the proof of Proposition 5.2 ###reference_thm2###, to avoid to control the damping outside the excursion intervals .\nAssume .\nFor all , we have:\non the event defined by Eq. (4.19 ###reference_###) and by Eq. (4.14 ###reference_###).\nBy definition, we have for such times :\nso that the natural estimator for is\n .\nBy definition of Hausdorff distance, there exists a pair such that\nwhich implies\nOn the event , it entails that\nNecessarily\nwhich amounts to equality. Using that , there are no jumps between and on the event . We thus have .\n\u220e\nWe consider the event defined by Eq. (4.19 ###reference_###) and on such event, for or any , we denote the finite random set\nSeparation argument: See Figure 5.2 ###reference_### for a comprehensive graphical explanation of the following argument. 
A single segment corresponds, in the large limit, to either a spike of size larger than , or a jump. Because , we are far from jumps and the segment necessarily corresponds to a spike. Notice that multiple can correspond to the same spike in the limit.\nTherefore for a spike of size larger than , with , we denote by the random finite set of indexes \u2019s such that the interval asymptotically coalesces to the time location of the spike:\nThe equality between the two limits follows from Corollary 2.4 in [BCC+22 ###reference_bx9###]. Indeed, by this corollary we know that the time spent by in the interval during the time window is of order . Hence . Since converges a.s. to in the Hausdorff topology in the large limit, this implies the existence of the limits above.\n###figure_6### The spikes (for ) larger than are separated by a random constant and, by definition of Hausdorff distance, we have then that333The Hausdorff distance in dimension one is defined similarly to the one in dimension two.:\nThanks to this, supposing the spike at is starting from , i.e. , we can strengthen the claim:\nfor any to\nSimilarly, supposing the spike at is starting from , i.e. , we can strengthen the claim:\nfor any to\nRecall the definition of the damping term given in Eq. (4.1 ###reference_###) and of the set in Eq. (5.3 ###reference_###). Assume that either\nor\nThen we have\nBy Proposition 4.1 ###reference_thm1### and the triangle inequality it is sufficient to prove\nCase 1: . Then and thanks to the triangle inequality:\nThe second term goes to zero thanks to Theorem 2.2 ###reference_thm2### of [BCC+22 ###reference_bx9###]. It thus suffices to show that\nAs such, starting with the definition of Hausdorff distance, we have that:\nOn , we are dealing with graphs of functions, so that\nwhere the inequality comes from a \u201cslice by slice\u201d bound. The same argument gives that\nRecalling Eq. (5.10 ###reference_###), we get\nNow, any point in is at most at distance of . 
This gives:\nBecause contains the set of vertical lines , . Regarding the term , we know that\nwith . This claim can be proved exactly as for Eq. (4.27 ###reference_###), replacing by the simpler process during the proof. Since has continuous trajectories, the Intermediate Value Theorem implies that\nTherefore, by sending to after sending to infinity, Eq. (5.9 ###reference_###) is proved once we show that\nObserve now that\nThanks to Lemma 5.1 ###reference_thm1###, we find on the good event, that\nIn the last line we used also Eq. (5.3 ###reference_###). Because of the inequality for all , Eq. (5.12 ###reference_###) follows from assumption (5.7 ###reference_###) and the proof is complete.\nCase 2: . Here . The proof in this case is slightly different. By Eq. (4.11 ###reference_###), we have that\nSince the graphs and contain both the set of vertical lines, we get easily that\nand\nso that it remains only to prove that\nSince, on , we are dealing with the graphs of functions, we give a bound \u201cslice by slice\u201d:\nThanks to Lemma 5.1 ###reference_thm1###, we find on the good event (recall Eq. (4.19 ###reference_###)):\nNotice in the last equality the appearance of the index set because if then (see the definition in Eq. (5.3 ###reference_###)). This latter bound goes to zero by assumption (5.8 ###reference_###).\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "5.2. Coordinates via logistic regression",
"text": "In the previous paper [BCC+22 ###reference_bx9###], a crucial role was played by the scale function which is the unique change of variable such that is a martingale. This uniqueness is of course up to affine transformations. Another useful change of variable is as follows. Instead of asking for a vanishing drift, one can ask for a constant volatility term.\nRecall Eq. (2.4 ###reference_###). Up to affine transformations, the unique function such that has constant volatility term is the logistic function\nThe process satisfies:\nwhere and are related by Eq. (2.5 ###reference_###).\nThis elementary lemma is proved in Appendix C.1 ###reference_###.\n\u220e\nGiven this information the instantaneous damping term in Eq. (3.3 ###reference_###) takes a particularly convenient expression:\nInformal discussion: In particular, in order to prove that the damping term in Eq. (4.1 ###reference_###) either converges to zero or diverges to infinity, it suffices to control for :\nDepending on whether () or (), one of the two expressions in the integrand is dominant. Assuming on the entire interval , we have:\nContinuing:\nWe have thus proved the expression which is useful for :\nIf , then the useful expression is:"
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "5.3. Path transforms",
"text": "In order to systematically control the fluctuations of the process , we make the following change of variables. For , define:\nThe choice of letter for is that it will later play the role of a residual quantity. Thanks to this reformulation, the SDE defining in Eq. (5.13 ###reference_###) becomes:\nThe following lemma gives two ways of integrating Eq. (5.21 ###reference_###) \u2013 in the sense that we consider known, unknown and vice versa.\nConsider two real-valued semi-martingales and satisfying Eq. (5.21 ###reference_###). Then for all , we have the forward and backward formulas:\nSee Appendix C.2 ###reference_###.\n\u220e"
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "5.4. Controlling the residual",
"text": "Recall that in the context of Proposition 5.2 ###reference_thm2###, we need to control:\nfor where , i.e. the \u2019s corresponding to a spike in the limit.\nExamining this specific interval with , it corresponds to one of the following two situations\n: \u2003in the limit, it is a spike from to for ;\n: \u2003in the limit, it is a spike from to for .\nBy symmetry, we only have to consider case ii).\nRecall that denotes the increment of defined by Eq. (5.20 ###reference_###). Fix two arbitrary positive constants and . Then, for all corresponding to a spike from to , i.e. like in (i ###reference_i1###), or a spike from to , i.e. like in (ii ###reference_i2###), the following holds\nThe implied function is random, yet finite, depends on and , but is independent of and .\nTo lighten the notation we omit sometimes during the proof the parameter . Moreover the reader has to remember the notations defined at the end of Section 2.4 ###reference_###.\nWe prove the claim only in the case (ii ###reference_i2###) of a spike from to , since the other case is similar. By definition of the \u2019s and \u2019s, we have that\nIn fact, thanks to the separation argument, the right bound holds on a much longer interval as claimed in Eq. (5.5 ###reference_###). As such, recalling the definition of from Eq. (5.4 ###reference_###) and the fact that , if is sufficiently large to have , then\nwhich implies\nAgain within the range , let us control the process from Eq. (5.20 ###reference_###). We have:\nwhere we used in the last line the fact that\nBy Corollary 2.4 in [BCC+22 ###reference_bx9###] we know that the time spent by in the interval during the time window is of order . Hence . For any going to as goes to infinity, we have that:\nIn order to control the last term in the above equation, we need to refine the previously invoked Corollary 2.4 in [BCC+22 ###reference_bx9###]. 
This is done in Lemma B.2 ###reference_thm2### and we have then that\nAs such, we can take in order to have:\n\u220e"
},
{
"section_id": "5.5",
"parent_section_id": "5",
"section_name": "5.5. Fast feedback regime",
"text": "To lighten the notation we omit usually in the sequel the parameter . Moreover the reader has to remember the notations defined at the end of Section 2.4 ###reference_###.\nAs announced in Proposition 5.2 ###reference_thm2###, we only need to prove a uniform absence of damping:\nWe proceed by symmetry, as in the proof of Lemma 5.5 ###reference_thm5###, by considering only spikes from to . As in the proof of that lemma, we have then that for any ,\nHence:\nLet us now control this last term. Thanks to the reformulation of Eq. (5.18 ###reference_###\u20135.20 ###reference_###) and then the backward formula of Lemma 5.4 ###reference_thm4### we have:\nGoing back to the previous equation, we find:\nNow, recall that since , is in the middle of a spike away from and \u2013 see Eq. (5.23 ###reference_###). As such is bounded from below and from above. This is unlike for , where is bounded only from above. In any case, this yields and . Therefore it suffices to prove:\nFocusing on Eq. (5.27 ###reference_###), we have thanks to Lemma 5.5 ###reference_thm5### and the change of variable :\nRecall now L\u00e9vy\u2019s modulus of continuity theorem [RY13 ###reference_bx23###, Chapter 1, Theorem 2.7]. Let be any fixed standard one dimensional Brownian motion and define\nBecause of the Dambis-Dubins-Schwartz coupling, actually depends on . Therefore, the control provided by L\u00e9vy\u2019s modulus of continuity cannot be used in its almost sure version but only in its probability convergence version. We introduce the notation:\nand we have that\nwhere denotes a random variable converging to in probability as goes to infinity. This is because for any , we have that\nthe first equality holding because and have the same law.\nGoing back to the proof of Eq. (5.27 ###reference_###), we deduce by Lemma 5.5 ###reference_thm5### that:\nObserve that this upper bound goes to zero as for any . Therefore we are done. We have proved Eq. 
(5.7 ###reference_###), which indeed gives no damping."
},
{
"section_id": "5.6",
"parent_section_id": "5",
"section_name": "5.6. Slow feedback regime",
"text": "To lighten the notation we omit sometimes in the sequel the parameter . Moreover the reader has to remember the notations defined at the end of Section 2.4 ###reference_###.\nAs announced in Proposition 5.2 ###reference_thm2###, we only need to prove there is damping:\nBy Corollary 2.4 in [BCC+22 ###reference_bx9###] we know that the time spent by in the interval during the time window is of order . Hence . Therefore, for large enough, for any ,\nHence, in the above infimum defined by Eq. (5.29 ###reference_###), since for any and ,\nand , we are reduced to prove that (recall Eq. (4.1 ###reference_###)):\nuniformly in , i.e. for the \u2019s corresponding to a spike (recall the definition (5.3 ###reference_###)).\nAs we have seen before, examining any interval , , corresponds to one of the following two situations:\n: in the limit, it is a spike from to for ;\n: in the limit, it is a spike from to for .\nBy symmetry, we only have to consider case (ii ###reference_i2###). Since, by Eq. (5.15 ###reference_###),\nwe only have to consider the limit of the integral on the right-hand side of the previous display444In fact the neglected term in this inequality vanishes as goes to infinity since remains at distance from (hence remains bounded) on the time interval considered, and the length of the time interval is of order , which goes to as goes to infinity..\nStep 1: Starting backward from , the process reaches .\nHere is a large but fixed constant independent of and . Let us prove that there is a random time such that:\nWithout loss of generality, we can look for with and . Indeed, we will have then, for large enough, that because . For the sake of notational simplicity, the new constant and will be denoted and .\nNow, by Eq. (5.21 ###reference_###), for any , we have:\nFix . Let us call the backward hitting time of by starting from . Here is defined by\nObserve that is not a stopping time. 
Also, we can define a random variable by writing\nAt this point, we do not even know that or are bounded. By convention, if the backward hitting time is never reached.\nStarting from Eq. (5.31 ###reference_###), take now with and , and write:\nThen divide by and invoke L\u00e9vy\u2019s modulus of continuity theorem (see Eq. (5.28 ###reference_###)) to find that for all in and :\nBy shifting from to , there is no loss of generality in writing\nThis will allow our subsequent reasoning to use absolute constants. The reasoning is decomposed into two steps. The first Step 1.1 uses the idea that on a segment large enough, the drift term in Eq. (5.31 ###reference_###) is the main term, overpowering the oscillation of Brownian motion, so that is reached. This yields a bound on , or equivalently . The second Step 1.2 uses this estimate and refines it in order to pinpoint the location of around (for and large enough).\nStep 1.1: The initial estimate: for .\nNote that the following statements are trivially equivalent: , or , or reaches in the time interval .\nLet us start by proving, by contraposition, that with high probability, all (or any of) these statements hold. As such, we start by supposing the converse, i.e. .\nThanks to Eq. (5.32 ###reference_###) and Lemma 5.5 ###reference_thm5###, used to bound , we have that for all in and\nNote that for , we have\nwhich combined with Eq. (5.25 ###reference_###), implies\n. This way the smallest possible LHS in Eq. (5.33 ###reference_###) is . As such, we find a contradiction as soon as there is a such that:\nThis is rearranged as:\nTherefore, we have a contradiction for and large enough. For example and .\nStep 1.2: Pinpointing the location of :\n\n\nThanks to the previous estimate, we now know that for large enough ( after shift by ), the segment has length . We can thus safely apply Lemma 5.5 ###reference_thm5### to control the term in Eq. (5.32 ###reference_###) for all . 
This yields that for all in and :\nChoose now , and therefore . We have then\nThis implies\nHence\nMultiplying by , we find\nWe are now done with Step 1.2. In particular, it proves that for and large enough, which also finishes proving Step 1.\nFrom the backward formula of Lemma 5.4 ###reference_thm4###, we see that:\nFrom the forward formula, on the other hand:\nwhich is easier to control in order to prove a divergence to .\nAs such, we plan on using a forward estimate once we have reached level . Let us record the expression for further use:\nStep 2: Conclusion. Now, we know that reaches at thanks to the threshold of . Moreover, by Step 1.2, the gap between and is sufficiently large as it is equal to for some small random variable .\nNow, because of the reasoning at the beginning of Step 1, , with , , it suffices to prove\nTo simplify notations we denote by and by . Let us rearrange Eq. (5.31 ###reference_###) as\nBy using L\u00e9vy\u2019s modulus of continuity for Brownian motion, we get\nNow, we use a pretty loose lower bound. As in the proof of Lemma 5.5 ###reference_thm5###, because spikes are separated, we know that . This is more precisely given in Eq. (5.25 ###reference_###). As such\nReinjecting this inequality in the previous lower bound yields\nThis lower bound goes to infinity as goes to infinity for\nThis is equivalent to . We are done in this regime."
},
{
"section_id": "5.7",
"parent_section_id": "5",
"section_name": "5.7. Intuitions on the slow feedback regime",
"text": "In this section we explain why we conjecture that the conclusions of the slow feedback regime proved for should in principle hold for \u2013 even if a rigorous argument eludes us for now. Observe first that Proposition 5.2 ###reference_thm2### and the previous Step 1 are valid for . Hence assuming only , we know that reaches at as soon as . Moreover, we have seen that is sufficient.\nNow let be any time such that and . Recall Proposition 5.2 ###reference_thm2### and Eq. (5.30 ###reference_###). It thus suffices to find a such that\nLet . Recall Eq. (5.31 ###reference_###):\nand the forward integration of Lemma 5.4 ###reference_thm4###, which gives (see Remark 5.6 ###reference_thm6### )\nCombining the two last expressions we obtain\nWe specialise now this expression to to get that\nHere, for the second equality we used Lemma 5.5 ###reference_thm5### and for the last inequality Jensen inequality. Using for we get\nSince , we are done if we can prove that\nRecall that we have some freedom in the choice of the constant and that depends on . Then, if we could find such that is a generic point for Brownian motion, i.e. where the Law of Iterated Logarithm [RY13 ###reference_bx23###, Chapter 2, Theorem 1.9] is satisfied, instead of the full L\u00e9vy modulus of continuity [RY13 ###reference_bx23###, Chapter 1, Theorem 2.7], we could conclude the proof for ."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "6. Acknowledgements",
"text": "The authors are grateful to R. Ch\u00e9trite for useful initial discussions. C.P is supported by the ANR project \u2018ESQuisses\u2019, grant number ANR-20-CE47-0014-01, the ANR project \u2018Quantum Trajectories\u2019, grant number ANR-20-CE40-0024-01 and the program \u2018Investissements d\u2019Avenir\u2019 ANR-11-LABX-0040 of the French National Research Agency. C. P. is also supported by the ANR project Q-COAST ANR-19-CE48-0003. This work has been also supported by the 80 prime project StronQU of MITI-CNRS: \u2018Strong noise limit of stochastic processes and application of quantum\nsystems out of equilibrium\u2019."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Scale function and time change [BCC+22]",
"text": "Let us recall the expressions of the scale function and change of time used in the paper [BCC+22 ###reference_bx9###]. We define the scale function of Eq. (1.6 ###reference_###) as:\nwhere\nThanks to the Dambis-Dubins-Schwartz theorem, if denotes the solution of Eq. (1.6 ###reference_###), there is a Brownian motion starting from such that:\nThe time change is given by:\nand the inverse change is given by [BCC+22 ###reference_bx9###, Subsection 3.2]\nFor and we denote the occupation time of level by during the time interval . Via the occupation time formula:\nand the weak convergence of to the mixture , we can deduce the almost sure convergence:\nuniformly on all compact sets of the form .\nWe observe finally that introducing\nwe have that\nthe equality being in law."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Asymptotic analysis of a singular additive functional",
"text": "Throughout the paper, it is important to control the damping term\nwhere is given by Eq. (3.3 ###reference_###).\nMore generally, for any positive map , we define the additive functional:\nWe have the exact expression:\nIn particular, with we get that\nRecalling Eq. (A.4 ###reference_###) and Eq. (A.3 ###reference_###) we have that\nInvoking the occupation time formula:\n\u220e\nLet be fixed and such that . Then, a.s., for any , we have that\nBy the occupation time formula in Lemma B.1 ###reference_thm1###:\nThe previous term results from Corollary 2.4 in [BCC+22 ###reference_bx9###] which states that the time spent by in some fixed (i.e. independent of ) interval is of order .\nTo control the local time increment we observe that\nand we recall that, by Eq. (A.6 ###reference_###), converges to a finite limit. As such, this is controlled by the maximal local time over a finite (random) time interval. Hence\n\u220e"
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Few technical lemmas",
|
| 167 |
+
"text": "In order to lighten notation we omit the superscript in the next equations. For a given smooth function , It\u00f4 formula yields\nAs such, has constant volatility term, say , if and only if:\nfor a certain choice of constant . The first claim is proved.\nNow, choosing for convenience, let us derive the SDE for . By using we obtain:\nhence the first expression (5.13 ###reference_###) with .\nFor the second expression (5.14 ###reference_###), recalling that\n\nwe get:\nWe start by writing:\nBackward integration: Consider as fixed and as varying. Then differentiate in the expression for :\nIt yields:\nEquivalently:\nIntegrating on gives:\nwhich gives the backward formula.\nForward integration: The other way around, fix and take as varying. Then differentiate in the expression for :\nIt yields:\nEquivalently:\nIntegrating between on gives:\nwhich gives the forward formula."
}
],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2211.02032v3_figure_1.png",
"caption": "Figure 1.1. Numerical simulation of the hidden process \ud835\udc31\ud835\udc31{\\mathbf{x}}bold_x and the observation process \ud835\udc32\u03b3superscript\ud835\udc32\ud835\udefe{\\mathbf{y}}^{\\gamma}bold_y start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT for \u03b3=102\ud835\udefesuperscript102\\gamma=10^{2}italic_\u03b3 = 10 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. The challenge is to infer the drift of \ud835\udc32\u03b3superscript\ud835\udc32\ud835\udefe{\\mathbf{y}}^{\\gamma}bold_y start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT, in spite of Brownian noise and in a very short window. Parameters are \u03bb=1.3\ud835\udf061.3\\lambda=1.3italic_\u03bb = 1.3 and p=0.4\ud835\udc5d0.4p=0.4italic_p = 0.4. There are 106superscript10610^{6}10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT time steps to discretize [0,10]010[0,10][ 0 , 10 ]. The code is available at the online repository\n\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003%\\quad\u2423\\quad\u2423\\quad\u2423\\quadhttps://github.com/redachhaibi/Spikes-in-Classical-Filtering",
"url": "http://arxiv.org/html/2211.02032v3/extracted/5892183/process_xy.png"
},
"2": {
"figure_path": "2211.02032v3_figure_2.png",
"caption": "Figure 1.2. \u201cThe whims of the Wonham filter\u201d: Informally, on a very short time interval, it is difficult to distinguish between a change in the drift of \ud835\udc32\u03b3superscript\ud835\udc32\ud835\udefe{\\mathbf{y}}^{\\gamma}bold_y start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT and an exceptionnal time of Brownian motion. The figure shows a numerical simulation of the process (\u03c0t\u03b3;t\u22650)superscriptsubscript\ud835\udf0b\ud835\udc61\ud835\udefe\ud835\udc610\\left(\\pi_{t}^{\\gamma}\\ ;\\ t\\geq 0\\right)( italic_\u03c0 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT ; italic_t \u2265 0 ) for the same realization of \ud835\udc31\ud835\udc31{\\mathbf{x}}bold_x as Fig. 1.1. Same time discretization. This time we chose the larger \u03b3=104\ud835\udefesuperscript104\\gamma=10^{4}italic_\u03b3 = 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT to highlight spikes.",
"url": "http://arxiv.org/html/2211.02032v3/extracted/5892183/filter.png"
},
"3": {
"figure_path": "2211.02032v3_figure_3.png",
"caption": "Figure 1.3. Numerical simulation of the process (\u03c0t\u03b4,\u03b3;t\u22650)superscriptsubscript\ud835\udf0b\ud835\udc61\ud835\udeff\ud835\udefe\ud835\udc610(\\pi_{t}^{\\delta,\\gamma}\\ ;\\ t\\geq 0)( italic_\u03c0 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_\u03b4 , italic_\u03b3 end_POSTSUPERSCRIPT ; italic_t \u2265 0 ) for the same realisation of \ud835\udc31\ud835\udc31{\\mathbf{x}}bold_x as Fig. 1.1. Same time discretisation. We have \u03b3=104\ud835\udefesuperscript104\\gamma=10^{4}italic_\u03b3 = 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT and \u03b4\u03b3=C\u2062log\u2061\u03b3\u03b3subscript\ud835\udeff\ud835\udefe\ud835\udc36\ud835\udefe\ud835\udefe\\delta_{\\gamma}=C\\frac{\\log\\gamma}{\\gamma}italic_\u03b4 start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT = italic_C divide start_ARG roman_log italic_\u03b3 end_ARG start_ARG italic_\u03b3 end_ARG, with C\u2208{12,1,2,4,8}\ud835\udc36121248C\\in\\{\\frac{1}{2},1,2,4,8\\}italic_C \u2208 { divide start_ARG 1 end_ARG start_ARG 2 end_ARG , 1 , 2 , 4 , 8 }.",
"url": "http://arxiv.org/html/2211.02032v3/extracted/5892183/filter_smoothed.png"
},
"4": {
"figure_path": "2211.02032v3_figure_4.png",
"caption": "Figure 2.1. Sketch of the two limiting processes. The graph \ud835\udca2\u2062(\ud835\udc31)\ud835\udca2\ud835\udc31\\mathcal{G}({\\mathbf{x}})caligraphic_G ( bold_x ) of the hidden Markov pure jump process \ud835\udc31\ud835\udc31{\\mathbf{x}}bold_x is in red (solid lines), and the set-valued spike process \ud835\udd4f\ud835\udd4f{\\mathbb{X}}blackboard_X is the union of the blue graph (dashed lines) and red graph (solid lines).",
"url": "http://arxiv.org/html/2211.02032v3/x1.png"
},
"5": {
"figure_path": "2211.02032v3_figure_5.png",
"caption": "Figure 5.1. Decomposition of trajectory.",
"url": "http://arxiv.org/html/2211.02032v3/x2.png"
},
"6": {
"figure_path": "2211.02032v3_figure_6.png",
"caption": "Figure 5.2. The separation argument. In blue is represented the spike process restricted on some interval of time where, to have a comprehensive picture, we have only two spikes with time position s\ud835\udc60sitalic_s and s\u2032superscript\ud835\udc60\u2032s^{\\prime}italic_s start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT of length bigger than \u03b5\ud835\udf00\\varepsilonitalic_\u03b5 and separated by a time-distance at least \ud835\udd4a\u03b5subscript\ud835\udd4a\ud835\udf00\\mathbb{S}_{\\varepsilon}blackboard_S start_POSTSUBSCRIPT italic_\u03b5 end_POSTSUBSCRIPT. The spikes of size smaller than \u03b5\ud835\udf00\\varepsilonitalic_\u03b5 are not represented. The green stars correspond to the stopping times Sjsubscript\ud835\udc46\ud835\udc57S_{j}italic_S start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT\u2019s and the red stars to the stopping times Tjsubscript\ud835\udc47\ud835\udc57T_{j}italic_T start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT\u2019s. The black curve representing the trajectory of \u03c0\u03b3superscript\ud835\udf0b\ud835\udefe\\pi^{\\gamma}italic_\u03c0 start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT is contained in a d\u210d\u2062(\ud835\udca2\u2062(\u03c0\u03b3),\ud835\udd4f)subscriptd\u210d\ud835\udca2superscript\ud835\udf0b\ud835\udefe\ud835\udd4f{\\rm d}_{\\mathbb{H}}({\\mathcal{G}}(\\pi^{\\gamma}),\\mathbb{X})roman_d start_POSTSUBSCRIPT blackboard_H end_POSTSUBSCRIPT ( caligraphic_G ( italic_\u03c0 start_POSTSUPERSCRIPT italic_\u03b3 end_POSTSUPERSCRIPT ) , blackboard_X )-thickening (painted in blue) of the blue graph of \ud835\udd4f\ud835\udd4f\\mathbb{X}blackboard_X.",
"url": "http://arxiv.org/html/2211.02032v3/x3.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Estimating the state of a noisy continuous time Markov chain when dynamic sampling is feasible.",
"author": "David Assaf.",
"venue": "The Annals of Applied Probability, 7(3):822\u2013836, 1997.",
"url": null
}
},
{
"2": {
"title": "Exponential stability for nonlinear filtering of diffusion processes in a noncompact domain.",
"author": "Rami Atar.",
"venue": "Ann. Probab., 26(4):1552\u20131574, 1998.",
"url": null
}
},
{
"3": {
"title": "Exponential stability for nonlinear filtering.",
"author": "Rami Atar and Ofer Zeitouni.",
"venue": "Ann. Inst. H. Poincar\u00e9 Probab. Statist., 33(6):697\u2013725, 1997.",
"url": null
}
},
{
"4": {
"title": "Lyapunov exponents for finite state nonlinear filtering.",
"author": "Rami Atar and Ofer Zeitouni.",
"venue": "SIAM J. Control Optim., 35(1):36\u201355, 1997.",
"url": null
}
},
{
"5": {
"title": "A note on the memory length of optimal nonlinear filters.",
"author": "Rami Atar and Ofer Zeitouni.",
"venue": "Systems Control Lett., 35(2):131\u2013135, 1998.",
"url": null
}
},
{
"6": {
"title": "Superfractals.",
"author": "Michael Fielding Barnsley.",
"venue": "Cambridge University Press, Cambridge, 2006.",
"url": null
}
},
{
"7": {
"title": "Emergence of jumps in quantum trajectories via homogenization.",
"author": "Tristan Benoist, C\u00e9dric Bernardin, Rapha\u00ebl Chetrite, Reda Chhaibi, Joseph Najnudel, and Cl\u00e9ment Pellegrini.",
"venue": "Communications in Mathematical Physics, 387(3):1821\u20131867, 2021.",
"url": null
}
},
{
"8": {
"title": "Zooming in on quantum trajectories.",
"author": "Michel Bauer, Denis Bernard, and Antoine Tilloy.",
"venue": "Journal of Physics A: Mathematical and Theoretical, 49(10):10LT01, 2016.",
"url": null
}
},
{
"9": {
"title": "Spiking and collapsing in large noise limits of SDE\u2019s.",
"author": "C. Bernardin, R. Chetrite, R. Chhaibi, J. Najnudel, and C. Pellegrini.",
"venue": "arXiv preprint arXiv:1810.05629, to appear in Annals of Applied Probability, 2022.",
"url": null
}
},
{
"10": {
"title": "Convergence of probability measures.",
"author": "Patrick Billingsley.",
"venue": "John Wiley & Sons, 2013.",
"url": null
}
},
{
"11": {
"title": "On filtering of Markov chains in strong noise.",
"author": "Pavel Chigansky.",
"venue": "IEEE Transactions on Information Theory, 52(9):4267\u20134272, 2006.",
"url": null
}
},
{
"12": {
"title": "On filtering for a hidden Markov chain under square performance criterion.",
"author": "Georgii Ksenofontovich Golubev.",
"venue": "Problemy Peredachi Informatsii, 36(3):22\u201328, 2000.",
"url": null
}
},
{
"13": {
"title": "Poisson processes, volume 3.",
"author": "John Frank Charles Kingman.",
"venue": "Clarendon Press, 1992.",
"url": null
}
},
{
"14": {
"title": "On some filtration procedure for jump Markov process observed in white Gaussian noise.",
"author": "Rafail Z. Khasminskii and Betty V. Lazareva.",
"venue": "Ann. Statist., 20(4):2153\u20132160, 1992.",
"url": null
}
},
{
"15": {
"title": "The efficacy of error mitigation techniques for DRAM retention failures: A comparative experimental study.",
"author": "Samira Khan, Donghyuk Lee, Yoongu Kim, Alaa R. Alameldeen, Chris Wilkerson, and Onur Mutlu.",
"venue": "ACM SIGMETRICS Performance Evaluation Review, 42(1):519\u2013532, 2014.",
"url": null
}
},
{
"16": {
"title": "Asymptotic filtering for finite state Markov chains.",
"author": "Rafail Khasminskii and Ofer Zeitouni.",
"venue": "Stochastic Process. Appl., 63(1):1\u201310, 1996.",
"url": null
}
},
{
"17": {
"title": "Statistics of random processes: I. General theory, volume 1.",
"author": "Robert S. Liptser and Albert N. Shiryaev.",
"venue": "Springer Science & Business Media, 2001.",
"url": null
}
},
{
"18": {
"title": "Continuous quantum error correction as classical hybrid control.",
"author": "Hideo Mabuchi.",
"venue": "New Journal of Physics, 11(10):105044, October 2009.",
"url": null
}
},
{
"19": {
"title": "Nonlinear filtering of one-dimensional diffusions in the case of a high signal-to-noise ratio.",
"author": "Jean Picard.",
"venue": "SIAM J. Appl. Math., 46(6):1098\u20131125, 1986.",
"url": null
}
},
{
"20": {
"title": "Understanding and modeling on-die error correction in modern DRAM: An experimental study using real devices.",
"author": "Minesh Patel, Jeremie S. Kim, Hasan Hassan, and Onur Mutlu.",
"venue": "In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pages 13\u201325. IEEE, 2019.",
"url": null
}
},
{
"21": {
"title": "Quenched large deviations for one dimensional nonlinear filtering.",
"author": "\u00c9tienne Pardoux and Ofer Zeitouni.",
"venue": "SIAM J. Control Optim., 43(4):1272\u20131297, 2004/05.",
"url": null
}
},
{
"22": {
"title": "Some large deviation asymptotics in small noise filtering problems.",
"author": "Anugu Sumith Reddy, Amarjit Budhiraja, and Amit Apte.",
"venue": "SIAM J. Control Optim., 60(1):385\u2013409, 2022.",
"url": null
}
},
{
"23": {
"title": "Continuous martingales and Brownian motion, volume 293.",
"author": "Daniel Revuz and Marc Yor.",
"venue": "Springer Science & Business Media, 2013.",
"url": null
}
},
{
"24": {
"title": "DRAM errors in the wild: a large-scale field study.",
"author": "Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber.",
"venue": "ACM SIGMETRICS Performance Evaluation Review, 37(1):193\u2013204, 2009.",
"url": null
}
},
{
"25": {
"title": "Spikes in quantum trajectories.",
"author": "Antoine Tilloy, Michel Bauer, and Denis Bernard.",
"venue": "Phys. Rev. A, 92(5):052111, 2015.",
"url": null
}
},
{
"26": {
"title": "Stochastic calculus, filtering, and stochastic control.",
"author": "Ramon Van Handel.",
"venue": "Course notes, URL http://www.princeton.edu/rvan/acm217/ACM217.pdf, 2007.",
"url": null
}
},
{
"27": {
"title": "Some applications of stochastic differential equations to optimal nonlinear filtering.",
"author": "W. Murray Wonham.",
"venue": "Journal of the Society for Industrial and Applied Mathematics, Series A: Control, 2(3):347\u2013369, 1964.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2211.02032v3"
}
20241001/2211.12371v3.json
ADDED
The diff for this file is too large to render.
See raw diff

20241001/2212.04223v3.json
ADDED
The diff for this file is too large to render.
See raw diff

20241001/2301.04907v3.json
ADDED
The diff for this file is too large to render.
See raw diff

20241001/2301.11301v4.json
ADDED
@@ -0,0 +1,151 @@
{
"title": "A Complete Inference System for Skip-free Guarded Kleene Algebra with Tests",
"abstract": "Guarded Kleene Algebra with Tests (GKAT) is a fragment of Kleene Algebra with Tests (KAT) that was recently introduced to reason efficiently about imperative programs. In contrast to KAT, GKAT does not have an algebraic axiomatization, but relies on an analogue of Salomaa\u2019s axiomatization of Kleene Algebra. In this paper, we present an algebraic axiomatization and prove two completeness results for a large fragment of GKAT consisting of skip-free programs.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Kleene algebra with tests (KAT) [27 ###reference_b27###] is a logic for reasoning about semantics and equivalence of simple imperative programs.\nIt extends Kleene Algebra (KA) with Boolean control flow, which enables encoding of conditionals and while loops.\nKAT has been applied to verification tasks.\nFor example, it was used in proof-carrying Java programs [25 ###reference_b25###], in compiler optimization [29 ###reference_b29###], and file system verification [9 ###reference_b9###].\nMore recently, KAT was used for reasoning about packet-switched networks, serving as a core to NetKAT [4 ###reference_b4###] and Probabilistic NetKAT [13 ###reference_b13###, 45 ###reference_b45###].\nThe success of KAT in networking is partly due to its dual nature: it can be used to both specify and verify network properties.\nMoreover, the implementations of NetKAT and ProbNetKAT were surprisingly competitive with state-of-the-art tools [14 ###reference_b14###, 46 ###reference_b46###].\nPart of the surprise with the efficiency of these implementations is that the decision problem for equivalence in both KAT and NetKAT is PSPACE-complete [30 ###reference_b30###, 4 ###reference_b4###].\nFurther investigations [44 ###reference_b44###] revealed that the tasks performed in NetKAT only make use of a fragment of KAT. 
It turns out that the difficulty of deciding equivalence in KAT can largely be attributed to the non-deterministic nature of KAT programs.\nIf one restricts to KAT programs that operate deterministically with respect to Boolean control flow, the associated decision problem is almost linear.\nThis fragment of KAT was first identified in [31 ###reference_b31###] and further explored as guarded Kleene algebra with tests (GKAT) [44 ###reference_b44###].\nThe study in [44 ###reference_b44###] proved that the decision problem for GKAT programs is almost linear, and proposed an axiomatization of equivalence.\nHowever, the axiomatization suffered from a serious drawback: it included a powerful uniqueness of solutions axiom (UA), which greatly encumbers algebraic reasoning in practice.\nIn order to use (UA) to show that a pair of programs are equivalent, one needs to find a system of equations satisfied by both.\nEven more worryingly, the axiomatization contained a fixed-point axiom with a side condition reminiscent of Salomaa\u2019s axiomatization for regular expressions.\nThis axiom is known to be non-algebraic, and thus impairs the use of the axiomatic reasoning in context (as substitution of atomic programs is not sound anymore).\nThe authors of [44 ###reference_b44###] left as open questions whether (UA) can be derived from the other GKAT axioms and whether the non-algebraic side condition can be removed.\nDespite the attention GKAT has received in recent literature [41 ###reference_b41###, 50 ###reference_b50###, 43 ###reference_b43###], these questions remain open.\nIn the present work, we offer a partial answer to the questions posed in [44 ###reference_b44###].\nWe show that proving the validity of an equivalence in GKAT does not require (UA) if the pair of programs in question are of a particular form, what we call skip-free.\nThis fragment of GKAT is expressive enough to capture a large class of programs, and it also provides a better basis for algebraic 
reasoning: we show that the side condition of the fixed-point axiom can be removed.\nOur inspiration to look at this fragment came from recent work by Grabmayer and Fokkink on the axiomatization of one-free star expressions modulo bisimulation [16 ###reference_b16###, 15 ###reference_b15###], an important stepping stone to solve a decades-open problem posed by Milner [34 ###reference_b34###].\nIn a nutshell, our contribution is to identify a large fragment of GKAT, what we call the skip-free fragment, that admits an algebraic axiomatization.\nWe axiomatize both bisimilarity and language semantics and provide two completeness proofs.\nThe first proves completeness of skip-free GKAT modulo bisimulation [41 ###reference_b41###], via a reduction to completeness of Grabmayer and Fokkink\u2019s system [16 ###reference_b16###].\nThe second proves completeness of skip-free GKAT w.r.t. language semantics via a reduction to skip-free GKAT modulo bisimulation.\nWe also show that equivalence proofs of skip-free GKAT expressions (for both semantics) embed in full GKAT.\nThe next section contains an introduction to GKAT and an overview of the open problems we tackle in the technical sections of the paper."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Overview",
"text": "In this section we provide an overview of our results. We start with a motivating example of two imperative programs to discuss program equivalence as a verification technology. We then show how GKAT can be used to solve this problem and explore the open questions that we tackle in this paper."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Introducing Skip-free GKAT",
"text": "The axiom scheme (UA) can be avoided entirely in a certain fragment of GKAT, both for determining bisimilarity and language equivalence.\nIn this section, we give a formal description of the expressions in this fragment and their semantics."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Skip-free Semantics",
"text": "There are three natural ways to interpret skip-free GKAT expressions: as automata, as behaviours, and as languages.222We will connect these to the relational semantics from Definition 1 ###reference_inition1### in Section 7 ###reference_###.\nAfter a short note on Boolean algebra, we shall begin with the automaton interpretation, also known as the small-step semantics, from which the other two can be derived.\nTo properly present our automata, we need to introduce one more notion.\nBoolean expressions are a syntax for elements of a Boolean algebra, an algebraic structure satisfying the equations in Fig. 3 ###reference_###.\nWhen a Boolean algebra is freely generated from a finite set of basic tests ( in the case of ), it has a finite set of nonzero minimal elements called atoms.\nAtoms are in one-to-one correspondence with sets of tests, and the free Boolean algebra is isomorphic to , the sets of subsets of , equipped with , , and .\nIn the context of programming, one can think of an atom as a complete description of the machine state, saying which tests are true and which are false.\nWe will denote atoms by the Greek letters and , sometimes with indices.\nGiven a Boolean expression and an atom we say that entails , written , whenever , or equivalently .\nThroughout the paper, we use the notation where is a set and is a symbol to denote the disjoint union (coproduct) of and .\nWhen and are sets, we write for the set of functions from to .\nThe small-step semantics of a skip-free GKAT expression uses a special type of deterministic automaton.\nA skip-free automaton is a pair , where is a set of states and is a transition structure.\nAt every and for any , one of three things can happen:\n, which we write as , means the state under makes a transition to a new state , after performing the action ;\n, which we write as , means the state under successfully terminates with action ;\n, which we write as , means the state under terminates with failure.\nOften we 
will leave these outputs implicit.\nWe often drop the subscript from the notations above when no confusion is likely.\nWe equip the set of all skip-free GKAT expressions with an automaton structure given in Fig. 4 ###reference_###, representing step-by-step execution.\nGiven , we denote the set of states reachable from by and call this the small-step semantics of .\nThe small-step semantics of skip-free GKAT expressions is inspired by Brzozowski\u2019s derivatives [8 ###reference_b8###], which provide an automata-theoretic description of the step-by-step execution of a regular expression.\nOur first lemma tells us that, like regular expressions, skip-free GKAT expressions correspond to finite automata.\nFor any , has finitely many states.\nThe automaton that arises from the program fizzbuzz2 is below, with , , and .\nThe expression is the same as in Example 3 ###reference_mple3###, is the same as but without the action in front, and .\nWe also adopt the convention of writing where to represent all transitions where .\nThe automaton interpretation of a skip-free GKAT expression (its small-step semantics) provides an intuitive visual depiction of the details of its execution. This is a useful view on the operational semantics of expressions, but sometimes one might want to have a more precise description of the global behaviour of the program. The remaining two interpretations of skip-free GKAT expressions aim to capture two different semantics of expressions: one finer, bisimilarity, that makes a distinction on the branching created by how its states respond to atomic tests, which actions can be performed, and when successful termination and crashes occur; another coarser, language semantics, that assigns a language of traces to each expression capturing all sequences of actions that lead to successful termination. 
The key difference between these two semantics will be their ability to distinguish programs that crash early in the execution from programs that crash later\u2014this will become evident in the axiomatizations of both semantics. We start by presenting the language semantics as this is the more traditional one associated with GKAT (and regular) expressions.\nFormally, a (skip-free) guarded trace is a nonempty string of the form , where each and . Intuitively, each captures the state of program variables needed to execute program action and the execution of each except the last yields a new program state .\nA skip-free guarded language is a set of guarded traces.\nSkip-free guarded languages should be thought of as sets of strings denoting successfully terminating computations.\nIn a skip-free automaton with a state , the language accepted by is the skip-free guarded language\nIf is clear from context, we will simply write instead of .\nIf , we write and say that and are language equivalent.\nEach skip-free GKAT expression is a state in the automaton of expressions (Definition 4 ###reference_inition4###) and therefore accepts a language.\nThe language accepted by a skip-free GKAT expression is the set of successful runs of the program it denotes.\nAnalogously to GKAT, we can describe this language inductively.\nGiven an expression , the language accepted by in , i.e., can be characterized as follows:\nHere, we write and , while (where denotes the empty word) and .\nLemma 2 ###reference_ma2### provides a way of computing the language of an expression without having to generate the automaton for .\nAnother, finer, notion of equivalence that we can associate with skip-free automata is bisimilarity.\nGiven skip-free automata and , a bisimulation is a relation such that for any , and :\nif and only if ,\nif and only if , and\nif , then there is a such that and .333Together with the first two constraints, this condition implies that if , then there is an such that and 
.\nWe call and bisimilar if for some bisimulation and write\n.\nIn a fixed skip-free automaton , we define as the largest bisimulation, called bisimilarity.\nThis is an equivalence relation and a bisimulation.444\nThis follows directly from seeing skip-free automata as a special type of coalgebra and the fact that the functor involved preserves weak pullbacks [38 ###reference_b38###].\nIn fact, coalgebra has been an indispensable tool in the production of the current paper, guiding us to the correct definitions and simplifying many of the proofs.\n\nThe bisimilarity equivalence class of a state is often called its behaviour.\nIn the automaton below, and are bisimilar.\nThis is witnessed by the bisimulation .\nWe can also use bisimulations to witness language equivalence.\nLet .\nIf , then .\nThe converse of Lemma 3 ###reference_ma3### is not true.\nConsider, for example, the program that repeats the atomic action indefinitely, never reaching .\nSince\nwe know that .\nBut and are not bisimilar, since Fig. 4 ###reference_### tells us that and , which together refute Definition 6 ###reference_inition6###.1."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Axioms",
"text": "Next, we give an inference system for bisimilarity and language equivalence consisting of equations and equational inference rules.\nThe axioms of skip-free GKAT are given in Fig. 2 ###reference_###.\nThey include the equation (), which says that early deadlock is the same as late deadlock.\nThis is sound with respect to the language interpretation, meaning that () is true if is replaced with a skip-free guarded language, but it is not sound with respect to the bisimulation semantics.\nFor example, the expressions and are not bisimilar for any .\nInterestingly, this is the only axiomatic difference between bisimilarity and language equivalence.\nThe underlying logical structure of our inference systems is equational logic [6 ###reference_b6###], meaning that provable equivalence is an equivalence relation that is preserved by the algebraic operations.\nGiven expressions , we write and say that and are -equivalent if the equation can be derived from the axioms in Fig. 2 ###reference_### without the axiom marked ().\nWe write and say that and are -equivalent if can be derived from the whole set of axioms in Fig. 2 ###reference_###.\nThe axioms in Fig. 2 ###reference_### are sound with respect to the respective semantics they axiomatize.\nThe only axiom that is not sound w.r.t. bisimilarity is , as this would relate automata with different behaviours ( may permit some action to be performed, and this is observable in the bisimulation).\nFor any ,\nIf , then .\nIf , then .\nWe consider the next two results, which are jointly converse to Theorem 3.1 ###reference_theorem1###, to be the main theorems of this paper.\nThey state that the axioms in Fig. 
2 ###reference_### are complete for bisimilarity and language equivalence respectively, i.e., they describe a complete set of program transformations for skip-free GKAT.\nIf , then .\nIf , then .\nWe prove Theorem 3.2 ###reference_theorem2### in Section 5 ###reference_### by drawing a formal analogy between skip-free GKAT and a recent study of regular expressions in the context of process algebra [16 ###reference_b16###].\nWe include a short overview of the latter in the next section.\nWe delay the proof of Theorem 3.3 ###reference_theorem3### to Section 6 ###reference_###, which uses a separate technique based on the pruning method introduced in [41 ###reference_b41###]."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "One-free Star Expressions",
"text": "Regular expressions were introduced by Kleene [24 ###reference_b24###] as a syntax for the algebra of regular events.\nMilner offered an alternative interpretation of regular expressions [34 ###reference_b34###], as what he called star behaviours.\nBased on work of Salomaa [39 ###reference_b39###], Milner proposed a sound axiomatization of the algebra of star behaviours, but left completeness an open problem.\nAfter nearly 40 years of active research from the process algebra community, a solution was finally found by Grabmayer [15 ###reference_b15###].\nA few years before this result, Grabmayer and Fokkink proved that a suitable restriction of Milner\u2019s axioms gives a complete inference system for the behaviour interpretation of a fragment of regular expressions, called the one-free fragment [16 ###reference_b16###].\nIn this section, we give a quick overview of Grabmayer and Fokkink\u2019s one-free fragment [16 ###reference_b16###], slightly adapted to use an alphabet that will be suitable to later use in one of the completeness proofs of skip-free GKAT."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Completeness for Skip-free Bisimulation GKAT",
"text": "This section is dedicated to the proof of our first completeness result, Theorem 3.2 ###reference_theorem2###, which says that the axioms of Fig. 2 ###reference_### (excluding ) are complete with respect to bisimilarity in skip-free GKAT. Our proof strategy is a reduction of our completeness claim to the completeness result for (Theorem 4.1 ###reference_theorem1###).\nThe key objects of interest in the reduction are a pair of translations: one translation turns skip-free GKAT expressions into one-free star expressions and maintains bisimilarity, and the other translation turns (certain) one-free star expressions into skip-free GKAT expressions and maintains provable bisimilarity.\nWe first discuss the translation between automata and labelled transition systems, which preserves and reflects bisimilarity.\nWe then introduce the syntactic translations and present the completeness proof."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Transforming skip-free automata to labelled transition systems",
"text": "We can easily transform a skip-free automaton into an LTS by essentially turning transitions into transitions. This can be formalized, as follows.\nGiven a set , we define to be\n.\nGiven a skip-free automaton , we define\nThe function is injective: as its name suggests, is essentially the graph of when viewed as a partial function from to .\nThis implies that the transformation of skip-free automata into LTSs preserves and reflects bisimilarity.\nLet , and be a skip-free automaton.\nThen in if and only if in .\nLeading up to the proof of Theorem 3.2 ###reference_theorem2###, we also need to undo the effect of on skip-free automata with a transformation that takes every LTS of the form to its underlying skip-free automaton .\nThe LTSs that can be written in the form for some skip-free automaton can be described as follows.\nCall a set graph-like if whenever and , then and .\nAn LTS is deterministic if is graph-like for every .\nAn LTS is deterministic if and only if for some skip-free automaton .\nAs mentioned in Footnote 4 ###reference_te4###, there is a coalgebraic outlook in many of the technical details in the present paper.\nFor the interested reader, and the map that transforms graph-like relations into functions are actually natural transformations between the functors whose coalgebras correspond to skip-free automata and deterministic LTSs, and are furthermore inverse to one another.\nThis implies that and witness an isomorphism between the categories of skip-free automata and deterministic LTSs."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Translating Syntax",
"text": "We can mimic the transformation of skip-free automata into deterministic labelled transition systems and vice-versa by a pair of syntactic translations going back and forth between skip-free GKAT expressions and certain one-free star expressions. Similar to how only some labelled transition systems can be turned into skip-free automata, only some one-free star expressions have corresponding skip-free GKAT expressions\u2014the deterministic ones.\nThe definition of deterministic expressions requires the following notation: given a test , we define inductively on as follows:\nfor any and .\nThe set of deterministic one-free star expressions is the smallest subset such that (i) , (ii) for any and , (iii) for any , and (iv) for any such that for some we have and , we also have .\nWe can now present the translations of skip-free expressions to deterministic one-free star expressions.\nWe define the translation function by\nfor any , , .\nIn Definition 11 ###reference_inition11###, we make use of a generalized sum .\nTechnically, this requires we fix an enumeration of ahead of time, say , at which point we can define .\nOf course, is commutative and associative up to , so the actual ordering of this sum does not matter as far as equivalence is concerned.\nThe most prescient feature of this translation is that it respects bisimilarity.\nThe graph of the translation function is a bisimulation of labelled transition systems between and .\nConsequently, in if and only if in .\nWe would now like to define a back translation function by induction on its argument.\nLooking at Definition 10 ###reference_inition10###, one might be tempted to write , but the fact of the matter is that it is possible for there to be distinct such that , even when and have different atoms.\nSay that are separated by if and .\nIf such a exists we say that and are separated.\nAnother way to define is to say that is the smallest subset of containing and that is closed under sequential 
composition and closed under unions and stars of separated one-free star expressions.\nSuppose and are separated by both and .\nThen one can prove that and , so and are separated by as well.\nSince there are only finitely many Boolean expressions up to equivalence, there is a maximal (weakest) test such that and are separated by .\nThe back translation is defined by\nfor any .\nIn the union and star cases, we may use that and are separated (by definition of ), so that is well-defined.\nThe most important property of in the completeness proof is that it preserves provable equivalence.\nLet .\nIf , then .\nAn intuitive approach to proving Theorem 5.1 ###reference_theorem1### proceeds by induction on the derivation of .\nHowever, if one-free regular expressions appear in the derivation of that are not deterministic, then the induction hypothesis cannot be applied, because the translation map is only defined on .\nIn other words, the induction hypothesis must be strengthened, as a proof by induction on derivations can only go through if whenever and , there is a derivation of in which only deterministic one-free regular expressions appear.\nSuch a derivation is what we call a deterministic proof.\nGiven , we call a proof of a deterministic proof if every expression that appears in the proof is a deterministic one-free regular expression (i.e., is in ).\nWe write if there is a deterministic proof of .\nTo proceed with the completeness proof sketched above, we have to show that the induction hypothesis is sound.\nFor any , if and only if .\nThe proof of Theorem 5.2 ###reference_theorem2### requires an in-depth dive into the completeness proof of Grabmayer and Fokkink [16 ###reference_b16###].\nThis can be found in Appendix 0.F ###reference_###.\nWith Theorem 5.2 ###reference_theorem2### in hand, Theorem 5.1 ###reference_theorem1### is proven by induction on the deterministic proof of .\nThe last fact needed in the proof of completeness is that, up to provable 
equivalence, every skip-free GKAT expression is equivalent to its back-translation.\nFor any , .\nWe are now ready to prove Theorem 3.2 ###reference_theorem2###, that is complete with respect to behavioural equivalence in skip-free GKAT.\nSee 3.2 ###reference_theorem2###\nLet be a bisimilar pair of skip-free GKAT expressions.\nBy Lemma 5 ###reference_ma5###, and are bisimilar in .\nBy Lemma 7 ###reference_ma7###, the translation preserves bisimilarity, so and are bisimilar in as well.\nBy Theorem 4.1 ###reference_theorem1###, .\nTherefore, by Theorem 5.1 ###reference_theorem1###, .\nFinally, by Lemma 8 ###reference_ma8###, we have\n."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Completeness for Skip-free GKAT",
"text": "The previous section establishes that -equivalence coincides with bisimilarity for skip-free GKAT expressions by reducing the completeness problem of skip-free GKAT up to bisimilarity to a solved completeness problem, namely that of one-free star expressions up to bisimilarity.\nIn this section we prove a completeness result for skip-free GKAT up to language equivalence.\nWe show this can be achieved by reducing it to the completeness problem of skip-free GKAT up to bisimilarity, which we just solved in the previous section.\nDespite bisimilarity being a less traditional equivalence in the context of Kleene algebra, this reduction simplifies the completeness proof greatly, and justifies the study of bisimilarity in the pursuit of completeness for GKAT.\nThe axiom (which is the only difference between skip-free GKAT up to language equivalence and skip-free GKAT up to bisimilarity) indicates that the only semantic difference between bisimilarity and language equivalence in skip-free GKAT is early termination.\nThis motivates our reduction to skip-free GKAT up to bisimilarity below, which involves reducing each skip-free expression to an expression representing only the successfully terminating branches of execution.\nNow let us turn to the formal proof of Theorem 3.3 ###reference_theorem3###, which says that if are such that , then .\nIn a nutshell, our strategy is to produce two terms such that , and in .\nThe latter property tells us that by Theorem 3.2 ###reference_theorem2###, which together with the other properties allows us to conclude .\nThe expression can be thought of as the early termination version of , obtained by pruning the branches of its execution that cannot end in successful termination.\nTo properly define the transformation on expressions, we need the notion of a dead state in a skip-free automaton, analogous to a similar notion from [44 ###reference_b44###].\nLet be a skip-free automaton.\nThe set is the largest subset of such for all 
and , either or .\nWhen , is dead; otherwise, it is live.\nIn the sequel, we say is dead when is a dead state in , i.e., when .\nWhether is dead can be determined by a simple depth-first search, since can reach only finitely many expressions by .\nThe axioms of skip-free GKAT can also tell when a skip-free expression is dead.\nLet .\nIf is dead, then .\nWe are now ready to define , the transformation on expressions promised above.\nThe intuition here is to prune the dead subterms of by recursive descent; whenever we find a part that will inevitably lead to an expression that is never going to lead to acceptance, we set it to .\nLet and .\nIn the sequel we use as a shorthand for .\nWe now define inductively, as follows\n{mathpar}\n\u230a0\u230b = 0\n\u230ap\u230b = p\n\u230ae_1 +_b e_2\u230b = \u230ae_1\u230b +_b \u230ae_2\u230b\n\u230ae_1 \u22c5e_2\u230b =\n{0 is dead\u230ae1\u230b \u22c5\u230ae2\u230b otherwise\n\u230ae_1 ^(b) e_2\u230b =\n{0 is dead\u230ae1\u230b (b)\u230ae2\u230b otherwise\nThe transformation defined above yields a term that is -equivalent to , because includes the early termination axiom .\nThe proof is a simple induction on , using Lemma 9 ###reference_ma9###.\nFor any , .\nIt remains to show that if , then and are bisimilar.\nTo this end, we need to relate the language semantics of and to their behaviour.\nAs a first step, we note that behaviour that never leads to acceptance can be pruned from a skip-free automaton by removing transitions into dead states.\nLet be a skip-free automaton.\nDefine by\nMoreover, language equivalence of two states in a skip-free automaton implies bisimilarity of those states, but only in the pruned version of that skip-free automaton.\nThe proof works by showing that the relation on that connects states with the same language is, in fact, a bisimulation in .\nLet be a skip-free automaton and .\nWe have\nThe final intermediate property relates the behaviour of states in the pruned skip-free automaton of 
expressions to those in the syntactic skip-free automaton.\nThe graph of is a bisimulation of skip-free automata between and .\nWe now have all the ingredients necessary to prove Theorem 3.3 ###reference_theorem3###.\nSee 3.3 ###reference_theorem3###\nIf , then by definition .\nBy Lemma 11 ###reference_ma11###, in , which by Lemma 12 ###reference_ma12### implies that in .\nFrom Theorem 3.2 ###reference_theorem2### we know that , and therefore by Lemma 10 ###reference_ma10###."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Relation to GKAT",
"text": "So far we have seen the technical development of skip-free GKAT without much reference to the original development of GKAT as it was presented in [44 ###reference_b44###] and [41 ###reference_b41###].\nIn this section, we make the case that the semantics of skip-free GKAT is merely a simplified version of the semantics of GKAT, and that the two agree on which expressions are equivalent after embedding skip-free GKAT into GKAT. More precisely, we identify the bisimulation and language semantics of skip-free GKAT given in Section 3 ###reference_### with instances of the existing bisimulation [41 ###reference_b41###] and language [44 ###reference_b44###] semantics of GKAT proper.\nThe main takeaway is that two skip-free GKAT expressions are equivalent in our semantics precisely when they are equivalent when interpreted as proper GKAT expressions in the existing semantics."
},
{
"section_id": "7.1",
"parent_section_id": "7",
"section_name": "Bisimulation semantics",
"text": "To connect the bisimulation semantics of skip-free GKAT to GKAT at large, we start by recalling the latter.\nTo do this, we need to define GKAT automata.\nA (GKAT) automaton is a pair such that is a set and is a function called the transition function.\nWe write to denote , to denote , and if .\nAutomata can be equipped with their own notion of bisimulation.555As in previous sections, automata can be studied as coalgebras for a given functor and the notions below are instances of general abstract notions [18 ###reference_b18###, 38 ###reference_b38###].\nGiven automata and , a bisimulation between them is a relation such that if , and ,:\nif , then ; and\nif , then ; and\nif , then such that .\nWe call and bisimilar and write if for some bisimulation .\nThe properties listed above are implications, but it is not hard to show that if all three properties hold for , then so do all of their symmetric counterparts.\nFor instance, if , then certainly must be of the form , which then implies that while .\nTwo GKAT expressions are bisimilar when they are bisimilar as states in the syntactic automaton [41 ###reference_b41###], , summarized in Fig. 
7 ###reference_###.\nThe definition of given above diverges slightly from the definition in [41 ###reference_b41###].\nFortunately, this does not make a difference in terms of the bisimulation semantics: two expressions are bisimilar in if and only if they are bisimilar in the original semantics.\nWe refer to Appendix 0.E ###reference_### for a detailed account.\nThere is a fairly easy way to convert a skip-free automaton into a GKAT automaton: simply reroute all accepting transitions into a new state , that accepts immediately, and leave the other transitions the same.\nGiven a skip-free automaton , we define the automaton , where is defined by\nWe can show that two states are bisimilar in a skip-free automaton if and only if these same states are bisimilar in the corresponding GKAT automaton.\nLet be a skip-free automaton, and let .\nThe syntactic skip-free automaton can of course be converted to a GKAT automaton in this way.\nIt turns out that there is a very natural way of correlating this automaton to the syntactic GKAT automaton .\nThe relation is a bisimulation between and .\nWe now have everything to relate the bisimulation semantics of skip-free GKAT expressions to the bisimulation semantics of GKAT expressions at large.\nLet .\nThe following holds:\nWe derive using Lemmas 13 ###reference_ma13### and 14 ###reference_ma14###, as follows: since the graph of is a bisimulation, in iff in if and only if in .\nIn the last step, we use the fact that if is a bisimulation (of automata) between and , and is a bisimulation between and , then is a bisimulation between and (see Lemma 20 ###reference_ma20###)."
},
{
"section_id": "7.2",
"parent_section_id": "7",
"section_name": "Language semantics",
"text": "We now recall the language semantics of GKAT, which is defined in terms of guarded strings [30 ###reference_b30###], i.e., words in the set , where atoms and actions alternate.\nIn GKAT, successful termination occurs with a trailing associated test, representing the state of the machine at termination.\nIn an execution of the sequential composition of two programs , the test trailing the execution of needs to match up with an input test compatible with , otherwise the program crashes at the end of executing .\nThe following operations on languages of guarded strings record this behaviour by matching the ends of traces on the left with the beginnings of traces on the right.\nFor , define\n and\n,\nwhere is defined inductively by setting and .\nThe language semantics of a GKAT expression is now defined in terms of the composition operators above, as follows.\nWe define inductively, as follows:\n{mathpar}\n^L(b) = { \u03b1\u2208At\u2223\u03b1\u2264b }\n^L(p) = { \u03b1p\u03b2\u2223\u03b1, \u03b2\u2208At}\n^L(e \u22c5f) = ^L(e) \u22c4^L(f)\n\n^L(e +_b f) = ^L(b) \u22c4^L(e) \u222a^L(\u00afb) \u22c4^L(f)\n^L(e^(b)) = (^L(b) \u22c4^L(e))^(*) \u22c4^L(\u00afb)\nThis semantics is connected to the relational semantics from Definition 1 ###reference_inition1###:\nFor , we have if and only if for all relational interpretations\nMoreover, since skip-free GKAT expressions are also GKAT expressions, this means that we now have two language interpretations of the former, given by and .\nFortunately, one can easily be expressed in terms of the other.\nFor , it holds that .\nAs an easy consequence of the above, we find that the two semantics must identify the same skip-free GKAT-expressions.\nFor , we have iff .\nBy Theorem 3.3 ###reference_theorem3###, these properties imply that also axiomatizes relational equivalence of skip-free GKAT-expressions, as a result.\nLet , we have if and only if for all relational interpretations ."
},
{
"section_id": "7.3",
"parent_section_id": "7",
"section_name": "Equivalences",
"text": "Finally, we relate equivalences as proved for skip-free GKAT expressions to those provable for GKAT expressions, showing that proofs of equivalence for skip-free GKAT expressions can be replayed in the larger calculus, without (UA).\nThe axioms of GKAT as presented in [44 ###reference_b44###, 41 ###reference_b41###] are provided in Figure 8 ###reference_###.\nWe write when is derivable from the axioms in Figure 8 ###reference_### with the exception of (), and when is derivable from the full set.\nThe last axiom of GKAT is not really a single axiom, but rather an axiom scheme, parameterized by the function defined as follows:\nThe function models the analogue of Salomaa\u2019s empty word property [39 ###reference_b39###]: we say is guarded when is equivalent to by to the laws of Boolean algebra.\nNotice that as GKAT expressions, skip-free GKAT expressions are always guarded.\nSince skip-free GKAT expressions are also GKAT expressions, we have four notions of equivalence for GKAT expressions: as skip-free expressions or GKAT expressions in general, either with or without ().\nThese are related as follows.\nLet .\nThen (1) if and only if , and (2) if and only if .\nFor the forward direction of (1), we note that if , then in by Theorem 3.1 ###reference_theorem1###.\nBy Lemma 15 ###reference_ma15###, in and therefore by Theorem 3.2 ###reference_theorem2###.\nConversely, note that any proof of by the axioms of Figure 2 ###reference_### can be replayed using the rules from Figure 8 ###reference_###.\nIn particular, the guardedness condition required for the last skip-free GKAT axiom using the last GKAT axiom is always true, because for any .\nThe proof of the second claim is similar, but uses Theorem 3.2 ###reference_theorem2### instead."
},
{
"section_id": "8",
"parent_section_id": null,
"section_name": "Related Work",
"text": "This paper fits into a larger research program focused on understanding the logical and algebraic content of programming.\nKleene\u2019s paper introducing the algebra of regular languages [24 ###reference_b24###] was a foundational contribution to this research program, containing an algebraic account of mechanical programming and some of its sound equational laws.\nThe paper also contained an interesting completeness problem: give a complete description of the equations satisfied by the algebra of regular languages.\nSalomaa was the first to provide a sound and complete axiomatization of language equivalence for regular expressions [39 ###reference_b39###].\nThe axiomatization in op. cit. included an inference rule with a side condition that prevented it from being algebraic in the sense that the validity of an equation is not preserved when substituting letters for arbitrary regular expressions.\nNevertheless, this inspired axiomatizations of several variations and extensions of Kleene algebra [48 ###reference_b48###, 44 ###reference_b44###, 43 ###reference_b43###], as well as Milner\u2019s axiomatization of the algebra of star behaviours [34 ###reference_b34###].\nThe side condition introduced by Salomaa is often called the empty word property, an early version of a concept from process theory called guardedness666This is a different use of the word \u201cguarded\u201d than in \u201cguarded Kleene algebra with tests\u201d. In the context of process theory, a recursive specification is guarded if every of its function calls occurs within the scope of an operation. 
that is also fundamental to the theory of iteration [7 ###reference_b7###].\nOur axiomatization of skip-free GKAT is algebraic due to the lack of a guardedness side-condition (it is an equational Horn theory [33 ###reference_b33###]).\nThis is particularly desirable because it allows for an abundance of other models of the axioms.\nKozen proposed an algebraic axiomatization of Kleene algebra that is sound and complete for language equivalence [26 ###reference_b26###], which has become the basis for a number of axiomatizations of other Kleene algebra variants [14 ###reference_b14###, 21 ###reference_b21###, 22 ###reference_b22###, 49 ###reference_b49###] including Kleene algebra with tests [27 ###reference_b27###].\nKAT also has a plethora of relational models, which are desirable for reasons we hinted at in Section 2 ###reference_###.\nGKAT is a fragment of KAT that was first identified in [31 ###reference_b31###].\nIt was later given a sound and complete axiomatization in [44 ###reference_b44###], although the axiomatization is neither algebraic nor finite (it includes (UA), an axiom scheme that stands for infinitely many axioms).\nIt was later shown that dropping (called (S3) in [44 ###reference_b44###]) from this axiomatization gives a sound and complete axiomatization of bisimilarity [41 ###reference_b41###].\nThe inspiration for our pruning technique is also in [41 ###reference_b41###], where a reduction of the language equivalence case to the bisimilarity case is discussed.\nDespite the existence of an algebraic axiomatization of language equivalence in KAT, GKAT has resisted algebraic axiomatization so far.\nSkip-free GKAT happens to be a fragment of GKAT in which every expression is guarded, thus eliminating the need for the side condition in Fig. 
8 ###reference_### and allowing for an algebraic axiomatization.\nAn inequational axiomatization resembling that of KAT might be gleaned from the recent preprint [40 ###reference_b40###], but we have not investigated this carefully.\nThe GKAT axioms for bisimilarity of ground terms can also likely be obtained from the small-step semantics of GKAT using [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###], but unfortunately this does not appear to help with the larger completeness problem.\nThe idea of reducing one completeness problem in Kleene algebra to another is common in Kleene algebra; for instance, it is behind the completeness proof of KAT [30 ###reference_b30###].\nCohen also reduced weak Kleene algebra as an axiomatization of star expressions up to simulation to monodic trees [11 ###reference_b11###], whose completeness was conjectured by Takai and Furusawa [47 ###reference_b47###].\nGrabmayer\u2019s solution to the completeness problem of regular expressions modulo bisimulation [15 ###reference_b15###] can also be seen as a reduction to the one-free case [16 ###reference_b16###], since his crystallization procedure produces an automaton that can be solved using the technique found in op. cit.\nOther instances of reductions include [10 ###reference_b10###, 4 ###reference_b4###, 12 ###reference_b12###, 49 ###reference_b49###, 21 ###reference_b21###, 23 ###reference_b23###, 32 ###reference_b32###, 36 ###reference_b36###, 28 ###reference_b28###].\nRecent work has started to study reductions and their compositionality properties [12 ###reference_b12###, 22 ###reference_b22###, 35 ###reference_b35###]."
},
{
"section_id": "9",
"parent_section_id": null,
"section_name": "Discussion",
"text": "We continue the study of efficient fragments of Kleene Algebra with Tests (KAT) initiated in [44 ###reference_b44###], where the authors introduced Guarded Kleene Algebra with Tests (GKAT) and provided an efficient decision procedure for equivalence. They also proposed a candidate axiomatization, but left open two questions.\nThe first question concerned the existence of an algebraic axiomatization, which is an axiomatization that is closed under substitution\u2014i.e., where one can prove properties about a certain program and then use as a variable in the context of a larger program, being able to substitute as needed.\nThis is essential to enable compositional analysis.\nThe second question left open in [44 ###reference_b44###] was whether an axiomatization that did not require an axiom scheme was possible.\nHaving a completeness proof that does not require an axiom scheme to reason about mutually dependent loops is again essential for scalability: we should be able to axiomatize single loops and generalize this behaviour to multiple, potentially nested, loops.\nIn this paper, we identified a large fragment of GKAT, which we call skip-free GKAT (), that can be axiomatized algebraically without relying on an axiom scheme.\nWe show how the axiomatization works well for two types of equivalence: bisimilarity and language equivalence, by proving completeness results for both semantics. Having the two semantics is interesting from a verification point of view as it gives access to different levels of precision when analyzing program behaviour, but also enables a layered approach to the completeness proofs.\nWe provide a reduction of the completeness proof for language semantics to the one for bisimilarity. Moreover, the latter is connected to a recently solved [15 ###reference_b15###] problem proposed by Milner. 
This approach enabled two things: it breaks down the completeness proofs and reuses some of the techniques while also highlighting the exact difference between the two equivalences (captured by the axiom which does not hold for bisimilarity). We also showed that proofs of equivalence in skip-free GKAT transfer without any loss to proofs of equivalence in GKAT.\nThere are several directions for future work. The bridge between process algebra and Kleene algebra has not been exploited to its full potential. The fact that we could reuse results by Grabmayer and Fokkink [15 ###reference_b15###, 16 ###reference_b16###] was a major step towards completeness. An independent proof would have been much more complex and very likely required the development of technical tools resembling those in [15 ###reference_b15###, 16 ###reference_b16###]. We hope the results in this paper can be taken further and more results can be exchanged between the two communities to solve open problems.\nThe completeness problem for full GKAT remains open, but our completeness results for skip-free GKAT are encouraging.\nWe believe they show a path towards studying whether an algebraic axiomatization can be devised or a negative result can be proved.\nA first step in exploring a completeness result would be to try extending Grabmayer\u2019s completeness result [15 ###reference_b15###] to a setting with output variables\u2014this is a non-trivial exploration, but we are hopeful that it will yield new tools for completeness.\nAs mentioned in the introduction, NetKAT [4 ###reference_b4###] (and its probabilistic variants [13 ###reference_b13###, 45 ###reference_b45###]) have been one of the most successful extensions of KAT. We believe the step from skip-free GKAT to a skip-free guarded version of NetKAT is also a worthwhile exploration. Following [17 ###reference_b17###], we hope to be able to explore these extensions in a modular and parametric way."
}
],
"appendix": [
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.A Coalgebra",
"text": "In the main text of the paper, we avoided using the language of universal coalgebra [38 ###reference_b38###] in the presentation to not deter from the main concepts which can be described concretely. We have however used coalgebra in our development and as mentioned in Footnote 4 ###reference_te4### the concrete definitions in the main text are instances of abstract notions. This is helpful in simplifying proofs and so, in the appendix, we will present the proofs of the results using coalgebra.\nIn this first section of the appendix, we present the coalgebraic results relevant to our studies of skip-free GKAT, full GKAT, and one-free star expressions as they appear in the paper.\nCoalgebra makes heavy use of the language of category theory, which we assume the reader is somewhat familiar with (see [5 ###reference_b5###] for an introduction).\nFor our purposes, we only need the category of sets and functions, so when we refer to a functor , we really just mean a functor .\nGiven a functor , an -coalgebra is a pair consisting of a set of states and a transition function .\nAn -coalgebra homomorphism consists of a function such that , i.e. 
the diagram below commutes.\nThe functor is the coalgebraic signature of an -coalgebra .\nMany models of computation can be captured as -coalgebras for some , and (universal) coalgebra provides a framework for studying them all at once [38 ###reference_b38###].\nEverything that we refer to in the main text as an automaton (of some sort) is also an -coalgebra for some .\nThe coalgebraic signature of skip-free GKAT is the functor , where for any set , any , any , and any atomic test ,\nThe coalgebraic signature of one-free star behaviours is [42 ###reference_b42###], where for any set and any ,\nWe denote the coalgebraic signature of GKAT with , where\nSince multiple kinds of automata appear in the paper, results from coalgebra help us avoid duplicating proofs.\nFor example, all the notions of bisimulation that appear in this paper are instances of a general notion of bisimulation suggested by universal coalgebra.\nA bisimulation between -coalgebras and is a relation with an -coalgebra structure such that the projection maps and are coalgebra homomorphisms [38 ###reference_b38###].\nDefinition 6 ###reference_inition6### is a restatement of Definition 24 ###reference_inition24### in the case of , Definition 8 ###reference_inition8### is a restatement of Definition 24 ###reference_inition24### in the case of , and the definition of a bisimulation for GKAT automata is a restatement of Definition 24 ###reference_inition24### in the case of .\nWe briefly sketch the case for .\nThe other two cases have been covered elsewhere [42 ###reference_b42###, 44 ###reference_b44###].\nSuppose , are -coalgebras and .\nThen the function defined\ndefines a bisimulation of -coalgebras iff 1.-3. 
of Definition 6 ###reference_inition6### are met.\n-coalgebras and -coalgebra homomorphisms form a category , the properties of which can vary wildly depending on the properties of .\nWe follow [18 ###reference_b18###] in describing the desired structure of , , and based on the coalgebraic signature.\nLet be a set and be a set of functions to a set .\nA weak pullback of is a set and a set of functions from such that the following two conditions are met:\nFor all , we have .\nSuppose is another set of functions such that for all , .\nThen there is a function such that for all , .\nNote that is not necessarily unique.777This is often called a generalized weak pullback, and the typical notion of weak pullback is binary, i.e., specifically the case. Note that there are functors that preserve binary weak pullbacks that do not preserve generalized weak pullbacks [18 ###reference_b18###]."
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.B Proofs for Section\u00a03",
"text": "See 1 ###reference_ma1###\nBy induction on .\nThe automaton has a single state with no transitions and consists of a single state that accepts all of after .\nWrite for the number of expressions reachable from .\nFor the inductive step, simply notice that , , and are bounded above by .\nSee 2 ###reference_ma2###\nThe proof proceeds by induction on .\nIn the base, since consists of a single state with no outgoing transitions, clearly .\nSimilarly, the only outgoing transitions of each are , so .\nFor the inductive step, first note that every successfully terminating path out of is of the form\n\nwhere either and\n\nor and\n.\nThis property implies the equality for .\nFurthermore, every successfully terminating path out of is of the form\n\nwhere\n\nand\n.\nThis proves the equality for .\nLastly, we consider .\nCall a cycle in a -coalgebra minimal if every state appears at most once in the cycle.\nNote that every cycle is a composition of minimal cycles.\nMinimal cycles containing are of the form\n\nwhere and\n.\nSuccessfully terminating paths from that do not contain cycles are of the form\n\nwhere\n.\nPutting these together, a successfully terminating path from is a composition of minimal cycles followed by a successfully terminating path coming from .\nIt follows that the guarded traces accepted by are those of the form , where each and starts with an atomic test below and and starts with an atomic test below .\nIn symbols, .\nSee 3.1 ###reference_theorem1###\nWe begin by sketching the proof that is a bisimulation on .\nLet .\nWe show that satisfies 1.-3. of Definition 6 ###reference_inition6### by induction on the proof of .\nIn the base, we need to consider the equational axioms (including those of equational logic).\n1.-3. of Definition 6 ###reference_inition6### are clearly reflexive and symmetric, so it suffices to consider the equational axioms listed in Fig. 
2 ###reference_###.\nConsider .\nIf , then since either or for each atomic test , , and vice versa.\nSince is an equivalence relation, .\nSimilarly, if and only if and if and only if .\nIn the , , , , , and cases, the two expressions have all the same outgoing transitions. See the previous case.\nIn the case, let .\nThere are a few subcases:\nSince if and only if , it follows that if and only if . That is, 1. in Definition 6 ###reference_inition6### is satisfied by this .\nIf , then . Therefore, and .\nThis establishes 2. of Definition 6 ###reference_inition6###.\nIf , then . We also know that , so . Since , in conjunction with (ii) we see that 3. of Definition 6 ###reference_inition6### is satisfied for this .\nIn the induction step, we consider the Horn rules.\nWe skip the transitivity case, since the conditions in Definition 6 ###reference_inition6### are clearly transitive.\nWe are going to show that the congruence rules preserve the properties in Definition 6 ###reference_inition6###.\nSuppose and satisfies the conditions of Definition 6 ###reference_inition6###.\nIt suffices to check in the subcase. These expressions have the same pairs of outgoing transitions out of for . The induction hypothesis concludes this case.\nIn the subcase, if and only if by the induction hypothesis, and if and only if . Similarly, if and only if , so if and only if . If , then if and only if . It follows that if and only if .\nThe case is easy because the case was covered above.\nConsider .\nWe only cover the case here because and have the same outgoing transitions for . The condition 1. of Definition 6 ###reference_inition6### follows from the fact that if and only if . Similarly, if and only if for . 
Finally, if for and , then and for .\nConsider .\nIf , then the transitions out of and must either both reject, or go to expressions again related by .\nOtherwise, if , then the -transitions of and are determined by and respectively, which are equivalent by assumption; the claim then follows.\nLet and assume for an induction hypothesis that satisfies 1.-3. in Definition 6 ###reference_inition6###.\nThen if and only if either and or and .\nSince the latter crashes are the same as those of , we have verified 1. in Definition 6 ###reference_inition6###.\nWe also know that if and only if and .\nThe latter conditions are equivalent to , thus satisfying 2. in Definition 6 ###reference_inition6###.\nFor 3. from Definition 6 ###reference_inition6###, let .\nWe should show that takes a similarly labeled transition to an expression equivalent to .\nBy induction, such that .\nThis gives us three cases to consider.\nIf and , then , and thus .\nSince , it follows that .\nGiven that , this was exactly what we needed to prove.\nIf and , then .\nBut then , which suffices since in this case.\nIf and , then .\nBut then , which suffices because in this case.\nNow we verify that is sound with respect to language equivalence.\nAgain, we proceed to show that implies by induction on the proof of from the rules in Fig. 
2 ###reference_###.\nWe have already seen that if implies , so by Lemma 3 ###reference_ma3### we know that implies .\nThis handles most of the base case.\nReflexivity, symmetry, and transitivity are handled by the fact that is the kernel of .\nThis leaves us with , which can be seen from Lemma 2 ###reference_ma2###:\nIn the inductive step, we need to consider the transitivity, congruence, and Horn rules.\nCongruence is a consequence of Lemma 2 ###reference_ma2###.\nTransitivity follows from the fact that is the kernel of .\nSoundness of the Horn rule follows from Lemma 2 ###reference_ma2### and the fact that for any languages ,\nIndeed, suppose .\nThen because , by ().\nWe end this section of the appendix by stating a number of provable equivalences that are useful in other sections.\nThe following lemma is necessary in our proof of Lemma 10 ###reference_ma10###, for example.\nRecall that for and , we use as a shorthand for .\nLet and .\nThe following hold:\n{mathpar}\ne +_b \u00afb f \u2261_\u2020e +_b f\nbe +_b f \u2261_\u2020e +_b f\n(b \u2227c)e \u2261_\u2020b(ce)\nb(e +_c f) \u2261_\u2020be +_c bf\nb(e +_c f) \u2261_\u2020b(e +_b \u2227c f)\nb(e \u22c5f) \u2261_\u2020(be) \u22c5f\ne ^(b) 0 \u22610\ne ^(b) f \u2261_\u2020e ^(b) (\u00afbf)\n(be) ^(b) f \u2261_\u2020e^(b) f\nWe will also refer to the equivalence below as (G2,G3), as it can be derived as follows:\nWe prove the desired equalities in order of appearance.\nFor , we derive:\nFor ,\nFor , we derive\nFor , it suffices to prove the following, by (RSP):\nFor , it suffices to prove the following, by (RSP):\nFor ,"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.C Proofs for Section\u00a05",
"text": "See 6 ###reference_ma6###\nFor any set , let and define by\nLet be a deterministic LTS. Then is a skip-free automaton such that , because .\nSee 7 ###reference_ma7###\nRecall that for any function , is given by\nfor any index sets and .\nBy Lemma 21 ###reference_ma21### it suffices to show that is a -coalgebra homomorphism from to , i.e., that .\nWe prove this by induction on .\nSince ,\nNow let .\nsince .\nThis covers the base case.\nFor the induction step, assume that for .\nConsider .\nObserve that\nNow, , and by the induction hypothesis, if and only if there is a such that and .\nThis implies that if and only if and where .\nTherefore,\nNow consider .\nRecall .\nIf , then either such that , or and .\nBy the induction hypothesis, either there is a such that and , or .\nTherefore,\nFinally, consider .\nSince , if , there are three possibilities: either (1) and and ; (2) and and ; or (3) and .\nIn case (1), by the induction hypothesis, .\nIn case (2), by the induction hypothesis, there is a such that and .\nThis would make .\nIn case (3), by the induction hypothesis, there is a such that and .\nTherefore,\nThis concludes the proof.\nGiven , write if is a test separating from .\nLet be any test.\nIf , then and .\nWe are going to begin by showing that when , by induction on .\nIn the first base case, .\nOf course, for any .\nBy assumption, .\nFor the second base case, consider for some and .\nIf , then , so we can rule out this possibility by soundness.\nOtherwise, and\nFor the first inductive case, let and .\nThen\nand similarly .\nHence for , so\nFor the sequential composition case, if , then we first argue that .\nTo see this, we first make two claims.\nif and only if .\nFor the direction from left to right, note that if , then .\nBut then, by the assumption that and soundness, we have that for some .\nIf and , then we are done.\nOtherwise, if such that , then a simple inductive proof shows that as well \u2014 we can therefore rule out this 
case.\nConversely, if , then , by another induction on .\nIf , then such that ; conversely, if , then .\nThe first property follows by an argument similar to the previous claim; the second can be shown by induction on .\nWe can now use the fundamental theorem for one-free regular expressions [15 ###reference_b15###, Proposition 2.9], which states that for any ,\nThis allows us to derive that\nHence, .\nThis in turn allows us to deduce\nFor the star case, assume for and that and are separated by .\nNotice that this entails\nIt follows that and , like in the sum case.\nIt then follows that , by the same reasoning in the sequential composition case.\nThe induction hypothesis tells us for .\nSo,\nThis concludes the proof of the intermediate property, that when .\nNow let , the maximal test separating from .\nThe key insight here is that , so that .\nWith this observation in hand, we can further calculate\nas well as\nThis concludes the proof overall.\nThe next result is the key step to proving Theorem 5.1 ###reference_theorem1###, which establishes the most important property of used in the completeness proof for skip-free bisimulation GKAT.\nSee 5.2 ###reference_theorem2###\nThe proof of this theorem is the subject of Appendix 0.F ###reference_###.\nSee 5.1 ###reference_theorem1###\nLet , and suppose .\nThen, by Theorem 5.2 ###reference_theorem2###, .\nWe proceed by induction on the deterministic derivation of .\nIn the base case, we verify the one-free star behaviour axioms directly.\nEvery such that satisfies , for if then\nso by (G0) we find\n.\nThe maximal test separating from is , so by (G0).\nIf , then\n\nby (G2) and Lemma 25 ###reference_ma25### (because ).\nLetting and , we have and as well.\nUsing Lemma 25 ###reference_ma25###, we derive\nIn the left zero case we find that by (G6),\nNo further considerations in the associativity case:\nIf , then , so\nSuppose .\nThen as well, so\nFor the inductive step, assume implies for .\nWe consider the and congruence 
rules (the congruence rule is trivial), recursive specification rule (RSP), symmetry (Sym) and transitivity (Tra).\nSuppose the deterministic proof ends with\nand assume that .\nWe are also assuming that and are deterministic, so we must have such that , and .\nHowever, since and , if and only if , so we may as well take .\nWe have\nSuppose the deterministic proof ends with\nand assume that , .\nAgain, we obtain a common such that and .\nUsing Lemma 25 ###reference_ma25###, we have\nSuppose the deterministic proof ends with the rule\n\nand assume that .\nSince we are also assuming that , there is a such that .\nWe also have , so\n by Lemma 25 ###reference_ma25###.\nIt follows from (RSP) that\nSuppose the deterministic proof ends with\nThen by symmetry of and the induction hypothesis, .\nSuppose the deterministic proof ends with\nThen by assumption, and .\nBy the induction hypothesis, .\nThis implies that by transitivity of .\nWe will need the following lemma in the sequel.\nIf , then .\nBy induction on the proof of .\nIn the base case, we consider the skip-free GKAT axioms directly.\nFor , we derive\nThe second equivalence follows from the fact that for any and any one-free regular expression , which can be shown via a straightforward induction on .\nFor , we derive\nFor , we derive\nFor the inductive step, we need to consider the congruence properties and the Horn rule; the cases for transitivity and symmetry are trivial.\nSuppose the proof ends with\nand assume that .\nThen\nSuppose the proof ends with\nand assume that .\nThen\nSuppose the proof ends with\nand assume that .\nThen\nBy the Horn rule in skip-free RegEx,\nWe are now ready to prove the key lemma in our first completeness theorem.\nSee 8 ###reference_ma8###\nWe begin by proving the following intermediary result: for any and , (*).\nWe proceed by induction on the one-free regular expression .\nIn the first base case, we have .\nIn the second base case, if ,\nIf ,\nIn the first inductive step, let 
.\nSince ,\nIn the second inductive case,\nIn the final inductive step, let .\nSince and ,\nThis intermediary fact (*) lets us establish the main claim by induction on .\nIn the first base case, by definition.\nIn the second base case, let . Then inductively,\nIn the first inductive step,\nIn the inductive step,\nIn the inductive step,"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.D Proofs for Section\u00a06",
"text": "See 9 ###reference_ma9###\nIt suffices to prove that if and , then .\nAfter all, if this is true and is dead then , and hence .\nWe proceed by induction on .\nIn the base, there are two cases:\nIf , then for all it holds that .\nWe thus find that , and therefore .\nIf , then immediately.\nFor the inductive step, there are three more cases.\nIf then and .\nBy induction, we have and .\nWe then derive using Lemma 24 ###reference_ma24###:\nIf , then or .\nIn the former case , so by Lemma 24 ###reference_ma24### we find\n.\nIn the latter case , and thus .\nIf , then , and so by induction.\nBy Lemma 24 ###reference_ma24###:\nSee 10 ###reference_ma10###\nWe proceed by induction on .\nIn the base, the claim holds immediately, whether or .\nFor the inductive step, there are three cases.\nIf , then .\nSuppose .\nIf is dead, then by Lemma 9 ###reference_ma9###, and so .\nOtherwise .\nSuppose .\nIf is dead, then by Lemma 9 ###reference_ma9###, and so by Lemma 24 ###reference_ma24###.\nOtherwise, we derive that .\nTo prove Lemma 11 ###reference_ma11###, we need an auxiliary lemma.\nLet be a -coalgebra and .\nNow if and only if .\nFor the direction from left to right, it suffices to show that satisfies the rules for dead states.\nTo see this, note that if then certainly the case where is excluded, for then .\nFurthermore, if for some , then , for otherwise cannot be empty; hence .\nFor the converse, suppose .\nThen we can find, by induction on , an with for some \u2014 a contradiction.\nSee 11 ###reference_ma11###\nIt suffices to prove that is a bisimulation in .\nTo see this, suppose that .\nIf , then or with .\nTherefore there is no word in of the form , by Lemma 27 ###reference_ma27###.\nThus or where , whence by Lemma 27 ###reference_ma27###.\nIn either case, .\nIf for some , then , and therefore .\nBut then as well.\nIf , then is not dead in .\nBy Lemma 27 ###reference_ma27###, there exists some s.t. 
.\nWe then have for some .\nA straightforward argument then shows that , and so .\nThe claim about the graph of being a bisimulation of skip-free automata can also be shown by induction on and exhaustive case analysis, as follows.\nSee 12 ###reference_ma12###\nIt suffices to show that \nWe proceed by induction on .\nIn the base, and , and so the claim holds trivially.\nFor the inductive step, there are three cases.\nSuppose .\nWithout loss of generality, we assume .\nIf , then either or with dead.\nIn either case, , and so by induction.\nWe then conclude that .\nIf for some , then as well.\nIt follows that , and so by induction.\nWe then conclude that .\nIf , then for some live .\nHence , and so by induction.\nWe then conclude that .\nSuppose .\nIf is dead, then so is ; hence\nOtherwise, if is live, then we have three cases to consider.\nIf , then we have two more subcases to consider.\nIf , then , meaning .\nBy induction, , and so .\nIf with dead, then we can exclude the case where , for then , which contradicts that is live.\nWe then know that such that .\nFurthermore, since is live and is dead, it must be the case that is dead.\nWe then find that , and so by induction.\nWe conclude that .\nIf , then , which is impossible.\nWe can therefore exclude this case.\nIf , then with live.\nThis gives us two more subcases.\nIf and , then , and so by induction .\nThus .\nIf and , then must be live.\nWe then have , and so by induction.\nWe conclude that .\nSuppose .\nIf is dead, then so is ; hence\nOtherwise, if is live, then we first consider the case where .\nIf , then or with dead.\nIn either case, , and so by induction.\nWe conclude that .\nIf , then .\nThus and so we have by induction.\nWe then conclude that .\nIf , then with live.\nIt then follows that , and so by induction.\nWe then conclude that .\nIt remains to consider the case where .\nIf , then we have two more subcases to consider.\nIf , then as well, since .\nThis means that , and so by induction.\nWe conclude 
that .\nIf with dead, then since we must have that , with .\nSince is live, so is .\nIt then follows that is dead, and so .\nBy induction, , meaning that .\nIf , then , which is impossible when ; we can therefore exclude this case.\nIf , then with live.\nSince , we have that and .\nSince and are live, so is .\nWe then find that , meaning by induction.\nWe conclude by deriving"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.E Proofs for Section\u00a07",
"text": "We start by proving that our tweaked version of the syntactic GKAT automaton does not make any difference with regard to bisimilarity of expressions.\nTo make this precise, we first recall the definition of the syntactic GKAT automaton [41 ###reference_b41###].\nWe define in the same way as , except for the following two cases:\nWhen such that , we set .\nWhen and , we set .\nEssentially, and differ only by the use of versus in these two cases.\nWe can now show that an expression is bisimilar to its representation in the syntactic automaton GKAT as defined in the literature; this then tells us that two states are bisimilar in if and only if they are bisimilar in .\nLet .\nNow .\nLet be the smallest relation on such that all of the following hold:\n{mathpar}\n\\inferrule e R e\n\n\\inferrulee R e\u2019 \nf \u2208E\ne \u22c5f R e\u2019 \u2a1ff\n\n\nLet such that .\nWe proceed by induction on the construction of .\nIn the base, because ; we proceed by induction on .\nIn the (inner) base, the definitions of and coincide, and so those cases go through immediately.\nFor the (inner) inductive step, we have three more cases.\nIf , then let and assume without loss of generality that .\nIn that case, and .\nThe claim then follows by induction.\nIf , then there are three more subcases to consider.\nIf , then by induction such that .\nNow, and , while , as desired.\nIf , then by induction, and so .\nIf , then , so and ; the claim then follows by induction.\nIf , then let .\nThere are three more subcases to consider.\nIf and , then by induction such that .\nWe then find that and , while .\nIf and , then by induction, so as desired.\nIf , then .\nFor the (outer) inductive step, because and such that .\nWe distinguish two cases.\nIf , then because we have that for all , so for all by induction.\nBut in that case we have that , and similarly .\nThe claim follows by an argument similar to the (outer) base.\nOtherwise ; there are now three cases.\nIf , then by induction, 
and so , as desired.\nIf , then by induction, and so and ; the claim then follows by an argument similar to the one in the outer base.\nIf , then with by induction.\nIn that case, and while , as desired.\nSee 13 ###reference_ma13###\nFor the forward direction, let be a -bisimulation on .\nWe claim that is an -bisimulation on .\nTo see this, first note that the pair immediately satisfies the conditions put on an -bisimulation.\nFurthermore, if , then we check the three conditions.\nIf , then as well, which means that and hence ; this condition holds.\nIf , then , but this contradicts that ; we can therefore disregard this case.\nIf , then there are two possibilities.\nIf and , then as well.\nBut then .\nSince is related to by , we are done.\nIf , then such that .\nBut then as well, and we are done.\nFor the converse claim, let be a bisimulation on .\nFirst, we note that if and or , then necessarily \u2014 after all, this tells us that for all , and is the only element of that fits that description.\nWe now claim that is a bisimulation on .\nTo this end, let ; we check the three conditions.\nIf , then as well, which means , and hence ; this condition is covered.\nIf , then , which means that such that .\nBut by the considerations above, this means that , and thus as well.\nIf , then as well.\nBut then such that .\nNow, since , also by the considerations above, and thus .\nSee 14 ###reference_ma14###\nLet be given by when , and .\nClearly, the graph of is the relation claimed to be a bisimulation.\nBy Lemma 21 ###reference_ma21###, it suffices to show that is a coalgebra homomorphism.\nFor the special case of , we have that for all .\nIt remains to check that for all and , which we do by induction on .\nThere are two base cases:\nIf , then .\nIf , then .\nFor the inductive step, we distinguish the following cases:\nIf , then there are two cases to consider.\nFirst, if , then\nThe case where is similar.\nIf , then there are three cases to consider.\nIf , then as 
well.\nBy induction, , and since , we have\nIf , then .\nBy induction, , and since we have\nIf , then as well.\nBy induction , and so\nIf , then there are four cases to consider.\nIf and , then as well.\nBy induction , so\nIf and , then .\nBy induction, , and so\nIf and , then .\nBy induction , and so\nIf , then , so we derive as follows:\nSee 16 ###reference_ma16###\nWe proceed by induction on .\nIn the base, there are two cases.\nFirst, if , then by definition, and , meaning that .\nSecond, if , then if and only if for some , which is true if and only if .\nFor the inductive step, we first note that for and , we have .\nWe now distinguish three cases.\nIf , then we derive"
},
{
"section_id": "Appendix t0",
"parent_section_id": null,
"section_name": "Appendix 0.F The proof of Theorem\u00a05.2",
"text": "In order to show that every provable equivalence between deterministic one-free regular expressions is obtainable from a deterministic proof, we need to take a detour through the completeness proof of Grabmayer and Fokkink [15 ###reference_b15###] for one-free regular expressions modulo bisimilarity.\nGrabmayer and Fokkink\u2019s completeness proof revolves around a notion of solution to an operational model.\nThe operational models of one-free regular expressions they solve are LTSs with the so-called LLEE-property, defined in [15 ###reference_b15###].\nThe following is a slightly simpler but equivalent condition on LTSs called well-layeredness, found in [42 ###reference_b42###].\nLet be a LTS.\nAn entry/body labelling of is a labelling of each transition as either an entry transition, written , or as a body transition, written .\nGiven an entry/body labelling of and , we write to denote that there is a path\nsuch that .\nWe use to denote that for some , and use and to denote reflexive-transitive and transitive closures respectively.\nAn LTS is said to be well-layered if it has an entry/body labelling that satisfies the following conditions.\n(local finiteness) For any , is finite.\n(flatness) For any , if , then .\n(full specification) For any ,\n(there are no body loops), and\nif and , then (every entry transition is a loop entry transition).\n(layeredness) The graph is acyclic.\n(goto-free) If , then .\nIf satisfies these conditions, we say that is a layering witness for .\nGrabmayer and Fokkink\u2019s completeness proof technique revolves around being able to solve certain systems of equations represented by LTSs.\nThis is captured abstractly by the following notion [15 ###reference_b15###, Definition 2.8].\nA solution to an LTS is a function such that for any ,\nGrabmayer and Fokkink\u2019s completeness proof technique can now be summarized as follows.\nProve that for any , is well-layered.\nProve that every well-layered LTS admits a unique solution 
up to .\nThat is, if are two solutions to the well-layered LTS , then for all .\nProve that if is a coalgebra homomorphism and is a solution to , then is a solution to .\nShow that the bisimulation collapse of a well-layered LTS is well-layered.\nThe following theorem is a strengthening of the statement in Step 4.\nThe bisimulation collapse case is originally due to Grabmayer and Fokkink [15 ###reference_b15###].\nIt was observed in [42 ###reference_b42###, Theorem 4.1] that a slight modification to Grabmayer and Fokkink\u2019s proof establishes the version below.\nLet be a well-layered LTS and be a surjective coalgebra homomorphism.\nThen is also well-layered.\nRoughly speaking, what we are going to do now is rework Steps 1 through 4 of Grabmayer and Fokkink\u2019s completeness proof for deterministic well-layered LTSs and their corresponding notion of deterministic solution.\nLet be a deterministic LTS and .\nWe say that is a deterministic solution if for any ,\nGiven two deterministic solutions and , we write if for any , .\nThe unique solution to a well-layered LTS is given by a formula.\nDefine the two quantities below for any\nThese are finite because in a layering witness , and do not contain infinite paths.\nLet , and let\nThe canonical solution to (given the layering witness ) is defined recursively on as follows:\nwhere we write and to denote the terms and respectively, as well as the statement .\nAbove, the expression is defined for every pair such that by recursion on in the lexicographical ordering of as follows:\nwhere\nwe define\nThe well-definedness of the formulas appearing in the definition above deserves further explanation.\nThe expression is defined by induction on .\nIn the base case of this induction, has no outgoing body transitions.\nThis gives the equivalent formula,\nThe full recursive formula is well-defined (if each is), because if , then .\nBoth of these formulas depend on the expression , which is further defined by induction on .\nIn 
the base case of this induction, and , because we require .\nSo, has no outgoing body transitions.\nSince in the expression , , so , and has no outgoing loop entry transitions.\nThis gives the equivalent expression\nwhich has no further dependencies.\nFor well-definedness of , we must check well-definedness of the subterms and .\nThe former appears in a context where , and so ; furthermore, because , which tells us that .\nAs for the subterm , note that because we have that and , meaning that .\nWe are specifically interested in solving deterministic LTSs.\nThe following terminology will be useful in proofs.\nA state in a LTS is operationally deterministic if is graph-like.\nThat is, for any , and implies and .\nThus, an LTS is deterministic precisely when each of its states is operationally deterministic.\nDeterminism is preserved by homomorphisms.\nIf is an operationally deterministic state of the LTS and is a homomorphism, then is operationally deterministic.\nThere are three cases to consider.\nSuppose and .\nThen because is a homomorphism.\nIt follows that , because is operationally deterministic.\nIf and for some , then there are such that , , , and .\nSince is operationally deterministic, and .\nHence, .\nIf and for some , then and for some such that .\nThis is not possible because is operationally deterministic, which would then require and , despite .\nWe are now ready to describe the structure of the proof of Theorem 5.2 ###reference_theorem2###.\nThe proof requires the following four properties of deterministic regular expressions, deterministic well-layered LTSs, and their deterministic solutions.\n(Lemma 30 ###reference_ma30###) is a deterministic subcoalgebra of .\n(Lemma 31 ###reference_ma31###) For any , the inclusion map is a deterministic solution.\nThis is equivalent to saying that\n(Theorem 0.F.2 ###reference_.Thmtheorem2###) Fix a layering witness for a deterministic LTS .\nThe following properties hold:\nis a deterministic solution to . 
That is, for any ,\nFor any deterministic solution to and for all ,\n(Lemma 33 ###reference_ma33###) Let be a homomorphism between deterministic LTSs and let be a deterministic solution to .\nThen is a deterministic solution to .\nWe now provide the proof of Theorem 5.2 ###reference_theorem2###, showing how these four properties collectively imply the theorem, and will then later present the proofs of all above properties.\nSuppose .\nBy Lemma 30 ###reference_ma30###, and are deterministic subcoalgebras of .\nBecause the latter is well-layered [42 ###reference_b42###], and subcoalgebras of well-layered LTSs are again well-layered (this follows easily from the definition), we know that and are also well-layered.\nNow, if , then by soundness.\nBy [38 ###reference_b38###, Theorem 4.2], there is a minimal LTS and homomorphisms such that .\nSince well-layeredness and determinism are preserved by homomorphisms (Theorems 0.F.1 ###reference_.Thmtheorem1### and 29 ###reference_ma29###), is a deterministic well-layered LTS.\nIt follows from Theorem 0.F.2 ###reference_.Thmtheorem2### that has a deterministic solution .\nBy Lemma 33 ###reference_ma33###, and are deterministic solutions to and respectively.\nLemmas 30 ###reference_ma30### and 31 ###reference_ma31### tell us that and are also deterministic solutions to and respectively.\nTherefore, by Theorem 0.F.2 ###reference_.Thmtheorem2###,\nSince , we see from the derivations above that .\nIn the proof of Theorem 5.2 ###reference_theorem2### we used the four properties outlined above (Lemmas 30 ###reference_ma30###, 31 ###reference_ma31###, 0.F.2 ###reference_.Thmtheorem2### and 33 ###reference_ma33###), as depicted in Fig. 9 ###reference_.F9###. We will now prove all the needed properties individually.\n."
+},
+{
+"section_id": "Appendix t0",
+"parent_section_id": null,
+"section_name": "Appendix 0.G Corrections Made to the Current Document",
+"text": "We discovered that the original version of this paper left a gap in its main proof, the argument towards completeness of skip-free bisimulation GKAT w.r.t. bisimilarity.\nThankfully, we have since filled this gap.\nA corrected completeness proof now appears in this updated version of the paper (Appendix 0.F ###reference_###), but we explain here the gap and how we proceeded to fix it. At the end of this appendix, we also list some other minor changes we took the opportunity to make, which correct a few other small errors and typos in the paper."
+}
+],
+"tables": {},
+"image_paths": {},
+"validation": true,
+"references": [],
+"url": "http://arxiv.org/html/2301.11301v4"
+}
20241001/2304.02730v4.json
ADDED
@@ -0,0 +1,499 @@
+{
+"title": "Fair Ordering in Replicated Systems via Streaming Social Choice",
+"abstract": "How can we order transactions \u201cfairly\u201d in a replicated state machine (of which today\u2019s blockchains are a prototypical example)?\nIn the model of prior work (themis; kelkar2020order; cachin2022quick; vafadar2023condorcet; kiayias2024ordering),\neach of replicas observes transactions\nin a different order, and the system aggregates\nthese observed orderings into a single order. We argue that this problem\nis best viewed directly through the lens of the classic preference aggregation problem of social choice theory\n(instead of as a distributed computing problem),\nin which rankings on candidates are aggregated into an election result.",
+"sections": [
+{
+"section_id": "1",
+"parent_section_id": null,
+"section_name": "1. Introduction",
+"text": "We study the problem of ordering transactions in the widely-used replicated state machine architecture (e.g.,\n(ongaro2014search; lamport2001paxos; yin:hotstuff; oki1988viewstamped)).\nIn the standard setting, each of a set of distinct replicas\nmaintains a copy of a state machine.\nReplicas\ncommunicate to agree on a totally-ordered log of transactions ,\nand then each replica applies the transactions in this order to its local copy of the state machine.\nAn (unbounded) set of clients create and broadcast new transactions.\nThis architecture famously underlies many of today\u2019s blockchains,\nbut also underlies many traditional (centralized) services, such as bank infrastructure.\nThis architecture requires that all replicas apply transactions in the same order.\nIn a bank example,\na client with a $1 balance might create two transactions, one which sends a $1 payment to client \nand one which sends a $1 payment to client . Only one of these payments can execute successfully\n(assuming \u2019s balance is not allowed to overdraft),\nso if different replicas apply these transactions in different orders,\nthen they will disagree on the balances of clients and .\nThere are many communication protocols through which replicas can agree on some total order\n(solving the \u201ctotal order broadcast\u201d problem; broadcastsurvey gives a survey).\nYet in many systems, most notably within today\u2019s public blockchains,\nsignificant financial value can be derived from ordering transactions in specific ways (daian2020flash).\nThese systems must therefore agree on not just any ordering but an \u201coptimal\u201d one, for some notion of optimal.\nThe key observation for this work is that this problem is a novel streaming variation of the classic\npreference aggregation problem of social choice theory (hagele2001lulls; llull; colomer2013ramon; condorcet1785essay; arrow1950difficulty).\nPrior work on this problem (kelkar2020order; themis; cachin2022quick; aequitaspermissionless)\nuses very different network communication protocols, but ultimately each produces agreement on a \u201cvote\u201d from each\nreplica on the ordering of a set of transactions (in social choice parlance, a \u201cranking\u201d on a set of \u201ccandidates\u201d).\nAgnostic to the choice of network protocol,\nhow should a system aggregate a set of proposed orders (\u201crankings\u201d) into a single, total order?\nTwo key differences separate this problem from the classic social choice setting.\nFirst, the number of transactions (\u201ccandidates\u201d) is countably infinite\n(as clients can continually send new transactions for an unbounded time).\nAnd second, the system must produce the output ranking in a streaming fashion.\nIt cannot wait to see the reported orders over the entire (infinite) set of\ntransactions before making a decision on the relative ordering of two transactions.\nInstead, the system must produce as output an append-only, totally-ordered log of transactions (to be given to the state machine).\nFurthermore, the system should ideally minimize the delay between when a client sends a transaction\nand when that transaction is appended to the output log (i.e. the system needs \u201cliveness\u201d).\nWhile we believe this problem to be interesting in its own right,\nstudying it directly through the lens of classic social choice theory\nenables us to develop an ordering algorithm\nwith both much stronger order \u201cfairness\u201d guarantees (the key property studied in prior work, discussed below)\nand stronger liveness guarantees\nthan all of the prior work. This algorithm could be deployed on top of any of the network protocols\nof prior work.\nOne additional consideration is that replicas might strategically adjust their reported orderings.\nPrior work (kelkar2020order; themis; cachin2022quick) writes that \u201chonest\u201d replicas\nmust report transactions in the order in which they arrive over a network,\nand any other behavior is \u201cfaulty.\u201d\nOne desirable property of an order aggregation rule is that the influence of a (colluding) subset of faulty replicas is\nprecisely bounded.\nThere is a wide body of social choice literature on this underlying aggregation problem.\nOur goal with this work is to demonstrate an application for our streaming version of the problem,\nand to demonstrate the value of using social choice results in this application\nby targeting the precise desiderata raised in prior work.\nThere are many other natural desiderata,\nnotions of fairness,\nand aggregation rules that may be practically useful.\nThe application of social-choice style aggregation rules in this streaming setting poses a number of interesting open questions.\nFor example, if replicas have distinct financial motivations, or accept bribes from clients to order transactions in specific ways,\nis there an aggregation rule that maximizes (perhaps approximately) social welfare?\nFor a reader coming from social choice theory,\n\u201ctransaction\u201d could be replaced by \u201ccandidate\u201d, and \u201can ordering vote\u201d by \u201ca ranking.\u201d"
+},
+{
+"section_id": "1.1",
+"parent_section_id": "1",
+"section_name": "1.1. Our Results",
+"text": "We first observe that for an ordering method to be well-defined on a countably infinite set,\nit suffices for there to exist a \u201cmonotonic\u201d and \u201casymptotically live\u201d algorithm that implements the ordering method\non finite, initial segments of the input. This algorithm must not undo a decision\nwhen its (finite) input is extended (\u201cmonotonicity\u201d) and must eventually output every transaction (\u201casymptotic liveness\u201d).\nThe core challenge in building such an algorithm is in determining when an algorithm has enough\ninformation to make a decision that is consistent with its hypothetical output on any extension of its input.\nPrior work of kelkar2020order defines a notion of \u201c-batch-order-fairness\u201d (reproduced here as Definition 3.1 ###reference_theorem1###).\nThe output ordering is divided into batches, and if replicas receive a transaction before another transaction ,\nthen cannot be in a later batch than . However, transactions within a batch can be ordered arbitrarily.\nUnfortunately, this definition is vacuously satisfiable by an arbitrarily large batch. We first strengthen this definition\nin \u00a73 ###reference_### to precisely capture the intuition that batches should be minimal.\nMinimality must be defined carefully in the presence of faulty replicas;\nwe show that\n-batch-order-fairness cannot be satisfied simultaneously with exact notions of batch minimality\nand faulty replicas.\nWe call our definition -minimal-batch-order-fairness (Definition 3.4 ###reference_theorem4###).\nSimply put, if a fraction of replicas vote for before (henceforth; )\nthen the output ordering should include \nunless there is a sufficiently strong reason to put \u2014that is, a sequence of transactions\n, with and ,\nwhere at least a fraction of replicas vote for for .\nMotivating this definition is the fact that a fraction of faulty replicas can, at most,\nincrease or decrease the fraction of replicas that report by .\nAs an example, given faulty replicas\nout of total replicas and a fixed ,\nthe protocol of kelkar2020order\nsatisfies\n-minimal-batch-order-fairness (Lemma 3.3 ###reference_###).\nUnfortunately, that protocol relies on an explicit choice of a parameter and an explicit bound on , and\nit is not at all clear whether a higher or lower parameter gives a stronger guarantee (\u00a710 ###reference_### gives an example).\nBy contrast, the algorithm we give here achieves -minimal-batch-order-fairness for every \nsimultaneously,\nand for any number of faulty replicas. Fairness guarantees smoothly degrade as the number of faulty replicas increases.\nOur algorithms do not require knowledge of a bound on the number of faulty replicas.\nThis notion of \u201cfaulty\u201d replica is distinct from notions of \u201cfaulty\u201d in classical Byzantine-fault tolerant communication protocols, such as (castro1999practical).\nThis work adapts the ordering known as Ranked Pairs (tideman1987independence) to our streaming setting.\nInformally, Ranked Pairs operates on a weighted, directed graph, where each vertex is a transaction\nand an edge from to has weight equal to the fraction of replicas that report .\nPrior work (kelkar2020order; cachin2022quick), by contrast, operates on the unweighted, directed graph that is derived\nby taking the weighted graph which our work uses,\nremoving all edges below weight , and dropping the weights on the remaining edges. This technical difference\nis the key to enabling our stronger results.\nAnalysis of this directed graph gives an important structural lemma about what pieces of information\nRanked Pairs uses to determine its output. This lemma allows a streaming version of Ranked Pairs to carefully track how uncertainty about as-yet-unseen transactions\npropagates throughout Ranked Pairs\u2019s algorithm.\nRanked Pairs iterates over edges in order of edge weight.\nAn interesting observation is that the tiebreaking rule in this edge ordering critically determines overall liveness.\nA worst-case (fixed) rule could cause our streaming algorithm to never output any transactions.\nHowever, we show in \u00a76 ###reference_### how to construct a tiebreaking rule in a streaming fashion\nthat guarantees not only asymptotic liveness but also an explicitly bounded liveness delay.\nIf, for example, at most time elapses between\nwhen a transaction is sent and when every honest replica votes on it,\nour algorithm outputs after a delay of at most .\nThis can be reduced to , by rounding edge weights to the nearest \u2014\nalthough this rounding\nreduces the fairness guarantee to -minimal-batch-order-fairness (again for every simultaneously,\nand any )."
+},
+{
+"section_id": "2",
+"parent_section_id": null,
+"section_name": "2. Preliminaries and System Model",
+"text": "We consider a model in which there are replicas cooperating to develop a total ordering\nof transactions.\nTransactions are received over the network from clients.\nWe say that the ordering in which a replica receives transactions is that replica\u2019s observed ordering.\nAn Ordering Preference on a (finite or countably-infinite) set of transactions is a total ordering\n(isomorphic to a subset of ).\nHowever, at any finite time, each replica can have received only a finite number of transactions.\nReplicas periodically submit these finite \u201cranking votes\u201d to a ranking algorithm.\nA replica submits to a ranking algorithm\nan ordering vote\non a set of transactions \n(where for ).\nWe say that extends if is an initial segment of \n(as a convention, extends itself).\nA deterministic ranking algorithm takes as input an ordering vote from\neach replica and outputs an ordering on a subset of the transactions in its input.\nThe output need not include every transaction in the input.\nWe say that a replica is honest if its true, observed ordering always extends its ordering vote,\nand if whenever it submits a vote, it contains all transactions that the replica has observed at that time.\nOtherwise, the replica is faulty.\nWe denote the number of faulty replicas as (out of ), and do not assume a bound or knowledge of .\n(Of course, -minimal-batch-order-fairness (Definition 3.4 ###reference_theorem4###) is only meaningful if .)\nIn the rest of this work, we assume that every replica eventually votes on every transaction.\nThis may not necessarily hold for faulty replicas. A precise choice of response to this type of faulty behavior\nwill depend on the network model and communication protocol, but, informally,\nif a replica fails to vote on a transaction within a \u201csufficient\u201d time,\nit suffices for the honest replicas to fabricate a vote on behalf of the faulty replica.\n\u00a716 ###reference_### gives an example construction that satisfies this property,\nbut note that prior work on this problem (e.g., (kelkar2020order; themis; cachin2022quick)) implicitly\nuses the same ideas to handle unresponsive replicas."
+},
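The vote-extension relation in the section above (one vote extends another when the latter is an initial segment of the former, and every vote extends itself) can be sketched as a small helper. This is an illustrative Python sketch of ours, not code from the paper; the function and variable names are made up.

```python
def extends(longer, shorter):
    """True iff `shorter` is an initial segment of `longer`.

    Per the model above, an honest replica's true observed ordering
    always extends each ordering vote it submits, so a replica's
    successive votes form a chain of initial segments.
    """
    return len(longer) >= len(shorter) and longer[: len(shorter)] == list(shorter)


# Illustrative successive votes from one (honest) replica.
v1 = ["t1", "t2"]
v2 = ["t1", "t2", "t3", "t4"]
assert extends(v2, v1)          # later vote extends the earlier one
assert extends(v1, v1)          # by convention, a vote extends itself
assert not extends(v1, v2)      # a shorter vote cannot extend a longer one
```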
+{
+"section_id": "3",
+"parent_section_id": null,
+"section_name": "3. Defining Fair Ordering",
+"text": ""
+},
+{
+"section_id": "3.1",
+"parent_section_id": "3",
+"section_name": "3.1. Batch Order Fairness",
+"text": "Our discussion here follows the notion of \u201cfairness\u201d discussed in prior work (e.g., (kelkar2020order; themis; cachin2022quick; kiayias2024ordering)).\nThe natural notion of \u201cfairness\u201d in prior work, which we follow here,\nis that if \u201cmost\u201d replicas report a transaction before another transaction , then the protocol outputs before .\nMaking this notion precise requires accounting for what are known as Condorcet cycles.\nA (super)majority might report before , another majority reports before , and a third majority reports\n before .\nAequitas (kelkar2020order) sidesteps this problem by outputting these Condorcet cycles in \u201cbatches,\u201d which\nleads to the following definition (paraphrased from (kelkar2020order)).\nSuppose that and are received by all nodes.\nIf nodes received before , then a ranking algorithm never outputs in a later batch than .\nIn other words, and could be in the same batch. Transactions within a batch are ordered arbitrarily.\nSubsequent work (themis; cachin2022quick; kiayias2024ordering) follows this same definition (or a restatement of it).\nThere are two key problems with this definition.\nFirst, this definition does not preclude a ranking algorithm from putting all transactions into one batch\n(and therefore ordering transactions arbitrarily). For this definition to be meaningful,\nit needs a notion of \u201cminimality\u201d for batch sizes.\nWe address this problem by strengthening the definition to include a notion of \u201cminimality\u201d (Definition 3.4 ###reference_theorem4###).\nSome of the prior work (kelkar2020order; cachin2022quick; kiayias2024ordering), but not all (themis), implicitly satisfies this definition.\nSecond, it is not clear whether a higher or lower gives stronger fairness properties, yet\nall of the prior work requires a fixed parameter choice.\nIndeed, there are simple examples (with no faulty replicas)\nwhere Definition 3.4 ###reference_theorem4### only implies ordering restrictions\nat high values of , others at only low values, and still others at only intermediate values (\u00a710 ###reference_###).\nIt is not the case that satisfying Definition 3.4 ###reference_theorem4### for a low value of implies\nsatisfying the definition for a high value of .\nWe address this problem by constructing a protocol that satisfies Definition 3.4 ###reference_theorem4###\nfor all simultaneously."
+},
+{
+"section_id": "3.2",
+"parent_section_id": "3",
+"section_name": "3.2. Batch Minimality",
+"text": "To construct a precise definition, we start with a weighted dependency graph\nover a set of transactions.\nSuppose that replicas report ordering votes on a set of transactions .\nThe Ordering Graph is a complete, weighted directed graph with vertex set \nand, for each transaction pair ,\nan edge of weight if replicas report .\nAny notion of \u201cminimal\u201d should at least require that when there are no cycles in ordering dependencies, the output is\nconsistent with every ordering dependency\n(as in Theorem IV.1 part 1, Themis (themis)). Exact -minimality generalizes this notion to the case where cycles in ordering\ndependencies exist.\nFor any pair of transactions , if replicas receive but is output before ,\nthen there exists a sequence of transactions where\n replicas receive and is output before for all .\nHowever, this notion of minimality is impossible to achieve (for any ) in the face of any faulty replicas.\nThe condition on in Lemma 3.2 ###reference_### is a technical artifact\nof a construction in the proof (related to the pigeonhole principle).\nFor some parameter settings, such as with ,\nthe condition admits .\nNo protocol can achieve -batch-order-fairness with exactly -minimal output batches for any and greater than\n\n(for ).\nIn fact, the construction in the proof contains no cycles in the ordering dependency graph of weight ,\nthereby applying even to a weaker version of Definition 3.3 ###reference_theorem3### that applies only to the case where the ordering graph,\nrestricted to edges of weight at least , has no cycles.\nDefine m=, and .\nConsider the process of choosing an ordering between transactions .\nAssume that every node submits a vote on transaction ordering (i.e. the theorem\nholds even when faulty nodes are required to submit a vote).\nAn ordering dependency arises, therefore, if and only if there are at least votes for some before another .\nDivide the set of nodes into disjoint groups for of size .\nThe remainder are the faulty nodes.\nSuppose that nodes in group receive transactions in order ,\nand that the faulty nodes receive the same ordering of transactions as the nodes in group .\nAs such, the transaction pairs that receive at least votes are exactly those with ,\nand there are no cycles in this ordering dependency graph. Thus, any protocol must output the ordering\n.\nHowever, for any protocol that lacks knowledge of which nodes are faulty,\nthis scenario is indistinguishable from one where\nthe faulty nodes had received transactions in the ordering received by group ,\nbut reported the ordering observed in . As such, the protocol\nwould necessarily output the ordering observed by .\nHowever, in this case, the only correct output would have been ,\nas only the transaction pairs with for or \nreceived votes for . \n\u220e\nThe core difficulty is that faulty replicas can cause the weight on an edge observed by any algorithm to be different from the true,\nground-truth weight.\nIntuitively,\nif a protocol has a reliable communication layer (i.e. every honest replica is able to submit a vote on its ordering preference),\nthen the worst that a faulty replica can do is to misreport its ordering preferences. If there are faulty replicas, then\nthe faulty nodes can artificially reduce or increase the fraction of replicas that vote for by at most .\nAs a protocol cannot distinguish a faulty replica from an honest one by its votes,\nwe therefore must take some error in edge weights into account in our definition of minimality.\nAn ordering is -minimally-batch-order-fair if, for any transaction pair \nthat is received in that order by at least replicas but output by the protocol in the reverse ordering,\nthen there is a sequence of transactions where at least replicas\nreceive and is output before .\nDefinition 3.4 ###reference_theorem4### captures the notion that a protocol cannot distinguish between\na -fraction of replicas misreporting a transaction ordering. Given this indistinguishable fraction,\nthe protocol outputs minimally-sized batches. Definition 3.3 ###reference_theorem3### corresponds to the case of .\nThis definition does not explicitly discuss \u201cbatches\u201d or \u201cminimality,\u201d but approximately minimal batches (the strongly connected components of Lemma 3.2 ###reference_###) can be recovered\nfrom any ordering satisfying it. The second condition of Lemma 3.2 ###reference_### limits the size of output batches.\nSuppose that an output ordering satisfies -minimal-batch-order-fairness.\nCompute the ordering graph , drop all edges with weight below , and compute the strongly connected components\nof the remainder.\nIf replicas received , then either and are in the same strongly connected component,\nor all transactions in the component containing are output before any transactions in the component containing .\nIf replicas receive and there is no sequence of transactions\n where at least replicas\nreceive , then all transactions in the component containing are output before any transactions in the component containing .\nIf at least replicas receive , then the edge is included in the thresholded ordering graph,\nand if less than replicas receive , then the edge is not included.\nThus, if at least replicas receive , then either and are in the same strongly connected component,\nor there is an edge from the component containing to that containing . And if, additionally, there is no sequence of transactions\nfrom to as in the lemma statement, then and must be in different components.\nIf there is an edge from one strongly connected component to another, then all transactions in the first component must be output\nbefore any in the second (or else a violation of Definition 3.4 ###reference_theorem4### would occur).\n\n\u220e\nThere is one subtle difference between this definition and\nthe requirement for an explicit sequence of output batches in prior work (e.g., in (kelkar2020order; themis)).\nWhen there are two disjoint strongly connected components with no dependencies (of strength at least )\nfrom one to the other, Definition 3.4 ###reference_theorem4### allows the output to interleave these components.\nThis may be strictly required (\u00a711 ###reference_###)."
+},
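The batch-recovery procedure in the lemma above (build the ordering graph, drop edges whose weight falls below the threshold, then take strongly connected components) can be sketched as follows. This is an illustrative Python sketch of ours, not the paper's implementation; the function names and the quadratic reachability-based SCC computation are choices made purely for clarity.

```python
def scc_batches(votes, gamma):
    """Candidate batches: SCCs of the ordering graph thresholded at gamma.

    votes: each replica's observed ordering, as a list of transaction ids
           (every vote is assumed to rank the same finite set).
    gamma: keep an edge a->b only if at least a gamma fraction of replicas
           report a before b.
    """
    txs = sorted(votes[0])
    n = len(votes)
    pos = [{t: i for i, t in enumerate(v)} for v in votes]

    # Thresholded ordering graph: successors of each transaction.
    succ = {t: set() for t in txs}
    for a in txs:
        for b in txs:
            if a != b and sum(1 for p in pos if p[a] < p[b]) / n >= gamma:
                succ[a].add(b)

    def reach(src):
        # All transactions reachable from src (including src itself).
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(succ[u])
        return seen

    # Two transactions share a batch iff each is reachable from the other.
    fwd = {t: reach(t) for t in txs}
    comps, assigned = [], set()
    for t in txs:
        if t not in assigned:
            comp = frozenset(u for u in fwd[t] if t in fwd[u])
            comps.append(comp)
            assigned |= comp
    return comps
```

On the Condorcet cycle from §3.1, thresholding at 0.6 keeps the three 2/3-weight edges and yields a single three-transaction batch, while thresholding at 0.7 drops every edge and yields three singleton batches, matching the intuition that a higher threshold fragments the dependency graph.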
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "3.3. Comparison to Prior Work",
|
| 45 |
+
"text": "Before constructing our protocol, we first apply Definition 3.4 ###reference_theorem4### to related prior work. Aequitas (kelkar2020order)\nis instantiated with a choice of parameter and a bound on the number of faulty replicas.\nAequitas (kelkar2020order) with parameters achieves -minimal-batch-order-fairness.\nNote that Aequitas requires . Lemma 3.3 ###reference_### is a stronger version of\nTheorem 6.6 of (kelkar2020order).\nAequitas considers a transaction pair as an ordering dependency if at least report\nreceiving before (part 3.b.ii of \u00a76.2 of (kelkar2020order)).\nThis ensures that the protocol considers every ordering dependency\nthat truly receives at least votes, and might include any edge that receives at least votes.\nAequitas sequentially outputs minimal cycles in its computed graph of ordering dependencies (not Definition 3.2 ###reference_theorem2###),\nordering within each cycle arbitrarily.\nThis means that if some pair that receives at least votes is output in reversed order,\nthen it must be part of a cycle of dependencies, and each of these dependencies must have received at least votes in order\nto be included in Aequitas\u2019s computed graph. \u220e\nBroadly speaking, Aequitas, along with the protocols of cachin2022quick and kiayias2024ordering,\nfollow the same pattern. Through (very different) communication protocols,\nthey come to agreement on, for each transaction pair and , the fraction of replicas that report before .\nThese protocols choose (in advance) a fixed parameter , and then drop from consideration ordering dependences of strength less\nthan . This is equivalent to constructing the ordering dependency graph of Definition 3.2 ###reference_theorem2###,\nbut then dropping all edges of weight less than and then forgetting all of the weights on the remaining edges. 
The strongly connected\ncomponents of this graph form the batches output in Aequitas.\nAequitas (as well as the protocols of cachin2022quick and kiayias2024ordering) also is not asymptotically live\n(Definition 5.2 ###reference_theorem2###). The strongly connected components of this unweighted dependency graph\ncan be arbitrarily large, and these protocols wait until they observe the entirety of a component before outputting any transaction\nin it. An exception is one version of the protocol in kiayias2024ordering, which adds timestamps to transactions\nand (given precise network assumptions) can output a component in a streaming fashion.\nBy contrast, Ranked Pairs (Theorem 4 ###reference_###) achieves -minimal-batch-fairness for every simultaneously,\nand for any (our work does not require knowledge of a bound on ). Our protocol gets these stronger results by using all of the information\navailable in the problem input. Additionally, Ranked Pairs guarantees a bounded liveness delay (although precise end-to-end results depend also on network conditions).\nThemis (themis) claims (Theorem IV.1) that its output can be partitioned into \u201cminimal\u201d batches,\nwhich appears to contradict Lemma 3.2 ###reference_###.\nHowever, Themis uses a much weaker notion of \u201cminimal.\u201d\nWhere -batch-order-fairness would create a dependency for not after if at least replicas\nreceive before ,\nthe relaxation implied by Definition III.1 of (themis) creates such a dependency if at least receive before \n(that is, it cannot be the case that more than receive before ).\nTheorem IV.1 of (themis) considers its output batches \u201cminimal\u201d if they are singletons when there are no cycles of these dependencies.\nThis notion of minimality is much weaker than what we discuss here, and does not provide any\nguarantee when there is significant disagreement between replicas on\ntransaction orderings. 
Lemma 3.3 ###reference_### gives an example highlighting this distinction.\nIn Themis (themis), even if all nodes report before and there are no faulty replicas\nnor cycles in ordering dependencies (as defined in Definition 3.1 ###reference_theorem1###) for any ,\nthe output may put before .\nThis kind of counterintuitive behavior is a possible output of Themis\u2019s protocol.\nWhen two transactions receive\n votes in one round of Themis, it determines whether to consider as an ordering dependency or \nby majority vote. Equivalently, Themis as a protocol operates in the regime of , regardless\nof a chosen parameter value\n(the choice of does affect how Themis considers\ntransactions reported by some but not all replicas, which does not affect this discussion).\nSuppose that the number of nodes is even, and break the nodes into two groups.\nGroup one receives transactions , while group two\nreceives transactions .\nAll nodes submit their local orderings to the protocol correctly. Themis proceeds in rounds;\nthe round under consideration consists only of these three transactions.\nThemis builds a graph of what it considers ordering dependencies on the transactions in a round\n(Figure 1, part 1, (themis)).\nLet be the number of replicas that vote\nfor before .\nAn ordering dependency between and is included if (1) \nand (2) . Themis assumes , or equivalently,\n, so .\nTies are broken in an unspecified (deterministic) manner.\nAs such, (depending on the tiebreaking), Themis may consider as valid ordering dependencies the pairs ,\n, and . Within a strongly connected component of its dependency graph, Themis computes a Hamiltonian cycle,\nthen outputs transactions by walking arbitrarily along that cycle. 
As such, a possible output of Themis is the ordering\n.\nThus, all nodes received before , and there were no cycles of ordering dependencies for any , but the protocol could\noutput before .\nNote that Themis only waits for votes from replicas before proceeding, not the full .\nThis difference is immaterial if .\nHowever, the construction above works only if the numbers of votes for\n and for are both at least .\nTighter analysis shows that\n, so the construction still works.\nFor larger than for some small and sufficiently large ,\nthis construction could be repeated with more transactions in the cycle.\nThis eliminates the need to abuse a tiebreaking rule when votes are evenly divided. \u220e"
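The vote tallies behind this construction can be checked directly. A minimal Python sketch (the replica count and transaction names here are illustrative, not from the paper):

```python
from itertools import combinations

def pairwise_counts(votes):
    """Count, for each ordered pair (a, b), how many votes place a before b."""
    counts = {}
    for vote in votes:
        pos = {tx: i for i, tx in enumerate(vote)}
        for a, b in combinations(vote, 2):
            if pos[a] < pos[b]:
                counts[(a, b)] = counts.get((a, b), 0) + 1
            else:
                counts[(b, a)] = counts.get((b, a), 0) + 1
    return counts

n = 10  # an even number of replicas, split into two equal groups
votes = [["tx1", "tx2", "tx3"]] * (n // 2) + [["tx3", "tx1", "tx2"]] * (n // 2)
counts = pairwise_counts(votes)
# All n replicas put tx1 before tx2; the other two pairs are exact n/2 ties,
# so majority tiebreaking may choose tx1->tx2, tx2->tx3, and tx3->tx1, and a
# Hamiltonian-cycle walk of that component can emit tx2 before tx1.
```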
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Ranked Pairs Voting",
"text": "Our protocol is based on the Ranked Pairs method (tideman1987independence).\nRanked Pairs (Algorithm 2 ###reference_thm2###, reproduced in \u00a715 ###reference_###)\nfirst builds the ordering graph (Definition 3.2 ###reference_theorem2###),\nthen iterates through edges in order of edge weight (breaking ties arbitrarily).\nIt builds an acyclic set of edges greedily, adding each new edge so long as it does\nnot create a cycle with edges already in the set. The output ordering\nis the result of topologically sorting the resulting acyclic graph.\nGiven a ordering vote (on every transaction in a finite set) from every replica,\nRanked Pairs Voting simultaneously satisfies -minimal-batch-order-fairness\nfor every , and does not depend on any fixed bound on .\nLemma 3.2 ###reference_### and Theorem 4 ###reference_### together imply an important property.\nRanked Pair\u2019s output\ncan be divided into batches consistent with -batch-order-fairness for any \n(subject to the interleaving discussed in \u00a73.2 ###reference_###).\nThere is no need to choose a or arbitrarily order transactions within a batch.\nRanked Pairs Voting ensures that if some edge of weight is not included in the output graph \u2014 that is to say,\nif nodes vote for before , but the output ordering has \u2014 then there must be a directed path of edges\nin already from to . As the algorithm looked at these edges before the current edge,\nthese edges must have weight (as observed by the algorithm) at least .\nNote that if there are faulty nodes, these nodes can, at most, adjust the weight (observed by the algorithm) on any particular transaction pair\nby at most .\nThus, an edge with true weight is only reversed if there exists a path in the ordering graph\n(that is included in )\nin the opposite direction\nof minimum true weight at least .\nNowhere in the algorithm do its choices depend on or (or ). \u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "5. Streaming Ranked Pairs",
"text": "We now turn to the streaming setting that is the focus of our work.\nThere may be an infinite set of transactions to consider overall, but at any finite time,\nan algorithm must compute an ordering on a subset of the transactions that have been seen thus far.\nIn a real-world system, such an algorithm would be run with some periodic frequency,\nand replicas would periodically append to their ordering votes.\nAn algorithm must be monotonic (Definition 5.1 ###reference_theorem1###)\u2014if replicas extend their ordering votes, the algorithm, when run again on the extensions,\ncan only extend its prior output.\nA sequencing algorithm is monotonic\nif, given two sets of ordering votes \nand ,\nsuch that extends for all ,\n extends .\nIf a sequencing algorithm is monotonic,\nit implies a well-formed definition for aggregating\na set of orderings on countably infinite sets of transactions,\nnot just on finite sets. comes before \nin the infinite case\nif there exists a finite subset of the input orderings\nsuch that the algorithm puts in its output.\nA non-vacuous condition is also required\u2014the algorithm\nthat never outputs anything is monotonic.\nA sequencing algorithm is asymptotically live\nif, given any set of countably infinite ordering votes \nand any transaction in those votes,\nthere exists an such that when each is trunctated to the first \nelements of the ordering to produce ,\n is included in .\nFor expository simplicity, we present\nour algorithm in two steps. First, we give a streaming algorithm\n(Algorithm 3 ###reference_thm3###) that is monotonic but not asymptotically live.\nThen, we give a modification that ensures asymptotic liveness (Algorithm 1 ###reference_thm1###)."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "5.1. Decisions Are Local",
"text": "Implementing Ranked Pairs voting in this streaming setting requires first\nunderstanding when the algorithm chooses or rejects an edge in the ordering graph.\nIf Ranked Pairs rejects an edge of weight , then it must choose some directed path of edges from to , where\neach edge has weight at least .\nIf Ranked Pairs rejects an edge of weight , then adding the edge would have created a cycle among the edges the algorithm had already chosen. Ranked Pairs looks at edges in order of weight, so all of the edges on the already chosen path from to must have weight\nat least .\n\u220e\nRanked Pairs chooses every edge of weight .\nThere cannot be any cycles of edges, all of weight . Such a situation would require every replica to submit a cycle of preferences,\nwhich is impossible, given that each ordering vote is a total, linear order. \u220e\nFurthermore, the set of vertices that such a path can visit are bounded to a local neighborhood of and .\nLet be any transaction.\nLet be the set of all preceeding transactions; that is, all with .\nLet be the set of all concurrent transactions; that is, all with .\nLet be the set of all subsequent transactions; that is, all with .\nIf ranked pairs chooses all edges in a path from to ,\nno transactions on the path are in or .\nIf the path visits a transaction in , then it creates a cycle with the edge from to \n(which ranked pairs must choose, by Observation 5.1 ###reference_###).\nIf the path visits a transaction in , then it creates a cycle with the edge from to .\n\u220e\nLemma 5.1 ###reference_### lets the streaming algorithm decide whether to choose an edge given only a finite amount of\ninformation.\nOnce the algorithm sees a vote for a transaction from every replica, it can compute the weight in the ordering graph\nfor every edge and .\nUnseen transactions must be in .\nEven so, a streaming algorithm cannot always know whether or not Ranked Pairs would choose an edge.\nAs such, we allow our algorithms to leave an edge 
in an \u201cindeterminate\u201d state, and design\nour algorithms to account for indeterminate edges when considering subsequent edges.\nFirst, we construct an ordering graph that captures the information available at a finite time.\nSuppose that each replica submits an ordering vote .\nLet be the set of all transactions that appear in each vote, and let be a new vertex (the \u201cfuture\u201d)\nThe Streamed Ordering Graph is a weighted, directed, complete graph \non vertex set .\nFor each and , set weights and as in Definition 3.2 ###reference_theorem2###.\nSet for all .\nIf there exists that appears in the votes of some\nbut not all replicas and which preceeds in at least one replica\u2019s vote, set , and otherwise .\nConceptually, the \u201cfuture\u201d vertex represents all transactions that have not received votes from all replicas.\nImplicitly, all edge weights not computable from the available information are upper-bounded by .\nAn implementation might compute a tighter upper bound on the weights of edges . Subsequent arguments require only that\nthe assigned weight is an upper bound of the true weight."
|
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "5.2. A Streaming Algorithm",
Informally,\nall edges between as-yet-unseen transactions are indeterminate, so from the perspective of directed connectivity analysis,\nthe unseen transactions form a single, strongly-connected component.\nTo prove correctness of this algorithm, it suffices to show that the output of the algorithm is\nmonotonic and\nthat the output is consistent with the non-streaming version of Ranked Pairs.\nBy consistency, we specifically mean that at any point, if clients were to stop sending new transactions and all replicas eventually voted\non every replica, Algorithms 2 ###reference_thm2### and 3 ###reference_thm3### would produce the same output.\nConsider a counterfactual scenario where clients stop sending transactions, all replicas eventually receive every transaction,\nand every replica includes every transaction in its output vote.\nWhenever Algorithm 3 ###reference_thm3### includes a determinate edge in (resp. excludes an edge),\nthat edge is included (resp. excluded) in the output of the non-streaming Ranked Pairs on the counterfactual input.\nSuppose that the lemma statement holds for all edges earlier in the Ranked Pairs ordering.\nSpecifically, assume that Ranked Pairs would choose every higher determinate edge,\nnot choose any higher rejected edge, and might or might not choose any higher indeterminate edge (and that all edges of higher weight\nare all either already rejected, chosen as determinate, or chosen as indeterminate).\nBy Lemma 5.1 ###reference_###,\nRanked Pairs would choose if and only if there does not exist a path of already chosen edges higher in the ordering\nfrom to that does not enter or in .\nIf there exists a path of determinate edges in ,\nthen Streaming Ranked Pairs rejects , and if there does not exist\na path of determinate or indeterminate edges,\nthen Streaming Ranked Pairs accepts . 
Otherwise, the edge is left indeterminate.\nNote that Streaming Ranked Pairs considers edges in the same ordering that Ranked Pairs would (relative to the restricted set\nconsidered). However, at any point, due to the initialization of with indeterminate edges,\nthe set of edges in which the algorithm searches for a cycle might include more edges than those that would be considered\nby Ranked Pairs. However, these extra edges are indeterminate and can therefore only make Streaming\nRanked Pairs choose indeterminate for a considered edge, satisfying the induction hypothesis.\nImportantly,\nthe weights of these extra edges upper bound their true (unknown) weights.\nAnd any cycle that would go through (an) as-yet-unseen transaction(s) maps to one (using indeterminate edges)\nthrough the future vertex . As such,\nthe set of paths considered in Streaming Ranked Pairs is always a superset of that considered by Ranked Pairs,\nand the difference between these sets is always made up of indeterminate edges.\nAs such, when the streaming algorithm does not choose to leave an edge as indeterminate,\nthe streaming algorithm makes the same decisions as the non-streaming algorithm.\nThus, the induction hypothesis holds for .\nThe induction hypothesis clearly holds for the first edge considered, so the lemma holds.\n\u220e\nThe same argument shows that Algorithm 3 ###reference_thm3### is monotonic.\nAlgorithm 3 ###reference_thm3### is monotonic.\nThe proof of Lemma 5.2 ###reference_### actually shows that an edge is marked determinate or rejected only if Ranked Pairs\nwould make that decision on the edge on any counterfactual scenario that extends the input to the algorithm.\n\u220e\nLemma 5.2 ###reference_### directly implies that the output of Algorithm 3 ###reference_thm3### matches that of Ranked Pairs.\nThe output of Algorithm 3 ###reference_thm3### matches an initial segment of the ordering output by the non-streaming Ranked Pairs\n(on the counterfactual of Lemma 5.2 
###reference_###).\nNote that every transaction bordering on an indeterminate edge must come strictly after (in the topological ordering)\nevery transaction in the output of the algorithm.\nBy Lemma 5.2 ###reference_###, then regardless\nof whether or not the indeterminate edges are chosen in Ranked Pairs,\nevery transaction bordering an indeterminate edge\nmust come after (in the true output of Ranked Pairs) all of the transactions output by the algorithm.\nWithin the set of output transactions, again because of Lemma 5.2 ###reference_###, transactions must be ordered\naccording to the true output of Ranked Pairs. \u220e\nRanked Pairs has the property that, for any set of inputs , if\nit is run on a restriction of the inputs to the transactions\nthat appear as an initial segment of its output on ,\nits output is unmodified on the restricted inputs (for completeness, we give a proof in \u00a79 ###reference_###).\nAs Algorithm 3 ###reference_thm3### is monotonic,\nits output exactly matches the Ranked Pairs ordering\nwhen the input votes are restricted in this way."
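The streamed graph construction and the accept/reject/indeterminate pass can be sketched together. This is a simplification under our reading of Definition 5.4: the FUTURE sentinel and its weight of n are our rendering of the "future" vertex's upper bounds, and `seeded_indeterminate` stands in for the edges through that vertex, which start out indeterminate:

```python
FUTURE = "<future>"  # sentinel standing in for partially-seen transactions

def streamed_ordering_graph(votes, n):
    """Sketch of the streamed ordering graph: vertices are the transactions
    present in every vote plus FUTURE; weights on edges touching FUTURE are
    upper bounds (n) on the true, not-yet-computable weights."""
    seen_by_all = set(votes[0]).intersection(*[set(v) for v in votes])
    pos = [{t: i for i, t in enumerate(v)} for v in votes]
    w = {}
    for a in seen_by_all:
        for b in seen_by_all:
            if a != b:
                w[(a, b)] = sum(1 for p in pos if p[a] < p[b])
    for t in seen_by_all:
        w[(t, FUTURE)] = n
        # FUTURE -> t gets the upper bound only if some partially-seen
        # transaction already precedes t in at least one vote.
        partial_before = any(x not in seen_by_all
                             for v, p in zip(votes, pos) for x in v[:p[t]])
        w[(FUTURE, t)] = n if partial_before else 0
    return w

def classify_edges(weighted_edges, seeded_indeterminate=()):
    """Tri-state pass, sketched: visit edges by decreasing weight; reject an
    edge if the determinate edges alone already close a cycle with it;
    accept it if not even the indeterminate edges could; otherwise mark it
    indeterminate."""
    determinate, indeterminate = set(), set(seeded_indeterminate)
    status = {}

    def reaches(src, dst, edges):
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(b for (a, b) in edges if a == v)
        return False

    for (a, b) in sorted(weighted_edges, key=lambda e: -weighted_edges[e]):
        if reaches(b, a, determinate):
            status[(a, b)] = "reject"
        elif not reaches(b, a, determinate | indeterminate):
            status[(a, b)] = "accept"
            determinate.add((a, b))
        else:
            status[(a, b)] = "indeterminate"
            indeterminate.add((a, b))
    return status
```

A seeded reverse edge is enough to demote an otherwise-acceptable edge to indeterminate, which is exactly the conservatism the correctness argument relies on.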
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "6. Liveness and Efficiency",
"text": ""
},
{
"section_id": "6.1",
"parent_section_id": "6",
"section_name": "6.1. Efficient Marginal Runtime",
"text": "Instantiating this protocol means running a ranking algorithm repeatedly\n(as replicas report additional information).\nRather than recompute the streamed ordering graph and the status of every edge in each invocation,\nAlgorithm 3 ###reference_thm3### could use the output of a past invocation, restricted\nto the decisions that were determinate, as an oracle.\nCorollary 5.2 ###reference_### implies that\ndeterminate choices are consistent with any extension of the input,\nso this oracle does not change the output.\nHowever, the runtime now depends only on the number of new edges in the streamed ordering graph (\u00a714 ###reference_###)."
},
{
"section_id": "6.2",
"parent_section_id": "6",
"section_name": "6.2. Protocol Liveness",
"text": "Algorithm 3 ###reference_thm3### is not asymptotically live. Appendix 13 ###reference_###\ngives a worst-case example.\nThe problem is the tiebreaking rule for how Ranked Pairs iterates over\nedges of equal weight.\nThat construction uses a worst-case tiebreaking rule,\nbut Ranked Pairs requires only some tiebreaking rule.\nOur algorithm can generate a tiebreaking rule dynamically.\nSpecifically, when the algorithm visits edges of weight ,\nit defers edges that would become indeterminate\nto the end of the ordering among edges of that weight.\nAlgorithm 1 ###reference_thm1### thus constructs a (partial) ordering between edges that breaks ties between edges of the same weight.\nThe output of this algorithm is, by the same arguments as in \u00a75.2 ###reference_###, consistent with the output of non-streaming Ranked Pairs,\nwhen using an tiebreaking rule consistent with this edge ordering.\nThe oracle mechanism of\n\u00a76.1 ###reference_### passes this tiebreaking information from one run\nof Algorithm 1 ###reference_thm1###\nto the next.\nWhenever Algorithm 1 ###reference_thm1### includes a determinate edge in , or excludes an edge from ,\nthe non-streaming Ranked Pairs, when using the implied ordering, would make the same decision.\nThe argument proceeds as in Lemma 5.2 ###reference_###.\nAlgorithm 1 ###reference_thm1### induces a tiebreaking rule between edges of the same weight.\nIf Algorithm 3 ###reference_thm3### were given this tiebreaking rule,\nit would visit edges in the order in which Algorithm 1 ###reference_thm1### makes decisions on edges\n(i.e., skipping\nthe instances where Algorithm 1 ###reference_thm1### defers an edge until later).\nAs such, its output is consistent with the non-streaming Ranked Pairs using this implied ordering (by Theorem 5.2 ###reference_###).\n\u220e\nThis small change enables the following structural lemma. 
We start with a useful definition.\nAn edge is contemporary to an edge if and only if\n and are contained in .\nIf Algorithm 1 ###reference_thm1### marks an edge with weight as indeterminate,\nthere must be a path from to contained in that contains an indeterminate edge of weight at least\n.\nBy construction of the algorithm, if an edge is marked as indeterminate,\nthen there must be a path in already chosen (when this edge is visited)\nof determinate and indeterminate edges of weight at least . Furthermore,\nany indeterminate edge must be of weight strictly more than . When the algorithm looks for such a path,\nit has not yet marked any edges of weight as indeterminate, and if no such path exists, then the edge\nwould not be marked as indeterminate.\n\u220e\nBy contrast, the best bound in Algorithm 3 ###reference_thm3### is that there exists a contemporary indeterminate edge of weight\n.\nLemma 6.2 ###reference_### implies that for any indeterminate edge, there is sequence of contemporary\nindeterminate edges of strictly increasing weights that causes the edge to be indeterminate.\nSuppose Algorithm 1 ###reference_thm1### marks an edge of weight as indeterminate.\nThen there exists a sequence of indeterminate edges \nof strictly increasing weight\nsuch that for each pair of edges and with contemporary to ,\n and are present in ,\nand the source of is .\nFollows from repeated application of Lemma 6.2 ###reference_###.\nThe algorithm starts with only edges leaving the \u201cfuture\u201d vertex marked as indeterminate.\nThese initial edges are the only indeterminate edges with weight , and thus are the only ones\nthat can form the root of any chain.\n\u220e\nNone of these chains can be longer than edges,\nas each step increases the weight by at least .\nThis bound implies that Algorithm 1 ###reference_thm1### eventually outputs every transaction.\nAlgorithm 1 ###reference_thm1### is asymptotically live.\nLet be any set of (countably infinite) ordering 
preferences,\nand let be any transaction in those orderings.\nConsider the set of all chains of edges where is contemporary to (and of higher weight)\nand is adjacent to , and .\nBecause each is finite,\nthe number of such chains must be finite,\nand the number of transactions that appear in any of these chains must be finite.\nOn any input to Algorithm 1 ###reference_thm1### that is a finite truncation of ,\nif an edge adjacent to is left indeterminate,\nthen Lemma 6.2 ###reference_### implies that there must be a chain of indeterminate edges on this input\nwith the first adjacent to , the last adjacent to , and each edge contemporary to the previous.\nThis chain, with the last edge dropped, must be one of the chains considered above.\nAs such, there are only a finite number of transactions that could be in the last edge adjacent to .\nFor each such transaction , there can only be an edge from to in the streamed ordering graph\nif there is some other transaction in the input that is ahead of in at least one replica\u2019s vote.\nThere can only be a finite number of these transactions.\nTaking a union over a finite number of finite sets gives a finite set of transactions. There must therefore be some bound such\nthat all of these transactions appear in every replica\u2019s vote if is\ntruncated to (at least) the first elements.\nA transaction is not output by Algorithm 1 ###reference_thm1### if the topological sort in the last step\nputs a transaction adjacent to an indeterminate edge ahead of .\nBut there are only a finite number of transactions that might ever be ahead of (specifically, and ).\nLet . 
Then if is truncated to\n(at least) the first ,\nthen must appear in the output of Algorithm 1 ###reference_thm1###.\n\u220e\nAdditionally, given a network assumption like Assumption 6.2 ###reference_theorem2###,\nLemma 6.2 ###reference_### bounds the temporal delay between when a transaction\nis sent by a client and when Algorithm 1 ###reference_thm1### adds it to its output.\nIf a client sends a transaction at time , all replicas include in\ntheir ordering votes before time .\nAs such, replicas disagree on the ordering between and \nonly if they were sent at roughly the same time.\nNote that Algorithm 1 ###reference_thm1### does not require knowledge of , and does not depend on Assumption 6.2 ###reference_theorem2### for correctness.\nConsider any two transactions and \nsent at times and , respectively.\nIf , then the weight of the edge is .\nIf , then every replica receives and commits to a vote on before is sent,\nso must come after in every replica\u2019s vote. \u220e\nConsider any three transactions , , and received by all replicas,\nthat were sent by clients at times , , and , respectively.\nSuppose , and that the weight of is not .\nThen .\nBy construction, , so . Furthermore, , so\n.\nFurthermore, by Observation 6.2 ###reference_###, .\nThus, .\n\u220e\nThis lemma and Lemma 6.2 ###reference_### give an overall time bound.\nA transaction is contained in the output of the algorithm of Algorithm 1 ###reference_thm1###\nafter at most time.\nLet the current time be , and let be sent at time . Without loss of generality, assume has been voted on by\nevery replica.\nIf an indeterminate edge exists to , then there exists some transaction (sent at )\nthat has not been voted on by every replica, but which preceeds in some replicas\u2019 votes.\nAs such, . 
Then, as does not fully preceed , it must be the case that \n(so .\nLemma 6.2 ###reference_### implies a sequence of indeterminate edges of strictly increasing weight from this edge\nto one adjacent to .\nConsider two adjacent edges in this chain, send at times , respectively.\nBy Lemma 6.2 ###reference_###, it must be the case that .\nAdding this bound over each link in the chain\nshows that\n (where the last comes from bounding the\ndelay of the transaction in the last edge adjacent to ).\nA sequence can be of length at most .\nAs such, after time , every edge adjacent to is either rejected or included and marked determinate.\nTo ensure that is included in the output of Algorithm 1 ###reference_thm1###, it suffices to ensure that\nevery transaction adjacent to an indeterminate edge\nmust come after .\nThis is guaranteed by waiting for an additional time. \u220e"
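The deferral rule itself is small enough to sketch in isolation. Here the `would_be_indeterminate` callback stands in for the in-line check Algorithm 1 performs against its current state (in the real algorithm the answer depends on the edges chosen so far, not a fixed predicate):

```python
from itertools import groupby

def visit_order_with_deferral(weighted_edges, would_be_indeterminate):
    """Sketch of the dynamic tiebreaking: within each weight class, edges
    that would come up indeterminate are deferred to the end of that class,
    inducing (rather than assuming) a tiebreaking rule between equal-weight
    edges."""
    ranked = sorted(weighted_edges, key=lambda e: -weighted_edges[e])
    order = []
    # groupby needs its input sorted on the grouping key, which `ranked` is.
    for _, group in groupby(ranked, key=lambda e: weighted_edges[e]):
        group = list(group)
        order += [e for e in group if not would_be_indeterminate(e)]
        order += [e for e in group if would_be_indeterminate(e)]
    return order
```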
},
{
"section_id": "6.3",
"parent_section_id": "6",
"section_name": "6.3. Trading Accuracy for Liveness",
"text": "The bound in Theorem 6.2 ###reference_### comes from the fact that a chain of indeterminate edges\ncan have length, which, in turn, follows from the fact that weights have a granularity of\n. Reducing this granularity by rounding weights reduces the maximum length of a chain of indeterminate\nedges.\nIf edge weights are rounded to the nearest in the streamed ordering graph\n(before Algorithm 1 ###reference_thm1### is applied)\nthen\na transaction is contained in the output\nafter at most time.\nThe argument proceeds exactly as in that of Theorem 6.2 ###reference_###,\nexcept that the length of the chain of indeterminate edges is at most .\n\u220e\nHowever, rounding weakens fairness guarantees.\nRounding edge weights to the nearest before applying Algorithm 1 ###reference_thm1###\nachieves -minimal-batch-order-fairness for all and faulty replicas.\nThe argument proceeds as in Theorem 4 ###reference_###. However, due to the rounding and the faulty replicas,\nthe weight on an edge observed by the algorithm might be up to different from the true observed weight\n(as opposed to , as in Theorem 4 ###reference_###).\n\u220e"
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "7. Related Work",
"text": ""
},
{
"section_id": "7.1",
"parent_section_id": "7",
"section_name": "7.1. Order-Fairness",
"text": "Closest to this work are\nAequitas (kelkar2020order) and Themis (themis), which propose the definition of -batch-order-fairness\nand instantiate ordering protocols based on aggregating ordering votes into a total order.\nAequitas achieves -minimal-batch-order-fairness\nfor a fixed choice of , but is not asymptotically live.\naequitaspermissionless instantiate Aequitas in a setting where the number of replicas is not known,\nand similarly points out the connection between this streaming problem and the classical ranking aggregation problem.\n\u00a73.3 ###reference_### gives a more detailed comparison to these works.\ncachin2022quick give a notion of\n\u201c-differential-order-fairness;\u201d Theorem D.1 of (themis) shows equivalence\nof this definition and -batch-order-fairness, for a choice of (as a function of ).\nOur definition of -minimal-batch-order-fairness (Definition 3.4 ###reference_theorem4###) strengthens\nthe (vacuously satisfiable) -batch-order-fairness definition used in prior work (see \u00a73 ###reference_### for details).\nConcurrently with this work, vafadar2023condorcet\nobserve that an adversary can influence the output ordering of many protocols\n(including that of kelkar2020order)\nby sending extra transactions (which can force batches to be combined together).\nThis eliminates the constraints on the output that Definition 3.1 ###reference_theorem1### gives\n(using an example akin to Example 10.2 ###reference_mtheorem2###).\nLemma 3.3 ###reference_### identifies a similar problem.\nThis observation is a corollary of the fact that the protocol of kelkar2020order\n(as well as the Ranked Pairs algorithm studied here)\ndoes not satisfy the Irrelevance of Independent Alternatives axiom (arrow1950difficulty).\nvafadar2023condorcet also propose using Ranked Pairs to sort transactions within batches\n(as, e.g., those output by a protocol like Themis (themis)).\nHowever,\nno modification to the ordering of transactions within a batch output in 
Aequitas (kelkar2020order)\ncan give the Ranked Pairs ordering, because Ranked Pairs might need to interleave batches that Aequitas considers incomparable\n(as demonstrated in \u00a711 ###reference_###). This strategy also cannot mitigate Aequitas\u2019s lack of asymptotic liveness.\nThis mitigation, if applied to Aequitas, is akin to dropping all edges of weight less than before running Ranked Pairs.\nThis strategy, therefore, would achieve -minimal-batch-order-fairness for all (a weaker\nguarantee than what our protocol provides).\nAlso concurrently with this work,\nkiayias2024ordering propose a different method for ordering transactions within batches\n(but rely on a specific choice of a parameter to determine those batches). A version\nof that protocol adds timestamps on reported orderings to improve liveness.\nAs discussed in \u00a73.3 ###reference_###, the batches in Themis (themis) are defined slightly differently than in Definition 3.1 ###reference_theorem1###.\nThemis, when computing a graph of ordering dependencies, adds a directed edge between every pair of vertices, choosing the direction of an edge by majority vote (ignoring the parameter , which is considered only on edges adjacent to transactions that some replicas have not yet reported observing).\nIn so doing, it merges together any two batches unless, for any transaction in one and in the other,\na majority of replicas vote (which bypasses the counterexample of \u00a711 ###reference_###).\nAdditionally, Themis\u2019s (themis) round-based sequencing protocol presents a different problem insurmountable\nby any process like the one suggested by vafadar2023condorcet\nthat sorts batches separately.\nNamely, Themis may finalize the composition of a batch before has enough information to guarantee\nthat, according to the Ranked Pairs ordering, all transactions in the batch must come before\nall those not in the batch (and not already in the output).\nThus, it may output two batches in sequence, such that 
running Ranked Pairs on each batch separately\nproduces a different output than running Ranked Pairs on the union of the two batches together.\nWe give a example of this phenomenon in \u00a712 ###reference_###.\nzhang2020byzantine\nadd timestamps to ordering votes and sorts transactions by median timestamp to provide a linearizability guarantee,\nwhich is incomparable to -minimal-batch-order-fairness. mamageishvili2023buying propose letting users\npay to reduce their reported timestamps."
},
{
"section_id": "7.2",
"parent_section_id": "7",
"section_name": "7.2. Social Choice",
"text": "Classical work of social choice studies the problem of choosing an order between a fixed, finite set of\ncandidates given a complete set of preferences from each voter (arrow1950difficulty).\nPrior work studies related models in a limited-information setting.\nLu and Boutilier (lu2013multi), Ackerman et. al (ackerman2013elections),\nand Cullinan et. al (cullinan2014borda) study the setting where a social choice rule has incomplete access (a partial ordering)\nto a voter\u2019s preferences over a finite set of candidates.\nConitzer and Sandholm (conitzer2005communication)\nstudy the communication complexity of a variety of voting rules.\nFain et. al (fain2019random) study a randomized ordering rule\nthat requires only a constant number of queries to voter preferences.\nFishburn (fishburn1970arrow) showed that Arrow\u2019s impossibility theorem does not hold\nwhen the set of voters is infinite, although Kirman and Sondermann (kirman1972arrow)\nfind \u201cdictatorial sets\u201d of voters. Grafe and Grafe (grafe1983arrow)\nextend these results to the case of infinitely many alternatives,\nsubject to a continuity condition\non the space of ranking preferences.\nChichilnisky and Heal (chichilnisky1997social)\nand Efimov and Koshevoy (efimov1994topological) study the types of rules admissible in the infinite voters setting,\ngiven a topology on ordering preferences."
},
{
"section_id": "7.3",
"parent_section_id": "7",
"section_name": "7.3. Front-Running",
"text": "The study of order-fairness is often motivated by the goal of limiting\nan adversary\u2019s ability to order transactions (\u201cfront-running\u201d) in public blockchains (as in,\ne.g., daian2020flash).\nAdditional approaches to this problem include commiting to an ordering\nbefore any replica can know transaction contents, using threshold encryption or commit-reveal schemes\n(malkhi2022maximal; clineclockwork; zhang2022flash).\nLi et. al (li2023transaction) study the problem of verifiably computing an\nordering within a trusted hardware enclave.\nThese could be applied on top of a streaming ordering algorithm.\nKavousi et. al (kavousi2023blindperm) randomly shuffle blocks of encrypted transactions.\nSome decentralized exchanges (ramseyer2023speedex; cowswapproblem; penumbraswap) process transactions in unordered batches,\neliminating the need for an ordering within a block (but not eliminating all order manipulations (zhangcomputation)).\nferreira2022credible and li2023mev construct sequencing rules specific to decentralized exchanges."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "8",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "8. Conclusion",
|
| 123 |
+
"text": "We study here the problem of ordering transactions in a replicated state machine\nas a novel streaming instance of the preference aggregation problem from classical social choice theory.\nThis viewpoint enables us to strengthen the notions of \u201corder-fairness\u201d used in prior work,\nand then construct algorithms solving this problem with both strictly stronger \u201cfairness\u201d guarantees\nand strictly stronger liveness properties than all of the prior work.\nTo be specific, our streaming variant of Ranked Pairs satisfies\n-minimal-batch-order-fairness\nfor every simultaneously, and for any number of faulty replicas .\nFairness guarantees smoothly weaken as the number of faulty replicas increases.\nFor comparison, prior work must fix a choice of \nand a bound on the number of faulty replicas in advance, and can only satisfy -batch-order-fairness for that .\nThese notions of \u201corder-fairness\u201d are not the only desiderata with practical significance.\nDifferent contexts in which a system is deployed pose different constraints and financial incentives.\nWe believe that social-choice style methods for preference aggregation raises many interesting open questions\nof practical import.\nFor example, what aggregation rules lead to high social welfare? If clients bribe replicas to prefer certain orderings,\nwhat rules maximize (or minimize) this fee revenue?\nHow do incentives distort the output ordering?"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "9",
|
| 127 |
+
"parent_section_id": null,
|
| 128 |
+
"section_name": "9. Truncation of Ranked Pairs",
|
| 129 |
+
"text": "Ranked Pairs preserves its output ordering when restricted to considering only an initial segment of its output.\nIn the statement below, denotes Ranked Pairs Voting (Algorithm 2 ###reference_thm2###).\nSuppose that is a set of ordering votes\non a set of transactions, and let .\nConsider a set that forms the transactions in an initial segment of ,\nand for each , let be restricted to .\nThen extends .\nThe proof follows by executing and comparing against an execution\non the restricted input.\nWhenever the algorithm includes an edge, inclusion cannot create a cycle in the graph of previously included\nedges, so it cannot create a cycle in the graph of previously included edges restricted to .\nWhenever the algorithm rejects an edge , it must have included already a directed path from\n to . If and both lie within (that is, the algorithm run on the restricted input\nwould iterate over this edge), then the previously included path must be entirely contained within . Otherwise,\nthere would be some on this path in , but\nthat would imply that would come before in and thus that is not the set of transactions in\nan initial segment of (as ).\n\u220e"
|
| 130 |
+
}
|
| 131 |
+
],
|
| 132 |
+
"appendix": [],
|
| 133 |
+
"tables": {},
|
| 134 |
+
"image_paths": {},
|
| 135 |
+
"validation": true,
|
| 136 |
+
"references": [
|
| 137 |
+
{
|
| 138 |
+
"1": {
|
| 139 |
+
"title": "CoW Protocol Overview: The Batch Auction Optimization\nProblem.",
|
| 140 |
+
"author": "[n.\u2009d.].",
|
| 141 |
+
"venue": "https://web.archive.org/web/20220614183101/https://docs.cow.fi/off-chain-services/in-depth-solver-specification/the-batch-auction-optimization-problem.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"2": {
|
| 147 |
+
"title": "The Penumbra Protocol: Sealed-Bid Batch Swaps.",
|
| 148 |
+
"author": "[n.\u2009d.].",
|
| 149 |
+
"venue": "https://web.archive.org/web/20220614034906/https://protocol.penumbra.zone/main/zswap/swap.html.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"3": {
|
| 155 |
+
"title": "Elections with partially ordered preferences.",
|
| 156 |
+
"author": "Michael Ackerman,\nSul-Young Choi, Peter Coughlin,\nEric Gottlieb, and Japheth Wood.\n2013.",
|
| 157 |
+
"venue": "Public Choice 157\n(2013), 145\u2013168.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"4": {
|
| 163 |
+
"title": "A difficulty in the concept of social welfare.",
|
| 164 |
+
"author": "Kenneth J Arrow.\n1950.",
|
| 165 |
+
"venue": "Journal of political economy\n58, 4 (1950),\n328\u2013346.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"5": {
|
| 171 |
+
"title": "Quick order fairness. In\nFinancial Cryptography and Data Security: 26th\nInternational Conference, FC 2022, Grenada, May 2\u20136, 2022, Revised Selected\nPapers. Springer, 316\u2013333.",
|
| 172 |
+
"author": "Christian Cachin, Jovana\nMi\u0107i\u0107, Nathalie Steinhauer, and\nLuca Zanolini. 2022.",
|
| 173 |
+
"venue": "",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"6": {
|
| 179 |
+
"title": "Practical byzantine fault tolerance. In\nOsDI, Vol. 99.\n173\u2013186.",
|
| 180 |
+
"author": "Miguel Castro, Barbara\nLiskov, et al. 1999.",
|
| 181 |
+
"venue": "",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"7": {
|
| 187 |
+
"title": "Social choice with infinite populations:\nconstruction of a rule and impossibility results.",
|
| 188 |
+
"author": "Graciela Chichilnisky and\nGeoffrey Heal. 1997.",
|
| 189 |
+
"venue": "Social Choice and Welfare\n14 (1997), 303\u2013318.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"8": {
|
| 195 |
+
"title": "ClockWork: An Exchange Protocol for Proofs of Non\nFront-Running.",
|
| 196 |
+
"author": "Dan Cline, Thaddeus\nDryja, and Neha Narula.\n[n.\u2009d.].",
|
| 197 |
+
"venue": "([n.\u2009d.]).",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"9": {
|
| 203 |
+
"title": "Ramon Llull: from \u2018Ars electionis\u2019 to social\nchoice theory.",
|
| 204 |
+
"author": "Josep M Colomer.\n2013.",
|
| 205 |
+
"venue": "Social Choice and Welfare\n40, 2 (2013),\n317\u2013328.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"10": {
|
| 211 |
+
"title": "An Essay on the Application of Analysis to the\nProbability of Decisions Rendered by a Plurality of Votes.",
|
| 212 |
+
"author": "Marquis de Condorcet and\nMarquis de Caritat. 1785.",
|
| 213 |
+
"venue": "Classics of social choice\n(1785), 91\u2013112.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"11": {
|
| 219 |
+
"title": "Communication complexity of common voting rules.\nIn Proceedings of the 6th ACM conference on\nElectronic commerce. 78\u201387.",
|
| 220 |
+
"author": "Vincent Conitzer and\nTuomas Sandholm. 2005.",
|
| 221 |
+
"venue": "",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"12": {
|
| 227 |
+
"title": "A Borda count for partially ordered ballots.",
|
| 228 |
+
"author": "John Cullinan, Samuel K\nHsiao, and David Polett.\n2014.",
|
| 229 |
+
"venue": "Social Choice and Welfare\n42 (2014), 913\u2013926.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"13": {
|
| 235 |
+
"title": "Flash boys 2.0: Frontrunning in decentralized\nexchanges, miner extractable value, and consensus instability. In\n2020 IEEE Symposium on Security and Privacy (SP).\nIEEE, 910\u2013927.",
|
| 236 |
+
"author": "Philip Daian, Steven\nGoldfeder, Tyler Kell, Yunqi Li,\nXueyuan Zhao, Iddo Bentov,\nLorenz Breidenbach, and Ari Juels.\n2020.",
|
| 237 |
+
"venue": "",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"14": {
|
| 243 |
+
"title": "Total order broadcast and multicast algorithms:\nTaxonomy and survey.",
|
| 244 |
+
"author": "Xavier D\u00e9fago,\nAndr\u00e9 Schiper, and P\u00e9ter\nUrb\u00e1n. 2004.",
|
| 245 |
+
"venue": "ACM Comput. Surv. 36,\n4 (dec 2004),\n372\u2013421.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"15": {
|
| 251 |
+
"title": "A topological approach to social choice with\ninfinite populations.",
|
| 252 |
+
"author": "Boris A Efimov and\nGleb A Koshevoy. 1994.",
|
| 253 |
+
"venue": "Mathematical Social Sciences\n27, 2 (1994),\n145\u2013157.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"16": {
|
| 259 |
+
"title": "Random dictators with a random referee: Constant\nsample complexity mechanisms for social choice. In\nProceedings of the AAAI Conference on Artificial\nIntelligence, Vol. 33. 1893\u20131900.",
|
| 260 |
+
"author": "Brandon Fain, Ashish\nGoel, Kamesh Munagala, and Nina\nPrabhu. 2019.",
|
| 261 |
+
"venue": "",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"17": {
|
| 267 |
+
"title": "Credible Decentralized Exchange Design via\nVerifiable Sequencing Rules.",
|
| 268 |
+
"author": "Matheus VX Ferreira and\nDavid C Parkes. 2022.",
|
| 269 |
+
"venue": "arXiv preprint arXiv:2209.15569\n(2022).",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"18": {
|
| 275 |
+
"title": "Arrow\u2019s impossibility theorem: concise proof and\ninfinite voters.",
|
| 276 |
+
"author": "Peter C Fishburn.\n1970.",
|
| 277 |
+
"venue": "Journal of Economic Theory\n2, 1 (1970),\n103\u2013106.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"19": {
|
| 283 |
+
"title": "The bitcoin backbone protocol: Analysis and\napplications. In Advances in Cryptology-EUROCRYPT\n2015: 34th Annual International Conference on the Theory and Applications of\nCryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings,\nPart II. Springer, 281\u2013310.",
|
| 284 |
+
"author": "Juan Garay, Aggelos\nKiayias, and Nikos Leonardos.\n2015.",
|
| 285 |
+
"venue": "",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"20": {
|
| 291 |
+
"title": "On Arrow-type impossibility theorems with infinite\nindividuals and infinite alternatives.",
|
| 292 |
+
"author": "F Grafe and J Grafe.\n1983.",
|
| 293 |
+
"venue": "Economics Letters 11,\n1-2 (1983), 75\u201379.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"21": {
|
| 299 |
+
"title": "Lulls\u2019 writings on electoral sytems.",
|
| 300 |
+
"author": "G\u00fcnter H\u00e4gele and\nFriedrich Pukelsheim. 2001.",
|
| 301 |
+
"venue": "(2001).",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"22": {
|
| 307 |
+
"title": "BlindPerm: Efficient MEV Mitigation with an\nEncrypted Mempool and Permutation.",
|
| 308 |
+
"author": "Alireza Kavousi, Duc V\nLe, Philipp Jovanovic, and George\nDanezis. 2023.",
|
| 309 |
+
"venue": "Cryptology ePrint Archive\n(2023).",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"23": {
|
| 315 |
+
"title": "Order-fair consensus in the permissionless\nsetting. In Proceedings of the 9th ACM on ASIA\nPublic-Key Cryptography Workshop. 3\u201314.",
|
| 316 |
+
"author": "Mahimna Kelkar, Soubhik\nDeb, and Sreeram Kannan.\n2022.",
|
| 317 |
+
"venue": "",
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"24": {
|
| 323 |
+
"title": "Themis: Fast, strong order-fairness in byzantine\nconsensus. In Proceedings of the 2023 ACM SIGSAC\nConference on Computer and Communications Security.\n475\u2013489.",
|
| 324 |
+
"author": "Mahimna Kelkar, Soubhik\nDeb, Sishan Long, Ari Juels, and\nSreeram Kannan. 2023.",
|
| 325 |
+
"venue": "",
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"25": {
|
| 331 |
+
"title": "Order-fairness for byzantine consensus. In\nAdvances in Cryptology\u2013CRYPTO 2020: 40th Annual\nInternational Cryptology Conference, CRYPTO 2020, Santa Barbara, CA, USA,\nAugust 17\u201321, 2020, Proceedings, Part III 40. Springer,\n451\u2013480.",
|
| 332 |
+
"author": "Mahimna Kelkar, Fan\nZhang, Steven Goldfeder, and Ari\nJuels. 2020.",
|
| 333 |
+
"venue": "",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"26": {
|
| 339 |
+
"title": "Ordering transactions with bounded unfairness:\ndefinitions, complexity and constructions. In\nAnnual International Conference on the Theory and\nApplications of Cryptographic Techniques. Springer,\n34\u201363.",
|
| 340 |
+
"author": "Aggelos Kiayias, Nikos\nLeonardos, and Yu Shen.\n2024.",
|
| 341 |
+
"venue": "",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"27": {
|
| 347 |
+
"title": "Arrow\u2019s theorem, many agents, and invisible\ndictators.",
|
| 348 |
+
"author": "Alan P Kirman and Dieter\nSondermann. 1972.",
|
| 349 |
+
"venue": "Journal of Economic Theory\n5, 2 (1972),\n267\u2013277.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"28": {
|
| 355 |
+
"title": "Paxos made simple.",
|
| 356 |
+
"author": "Leslie Lamport.\n2001.",
|
| 357 |
+
"venue": "ACM SIGACT News (Distributed Computing\nColumn) 32, 4 (Whole Number 121, December 2001) (2001),\n51\u201358.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"29": {
|
| 363 |
+
"title": "Transaction Fairness in Blockchains, Revisited.",
|
| 364 |
+
"author": "Rujia Li, Xuanwei Hu,\nQin Wang, Sisi Duan, and\nQi Wang. 2023a.",
|
| 365 |
+
"venue": "Cryptology ePrint Archive\n(2023).",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"30": {
|
| 371 |
+
"title": "MEV Makes Everyone Happy under Greedy Sequencing\nRule. In Proceedings of the 2023 Workshop on\nDecentralized Finance and Security. 9\u201315.",
|
| 372 |
+
"author": "Yuhao Li, Mengqian Zhang,\nJichen Li, Elynn Chen,\nXi Chen, and Xiaotie Deng.\n2023b.",
|
| 373 |
+
"venue": "",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"31": {
|
| 379 |
+
"title": "Ars notatoria.",
|
| 380 |
+
"author": "Ramon Llull and Jordi\nGaya. 1978.",
|
| 381 |
+
"venue": "Citema.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"32": {
|
| 387 |
+
"title": "Fast and Secure Global Payments with Stellar. In\nProceedings of the 27th ACM Symposium on Operating\nSystems Principles (Huntsville, Ontario, Canada)\n(SOSP \u201919). Association for\nComputing Machinery, New York, NY, USA,\n80\u201396.",
|
| 388 |
+
"author": "Marta Lokhava, Giuliano\nLosa, David Mazi\u00e8res, Graydon Hoare,\nNicolas Barry, Eli Gafni,\nJonathan Jove, Rafa\u0142 Malinowsky, and\nJed McCaleb. 2019.",
|
| 389 |
+
"venue": "https://doi.org/10.1145/3341301.3359636",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"33": {
|
| 395 |
+
"title": "Multi-winner social choice with incomplete\npreferences. In Twenty-Third International Joint\nConference on Artificial Intelligence.",
|
| 396 |
+
"author": "Tyler Lu and Craig\nBoutilier. 2013.",
|
| 397 |
+
"venue": "",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"34": {
|
| 403 |
+
"title": "Maximal Extractable Value (MEV) Protection on a\nDAG.",
|
| 404 |
+
"author": "Dahlia Malkhi and Pawel\nSzalachowski. 2022.",
|
| 405 |
+
"venue": "arXiv preprint arXiv:2208.00940\n(2022).",
|
| 406 |
+
"url": null
|
| 407 |
+
}
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"35": {
|
| 411 |
+
"title": "Buying Time: Latency Racing vs. Bidding for\nTransaction Ordering. In 5th Conference on\nAdvances in Financial Technologies (AFT 2023). Schloss-Dagstuhl-Leibniz\nZentrum f\u00fcr Informatik.",
|
| 412 |
+
"author": "Akaki Mamageishvili,\nMahimna Kelkar, Jan Christoph Schlegel,\nand Edward W Felten. 2023.",
|
| 413 |
+
"venue": "",
|
| 414 |
+
"url": null
|
| 415 |
+
}
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"36": {
|
| 419 |
+
"title": "The honey badger of BFT protocols. In\nProceedings of the 2016 ACM SIGSAC conference on\ncomputer and communications security. 31\u201342.",
|
| 420 |
+
"author": "Andrew Miller, Yu Xia,\nKyle Croman, Elaine Shi, and\nDawn Song. 2016.",
|
| 421 |
+
"venue": "",
|
| 422 |
+
"url": null
|
| 423 |
+
}
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"37": {
|
| 427 |
+
"title": "Viewstamped replication: A new primary copy method\nto support highly-available distributed systems. In\nProceedings of the seventh annual ACM Symposium on\nPrinciples of distributed computing. 8\u201317.",
|
| 428 |
+
"author": "Brian M Oki and\nBarbara H Liskov. 1988.",
|
| 429 |
+
"venue": "",
|
| 430 |
+
"url": null
|
| 431 |
+
}
|
| 432 |
+
},
|
| 433 |
+
{
|
| 434 |
+
"38": {
|
| 435 |
+
"title": "In search of an understandable consensus\nalgorithm.",
|
| 436 |
+
"author": "Diego Ongaro and John\nOusterhout. 2014.",
|
| 437 |
+
"venue": "(2014), 305\u2013319.",
|
| 438 |
+
"url": null
|
| 439 |
+
}
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"39": {
|
| 443 |
+
"title": "SPEEDEX: A Scalable, Parallelizable, and\nEconomically Efficient Decentralized EXchange. In\n20th USENIX Symposium on Networked Systems Design\nand Implementation (NSDI 23). 849\u2013875.",
|
| 444 |
+
"author": "Geoffrey Ramseyer, Ashish\nGoel, and David Mazi\u00e8res.\n2023.",
|
| 445 |
+
"venue": "",
|
| 446 |
+
"url": null
|
| 447 |
+
}
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"40": {
|
| 451 |
+
"title": "Independence of clones as a criterion for voting\nrules.",
|
| 452 |
+
"author": "T Nicolaus Tideman.\n1987.",
|
| 453 |
+
"venue": "Social Choice and Welfare\n4, 3 (1987),\n185\u2013206.",
|
| 454 |
+
"url": null
|
| 455 |
+
}
|
| 456 |
+
},
|
| 457 |
+
{
|
| 458 |
+
"41": {
|
| 459 |
+
"title": "Condorcet Attack Against Fair Transaction\nOrdering. In 5th Conference on Advances in\nFinancial Technologies.",
|
| 460 |
+
"author": "Mohammad Amin Vafadar and\nMajid Khabbazian. 2023.",
|
| 461 |
+
"venue": "",
|
| 462 |
+
"url": null
|
| 463 |
+
}
|
| 464 |
+
},
|
| 465 |
+
{
|
| 466 |
+
"42": {
|
| 467 |
+
"title": "HotStuff: BFT Consensus with Linearity and\nResponsiveness. In Proceedings of the 2019 ACM\nSymposium on Principles of Distributed Computing (Toronto ON, Canada)\n(PODC \u201919). Association for\nComputing Machinery, New York, NY, USA,\n347\u2013356.",
|
| 468 |
+
"author": "Maofan Yin, Dahlia\nMalkhi, Michael K. Reiter, Guy Golan\nGueta, and Ittai Abraham.\n2019.",
|
| 469 |
+
"venue": "https://doi.org/10.1145/3293611.3331591",
|
| 470 |
+
"url": null
|
| 471 |
+
}
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"43": {
|
| 475 |
+
"title": "Flash freezing flash boys: Countering blockchain\nfront-running. In 2022 IEEE 42nd International\nConference on Distributed Computing Systems Workshops (ICDCSW). IEEE,\n90\u201395.",
|
| 476 |
+
"author": "Haoqian Zhang, Louis-Henri\nMerino, Vero Estrada-Galinanes, and\nBryan Ford. 2022.",
|
| 477 |
+
"venue": "",
|
| 478 |
+
"url": null
|
| 479 |
+
}
|
| 480 |
+
},
|
| 481 |
+
{
|
| 482 |
+
"44": {
|
| 483 |
+
"title": "Computation of Optimal MEV in Decentralized\nExchanges.",
|
| 484 |
+
"author": "Mengqian Zhang, Yuhao Li,\nXinyuan Sun, Elynn Chen, and\nXi Chen. [n.\u2009d.].",
|
| 485 |
+
"venue": "([n.\u2009d.]).",
|
| 486 |
+
"url": null
|
| 487 |
+
}
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"45": {
|
| 491 |
+
"title": "Byzantine Ordered Consensus without Byzantine\nOligarchy. In 14th USENIX Symposium on Operating\nSystems Design and Implementation (OSDI 20). 633\u2013649.",
|
| 492 |
+
"author": "Yunhao Zhang, Srinath\nSetty, Qi Chen, Lidong Zhou, and\nLorenzo Alvisi. 2020.",
|
| 493 |
+
"venue": "",
|
| 494 |
+
"url": null
|
| 495 |
+
}
|
| 496 |
+
}
|
| 497 |
+
],
|
| 498 |
+
"url": "http://arxiv.org/html/2304.02730v4"
|
| 499 |
+
}
|
20241001/2305.06888v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2305.13214v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2307.07635v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2307.15586v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2308.03547v2.json
ADDED
|
@@ -0,0 +1,166 @@
| 1 |
+
{
|
| 2 |
+
"title": "Near-Optimal Pilot Assignment in Cell-Free Massive MIMO",
|
| 3 |
+
"abstract": "Cell-free massive MIMO systems are currently being considered as potential\nenablers of future (6G) technologies for wireless communications. By combining\ndistributed processing and massive MIMO, they are expected to deliver improved\nuser coverage and efficiency. A possible source of performance degradation in\nsuch systems is pilot contamination, which contributes to causing interference\nduring uplink training and affects channel estimation negatively. Contamination\noccurs when the same pilot sequence is assigned to more than one user. This is\nin general inevitable, as the number of mutually orthogonal pilot sequences\ncorresponds to only a fraction of the coherence interval. We introduce a new\nalgorithm for pilot assignment and analyze its performance both from a\ntheoretical perspective and in computational experiments. We show that it has an\napproximation ratio close to 1 for a plausibly large number of orthogonal pilot\nsequences, as well as low computational complexity under massive parallelism. We\nalso show that, on average, it outperforms other methods in terms of per-user\nSINR and throughput on the uplink.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "We consider a cell-free massive MIMO system as described in [1 ###reference_b1###],\nwhich is characterized by a large number of single-antenna, geographically\ndistributed APs simultaneously serving autonomous users via a TDD\nscheme. Each coherence interval, assumed to be of duration \n(samples), is divided into a phase for uplink training and two others for\ndownlink and uplink data transmission. Training refers to the sending by each\nuser to all APs of a -sample pilot sequence (a pilot), with\n, used by each AP to estimate the channel for\nsubsequent downlink and uplink data transmission for that user. The APs are\ncapable of computationally efficient signal processing, and are moreover\nconnected to a CPU by a fronthaul network. Two tasks the CPU handles are pilot\nassignment and power allocation.\nOur goal is to contribute to the development of algorithms for pilot assignment.\nBefore we continue, however, it is important to note that the development of\ncell-free massive MIMO has continued to evolve since the publication of\n[1 ###reference_b1###], aiming to both enlarge the physical capabilities of the system\n(e.g., by providing each AP with multiple antennas and expanding the system\u2019s\ncomputational capacity) and to more realistically address some performance\nbottlenecks and other difficulties that were not contemplated at the time. These\nhave included synchronization issues related to TDD [2 ###reference_b2###], reciprocity\ncalibration to make possible the intended use of the same channel for both\ndownlink and uplink traffic [3 ###reference_b3###], and scalability [4 ###reference_b4###].\nIn this letter, we assume that all available pilots are orthogonal to one\nanother. Thus, given the number of samples in a pilot,\nthe number of pilots is . Assigning pilots to users can be\ncomplicated if , since in this case at least two users must be assigned the\nsame pilot. 
This gives rise to so-called pilot contamination, whose consequence\nis a reduced data rate for the users involved. In [1 ###reference_b1###], the channel\nbetween AP and user is modeled as\nwhere is the large-scale fading and is the small-scale\nfading. The \u2019s are assumed to remain constant during each coherence\ninterval and the \u2019s to be i.i.d. random variables.\nEstimating channel during uplink training causes a pilot-contamination\neffect on proportional to , where\n is the set of users assigned the same pilot as user (itself included).\nThe variance of this quantity relative to the \u2019s, after totaled over all\nAPs, is given by\nVariance is therefore fundamentally tied to the issue of pilot\ncontamination, so minimizing it during pilot assignment plays a central role in\nattenuating the deleterious effects of pilot scarcity on . Globally, the\nproblem to be solved can be formulated as finding a partition of the set of\nusers into subsets, aiming to assign the same pilot to all users in the same\nsubset. The goal is to find a partition that\nminimizes , where\nThis is an NP-hard optimization problem, but here we demonstrate that it can be\ntackled by a greedy algorithm so that the optimum is approximated to within a\nratio that improves as the number of pilots increases.\nWe proceed as follows. In Section 2 ###reference_###, we briefly review the relevant\nstate of the art and relate our contribution to it. We then recap a few system\nmodel details in Section 3 ###reference_###, where we continue to follow [1 ###reference_b1###]\nclosely. In Section 4 ###reference_###, we recast the problem of finding to\nminimize in graph-theoretic terms, and\ndescribe and analyze our near-optimal algorithm to solve it. Computational\nresults are given in Section 5 ###reference_### and we conclude in Section 6 ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "State of the art and contribution",
|
| 15 |
+
"text": "Two baseline approaches to pilot assignment are RANDOM, which assigns a pilot\nchosen uniformly at random to each user, and GREEDY [1 ###reference_b1###], which begins\nas RANDOM and then repeatedly identifies the user for which a performance\nmeasure of choice is worst and replaces its pilot so that variance is\nminimized. The latter goes on while the selected user\u2019s pilot does indeed\nchange. More elaborate approaches from recent years include Improved BASIC\n(IBASIC) [5 ###reference_b5###] and some that use graph theory-based techniques\n[6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. IBASIC first sorts the users in descending order\nof and assigns pilots to the first users as in\nRANDOM. It then goes through the succeeding users, in order, each of which\ngets assigned the pilot that currently gets closest to minimizing\n while respecting a preestablished\nmaximum number of users per pilot. AP is the one for which\n is greatest.\nThe approaches in [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###] aim to pose the problem of pilot\nassignment in terms of an undirected graph whose vertices are the users. All\nthree aim to obtain a -set partition of the vertex set, but the ones in\n[6 ###reference_b6###, 7 ###reference_b7###], based respectively on vertex coloring (COLORING) and on\nfinding a maximum-weight matching on a bipartite graph (MATCHING), take\ncircuitous routes to their goals and seem oblivious to the precise definition of\npartition given in Section 1 ###reference_###. COLORING operates by\nrepeatedly adjusting the graph\u2019s density, based on the \u2019s, until it\nbecomes -colorable according to a heuristic. MATCHING, in turn, iterates\nuntil either a performance criterion is met or a preestablished maximum number\nof iterations is reached. 
In each iteration, the \u2019s are used to\ncreate a bipartite graph in which of the users are assigned pilots and the\nremaining are to share pilots with them based on the resulting\nmaximum-weight matching.\nIn our view, COLORING and MATCHING are both based on a failure to realize that\nthe most direct route to finding partition is to also consider the\nperspective that is dual to the minimization involved in the partition\u2019s\ndefinition. Such dual perspective is that of maximization: to find partition\n, look for a maximum-weight -cut of an edge-weighted complete\ngraph on vertices whose weights depend on the \u2019s. A -cut is\nsimply the set of all edges connecting vertices from different sets of\n. The approach in [8 ###reference_b8###], known as Weighted Graphic\nFramework (WGF), is on the other hand firmly in line with this idea but uses\nedge weights that are essentially unjustified. WGF uses the maximum-weight\n-cut algorithm from [9 ###reference_b9###] directly: it starts by initializing the \nsets of the partition by adding an arbitrarily selected user to each of them; it\nthen considers each of the remaining users (say ), computes for each\nset the total internal edge weight it will have if is added to it, and\nfinally adds to the set whose weight will be minimum. The total weight of\nthe -cut output by this algorithm accounts for a fraction of the optimal\ntotal weight of at least [9 ###reference_b9###], so WGF has an approximation\nratio that approaches as increases.\nIn this letter, we pick up where WGF left off and contribute a new algorithm to\nassign pilots to users. Like WGF, this algorithm looks for a maximum-weight\n-cut on an edge-weighted complete graph on the users. Unlike WGF, though,\nedge weights stay true to the principle of reflecting the variance of the\ninterference caused by pilot contamination during uplink training, as discussed\nin Section 1 ###reference_###. 
We call the new algorithm Greedy Edge Contraction (GEC)\nand prove that it too has an approximation-ratio lower bound that approaches \nas increases, now given by . In this sense, both WGF and GEC\nare near-optimal, with WGF more so, though only slightly for relatively large\n, since . This difference notwithstanding, our results\nin Section 5 ###reference_### show that GEC performs better than other methods,\nincluding WGF. As we will see, this is only partly due to the poorly defined\nedge weights that WGF uses."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "System model essentials",
|
| 21 |
+
"text": "We assume APs and users to be placed in a square region, at\ncoordinates for an AP or a user. We also assume that this region\nwraps itself around the boundaries on both dimensions. For AP and user ,\nletting and likewise\n implies that the distance \nbetween them is such that\nThis has become customary in the field (see, e.g.,\n[1 ###reference_b1###, 4 ###reference_b4###, 6 ###reference_b6###, 7 ###reference_b7###, 5 ###reference_b5###]) and aims to help attenuate the\ninevitable boundary effects that come with a finite connected region. The idea\nis to generalize the Euclidean distance formula on the plane,\n, by allowing\neach of the two squared displacements on the right-hand side to be replaced by\nits over-the-boundary version, or\n as the case may be, whenever that leads to a\nsmaller . A simple example where wrapping is used only along the\nabscissas is given in Figure 1 ###reference_###.\n###figure_1### For (m) the reference distances, (MHz) the carrier frequency, and\n, (m) the antenna heights, the path loss\n (dB) corresponding to follows the same three-slope\nmodel as [1 ###reference_b1###], given by\nwhere\n, , and .\nThe resulting large-scale fading is\nwhere (dB) is the shadow-fading standard deviation and\n is an random variable. We assume that the \u2019s\nare uncorrelated with one another and that the \u2019s are available\nwhenever needed.\nAs in [1 ###reference_b1###], we assume that each AP calculates MMSE estimates of the\nchannels between itself and the users from a combination of all users\u2019 pilots\nsent to it during training. The estimated channel between AP and user \nhas expected gain\nUsing the notations\nthe resulting SINR on the uplink is given by\nIn the expressions for and , and\n are the normalized uplink SNR for training and for data\ntransmission, respectively. The resulting throughput for user is\nwhere (Hz) is the channel bandwidth. 
In this equation, the factor\n serves first to deduct the fraction of\nthe coherence interval that is used for pilot transmission, then to further\ndeduct half of what remains, which we assume is reserved for data transmission\non the downlink.\nEq. (13 ###reference_###) is central to the comparative computational study we carry out\nin Section 5 ###reference_###, so the \u2019s appearing in it, which work as power\ncontrol coefficients, must be determined for each new assignment of pilots to\nusers. As customary, in order to ensure fairness toward all users we express\npower allocation as the max-min problem, on variables and\n, given by\nThis is a quasilinear problem, so we do bisection on variable to solve it,\ntackling only the linear feasibility program given by Eqs. (16 ###reference_###)\nand (17 ###reference_###) for each fixed value of . The resulting\n is necessarily the same for every user . Thus,\nwhenever referring to these SINR values or the corresponding throughputs, we\nhenceforth use simply and , respectively."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "The GEC algorithm",
|
| 27 |
+
"text": "Like WGF, GEC does pilot assignment to users by solving the MAX -CUT problem\non an edge-weighted complete graph, now denoted by , having a vertex set\nthat corresponds to the set of users. MAX -CUT asks that the vertex set of\n be partitioned into sets in such a way that the sum of the weights of\nall inter-set edges is maximized, or equivalently the sum over all intra-set\nedges is minimized.\nThe idea is for each of these sets to correspond to a set of users to which\nthe same pilot is assigned. It is therefore crucial that weights be selected in\na way that relates directly and clearly to the potential for pilot contamination\nbetween the users in question. In line with our reasoning in Section 1 ###reference_###,\nwe quantify some user \u2019s contribution to the pilot-contamination effect on\neach of the users it shares the pilot with as , henceforth defined as\nThus, the weight of the edge interconnecting vertices and in ,\ndenoted by , is\nassuming that vertex corresponds to user and vertex to user (or\n to , to ).\nMAX 2-CUT is one of the classic NP-hard problems, so the trivially more general\nMAX -CUT is NP-hard as well. We approach its solution by employing the\ngeneralization given in [10 ###reference_b10###] of their own MAX 2-CUT algorithm. The\nresulting GEC runs for iterations, each consisting in the contraction of\nan edge, say , thus joining vertices and into a single\nnew vertex, say , and moreover connecting to every vertex\npreviously connected to or .\nThese iterations result in a sequence of graphs that, like the initial ,\nare also edge-weighted complete graphs. Unlike , however, vertices in these\ngraphs are no longer necessarily identified with single users, but generally\nwith non-singleton sets of users as well. 
The last graph in the sequence,\ndenoted by , has vertices, one for each pilot.\nThe general formula for the weight between vertices and , valid\nfor all graphs in the sequence, is\nwhere is the set of users to which vertex corresponds and is its\nsize. This expression generalizes the one in Eq. (19 ###reference_###), which\nrefers to an edge in with and (or vice versa). In\norder for the formula in Eq. (21 ###reference_###) to remain valid as vertices\n and are joined to form vertex , it suffices that each edge\n such that be given weight\n, that is, the sum of the weights of the two edges\nthat used to connect to and before the contraction of edge\n. Note also that summing up the weights of all \u2019s intra-set\nedges yields\nwhich as expected is simply a rewrite of Eq. (4 ###reference_###). The sum of this\nquantity over all vertices (every ) is what is targeted for minimization as\nthe solution to MAX -CUT is approximated by GEC. The heart of GEC at each\niteration is therefore to select for contraction the edge of least weight. GEC\nis summarized as the pseudocode in Algorithm 1 ###reference_###.\nInput: , edge weights as in Eq. (19 ###reference_###) \nOutput:\nAn extension of the analysis in [10 ###reference_b10###] reveals that\nwhere is the total weight of the edges of (i.e., the\ntotal weight of the obtained -cut of ) and is its\noptimal value. To see that this holds, let be the total weight of the\nedges of and then use Lemma 1 from [10 ###reference_b10###], which is valid for\nMAX -CUT as much as it is for MAX 2-CUT. It states that\nwhere is the total weight of the edges contracted during\nthe iterations. Using Eq. (24 ###reference_###) and the fact that\n, we obtain\nThis means that GEC, similarly to WGF (see Section 2 ###reference_###), is capable of\napproximating the optimal -cut of so long as the number of pilots\nis sufficiently large. For example, with we get\n for GEC and\n for WGF. 
This might seem to put WGF at\nan advantage over GEC, perhaps one counterbalanced by GEC\u2019s edge weights being\nwell-founded while those of WGF are not. What we have observed is more nuanced\nthan this, though, as we discuss in Section 6 ###reference_###.\nAs for GEC\u2019s computational complexity, note that its costliest step is the one\nin line 4 ###reference_4###, which requires time, followed by the loop in\nline 6 ###reference_6###, line 9 ###reference_9###, and the loop in line 11 ###reference_11###, each running\nin time. Considering that the loop in line 3 ###reference_3### repeats \ntimes, the overall time required by GEC on a sequential device is .\nHowever, so long as ASICs can be designed to provide the necessary massive\nparallelism, the time requirement of line 4 ###reference_4### can be lowered to\n (see, e.g., [11 ###reference_b11###] and references therein). Likewise, the\nloop in line 6 ###reference_6###, as well as line 9 ###reference_9### and the loop in\nline 11 ###reference_11###, can much more easily be sped up to run in time. The\noverall time required by GEC can therefore be reduced to . This\nremains unaltered if we add the time for calculating the \u2019s, whenever\nthe \u2019s change, prior to running GEC. Once again assuming the\nnecessary massive parallelism, this can be achieved in time, which\ngets reduced to for with a constant. Since by assumption\nwe have , for consistency we require only that (we use for\nour computational results). WGF runs faster on a sequential computer, requiring\n time, but assuming massive parallelism reduces this to the same\n time as in the case of GEC. This is owed to the fact that both the\nfactor for GEC to obtain a minimum, and for WGF, get reduced to\n."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Computational results",
|
| 33 |
+
"text": "We use the parameter values given in Table 1 ###reference_###, where the value of\n is for the channel bandwidth in the table,\na transmit power of W, a temperature of K, and a noise figure of\n dB. Each value of is compatible either with mobile users\nat highway speeds (; see Table 2.1 in [12 ###reference_b12###]) or\nwith users at urban-road speeds (, extending that\nsame table for a speed of at most m/s). We use and \nthroughout.\nFor each value of , every result we report is an average over \nrandom trials, each beginning with the independent sampling of coordinates for\nall APs and all users, and of values for all \u2019s. The resulting\ninstance of the pilot-assignment problem is then submitted to GEC and five other\nalgorithms: an Improved WGF (IWGF) that uses the edge weights in\nEq. (19 ###reference_###), the original WGF, IBASIC with\n,111, as in [5 ###reference_b5###],\nunless , in which case .\nGREEDY,222The performance measure used by GREEDY (see\nSection 2 ###reference_###) is based on Eq. (13 ###reference_###), so during pilot assignment\nwith GREEDY we use for every user [1 ###reference_b1###]. and RANDOM.\nOur results are given in Figures 2 ###reference_### and 3 ###reference_###, respectively\nfor and as functions of . We omit confidence\nintervals from the figures but inform their bounds in the figures\u2019 captions.\n###figure_2### ###figure_3### All plots suggest the superiority of GEC beginning at , followed\nby IWGF, then variously by IBASIC, GREEDY, or WGF, though GREEDY is outperformed\nby IBASIC and WGF beginning at . Excluding GREEDY and RANDOM, all\nmethods perform equally for , indicating that they correctly avoid pilot\ncontamination altogether whenever possible. In the case of GEC, this is easily\nseen by noting that the loop in line 3 ###reference_3### of Algorithm 1 ###reference_### is\nnever entered if . In conformity with Eq. 
(14 ###reference_###), throughput is\nseen to increase with for fixed , but for fixed\n decreases after peaking as continues to grow. These trends\ncan also be seen as affecting the channel\u2019s spectral efficiency on the uplink,\nwhich is given by (Mbps/Hz)."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "6",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Conclusion",
|
| 39 |
+
"text": "We attribute the superiority of both GEC and IWGF to their formulation as a MAX\n-CUT problem with edge weights that, unlike those used by the original WGF,\nreflect the fundamental quantity underlying the rise of pilot contamination when\na pilot is assigned to more than one user. This much is a consequence of our\ndiscussion in Section 1 ###reference_### regarding the ultimate centrality of\nEq. (2 ###reference_###) in the choice of edge weights. One might also have expected IWGF\nto perform better than GEC, given that the former\u2019s approximation ratio\nsurpasses the latter\u2019s. Our results in Section 5 ###reference_### show quite the\nopposite and this should work as a reminder of what such ratios really mean.\nThey are lower bounds on how close to an optimal result the heuristic in\nquestion can get, but in general only experimentation can clarify how those\nlower bounds get surpassed in each case. For the experiments at hand, clearly\nGEC was able to surpass its ratio\u2019s lower bound enough to perform better than\nIWGF on average. Thus, given that the two algorithms have the same computational\ncomplexity under massive parallelism, GEC is in the end the better choice."
|
| 40 |
+
}
|
| 41 |
+
],
|
| 42 |
+
"appendix": [],
|
| 43 |
+
"tables": {
|
| 44 |
+
"1": {
|
| 45 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>System model parameters.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S5.T1.1.1.1\">\n\u00a0m</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2\">\n\u00a0m</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.3.3.3\">\n\u00a0m</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.4.4.1\">\n\u00a0MHz</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.5.5.2\">\n\u00a0m</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.6.6.3\">\n\u00a0m</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.T1.7.7.1\">\n\u00a0dB</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S5.T1.10.10.1\">\n\u00a0Hz</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.12.12.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 46 |
+
"capture": "Table 1: System model parameters."
|
| 47 |
+
}
|
| 48 |
+
},
|
| 49 |
+
"image_paths": {
|
| 50 |
+
"1": {
|
| 51 |
+
"figure_path": "2308.03547v2_figure_1.png",
|
| 52 |
+
"caption": "Figure 1: Attaching a \u201cphantom\u201d copy of the original D\u00d7D\ud835\udc37\ud835\udc37D\\times Ditalic_D \u00d7 italic_D region to its\nright-hand boundary allows the two choices of displacement along the abscissas\nto be visualized. In this example, clearly\nD\u2212\u0394m\u2062ka<\u0394m\u2062ka\ud835\udc37superscriptsubscript\u0394\ud835\udc5a\ud835\udc58asuperscriptsubscript\u0394\ud835\udc5a\ud835\udc58aD-\\Delta_{mk}^{\\mathrm{a}}<\\Delta_{mk}^{\\mathrm{a}}italic_D - roman_\u0394 start_POSTSUBSCRIPT italic_m italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_a end_POSTSUPERSCRIPT < roman_\u0394 start_POSTSUBSCRIPT italic_m italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_a end_POSTSUPERSCRIPT, so D\u2212\u0394m\u2062ka\ud835\udc37superscriptsubscript\u0394\ud835\udc5a\ud835\udc58aD-\\Delta_{mk}^{\\mathrm{a}}italic_D - roman_\u0394 start_POSTSUBSCRIPT italic_m italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_a end_POSTSUPERSCRIPT\nshould be preferred, yielding dm\u2062ksubscript\ud835\udc51\ud835\udc5a\ud835\udc58d_{mk}italic_d start_POSTSUBSCRIPT italic_m italic_k end_POSTSUBSCRIPT as indicated.",
|
| 53 |
+
"url": "http://arxiv.org/html/2308.03547v2/extracted/5890157/wrap.png"
|
| 54 |
+
},
|
| 55 |
+
"2": {
|
| 56 |
+
"figure_path": "2308.03547v2_figure_2.png",
|
| 57 |
+
"caption": "Figure 2: SINRusuperscriptSINRu\\mathrm{SINR}^{\\mathrm{u}}roman_SINR start_POSTSUPERSCRIPT roman_u end_POSTSUPERSCRIPT vs. number of pilots P\ud835\udc43Pitalic_P.\nConfidence-interval bounds at the 95%percent9595\\%95 % level are about \u00b10.4%plus-or-minuspercent0.4\\pm 0.4\\%\u00b1 0.4 % of the\naverage and occur for WGF at P=10\ud835\udc4310P=10italic_P = 10.",
|
| 58 |
+
"url": "http://arxiv.org/html/2308.03547v2/extracted/5890157/sinr.png"
|
| 59 |
+
},
|
| 60 |
+
"3": {
|
| 61 |
+
"figure_path": "2308.03547v2_figure_3.png",
|
| 62 |
+
"caption": "Figure 3: Throughput Rusuperscript\ud835\udc45uR^{\\mathrm{u}}italic_R start_POSTSUPERSCRIPT roman_u end_POSTSUPERSCRIPT vs. number of pilots P\ud835\udc43Pitalic_P.\nConfidence-interval bounds at the 95%percent9595\\%95 % level are about \u00b10.3%plus-or-minuspercent0.3\\pm 0.3\\%\u00b1 0.3 % of the\naverage and occur for WGF at P=10\ud835\udc4310P=10italic_P = 10. This percentage varies with\n\u03c4csubscript\ud835\udf0fc\\tau_{\\mathrm{c}}italic_\u03c4 start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT in the order of 10\u221211superscript101110^{-11}10 start_POSTSUPERSCRIPT - 11 end_POSTSUPERSCRIPT.",
|
| 63 |
+
"url": "http://arxiv.org/html/2308.03547v2/extracted/5890157/rate-ver.png"
|
| 64 |
+
}
|
| 65 |
+
},
|
| 66 |
+
"validation": true,
|
| 67 |
+
"references": [
|
| 68 |
+
{
|
| 69 |
+
"1": {
|
| 70 |
+
"title": "Cell-free massive MIMO versus small cells.",
|
| 71 |
+
"author": "H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta.",
|
| 72 |
+
"venue": "IEEE Trans. Wireless Commun., 16:1834\u20131850, 2017.",
|
| 73 |
+
"url": null
|
| 74 |
+
}
|
| 75 |
+
},
|
| 76 |
+
{
|
| 77 |
+
"2": {
|
| 78 |
+
"title": "BeamSync: Over-the-air carrier synchronization in distributed\nRadioWeaves.",
|
| 79 |
+
"author": "U. K. Ganesan, R. Sarvendranath, and E. G. Larsson.",
|
| 80 |
+
"venue": "In Proc. 25th WSA, pages 379\u2013384, 2021.",
|
| 81 |
+
"url": null
|
| 82 |
+
}
|
| 83 |
+
},
|
| 84 |
+
{
|
| 85 |
+
"3": {
|
| 86 |
+
"title": "A gradual method for channel non-reciprocity calibration in cell-free\nmassive MIMO.",
|
| 87 |
+
"author": "N.-I Kim, C. W. Yu, S.-E. Hong, J.-H. Na, and B. C. Chung.",
|
| 88 |
+
"venue": "IEEE Commun. Lett., 26:2779\u20132783, 2022.",
|
| 89 |
+
"url": null
|
| 90 |
+
}
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"4": {
|
| 94 |
+
"title": "Scalable cell-free massive MIMO systems.",
|
| 95 |
+
"author": "E. Bj\u00f6rnson and L. Sanguinetti.",
|
| 96 |
+
"venue": "IEEE Trans. Commun., 68:4247\u20134261, 2020.",
|
| 97 |
+
"url": null
|
| 98 |
+
}
|
| 99 |
+
},
|
| 100 |
+
{
|
| 101 |
+
"5": {
|
| 102 |
+
"title": "Weight-counting based greedy pilot allocation in cell-free massive\nMIMO.",
|
| 103 |
+
"author": "M. Qu, W. Zhao, and M. Jin.",
|
| 104 |
+
"venue": "In Proc. 13th ICTC, pages 1261\u20131266, 2022.",
|
| 105 |
+
"url": null
|
| 106 |
+
}
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"6": {
|
| 110 |
+
"title": "Graph coloring based pilot assignment for cell-free massive MIMO\nsystems.",
|
| 111 |
+
"author": "H. Liu, J. Zhang, S. Jin, and B. Ai.",
|
| 112 |
+
"venue": "IEEE Trans. Veh. Technol., 69:9180\u20139184, 2020.",
|
| 113 |
+
"url": null
|
| 114 |
+
}
|
| 115 |
+
},
|
| 116 |
+
{
|
| 117 |
+
"7": {
|
| 118 |
+
"title": "Pilot assignment in cell-free massive MIMO based on the Hungarian\nalgorithm.",
|
| 119 |
+
"author": "S. Buzzi, C. D\u2019Andrea, M. Fresia, Y.-P. Zhang, and S. Feng.",
|
| 120 |
+
"venue": "IEEE Wireless Commun. Lett., 10:34\u201337, 2021.",
|
| 121 |
+
"url": null
|
| 122 |
+
}
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"8": {
|
| 126 |
+
"title": "Pilot assignment for cell free massive MIMO systems using a\nweighted graphic framework.",
|
| 127 |
+
"author": "W. Zeng, Y. He, B. Li, and S. Wang.",
|
| 128 |
+
"venue": "IEEE Trans. Veh. Technol., 70:6190\u20136194, 2021.",
|
| 129 |
+
"url": null
|
| 130 |
+
}
|
| 131 |
+
},
|
| 132 |
+
{
|
| 133 |
+
"9": {
|
| 134 |
+
"title": "P-complete approximation problems.",
|
| 135 |
+
"author": "S. Sahni and T. Gonzalez.",
|
| 136 |
+
"venue": "J. ACM, 23:555\u2013565, 1976.",
|
| 137 |
+
"url": null
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
{
|
| 141 |
+
"10": {
|
| 142 |
+
"title": "On greedy construction heuristics for the MAX-CUT problem.",
|
| 143 |
+
"author": "S. Kahruman, E. Kolotoglu, S. Butenko, and I. V. Hicks.",
|
| 144 |
+
"venue": "Int. J. Comput. Sci. Eng., 3:211\u2013218, 2008.",
|
| 145 |
+
"url": null
|
| 146 |
+
}
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"11": {
|
| 150 |
+
"title": "Practical massively parallel sorting.",
|
| 151 |
+
"author": "M. Axtmann, T. Bingmann, P. Sanders, and C. Schulz.",
|
| 152 |
+
"venue": "In Proc. 27th SPAA, pages 13\u201323, 2015.",
|
| 153 |
+
"url": null
|
| 154 |
+
}
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"12": {
|
| 158 |
+
"title": "Fundamentals of Massive MIMO.",
|
| 159 |
+
"author": "T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo.",
|
| 160 |
+
"venue": "Cambridge University Press, Cambridge, UK, 2016.",
|
| 161 |
+
"url": null
|
| 162 |
+
}
|
| 163 |
+
}
|
| 164 |
+
],
|
| 165 |
+
"url": "http://arxiv.org/html/2308.03547v2"
|
| 166 |
+
}
|
20241001/2308.07766v2.json
ADDED
|
@@ -0,0 +1,128 @@
| 1 |
+
{
|
| 2 |
+
"title": "Whale Detection Enhancement through Synthetic Satellite Images",
|
| 3 |
+
"abstract": "With a number of marine populations in rapid\ndecline, collecting and analyzing data about marine populations has become increasingly important to develop effective conservation policies for a wide range of marine animals, including whales. Modern computer vision algorithms allow us to detect whales in images in a wide range of domains, further speeding up and enhancing the monitoring process. However, these algorithms heavily rely on large training datasets, which are challenging and time-consuming to collect particularly in marine or aquatic environments. Recent advances in AI however have made it possible to synthetically create datasets for training machine learning algorithms, thus enabling new solutions that were not possible before. In this work, we present a solution - SeaDroneSim2 benchmark suite, which addresses this challenge by generating aerial, and satellite synthetic image datasets to improve the detection of whales and reduce the effort required for training data collection. We show that we can achieve a performance boost on whale detection compared to using the real data alone for training, by augmenting a real data. We open source 111https://github.com/prgumd/SeaDroneSim2 both the code of the simulation platform SeaDroneSim2 and the dataset generated through it.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "INTRODUCTION",
|
| 9 |
+
"text": "Satellite images play an increasingly crucial role in diverse tasks such as land-use classification [1 ###reference_b1###], precision agriculture [2 ###reference_b2###, 3 ###reference_b3###], coastal management [4 ###reference_b4###], search and rescue missions [5 ###reference_b5###], and environmental monitoring [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. These images provide an accessible and comprehensive tool for marine monitoring, offering extensive coverage and detailed Earth surface insights. Remote sensing techniques using these images empower scientists and authorities to enhance marine ecosystem understanding, support conservation endeavors, and improve emergency response capabilities.\n###figure_1### In maritime operations the significance of a dependable vision-based system cannot be overstated. In particular whale detection missions encounter various challenges such as lighting effects on mammals\u2019 visibility, image altitudes affecting appearances, underwater clarity reduction due to turbidity, varied watercolor grading, and diverse backgrounds. Additionally, object pose and texture variations are crucial for precise detection and tracking.\n###figure_2### To address these challenges, the integration of robust algorithms becomes imperative, particularly within the realm of deep neural networks. However, the availability of datasets for whale detection in maritime environments, despite some efforts from Cubaynes [9 ###reference_b9###], remains limited in terms of both size and diversity. Acquiring basic remote sensing images for smaller regions can cost over $100,000 [10 ###reference_b10###]. Manually analyzing and labeling 3357 to 5534 whale images could demand around 1328 to 2016 hours [11 ###reference_b11###]. Gathering aerial images through field operations adds to expenses. 
Additionally, labeling objects of interest in dynamic and complex maritime environments poses considerable difficulties [12 ###reference_b12###]. Hence, there is a pressing need for alternative methods to rapidly generate large-scale datasets encompassing a wide variety of objects.\nTo overcome the scarcity of datasets in maritime environments, we introduce a simulation platform called SeaDroneSim2 to generate synthetic data and enhance object detection quality. By leveraging SeaDroneSim2, synthetic aerial and satellite images can be created to replicate a wide range of objects and environmental conditions. Customizable virtual scenes can be rendered to replicate genuine maritime situations, expediting the creation of varied datasets. Synthetic data generation allows for variations in lighting conditions, altitudes, viewing angles, watercolors, and more. This provides a large variety of training examples that strengthen object detection and tracking systems. SeaDroneSim2 overcomes the challenges of manual data collection and facilitates the creation of large-scale datasets encompassing a variety of objects, including whales.\nOur main contributions are as follows:\nWe have improved our novel simulation suite for generating aerial and satellite images of the maritime environment, adding enhanced functionality for noise and water properties.\nWe conducted experiments to evaluate the synthetic datasets and compare performance.\nWe open-source SeaDroneSim2 and the dataset associated with this work to accelerate further research. To the best of our knowledge, we are among the first to share segmentation labeling for whale detection.\nWe propose a complete pipeline for autonomously generating aerial and satellite maritime images of objects of interest and detecting them.\nThe rest of this paper is organized as follows: We first place this work in the context of related works in Sec. II ###reference_###. Then, we describe the proposed simulation, which is used to create photo-realistic images, in Sec. III ###reference_###. We then present quantitative and qualitative evaluations of our approach in Sec. IV ###reference_###. We conclude in Sec. V ###reference_### with parting thoughts on future work."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II RELATED WORK",
|
| 15 |
+
"text": "This section reviews datasets, object detection for maritime environments, whale detection studies, and simulations in the maritime and aerial domains."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Datasets and Object Detection for Maritime Environments",
|
| 21 |
+
"text": "The development of advanced computer vision algorithms necessitates access to extensive datasets, particularly in the context of maritime environments. While existing datasets predominantly center around synthetic aperture radar satellite imagery for remote sensing tasks, a growing trend is directing attention toward Very High-Resolution (VHR) images for object detection within these environments [13 ###reference_b13###, 7 ###reference_b7###]. Noteworthy contributions include Gallego\u2019s [14 ###reference_b14###] autonomous ship detection method using aerial images, and Li\u2019s [15 ###reference_b15###] dataset featuring Google Earth and UAV-based images for ship detection. Lygouras et al [16 ###reference_b16###] focused on human detection with UAV-based images, albeit with dataset limitations. Kiefer et al [17 ###reference_b17###] explored maritime and terrestrial images for boat and people detection. UAV-based dataset from Varga et al [18 ###reference_b18###] for water object recognition is noteworthy, though its applicability for SAR tasks might be constrained due to its limited object class coverage.\nIn the realm of whale monitoring, Very High-Resolution (VHR) images have garnered substantial attention [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###] as a viable alternative to conventional techniques such as ship-based or acoustic-based monitoring [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. This shift highlights the growing preference for VHR images and their potential to offer insights into whale populations, behaviors, and associated risks. 
Deep neural networks have emerged as a pivotal tool in multiple endeavors to detect whales[13 ###reference_b13###, 20 ###reference_b20###, 25 ###reference_b25###], harnessing their prowess in analyzing both visual and auditory data.\nBoulent et al [11 ###reference_b11###] proposed a human-in-the-loop approach that combines automation and biologist expertise, creating an AI-assisted annotation tool for whale monitoring. This demonstrates the potential of deep learning to enhance efficiency and accuracy in analyzing whale images, with implications for management and conservation policies."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Simulation",
|
| 27 |
+
"text": "In our approach to detection tasks, we draw inspiration from the concept of detecting Remotely Operated Vehicle(ROV) [26 ###reference_b26###], oysters [27 ###reference_b27###, 28 ###reference_b28###] and propellers[29 ###reference_b29###], which utilize 3D models of the objects to generate synthetic data. Similarly, we utilize a 3D model of the whale to create a maritime dataset specifically for whale detection, addressing the lack of large-scale datasets in this domain.\nWhen large-scale datasets are lacking for robotics tasks, research groups have developed simulations to meet their needs. One of the most relevant works, RarePlanes[30 ###reference_b30###], focuses on utilizing synthetic images to detect airplanes in very high-resolution (VHR) images. Simulators in the aerial and maritime domains often focus on drone control for safety operations[31 ###reference_b31###] and rapid control[32 ###reference_b32###]. Some examples(Abujob [33 ###reference_b33###]) include simulations for verifying algorithms related to landing drones on ships with motion prediction. While similar simulators like the Matlab UAV Toolbox[34 ###reference_b34###] exist, they primarily focus on terrestrial applications and lack ground truth segmentation for the objects of interest.\nWith the ultimate goal of developing an autonomous aerial and satellite surveillance system, we recognize the importance of object detection methods. Due to the scarcity of large-scale datasets for the aerial and satellite of the maritime environment and limited literature on this topic, we are pursuing an alternative approach by generating datasets through synthetic image generation. To the best of our knowledge, we are among the first to propose a simulator called SeaDroneSim2 for generating aerial and satellite datasets of the maritime environment and using them for object detection. Details of the SeaDroneSim2 will be described in the following section."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III System Description",
|
| 33 |
+
"text": "SeaDroneSim2 is built based on the Blender [35 ###reference_b35###] game engine. After creating the simulated environment, the object of interest is incorporated to generate a synthetic dataset for training a Neural Network in object detection. As depicted in Fig. 2 ###reference_###), the chosen object and various parameters for the maritime environment(like water texture, lighting, and random objects) are inputted into SeaDroneSim2. The open-source tool then produces a training dataset along with corresponding ground truth masks. This generated data facilitates and enhances the development of a detection network for recognizing the object of interest. In the remainder of this section, we will go through some of the details of this image generation and object detection pipeline.\nFor the maritime object detection application of SeaDroneSim2, the Neural Network must be trained to detect a range of shapes and colors for specific objects. It should also account for diverse oceanic variables, such as water texture, watercolor (including different levels of turbidity),and lighting conditions. Training the detection network to be effective for these varying environments requires large training datasets, which, as aforementioned, are often costly in nature [10 ###reference_b10###]. At the time of writing, there are very limited training datasets for maritime object detection, one of which is a dataset from the British Antarctic Survey[9 ###reference_b9###]. Existing datasets, even the one from the British Antarctic Survey, lack ground truth masks for image segmentation of the targeted object. Therefore, images must be labeled by hand to train the Neural Network. Here, SeaDroneSim2 proves advantageous as it generates accurate ground truth image masks for objects within the simulation."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Object of Interest",
"text": "Generating realistic synthetic environments relies heavily on an accurate 3D model of the object of interest. The selection of the object of interest is based on two key factors: the increasing conservation efforts for marine life, including whales, and the high costs associated with obtaining state-of-the-art datasets. In addition, the use of whales as our primary object of interest is also easily translatable to other marine creatures, such as dolphins and sea lions, due to their presence near the surface of the ocean.\nAs depicted in Fig. 1 ###reference_###, the 3D whale model closely resembles an actual whale when observed from a satellite or aerial perspective. Although some differences in detail are evident, the 3D model offers a realistic portrayal of whales in satellite imagery.\nOur synthetic data generation tool offers parameterization options specific to the object of interest. This includes the capability to rotate the object along three axes and move the object across three dimensions, as demonstrated in the first two images in the third row in Fig. 3 ###reference_###. In addition to the physical location of the object of interest, customizable textures can also be applied to the object to simulate different environments, including different species of marine animals and marine objects (in the third row in Fig. 3 ###reference_###).\n###figure_3### For each synthetically generated image of the object of interest, SeaDroneSim2 provides functions to generate image masks. The masks aid in image segmentation and alleviate the requirement for manual image labeling, which can be labor-intensive.\nMoreover, SeaDroneSim2 can generate 1000 training images (with resolution 140x140), with their masks, in about 25 minutes. This generation time varies depending on the resolution of the images being generated."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "III-B Water Volume",
"text": "SeaDroneSim2 uses Blender\u2019s render engine CYCLES to create the synthetic oceanic environment. The render engine uses path-tracing of the light object to generate image renders. As a result, through several customization mechanisms, we can attain realistic oceanic environments for different water colors, textures, and turbidity levels.\nA notable advantage of SeaDroneSim2 is its automated data generation process, requiring minimal user interaction. By utilizing a local installation of Blender, the tool generates a Blender file containing an oceanic plane equipped with pre-configured lighting and camera placements.\nSeaDroneSim2 provides utilities for both customized and pre-built implementations, enabling users to fine-tune maritime environments. For instance, users can change water color using specific RGB value ranges (in the last two images in the second row of Fig. 3 ###reference_###), generating corresponding images and masks. The default implementation offers 1030 different colors, ranging from blue to green water shades. Users can also adjust water turbidity, affecting object visibility. The first two images in the first row of Fig. 3 ###reference_### show increasing turbidity, which decreases the whale\u2019s visibility.\nIn addition, water texture is another key parameter in the tool suite. As we can see in Fig. 3 ###reference_###, there are several parameterization options for water texture, depending on different factors like detail, dimension, scale, metallic, lacunarity, and strength. First, the water\u2019s detail fine-tunes the roughness of the water, while scale and strength are the most important factors in defining the roughness, with a larger scale resulting in calmer waters. The water\u2019s dimension and lacunarity compress or expand the water\u2019s patterns to also affect the roughness. Finally, the water\u2019s metallic reflects a varying level of refraction of the water itself.\nNext, since white noise is prevalent in many available satellite oceanic datasets, SeaDroneSim2 can also simulate and render such noise through Blender. Employing both White Noise and Gaussian Noise, the tool replicates different noise levels within images, as demonstrated in the first two images of the second row in Fig. 3 ###reference_###. Similarly, SeaDroneSim2 also enables varying levels of lighting of the ocean, mimicking varying levels of sunlight, depicted in the last two images in the first row of Fig. 3 ###reference_###.\nLastly, we incorporate the capability to simulate diverse wave characteristics in oceanic settings, as illustrated in the last two images of the last row in Fig. 3 ###reference_###. These simulated waves encompass variations in height, tilt, sharpness, and textures, offering a range of parameterized functions."
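The water-texture controls named in this section (detail, dimension, scale, metallic, lacunarity, strength) can be collected into a small configuration sketch. The parameter names come from the text above, but the default values and the `make_water_params` helper are our own illustrative assumptions, not part of SeaDroneSim2:

```python
# Hypothetical defaults for the water-texture parameters named in the text.
# The numeric values are illustrative assumptions, not SeaDroneSim2's values.
WATER_TEXTURE_DEFAULTS = {
    "detail": 6.0,      # fine-tunes water roughness
    "dimension": 1.0,   # compresses/expands the wave patterns
    "scale": 4.0,       # larger scale -> calmer water
    "metallic": 0.2,    # degree of reflection/refraction
    "lacunarity": 2.0,  # pattern expansion, also affects roughness
    "strength": 0.5,    # main roughness factor together with scale
}

def make_water_params(**overrides):
    """Return a water-texture parameter dict, overriding selected defaults."""
    unknown = set(overrides) - set(WATER_TEXTURE_DEFAULTS)
    if unknown:
        raise KeyError(f"unknown water parameters: {sorted(unknown)}")
    return {**WATER_TEXTURE_DEFAULTS, **overrides}

# Example: a calmer water surface by raising the scale.
calm = make_water_params(scale=10.0)
```

A dict like this could be swept over (e.g., varying turbidity and scale per render) to produce the parameter variations shown in Fig. 3.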
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "III-C Objects of Non-Interest",
"text": "Moreover, for a comprehensive and immersive environment, enhancements can be made to the surroundings beyond the water. This could involve adding buoys and markers, seabed features, rocks, reefs, or other relevant elements to provide context and realism to the oceanic setting. Such inclusions can contribute to a more realistic simulation and better represent the intricacies of maritime environments.\nAs depicted in the first two images in the last row of Fig. 3 ###reference_###, an additional rock has been included on the seafloor, showcasing how submerged objects can be incorporated into the simulation. The differing visibility levels and rock height demonstrate how water turbidity influences the clarity of submerged objects."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "III-D Aerial Image",
"text": "One of our goals is to simulate aerial images in SeaDroneSim2. While other camera angles are easily achievable through Blender\u2019s capabilities, most images take advantage of a simple aerial view. However, there does exist variance in camera altitude, which yields larger objects of interest at lower altitudes and smaller objects of interest at higher altitudes.\nThe last two images in the third row of Fig. 3 ###reference_### illustrate this variance, showcasing how the camera can be parameterized based on the object\u2019s specifications, while typically being fixated on the object of interest by default.\nWhile the majority of training images maintain a resolution of 140x140 pixels, the image resolution can extend to 30,000x30,000 pixels, depending on the computational capabilities of the underlying hardware.\nIn summary, by combining realistic rendering of water properties, such as transparency and wave dynamics, with the integration of diverse underwater elements, SeaDroneSim2 can serve as a valuable tool for exploring maritime situations, enhancing and evaluating detection algorithms, contributing to cetacean conservation, and various other applications.\n###figure_4###"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Experiments And Results",
"text": "First, we describe the dataset we used in these experiments. Then we compare the results obtained by two different segmentation networks for detecting whales, utilizing datasets generated from SeaDroneSim2."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "IV-A Synthetic Dataset",
"text": "During our rendering process, we produced a total of 2000 synthetic images to construct a diverse and representative dataset. To mimic real-world conditions, we introduced variations in lighting, altitudes, orientation, water color, Gaussian noise levels, water turbidity, and wave patterns. These modifications were implemented to approximate the realistic characteristics commonly observed in satellite images, thereby creating a more photo-realistic environment for the network to learn from.\nMoreover, to train the network to segment whales in diverse body orientations, we incorporated synthetic whale instances with varying body orientations in the simulations. Using this approach, we encouraged the network to segment whales more accurately, regardless of their body orientation or appearance."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "IV-B Real Dataset",
"text": "To obtain whale images from the satellite view, we accessed a collection of 633 images through the UK Polar Data Centre (PDC) [9 ###reference_b9###]. These images played a crucial role in evaluating the effectiveness of our approach. However, during our dataset review, we noticed that the majority of these images only captured a small portion of the whale above the sea surface, and in some cases, only a splash of water from the whales.\nDue to the unique characteristics of these satellite images, we conducted a meticulous examination to identify suitable ones that could be included in our test dataset. After careful selection, we were able to compile a set of 508 images that met our criteria and could be effectively utilized for testing and evaluating our approach.\nAlthough this subset of real satellite images may have limitations in terms of whale visibility and coverage, it provides us with valuable test data to assess the performance and robustness of our method in segmenting and identifying whales under challenging scenarios."
},
{
"section_id": "4.2.1",
"parent_section_id": "4.2",
"section_name": "IV-B1 Evaluation Metrics",
"text": "To evaluate with the real dataset, we use the Intersection over Union (IoU), which is a common evaluation metric used to assess the performance of image segmentation algorithms. It measures the similarity between the predicted segmentation and the ground truth segmentation.\nThe term Intersection refers to the region where the predicted segmentation and the ground truth segmentation overlap. On the other hand, Union encompasses the entire area covered by both segments, whether overlapping or not. To assess the model\u2019s performance, we define the success and Detection Rate (DR) for each cluster based on the IoU as DR = TP / (TP + FN), where TP represents the true positive count, which corresponds to the number of correctly segmented instances by the model, and FN represents the false negative count, signifying the number of instances that were present in the ground truth but were not segmented by the model. We assign DR with two different thresholds of 0.5 and 0.6, which are denoted as DR_0.5 and DR_0.6 in Table I ###reference_###. Finally, we conduct a comprehensive analysis by computing a series of results for varying percentages of real data size, allowing us to make a thorough comparison between the method with and without the incorporation of SeaDroneSim2."
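The IoU and detection-rate definitions above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the 0.5/0.6 thresholds are the ones used in Table I, the function names and toy scores are ours):

```python
def iou(pred, truth):
    """Intersection over Union of two pixel sets (e.g., sets of (x, y) tuples)."""
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & truth) / len(pred | truth)

def detection_rate(ious, threshold):
    """DR = TP / (TP + FN): fraction of instances whose IoU clears the threshold."""
    tp = sum(1 for v in ious if v >= threshold)
    fn = len(ious) - tp
    return tp / (tp + fn)

# Toy example: three instances with per-instance IoUs 0.7, 0.55, and 0.4.
scores = [0.7, 0.55, 0.4]
dr_05 = detection_rate(scores, 0.5)  # two of three instances count as detected
dr_06 = detection_rate(scores, 0.6)  # only one instance clears the 0.6 threshold
```

At threshold 0.5 the toy example gives DR = 2/3; raising the threshold to 0.6 drops it to 1/3, which is why the two columns in Table I can differ for the same model.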
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "IV-C Experimental Results",
"text": "During our testing phase, we evaluated two types of networks: Unet [36 ###reference_b36###] and Feature Pyramid Networks (FPN) [37 ###reference_b37###]. For both networks, we used a learning rate of 0.001 with decay. The optimization process utilized the Adam optimizer in conjunction with the Jaccard loss [38 ###reference_b38###] as our loss function.\nBefore training, we augmented the dataset using various techniques, including a rotation range of 90 degrees, width shift range of 0.3, height shift range of 0.3, shear range of 0.5, zoom range of 0.3, horizontal flip, and vertical flip.\nThese data augmentation techniques are applied to the images to artificially increase the dataset\u2019s diversity and expose the model to various transformations that may occur in real-world scenarios. By incorporating these augmentations, we aim to enhance the model\u2019s ability to generalize and improve its performance on unseen data during training.\nThroughout the training process, we used a batch size of 32 and conducted training for 100 epochs. To ensure reliable and consistent results, each training was performed at least 10 times (tests), and the best result was selected for further analysis and comparison.\nBy conducting these extensive tests on both Unet and FPN networks, we aimed to determine the usability of our rendered synthetic images and identify the most suitable model for the SeaDroneSim2 application.\nGiven the significant time and financial expenses involved in collecting real datasets, our strategy focuses on minimizing the reliance on real data for training. To achieve this goal, we performed tests using different proportions of the real dataset, specifically 10% and 50% of the available real data. The remaining 50% of the real dataset (our held-out real data set) is reserved for testing purposes.\nBoth the Unet and FPN methods were tested, and we obtained DR_0.5 and DR_0.6 scores using only 10% of the real dataset, which serves as our baseline for the task. 
For the Unet method, the DR_0.5 and DR_0.6 scores were 0.615 and 0.488, respectively. As for the FPN method, the DR_0.5 and DR_0.6 scores were 0.679 and 0.476, respectively. The results are tabulated in Table I ###reference_###.\nAfter training the model solely on synthetic datasets and conducting the testing, we observed that the results were lower than the baseline. We think both networks have learned to recognize whales in the synthetic domain, but the sim-to-real domain transfer is lacking in this case.\nWe achieve better results when including the synthetic dataset rendered by SeaDroneSim2. The DR_0.5 and DR_0.6 results from \"Unet+SeaDroneSim\" (Unet with synthetic augmented real data) are 0.710 and 0.512 against the human-labeled ground truth, which are 15.4% and 4.91% better than just using a 10% real dataset for training.\nThe DR_0.5 and DR_0.6 results from \"FPN+SeaDroneSim\" (FPN with synthetic augmented real data) are 0.746 and 0.551 against the human-labeled ground truth, which are 9.86% and 15.7% better than just using a 10% real dataset for training. Moreover, the DR_0.6 result from \"Unet+SeaDroneSim\" (Unet with synthetic augmented real data) is 0.710, which is 3.34% better than just using a 50% real dataset for training.\nAs depicted in Fig. 4 ###reference_###, the underwater submerged portion of the body of the whale would be overlooked if the network were not trained with synthetic augmented real data. Consequently, this situation results in numerous false negatives. Our synthetic data includes numerous samples where either a portion of the whale\u2019s body or the entire body is submerged in water. Thus, training the network using real data augmented with our synthetic data leads to a substantial reduction in false negatives. 
This improvement leads to more accurate predictions and a higher success rate even in challenging scenarios.\nAugmenting the dataset with synthetic data does not always guarantee improvement, as evidenced by the slight drop in the DR_0.6 result from \"FPN+SeaDroneSim\" and the DR_0.5 result from \"Unet+SeaDroneSim\" when using 50% of the real dataset in Table I ###reference_###.\nHowever, this drop could be attributed to the fact that our dataset is relatively small, which leads to limited robustness in handling diverse real-world scenarios for whale detection."
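The relative improvements quoted in this section can be re-derived directly from the Table I scores (a quick sanity check; the scores are copied from the table, the `rel_gain` helper and the label strings are ours):

```python
def rel_gain(baseline, augmented):
    """Relative improvement of `augmented` over `baseline`, in percent."""
    return 100.0 * (augmented - baseline) / baseline

# (DR_0.5, DR_0.6) pairs from Table I, 10% real training data.
unet, unet_sds = (0.615, 0.488), (0.710, 0.512)
fpn, fpn_sds = (0.679, 0.476), (0.746, 0.551)

gains = {
    "Unet DR_0.5": rel_gain(unet[0], unet_sds[0]),  # ~15.4%
    "Unet DR_0.6": rel_gain(unet[1], unet_sds[1]),  # ~4.9%
    "FPN DR_0.5": rel_gain(fpn[0], fpn_sds[0]),     # ~9.9%
    "FPN DR_0.6": rel_gain(fpn[1], fpn_sds[1]),     # ~15.8%
    # Unet+synthetic at 10% real data (0.710) vs plain Unet at 50% real
    # data (DR_0.6 = 0.687 in Table I): ~3.3%.
    "Unet vs 50% real": rel_gain(0.687, 0.710),
}
```

Small differences from the percentages in the text (e.g., 15.7% vs ~15.8%) come down to rounding/truncation of the reported values.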
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusions and Future Work",
"text": "In this work, we discussed how to utilize the capability of a render engine and built a simulation environment for whale detection. We discussed the implementation details of using Blender for generating synthetic datasets for object detection. We then compared our detection results with the usage of different synthetic datasets generated from the simulation. These results highlight that collecting real images is challenging for data-critical applications. It is possible to use 3D models of the object to create photorealistic images in a simulation environment, which can then be used to train networks that successfully detect objects in a specific domain. This work is among the first to build a maritime object simulation focusing on object detection with particular emphasis on whale detection.\nIn the future, we aim to enhance the capabilities and functionalities of SeaDroneSim2 and compare our results with additional datasets for different objects of interest, including coral reefs and oysters. In addition to our ongoing efforts, we are dedicated to creating specialized datasets that focus on maritime objects, with a particular emphasis on aiding cetacean scientists in detecting and monitoring whales under the ice. By providing these tailored datasets, we aim to equip researchers with the necessary tools to effectively study and protect whales in challenging icy environments."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparison of Semantic Segmentation Results with the Two Different Networks</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.2.2.3\">Method</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.2.2.4\">Real Data size</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.3.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.3.1.1.1\">Unet</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.3.1.2\">0.615</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.3.1.3\">0.488</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.3.1.4\">10%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.4.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.4.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.4.2.1.1\">Unet +SeaDroneSim2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.4.2.2.1\">0.710</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.4.2.3.1\">0.512</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.4.2.4\">10%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.5.3.1\"><span class=\"ltx_text ltx_font_italic\" 
id=\"S4.T1.2.5.3.1.1\">Unet</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.5.3.2\">0.849</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.5.3.3\">0.687</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.5.3.4\">50%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.6.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.6.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.6.4.1.1\">Unet +SeaDroneSim2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.6.4.2\">0.833</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.6.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.6.4.3.1\">0.710</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.6.4.4\">50%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.7.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.7.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.7.5.1.1\">FPN</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.7.5.2\">0.679</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.7.5.3\">0.476</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.7.5.4\">10%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.8.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.8.6.1.1\">FPN +SeaDroneSim2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.8.6.2.1\">0.746</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.8.6.3.1\">0.551</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.6.4\">10%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.9.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.9.7.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.9.7.1.1\">FPN</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" 
id=\"S4.T1.2.9.7.2\">0.861</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.9.7.3\">0.714</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.9.7.4\">50%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.10.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.2.10.8.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.10.8.1.1\">FPN +SeaDroneSim2</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.2.10.8.2\">0.861</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.2.10.8.3\">0.683</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.2.10.8.4\">50%</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE I: Comparison of Semantic Segmentation Results with the Two Different Networks"
}
},
"image_paths": {
"1": {
"figure_path": "2308.07766v2_figure_1.png",
"caption": "Figure 1: The first row presents real whale images taken from space, while the second row shows simulated whale images generated using SeaDroneSim2.",
"url": "http://arxiv.org/html/2308.07766v2/x1.png"
},
"2": {
"figure_path": "2308.07766v2_figure_2.png",
"caption": "Figure 2: An overview of our approach. (a) Assets: Loads the assets such as water properties, objects, materials, etc. into SeaDroneSim2 to generate synthetic datasets. Note that the synthetic dataset includes its ground truth mask for the object of interest. (b) We modify the properties within the scene, such as noise level, rotation of the object, altitude of the camera, etc. (c) The synthetic dataset generated is then fed into a Neural Network to obtain the object detection result. We demonstrated the generation of aerial, satellite, and underwater images for two different objects of interest in our study. Note: Object Detection images are cropped and enlarged for better visualization.",
"url": "http://arxiv.org/html/2308.07766v2/x2.png"
},
"3": {
"figure_path": "2308.07766v2_figure_3.png",
"caption": "Figure 3: These are the synthetic images generated from SeaDroneSim2. In the first row, the first two images showcase the increasing turbidity of the water, and the last two images depict varying lighting conditions. In the second row, the first two images display different water colors, while the last two images exhibit increasing noise from the satellite images. In the third row, the first two images demonstrate varying altitudes, while the last two images illustrate different whale positions, including logging, spyhopping, and submerging. In the last row, the first two images demonstrate synthetic images with different water waves, while the last two images illustrate different rocks and hills.",
"url": "http://arxiv.org/html/2308.07766v2/x3.png"
},
"4": {
"figure_path": "2308.07766v2_figure_4.png",
"caption": "Figure 4: From left to right: Sample real input image, ground truth, segmentation result using Unet without synthetic augmented\nreal data, segmentation result using Unet with synthetic augmented\nreal data, segmentation result using FPN without synthetic augmented\nreal data, segmentation result using FPN with synthetic augmented\nreal data. All networks here are trained with only 10% of real data.",
"url": "http://arxiv.org/html/2308.07766v2/x4.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2308.07766v2"
}
20241001/2308.16697v3.json
ADDED
@@ -0,0 +1,352 @@
{
"title": "Game semantics for the constructive \ud835\udf07-calculus",
"abstract": "We define game semantics for the constructive -calculus and prove its equivalence to bi-relational semantics.\nAs an application, we use the game semantics to prove that the -calculus collapses to modal logic over the modal logic .\nWe then show the completeness of extended with fixed-point operators.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "This paper is a first step toward relating two strands of research in modal logic: the modal -calculus and constructive modal logics.\nWe define a constructive variant of the -calculus by adding least and greatest fixed-point operators to constructive modal logic.\nWe define game semantics for the constructive -calculus and prove its equivalence to bi-relational Kripke semantics.\nWe then use the game semantics to study an intuitionistic variant of the modal logic with fixed-point operators.\nBefore introducing our results, we briefly review the related literature on constructive modal logics and the -calculus.\nOn constructive modal logics, the duality of the modalities and is lost.\nThese logics have been studied for a long time; some of the first texts on the topic are Fitch [Fit48 ###reference_bx16###] and Prawitz [Pra65 ###reference_bx29###].\nIn this paper, we use Mendler and de Paiva\u2019s bi-relational -models [MdP05 ###reference_bx24###].\nThese models are based on those of Wijesekera [Wij90 ###reference_bx32###], but allow worlds where the false proposition holds.\nThe -models are not the only semantics available for constructive modal logics.\nOf note are the semantics of Acclavio et al. [ACS21 ###reference_bx1###].\nAcclavio et al. 
provide complete denotational semantics for via game semantics.\nTheir games are canonical representations of proofs, not model checking games as the ones presented in this paper.\nThere are also categorical semantics [AMdPR01 ###reference_bx4###] and realizability semantics [KMS21 ###reference_bx20###].\nFurthermore, one should note that constructive modal logic is not the only non-classical variant of modal logic.\nIt can also be strengthened to intuitionistic and G\u00f6del modal logics.\nOn the axiomatic side, these logics are obtained by adding axioms to constructive modal logic.\nOn the semantics side, they are obtained by excluding fallible worlds and adding restrictions on -models.\nSee [DM23 ###reference_bx14###, dGSC24 ###reference_bx12###] for more information on the relation between constructive and intuitionistic modal logic.\nWhile models for intuitionistic modal logics can be seen as a particular type of -models, constructive and intuitionistic variants of the same logic usually behave quite differently.\nOf note is Das and Marin\u2019s [DM23 ###reference_bx14###] paper which shows that the -free fragment of and do not coincide: does not prove , while does.\nThe modal -calculus was defined by Kozen [Koz83 ###reference_bx21###], who also defined a related proof system .\nThe completeness of was first proved by Walukiewicz [Wal95 ###reference_bx31###].\nSee [Len10 ###reference_bx23###, BW18 ###reference_bx10###] for surveys on the -calculus.\nThe -calculus\u2019 alternation hierarchy classifies the -formulas by how many alternating least and greatest fixed-point operators they contain.\nThe strictness of the hierarchy was open for many years until it was proved by Bradfield [Bra98a ###reference_bx8###].\nBradfield later gave a simplified proof of the alternation hierarchy\u2019s strictness using evaluation games [Bra98b ###reference_bx9###].\nThe strictness may not hold over restricted classes of models.\nFor example, Alberucci and Facchini [AF09 
###reference_bx3###] proved that the alternation hierarchy collapses to its alternation-free fragment over transitive models, and to modal logic over equivalence relations.\nSee Chapter 2 of [Pac23 ###reference_bx27###] for a survey on the alternation hierarchy.\nThe -formulas are famously hard to understand.\nOne advantage of game semantics for the -calculus over the standard Kripke semantics is that they give a more intuitive interpretation of the -formulas.\nFurthermore, evaluation games are also useful as a tool for proving theorems about the -calculus.\nIn an evaluation game for the -calculus, two players discuss whether a formula is true at a given world of a Kripke model.\nIn the classical version of the game, it is usual to refer to the players as Verifier and Refuter.\nIn the constructive version of the game, we will still have two players, but now they alternate between the roles of Verifier and Refuter, depending on their moves.\nThis difference happens because, over classical semantics, every formula can be put in negative normal form; this allows us to simplify the evaluation games in the classical case.\nIn the constructive case, by contrast, we need to consider negation and implication.\nTherefore, we will need a more delicate argument to prove the equivalence of the semantics in the constructive case.\nOur proof is based on the proof of the correctness of game semantics for the classical -calculus by Ong [Ong15 ###reference_bx25###].\nSince our evaluation games build on the -models of Mendler and de Paiva [MdP05 ###reference_bx24###], the game semantics can also be used for any logic whose semantics are based on (subsets of) -models.\nIn particular, our game semantics can also be used to define semantics for an intuitionistic -calculus, based on bi-relational models for the modal logic .\nAs an application, we study the logic , an intuitionistic variant of with fixed-point operators.\n is also known as and , and was first studied by Prior [Pri57 
###reference_bx30###].\nThe completeness of over -models was proved by Ono [Ono77 ###reference_bx26###] and Fischer Servi [FS78 ###reference_bx17###].\nWe use the game semantics to show that the constructive -calculus collapses to constructive modal logic over .\nThat is, every -formula is equivalent to a formula without fixed-point operators over -models.\nOur proof is a generalization of Alberucci and Facchini\u2019s proof of the collapse of the (classical) -calculus to (classical) modal logic over -models [AF09 ###reference_bx3###].\nFinally, we use the -calculus\u2019 collapse to modal logic over to prove the completeness of , the modal logic obtained by adding fixed-point axioms and rules to the modal logics .\nAs far as the author is aware, these are the first completeness results for any logic over the constructive -calculus.\nAt last, we note that a constructive variant of was previously studied by Arisaka et al. [ADS15 ###reference_bx2###], who defined and proved the correctness of a nested sequent calculus for .\nThe bi-relational semantics for this logic has not been studied yet in the literature, so the semantical methods we use to prove the collapse of the -calculus over cannot be used for ."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Constructive -calculus",
"text": ""
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Game semantics for the constructive -calculus",
"text": "In this section, we define game semantics for the constructive -calculus and prove its equivalence to the bi-relational semantics.\nThis game semantics is a modification of the game semantics of the classical -calculus.\nIn the classical version, the players Verifier and Refuter discuss whether a formula hods in a world of a Kripke model .\nWhile in the classical -calculus we can suppose formulas use no implications and that negations are applied only to propositional symbols, we cannot do the same in the constructive -calculus.\nThis complicates the games used for the constructive -calculus: the players now have the roles of Refuter and Verifier, and swap roles when discussing certain formulas."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Definition",
"text": "Fix a -model , a world , and a well-named -formula .\nIn this subsection, we define the evaluation game .\nThe game has two players: and .\nThe two players will have the roles of Verifier and Refuter (abbreviated to and , respectively).\nEach player has only one role at any given time, and the players always have different roles.\nWe usually write \u201c is \u201d for \u201cthe player in the role of \u201d, and similar expressions for the other combination of players and games.\nWe denote an arbitrary role by and the dual role by ; that is, if is , then is and vice versa.\nThe game has two types of positions.\nThe main positions of the game are of the form where , , and is a role.\nWe also have auxiliary positions of the form if , if , and if , where .\nIn any position, is the role currently held by ; the role of is .\nIntuitively, at a position , tries to show that satisfies and tries to prove that does not satisfy .\nThese auxiliary positions are used to decompose the players moves at positions of the forms , , and . 
For example, at the position , first makes a choice and then does so.\nThe auxiliary positions make explicit that both players are involved in the choice of the next main position .\nFor the same reason, auxiliary positions are also necessary for .\nWe use auxiliary positions for for uniformity\u2019s sake.\nThe game begins at the position , with in the role of and in the role of .\nEach position is owned by exactly one of the players.\nAt each turn of the game, the owner of the current position chooses one of the available positions to move to.\nThe game then continues with the new position.\nIf no such position is available, the game ends.\nWe describe the ownership and possible plays for each type of position below; this information is summarized in Table 1 ###reference_###.\nIf and , then the position is owned by and there is no available move.\nBelow, we suppose the world in the position being described is not in .\nAt the position there is no available move and the game ends.\nThis position is owned by if and by if .\nThe position is owned by , who chooses one of and .\nSimilarly, at is owned by , who chooses one of and .\nThe position of the form is owned by , who chooses such that , and then moves to the position ; the position is again owned by , who chooses such that and moves to .\nSimilarly, the position is owned by , who chooses such that , and then moves to the position ; the position is owned by , who chooses such that and moves to .\nAt a position of the form , chooses and challenges to show that ; that is, moves to .\nPositions of the form are similar.\nIn this case, chooses and moves to , and then chooses one of and .\nThat is, chooses and chooses whether to show that or ; in case chooses , the players exchange roles.\nLet ; at the positions and are owned by if is and by if is ; the only available position to move to is .\nWhen moving from to , we say that the fixed-point formula was regenerated.\nA run of the game is a sequence of positions which
respects the rules above.\nThat is, a run is a (finite or infinite) sequence of positions such that:\nis ;\nit is possible to play from ; and\nif is finite, then the last position in is of the form with or with .\nBefore defining the winning conditions, we note that the positivity requirement on the fixed-point formulas guarantees that, if and occur in any run of the game, then has the same role in both positions:\nLet and be runs of the game .\nSuppose occurs in and occurs in .\nThen ; that is, has the same role at both positions.\nIf and occur in the same run , then and coincide by the positivity of in : it implies that the players must swap roles an even number of times between these two positions.\nNow, let and be the first occurrence of positions with the formula in and , respectively.\nThe well-namedness of implies that there is only one occurrence of in .\nThis fact along with the positivity of implies that the number of times the players switch roles to get to and must have the same parity.\n\u220e\nWe say that the fixed-point formula is owned by if a position of the form is reachable from the initial position where either and , or and .\nThe fixed-point formula is owned by if it is not owned by .\nThis is well-defined by the above proposition.\nWe are now ready to define the winning conditions for the game.\nLet be a run of the game.\nIf is finite, then the last position in is of the form with or with .\nThe owner of the last position has no available position and loses the game.\nIf is infinite, let be the outermost infinitely often regenerated fixed-point formula in the play ; that is, is regenerated infinitely often in and, if is regenerated infinitely often in , then .\nThen wins iff owns the fixed-point .\nA (positional) strategy for is a function which, given a position owned by , outputs a position where can move to, if any such position is available.\n follows in the run if whenever is owned by , then .\nThe strategy is winning iff wins all possible runs 
where they follow .\nStrategies and winning strategies for are defined similarly.\nNote that at most one of the players can have a winning strategy for a given evaluation game.\nIt is not immediate that one of the players has a winning strategy for a given evaluation game.\nThe existence of winning strategies is implied by our proof of the equivalence of the bi-relational Kripke semantics and game semantics for the constructive -calculus:\nFix a -model , a world , and a well-named -formula .\nLet be an evaluation game.\nThen exactly one of and has a positional winning strategy .\nIt is immediate that not both and have winning strategies for .\nFor a contradiction, suppose is a winning strategy for and is a winning strategy for .\nThen the play resulting from the players using and is winning for both and , which is not possible by the definition of the game.\nNow, by the definition of the bi-relational semantics, either or .\nTheorem 8 ###reference_orem8### provides us with strategies in both cases.\nIn case holds, has a winning strategy; in case holds, has a winning strategy.\n\u220e\nNote that the existence of positional strategies for the evaluation games is not a trivial fact.\nA key fact in the existence of such strategies is that, in order to determine the winner of an infinite run , we need to look only at its tails; the exclusion of any initial segment of the run does not alter the result.\nAnother way of proving the existence of winning strategies for evaluation games is to represent them as parity games; the existence of positional winning strategies for parity games was proved by Emerson and Jutla [EJ91 ###reference_bx15###].\nThe relation between the -calculus and parity games is outside the scope of this paper; see [GTW03 ###reference_bx19###] for more information.
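The winning conditions above refer to least and greatest fixed-points of monotone operators on sets of worlds; on a finite model these can be computed by iterating approximants from the bottom and top elements, respectively. A minimal sketch of that iteration (the model, operator, and all names below are our own toy illustration, not the paper's notation):

```python
def lfp(f):
    """Least fixed-point of a monotone operator f: iterate from the empty set."""
    x = frozenset()
    while f(x) != x:
        x = f(x)
    return x

def gfp(f, worlds):
    """Greatest fixed-point of a monotone operator f: iterate from all worlds."""
    x = frozenset(worlds)
    while f(x) != x:
        x = f(x)
    return x

# Toy model: four worlds on a cycle, with p true everywhere; the operator
# below reads as "p holds here and every successor stays inside X".
worlds = {0, 1, 2, 3}
p = {0, 1, 2, 3}
succ = {w: {(w + 1) % 4} for w in worlds}
op = lambda x: frozenset(w for w in worlds if w in p and succ[w] <= x)

print(sorted(gfp(op, worlds)))  # [0, 1, 2, 3]: the greatest fixed-point is everything
print(sorted(lfp(op)))          # []: the least fixed-point is empty here
```

The asymmetry between the two results mirrors the intuition behind the ownership of fixed-point formulas in the game: greatest fixed-points tolerate infinite regeneration, least fixed-points do not.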
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Correctness of game semantics",
"text": "We now show the equivalence between the -calculus\u2019 bi-relational semantics and game semantics.\nThat is, we will show that iff the player has a winning strategy for , and that iff the player has a winning strategy for .\nBefore proving it, we briefly sketch the key idea and remark on some technical points.\nFix an evaluation game .\nCall a position true iff , and false iff .\nWe will show that, at a true positions, can always move in a way favorable to themselves.\nThat is, in a way such that the resulting position is a true position, if the players have not switched roles; or the resulting position is a false position, if the players have switched roles.\nOn the other hand, cannot move in any way favorable to themselves.\nA similar situation occurs at false positions.\nTo make the statements above precise, we have to overcome two problems.\nFirst, when considering whether , the formula might have free variables, and so its valuation might not be well-defined.\nWe solve this by augmenting with the intended valuations for the variables occurring in .\nSecond, we need to consider infinite plays.\nSpecifically, we need to guarantee that, when starting from a true position, the resulting play is winning for ; and when starting from a false position, the resulting play is winning for .\nTo solve this, we assign two types of signatures to each position .\nWe will show that the players can play such that, if we start from a true position, the -signatures are non-increasing and eventually constant; and if we start from a false position, the -signatures are non-increasing and eventually constant.\nThis will guarantee the resulting plays are winning for the corresponding players.\nLet be a -model, and be a well-named -formula.\nThen\nWe prove that, if , then has a winning strategy for and, if , then has a winning strategy for .\nThis is sufficient to prove the theorem since the two players cannot both have a winning strategy for and since one of or always 
holds.\nSuppose .\nWe will assign to each main position of the game an ordinal signature .\nWe show is always able to control the truth of the positions in the evaluation game and move in a way that the signature is eventually constant.\nTo define -signatures, we enumerate the fixed-point subformulas of in non-increasing size:\nThat is, we require that, if , then ; and, if , then .\nWe also enumerate the fixed-point subformulas of which are owned by in non-increasing size:\nAn -signature is a sequence of ordinals.\nDenote by the th component of .\nWrite iff the first components of are identical.\nOrder the signatures by the lexicographical order: iff there is such that and .\nThe lexicographical order is a well-ordering of the signatures.\nThe augmented Kripke model is obtained by setting , where is a Kripke model, and is a variable symbol.\nWe want to evaluate subformulas of where some occur free, so we augment with the correct valuations of these variables:\nBy the choice of our enumeration, does not contain free occurrences of , and so is well-defined.\nThe th approximant of is obtained by applying to , many times; and the approximant is obtained by applying to , many times.\nWe define models where the variables owned by are assigned their th approximant , and variables owned by receive their correct value.\nFormally, given a signature , we define augmented models by\nIf , we call a true position; if , we call a false position.\nNow, if a true position, then there is a least signature such that .\nSimilarly, if a false position, then there is a least signature such that .\nDenote these signatures by .\nWe will define a strategy for which guarantees that when the players are at , if is in the role of , and if is in the role of .\nFurthermore, cannot move in ways where the signature increases, and most of \u2019s moves never increase the signature.\nThe only time the signature may increase is when regenerating some fixed-point formula , but in this case the first 
positions of the signature are not modified.\nWe will also have that any positions reachable when follows the strategy are true positions when and false positions when .\nRemember that the game starts on the position and we assumed that holds, so this is true for the initial position of the game.\nWe define \u2019s strategy as follows:\nSuppose the game is at the position .\nIf and is a true position; then moves to such that , with .\nBy the definition of the signatures, .\nIf and is a false position; then and for all . So whichever way moves, the next position is false and the signature is non-increasing.\nSuppose the game is at the position .\nIf and is a true position; then and for all .\nSo whichever way moves, the next position is true and the signature is non-increasing.\nIf and is a false position; then moves to such that and , with .\nSuppose the game is at the position .\nIf and is a true position; for all moves of , can move to some such that .\nBy the definition of the signatures, .\nIf and is a false position; moves to a position such that all answers by are false positions.\nFurthermore, and for all such .\nSuppose the game is at the position .\nIf and is a true position; for all moves and of , we have .\nBy the definition of the signatures, .\nIf and is a false position; moves to a position and then to a position which is a false position.\nFurthermore, and .\nSuppose the game is at the position .\nIf and is a true position; after any move of , the players switch roles and we have .\nBy the definition of the signatures, .\nIf and is a false position; moves to a position which is a true position and switches roles with .\nFurthermore, and .\nSuppose the game is at the position .\nIf and is a true position.\nAfter moves to , moves to if it is a true position.\nOtherwise, moves to ; in this case, is a false position.\nEither way, .\nIf and is a false position; moves to a position such that is a true position and is a false position.\nAny answer of 
satisfies our requirements.\nSuppose the game is at or at ; then the owner of the position must move to .\nIf there is such that , then and .\nIf there is no such that , then .\nOn finite runs, wins by the construction of the strategy : is at a true position of the form reachable following .\nSimilarly, is at false positions of the form .\nAlso, is at true positions where .\nNow, consider an infinite run where follows , and let be the smallest number in such that is an infinitely often regenerated fixed-point operator.\nFor a contradiction, suppose there is such that .\nLet be the positions where occur; that is, all the positions .\nWithout loss of generality, we suppose that for all no is regenerated after the th position of the run.\nThe move from to causes a strict decrease in the signature.\nThe other moves between and cannot cancel this decrease, since either the signature does not change or one of the first positions of the signature is reduced.\nTherefore the sequence of signatures\nis strictly decreasing.\nThis is a contradiction, as the signatures are well-ordered.\nTherefore there is no such that , and so wins the run.\nWe conclude that the strategy is a winning strategy for .\nWe now sketch how to prove the other half of the theorem.\nIf , then we can define a winning strategy for similar to the strategy for defined above.\nThe main difference is that we need to consider -signatures, denoting approximants for \u2019s variables.\nAgain, enumerate the fixed-point subformulas of in non-increasing size:\nWe now also enumerate the fixed-point subformulas of which are owned by in non-increasing size:\nAn -signature is a sequence of ordinals.\nAs with -signatures, denote by the th component of and write iff the first components of are identical.\nOrder the -signatures by the lexicographical order: iff there is such that and .\nThe lexicographical order is a well-ordering of the -signatures.\nAs above, let be the model augmented with the correct valuations of 
the variables occurring in .\nGiven a signature , we define augmented models by\nOn , the variables owned by are assigned their th approximant , and variables owned by receive their correct value.\nIf , we call a true position; if , we call a false position.\nNow, if a true position, then there is a least signature such that .\nSimilarly, if a false position, then there is a least signature such that .\nDenote these signatures by .\nSimilar to the first case, cannot move in ways where the -signature increases, and we can build a strategy for in a way that, eventually, their moves do not increase the signature.\nBy the same argument as above, the strategy is winning.\n\u220e"
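The signature argument above rests on two simple combinatorial facts: signatures are compared lexicographically, and regenerating the k-th fixed-point can only disturb components after the k-th. A small sketch of that bookkeeping (the function names are ours, not the paper's):

```python
def lex_lt(s, t):
    """Strict lexicographic order on equal-length tuples of ordinals
    (here approximated by natural numbers)."""
    for a, b in zip(s, t):
        if a != b:
            return a < b
    return False

def agree_upto(s, t, k):
    """True when the first k components of s and t are identical."""
    return s[:k] == t[:k]

# Regenerating the 2nd fixed-point may bump later components, but the
# first two stay fixed; this is what makes an infinite strictly
# decreasing sequence of signatures impossible.
assert lex_lt((0, 3, 5), (0, 4, 0))        # decided at the 2nd component
assert agree_upto((0, 3, 5), (0, 3, 9), 2)  # same first two components
assert not lex_lt((1, 0, 0), (0, 9, 9))
print("signature checks pass")
```

Because the lexicographic order on such tuples is a well-order, no run can produce an infinite strictly decreasing sequence of signatures, which is exactly the contradiction used in the proof.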
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "The collapse to modal logic over models",
"text": "In this section we use the game semantics to prove that the -calculus collapses to modal logic over , that is, that all -formula is equivalent to a modal formula over -models.\nIn the first subsection, we isolate the key lemma to this proof.\nIn the second subsection, we prove the collapse."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "The key lemma",
"text": "To prove the -calculus\u2019 collapse over classical -models, Alberucci and Facchini use the following result:\nLet be an -model.\nIf , then iff , where .\nWe cannot prove the same result over -models, but the following Lemma will suffice:\nLet be an -model.\nLet be the composition of and .\nIf , then\nwhere .\nFix an -model .\nThe composition is a transitive relation by Lemma 4 ###reference_orem4###.\nAlso note that the worlds occurring in some position of a play are -accessible from the previously occurring worlds.\nThat is, when if players have gone through a position and later , then .\nThis happens because and are reflexive relations and is transitive.\nNow, suppose and .\nFor all , there is such that .\nLet be such that .\nBy downward confluence, there is such that .\nBy the transitivity of , .\nSo there is such that and .\nAs , .\nSo for all there is such that .\nThat is, .\nSimilarly, suppose and .\nTherefore implies .\nLet , then by the transitiveness of .\nSo .\nThus .\n\u220e"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "The collapse",
"text": "We first show that the fixed-points for modal formulas can be reached in two steps.\nOur proof is by contradiction.\nThis contradiction is not essential, but makes the proof easier to understand.\nLet be an -model and be a modal formula where is positive and appears only once in .\nThen\nWe first show that .\nLet be an -models and be a well-named -formula.\nWe can also suppose that is of the form with .\nWe show that is equivalent to .\nAs is positive in , we have that .\nSo we need only to show that .\nFor a contradiction, suppose that and .\nThen has a winning strategy for the evaluation game ; and has a winning strategy for the evaluation game .\nWe use and to define strategies for in and for in .\nRemember that starts on the role of and starts on the role of .\nWe have the players use analogous strategies on both games.\nSuppose the players are in positions in and in .\nBoth positions have the same owner, in the same role.\nThat is, if \u2019s turn in some game, it is \u2019s turn in both games; and the owner\u2019s role is in some game, their role is in both games.\nFor example, suppose is playing the role of and the players are in positions and in and .\nIf plays in , they play in .\nThe players continue both games following the strategies described above until they get to a position of the form in both games; or they get to positions of the form in and in .\nCase 1. Suppose the players are in a position in both games.\nWithout loss of generality, suppose is and is .\nAs is winning for in , .\nAs is winning for in , .\nAnd so we have a contradiction.\nA similar contradiction is reached if is and is .\nCase 2. 
Suppose the players are in positions of the form in and in .\nWithout loss of generality, suppose is and is .\nAs is a winning strategy for in , .\nPreviously, the players must have been through some position in .\nAs is a winning strategy for in , .\nNote that, from the definition of the game, the reflexivity of and , and the transitivity of , we have that .\nBy Lemma 10 ###reference_orem10###, since .\nWe have our contradiction.\nEither way, we conclude that .\nAnd so .\nIn classical semantics, we can prove by a direct calculation.\nWe cannot do the same in intuitionistic semantics as we cannot use the law of excluded middle.\nWe have to prove it directly.\nFirst, holds as is positive in .\nIf we suppose there is such that and , we get a similar contradiction.\n\u220e\nWe are now able to show the constructive -calculus\u2019 collapse to modal logic over -models.\nOver -models, every -formula is equivalent to a modal formula.\nWe argue by structural induction on -formulas.\nFirst, some of the easy cases.\n is equivalent to a modal formula, as it is a modal formula.\nSuppose the -formulas and are equivalent to modal formulas and , then is equivalent to , is equivalent to , is equivalent to , is equivalent to .\nNow, the interesting cases.\nAs above, is equivalent to , where is a modal formula.\nBy Lemma 11 ###reference_orem11###, is equivalent to , which is a modal formula.\nThe same lemma shows that is equivalent to .\nTherefore every -formula is equivalent to a modal formula over -models.\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "The completeness of",
"text": "In this section, we prove:\nFor all closed -formula , proves iff is true at all -models.\nWe begin by proving the soundness of .\nThen we show a formalized version of the collapse to modal logic.\nAt last, we use the provable collapse to prove the Truth Lemma for .\nOur canonical model argument uses the notation of Balbiani et al. [BDFD21 ###reference_bx7###], but the construction is similar to the canonical model for defined by Fischer Servi [FS78 ###reference_bx17###]."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Soundness",
"text": "The soundness of is straightforward:\nFix a -formula .\nThen implies holds over all -models.\nWe will prove only the soundness of the axiom and rule ; the soundness of and rule are analogous.\nThe soundness of these axioms and rules will follow from Lemma 1 ###reference_orem1### and basic properties of monotone operators (see also [AN01 ###reference_bx5###]).\nFor the soundness of the axioms in , see Fischer Servi [FS78 ###reference_bx17###] and Ono [Ono77 ###reference_bx26###].\nFix an -model .\nSuppose ; that is, is in the greatest fixed-point of .\nTherefore, .\nAnd so .\nThis implies is sound.\nNow, suppose holds on every world in .\nThen .\nAs is the greatest fixed-point of , too.\nTherefore holds in every world in , and so is sound.\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "The provable collapse",
"text": "We now show that any -formula is provably equivalent to a modal formula in .\nWe first prove a technical lemma showing that monotonicity for formulas without fixed-point operators is provable.\nSuppose and is a formula without fixed-point operators.\nIf is positive in , then .\nIf is negative in , then .\nWe prove this lemma using structural induction.\nWe prove only the cases where is positive, as the cases where is negative are similar.\nFor is a proposition symbol or a variable symbol, then the result is immediate.\nLet .\nBy the induction hypothesis, and .\nAs is a tautology, follows by .\nLet .\nBy the induction hypothesis, and .\nAs is a tautology, follows by .\nLet .\nBy the induction hypothesis, .\nThen by , and so by and .\nLet .\nBy the induction hypothesis, .\nThen by , and so by and .\nLet .\nThen is negative in and so by the induction hypothesis.\nSince is a tautology, too.\nLet .\nThen is negative in and positive in .\nSo and by the induction hypothesis.\nSince is a tautology, too. \u220e\nNow, we show that fixed-points of modal formulas are equivalent to modal formulas over .\nThis is a formal version of Lemma 11 ###reference_orem11###.\nIf has no fixed-point operators, then and .\nis in as it is a tautology.\nBy Lemma 15 ###reference_orem15###, we get that .\nBy , is in .\nBy , we have that .\nBy repeating this argument once, we get .\nNow, as is valid on any -model, is provable by the completeness of .\nBy , .\nThe proof for is similar.\n\u220e\nSimilar to how we proved Theorem 12 ###reference_orem12###, we use Lemma 16 ###reference_orem16### to prove the following theorem:\nAny -formula is provably equivalent to a modal formula over .\nThat is, for all -formula , there is a modal formula such that is in ."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "The canonical model",
"text": "We say is a -theory if:\n is a set of formulas containing all the axioms of ;\n is closed under under ;\n; and\n implies or .\nDenote by the set and by the set .\nDefine the canonical -model by:\n;\n;\niff ;\niff and ; and\niff .\nLet be the canonical -model.\nThe relation is an equivalence relation.\nBy and , implies and implies .\nSo and . Thus .\nLet .\nThen , , , and .\nSuppose , then and , so .\nThus .\nSuppose , then and .\nSo .\nBy and , and so .\nTherefore .\nLet , then and .\nWe want to show , .\nLet .\nBy and , .\nBy , , so .\nThus .\nNow, suppose .\nSo .\nThus .\nBy , and so .\nTherefore .\n\u220e\nLet be the canonical -model.\nThe relation is backward confluent.\nThat is, if and , then there is such that\nSuppose .\nBy hypothesis, , , and .\nLet be the closure of under .\nWe first show that, if is provable formulas in , then .\nThere are and such that proves . By and , proves and .\nSince each is in , so is , by and along with .\nSince , , and thus too.\nBy repeated applications of , we have .\nBy and , we have .\nBy an application of Zorn\u2019s Lemma, there is a maximal set such that: is a consistent set of formulas containing ; closed under ; and implies .\nSuppose .\nBy and , , and so .\nSince and is a theory, at least one of and is in .\nSuppose both and are inconsistent.\nThen , thus , and so ; this is a contradiction.\nSo at least one of and is consistent.\nIf is consistent and , we can show by the same argument as the paragraph above that, if is a consequence of , then is a consequence of .\nTherefore the closure of under is a subset of , so from the beginning.\nSimilarly, if is consistent and , then .\nIf is inconsistent and , we get that ; and so by , a contradiction.\n is inconsistent and give a similar contradiction.\nTherefore either or .\nTherefore is a -theory.\nTrivially, and so .\nIf , then by the construction of .\nIf then .\nTherefore .\nThis concludes the lemma.\n\u220e\nWe now have have:\nThe canonical -model is an 
-model.\nSince the subset relation is a preorder, is a reflexive and transitive relation over .\n is empty by definition.\nThe relation is an equivalence relation over by Lemma 18 ###reference_orem18###.\n also satisfies the convergence requirements by Lemma 19 ###reference_orem19### and Proposition 5 ###reference_orem5###.\nIt follows from the definition that preserves the truth of propositions.\n\u220e\nWith the provable collapse over , we can prove the Truth Lemma for the canonical -model.\nLet be the canonical -model.\nFor -theory and all closed -formula ,\nThe proof is by structural induction on modal formulas.\nIf , then the lemma holds by the definition of .\nIf , then the lemma holds by the definition of the semantics and of .\nIf , then\nIf , then\nHere we use that if then or , as is a theory.\nLet .\nFirst suppose that .\nLet be a theory such that .\nBy the induction hypothesis, .\nAs , .\nBy , .\nSo .\nNow suppose that .\nTake to be the closure of under the derivation rules.\nIf , then there is such that .\nAnd so .\nAs , this means , a contradiction.\nTherefore .\nBy Zorn\u2019s Lemma, we can build a theory which contains and does not prove .\nBy the induction hypothesis, and .\nAs , .\nLet . 
This case follows by the equivalence between and over intuitionistic logic.\nLet .\nFirst suppose that .\nLet .\nThen and .\nBy induction hypothesis, .\nSo .\nNow suppose that .\nDefine .\nBy definition, .\nBy the induction hypothesis, .\nNow we show that .\n follows by definition.\nLet .\nThen .\nBy two applications of , .\nSo .\nSo .\nTherefore , and thus .\nLet .\nFirst suppose that .\nLet be a theory such that .\nFurthermore, suppose is consistent.\nLet be the closure under derivation rules of .\n holds by definition.\nLet , then for some .\nThus and .\nSo .\nBy , .\nSo .\nBy Zorn\u2019s Lemma, there is a theory containing such that .\nBy induction hypothesis, .\nTherefore .\nNow suppose that .\nLet be such that and .\nBy the definition of , , so .\nTherefore , a contradiction.\nWe conclude that for all , if , then .\nBy the induction hypothesis, .\nTherefore .\nLet be .\nWe want to show that iff .\nBy Lemma 16 ###reference_orem16###, is provably equivalent to some modal formula .\nSo .\nThus:\nThe first equivalence holds by , the second by completeness for , and the last from the soundness of .\nLet be .\nBy a proof similar to the paragraph above, we prove that iff .\nThis finishes the proof of Lemma 21 ###reference_orem21###.\n\u220e\nLet be a closed -formula.\nIf proves , then is true at all -models by Lemma 14 ###reference_orem14###.\nNow, suppose does not prove .\nBy Zorn\u2019s Lemma, there is an -theory such that .\nTherefore, does not hold over in the canonical model by Lemma 20 ###reference_orem20###; and so is not true in all -models.\n\u220e"
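The canonical-model construction above repeatedly takes the closure of a set of formulas under the derivation rules before applying Zorn's Lemma. A toy sketch of such a closure for modus ponens alone (the encoding of formulas and the function name are ours, purely for illustration):

```python
def close_mp(gamma):
    """Close a set of formulas under modus ponens.  Formulas are encoded
    as atoms (strings) or implications ('imp', antecedent, consequent)."""
    closed = set(gamma)
    changed = True
    while changed:
        changed = False
        for f in list(closed):
            # If both the implication and its antecedent are present,
            # add the consequent.
            if isinstance(f, tuple) and f[0] == 'imp' \
                    and f[1] in closed and f[2] not in closed:
                closed.add(f[2])
                changed = True
    return closed

gamma = {'p', ('imp', 'p', 'q'), ('imp', 'q', 'r')}
result = close_mp(gamma)
# 'q' and then 'r' are added to the closure.
print('q' in result and 'r' in result)  # True
```

The iteration terminates on finite inputs because each pass either adds a formula already mentioned in the set or changes nothing; the actual construction in the proof closes under all the rules of the calculus, not just modus ponens.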
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Future Work",
"text": "We now present some topics for research work that we are currently working on.\nMost of these are centered on non-classical variants of with fixed-points."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Rules of evaluation games for the constructive modal -calculus.</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.15.13\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.15.13.14.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S3.T1.15.13.14.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">Verifier</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.13.15.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.15.13.15.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">Position</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.15.13.15.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">Admissible moves</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.4.2.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.3.3.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.4.4.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.5.5.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.8.6.6.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.9.9\">\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.8.8.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n and \n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.11.9.9.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.10.10.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.11.11.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.12.12.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.13.13.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.38.36\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.38.36.24.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S3.T1.38.36.24.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">Refuter</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.38.36.25.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.38.36.25.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">Position</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.38.36.25.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">Admissible moves</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.17.15.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.16.14.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.17.15.2.2\" 
style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.19.17.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.18.16.3.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.19.17.4.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.19.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.20.18.5.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.19.6.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.23.21.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.22.20.7.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.23.21.8.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.25.23.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.24.22.9.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.25.23.10.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.27.25.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.26.24.11.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.25.12.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.30.28.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.29.27.14.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n and \n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.28.15.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.32.30.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.31.29.16.1\" 
style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.32.30.17.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.34.32.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.33.31.18.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.34.32.19.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.38.36.23\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.37.35.22.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n, and \n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.38.36.23.4\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n</tbody>\n</table>\n</div>\n</div>\n</figure>",
|
| 88 |
+
"capture": "Table 1: Rules of evaluation games for the constructive modal -calculus."
|
| 89 |
+
}
|
| 90 |
+
},
|
| 91 |
+
"image_paths": {},
|
| 92 |
+
"validation": true,
|
| 93 |
+
"references": [
|
| 94 |
+
{
|
| 95 |
+
"1": {
|
| 96 |
+
"title": "Game semantics for constructive modal logic.",
|
| 97 |
+
"author": "Matteo Acclavio, Davide Catta, and Lutz Stra\u00dfburger.",
|
| 98 |
+
"venue": "volume 12842 of Lecture Notes in Computer Science,\npages 428\u2013445. Springer International Publishing, 2021.",
|
| 99 |
+
"url": null
|
| 100 |
+
}
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"2": {
|
| 104 |
+
"title": "On Nested Sequents for Constructive Modal Logics.",
|
| 105 |
+
"author": "Ryuta Arisaka, Anupam Das, and Lutz Stra\u00dfburger.",
|
| 106 |
+
"venue": "Logical Methods in Computer Science, 11:1583, 2015.",
|
| 107 |
+
"url": null
|
| 108 |
+
}
|
| 109 |
+
},
|
| 110 |
+
{
|
| 111 |
+
"3": {
|
| 112 |
+
"title": "The modal -calculus hierarchy over restricted classes of\ntransition systems.",
|
| 113 |
+
"author": "Luca Alberucci and Alessandro Facchini.",
|
| 114 |
+
"venue": "The Journal of Symbolic Logic, 74(4):1367\u20131400, 2009.",
|
| 115 |
+
"url": null
|
| 116 |
+
}
|
| 117 |
+
},
|
| 118 |
+
{
|
| 119 |
+
"4": {
|
| 120 |
+
"title": "Categorical and Kripke semantics for constructive S4 modal\nlogic.",
|
| 121 |
+
"author": "Natasha Alechina, Michael Mendler, Valeria de Paiva, and Eike Ritter.",
|
| 122 |
+
"venue": "volume 2142 of Lecture Notes in Computer Science, pages\n292\u2013307. Springer, 2001.",
|
| 123 |
+
"url": null
|
| 124 |
+
}
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"5": {
|
| 128 |
+
"title": "Rudiments of -Calculus.",
|
| 129 |
+
"author": "Andr\u00e9 Arnold and Damian Niwi\u0144ski.",
|
| 130 |
+
"venue": "Number v. 146 in Studies in Logic and the Foundations of Mathematics.\nElsevier, 1st edition, 2001.",
|
| 131 |
+
"url": null
|
| 132 |
+
}
|
| 133 |
+
},
|
| 134 |
+
{
|
| 135 |
+
"6": {
|
| 136 |
+
"title": "The topological mu-calculus: completeness and decidability.",
|
| 137 |
+
"author": "Alexandru Baltag, Nick Bezhanishvili, and David Fern\u00e1ndez-Duque.",
|
| 138 |
+
"venue": "Journal of the ACM, 70(5):1\u201338, 2023.",
|
| 139 |
+
"url": null
|
| 140 |
+
}
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"7": {
|
| 144 |
+
"title": "Some constructive variants of S4 with the finite model property.",
|
| 145 |
+
"author": "Philippe Balbiani, Martin Dieguez, and David Fern\u00e1ndez-Duque.",
|
| 146 |
+
"venue": "In 2021 36th Annual ACM/IEEE Symposium on Logic in\nComputer Science (LICS), pages 1\u201313. IEEE, 2021.",
|
| 147 |
+
"url": null
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"8": {
|
| 152 |
+
"title": "The modal mu-calculus alternation hierarchy is strict.",
|
| 153 |
+
"author": "Julian C. Bradfield.",
|
| 154 |
+
"venue": "Theoretical Computer Science, 195(2):133\u2013153, 1998.",
|
| 155 |
+
"url": null
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"9": {
|
| 160 |
+
"title": "Simplifying the modal mu-calculus alternation hierarchy.",
|
| 161 |
+
"author": "Julian C. Bradfield.",
|
| 162 |
+
"venue": "In STACS 98, volume 1373, pages 39\u201349. Springer Berlin\nHeidelberg, 1998.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"10": {
|
| 168 |
+
"title": "The mu-calculus and model checking.",
|
| 169 |
+
"author": "Julian C. Bradfield and Igor Walukiewicz.",
|
| 170 |
+
"venue": "In Handbook of Model Checking, pages 871\u2013919. Springer\nInternational Publishing, 2018.",
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"11": {
|
| 176 |
+
"title": "Combining Logics.",
|
| 177 |
+
"author": "Walter Carnielli and Marcelo Esteban Coniglio.",
|
| 178 |
+
"venue": "In The Stanford Encyclopedia of Philosophy.\nMetaphysics Research Lab, Stanford University, fall 2020 edition, 2020.",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"12": {
|
| 184 |
+
"title": "Semantical Analysis of Intuitionistic Modal Logics between\nCK and IK.",
|
| 185 |
+
"author": "Jim de Groot, Ian Shillito, and Ranald Clouston.",
|
| 186 |
+
"venue": "Preprint, 2024.",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"13": {
|
| 192 |
+
"title": "On the -calculus over transitive and finite transitive frames.",
|
| 193 |
+
"author": "Giovanna D\u2019Agostino and Giacomo Lenzi.",
|
| 194 |
+
"venue": "Theoretical Computer Science, 411(50):4273\u20134290, 2010.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"14": {
|
| 200 |
+
"title": "On Intuitionistic Diamonds (and Lack Thereof).",
|
| 201 |
+
"author": "Anupam Das and Sonia Marin.",
|
| 202 |
+
"venue": "In Automated Reasoning with Analytic Tableaux and\nRelated Methods, Lecture Notes in Computer Science, pages\n283\u2013301. Springer Nature Switzerland, 2023.",
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"15": {
|
| 208 |
+
"title": "Tree automata, mu-calculus and determinacy.",
|
| 209 |
+
"author": "Allen E. Emerson and Charanjit S. Jutla.",
|
| 210 |
+
"venue": "In FoCS, volume 91, pages 368\u2013377, 1991.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"16": {
|
| 216 |
+
"title": "Intuitionistic modal logic with quantifiers.",
|
| 217 |
+
"author": "Frederic B. Fitch.",
|
| 218 |
+
"venue": "Portugaliae Mathematicae, 7:113\u2013118, 1948.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"17": {
|
| 224 |
+
"title": "The finite model property for MIPQ and some consequences.",
|
| 225 |
+
"author": "Gis\u00e8le Fischer Servi.",
|
| 226 |
+
"venue": "Notre Dame Journal of Formal Logic, XIX(4):687\u2013692, 1978.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"18": {
|
| 232 |
+
"title": "Intuitionistic S4 is decidable.",
|
| 233 |
+
"author": "Marianna Girlando, Roman Kuznets, Sonia Marin, Marianela Morales, and Lutz\nStra\u00dfburger.",
|
| 234 |
+
"venue": "In 2023 38th Annual ACM/IEEE Symposium on Logic in\nComputer Science (LICS), pages 1\u201313, 2023.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"19": {
|
| 240 |
+
"title": "Automata, logics, and infinite games: a guide to current\nresearch, volume 2500.",
|
| 241 |
+
"author": "Erich Gr\u00e4del, Wolfgang Thomas, and Thomas Wilke.",
|
| 242 |
+
"venue": "Springer, 2003.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"20": {
|
| 248 |
+
"title": "Justification logic for constructive modal logic.",
|
| 249 |
+
"author": "Roman Kuznets, Sonia Marin, and Lutz Stra\u00dfburger.",
|
| 250 |
+
"venue": "Journal of Applied Logics, 8(8):2313\u20132332, 2021.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"21": {
|
| 256 |
+
"title": "Results on the propositional -calculus.",
|
| 257 |
+
"author": "Dexter Kozen.",
|
| 258 |
+
"venue": "Theoretical Computer Science, 27(3):333\u2013354, 1983.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"22": {
|
| 264 |
+
"title": "Combining modal logics.",
|
| 265 |
+
"author": "Agi Kurucz.",
|
| 266 |
+
"venue": "In Patrick Blackburn, Johan Van Benthem, and Frank Wolter, editors,\nStudies in Logic and Practical Reasoning, volume 3 of Handbook of Modal Logic, pages 869\u2013924. Elsevier, 2007.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"23": {
|
| 272 |
+
"title": "Recent results on the modal -calculus: A survey.",
|
| 273 |
+
"author": "Giacomo Lenzi.",
|
| 274 |
+
"venue": "Rendiconti dell\u2019Istituto di Matematica dell\u2019Universit\u00e0 di\nTrieste, 42:235\u2013255, 2010.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"24": {
|
| 280 |
+
"title": "Constructive CK for contexts.",
|
| 281 |
+
"author": "Michael Mendler and Valeria de Paiva.",
|
| 282 |
+
"venue": "Context Representation and Reasoning (CRR-2005), 13, 2005.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"25": {
|
| 288 |
+
"title": "Automata, logic and games.",
|
| 289 |
+
"author": "Luke Ong.",
|
| 290 |
+
"venue": "2015.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"26": {
|
| 296 |
+
"title": "On some intuitionistic modal logics.",
|
| 297 |
+
"author": "Hiroakira Ono.",
|
| 298 |
+
"venue": "Publications of the Research Institute for Mathematical\nSciences, 13(3):687\u2013722, 1977.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"27": {
|
| 304 |
+
"title": "Exploring the Difference Hierarchies on -Calculus and\nArithmetic\u2014from the Point of View of Gale\u2013Stewart Games.",
|
| 305 |
+
"author": "Leonardo Pacheco.",
|
| 306 |
+
"venue": "PhD thesis, Tohoku University, 2023.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"28": {
|
| 312 |
+
"title": "The -calculus\u2019 Alternation Hierarchy is Strict over\nNon-Trivial Fusion Logics.",
|
| 313 |
+
"author": "Leonardo Pacheco.",
|
| 314 |
+
"venue": "2024.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"29": {
|
| 320 |
+
"title": "Natural Deduction: A Proof-Theoretical Study.",
|
| 321 |
+
"author": "Dag Prawitz.",
|
| 322 |
+
"venue": "1965.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"30": {
|
| 328 |
+
"title": "Time and Modality.",
|
| 329 |
+
"author": "Arthur N. Prior.",
|
| 330 |
+
"venue": "Clarenton Press, 1957.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"31": {
|
| 336 |
+
"title": "Completeness of Kozen\u2019s axiomatisation of the propositional\n-calculus.",
|
| 337 |
+
"author": "Igor Walukiewicz.",
|
| 338 |
+
"venue": "In Proceedings of Tenth Annual IEEE Symposium on Logic\nin Computer Science, pages 14\u201324, 1995.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"32": {
|
| 344 |
+
"title": "Constructive modal logics I.",
|
| 345 |
+
"author": "Duminda Wijesekera.",
|
| 346 |
+
"venue": "Annals of Pure and Applied Logic, 50(3):271\u2013301, 1990.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
}
|
| 350 |
+
],
|
| 351 |
+
"url": "http://arxiv.org/html/2308.16697v3"
|
| 352 |
+
}
|
20241001/2309.04109v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2309.10103v2.json
ADDED
|
@@ -0,0 +1,193 @@
| 1 |
+
{
|
| 2 |
+
"title": "Reasoning about the Unseen for Efficient Outdoor Object Navigation",
|
| 3 |
+
"abstract": "Robots should exist anywhere humans do: indoors, outdoors, and even unmapped environments.\nIn contrast, the focus of recent advancements in Object Goal Navigation (OGN)[anderson2018objectnav, chaplot2020object, majumdar2022zson] has targeted navigating in indoor environments by leveraging spatial and semantic cues that do not generalize outdoors. While these contributions provide valuable insights into indoor scenarios, the broader spectrum of real-world robotic applications often extends to outdoor settings. As we transition to the vast and complex terrains of outdoor environments, new challenges emerge. Unlike the structured layouts found indoors, outdoor environments lack clear spatial delineations and are riddled with inherent semantic ambiguities. Despite this, humans navigate with ease because we can reason about the unseen. We introduce a new task OUTDOOR, a new mechanism for Large Language Models (LLMs) to accurately hallucinate possible futures, and a new computationally aware success metric for pushing research forward in this more complex domain. Additionally, we show impressive results on both a simulated drone and physical quadruped in outdoor environments. Our agent has no premapping and our formalism outperforms naive LLM-based approaches.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Advancements in Object Goal Navigation (OGN) [anderson2018objectnav, chaplot2020object, majumdar2022zson] have enhanced the proficiency of robotic agents in navigating indoor environments by leveraging spatial and semantic cues. Agents that can guide humans (e.g. the visually impaired [blind-indoors]) are an important enabling technology, but need to move beyond restricted indoor spaces to the full richness of outdoor navigation.\nOutdoor environments are substantially larger than handled by current semantic mapping approaches [chaplot2020learning, Min2022], have complex terrains [self-supervised-outdoor], and, crucially, lack clear semantic delineations.\nNot only is sensing simplified indoors, but so is reasoning as rooms are easily distinguished and semantically categorized. Outdoor environments still have semantic distinctions but visually identical spaces might be a soccer field, a picnic area, or the pit of an outdoor orchestra depending on the time of day. Additionally, outdoor navigational tasks typically demand that robotic agents engage in roles with more granular goal specifications. For instance, in the context of search and rescue operations, the objective is not merely to navigate to a general category of \u2018people\u2019 but to pinpoint casualties potentially trapped under a car.\nRecently, Large Language Models (LLMs) [gpt, devlin2018bert, openai2023gpt4] trained on expansive internet datasets are serving as adaptable policies in embodied platforms, making them proficient in addressing a wider range of tasks [saycan2022arxiv, shah2023lmnav, brohan2022rt]. 
The existing work has primarily focused on high-level task-planning with predefined skills in constrained environments.\nDespite this, we have seem very promising skill demonstrations in indoor object-scenarios made possible by these models [chen2023train, zhou2023esc, zhou2023navgpt].\nWhile some emergent behavior has been identified, the language and vision communities have begun harnessing these LLMs for their reasoning capabilities due to the vast world knowledge and models stored in their parameters.\nThus, we posit that outdoor navigation offers a promising avenue to test and refine the foundational navigation and reasoning abilities of LLMs. This paper aims to formulate an elementary navigation policy and evaluate its efficacy in diverse and challenging outdoor environments, providing insights into the potential of LLMs as embodied agents.\nOur primary contribution in this work are:\nWe introduce the OUTDOOR (Outdoor Underspecified Task Descriptions Of Objects and Regions) task, which dramatically increases the complexity inherent in object goal navigation for outdoor settings.\nWe introduce a novel use of LLMs as a planning agent to traverse real-world outdoor terrains. Our approach imagines future notes for a RRT (Rapidly-exploring Random Tree) to improve agent success (+50.4%).\nWe introduce the CASR (Computationally Adjusted Success Rate) metric, that trades off planning costs with time spent \u201cthinking\u201d (i.e. querying LLMs)."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Task Definition",
|
| 15 |
+
"text": "###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A OUTDOOR: Outdoor Underspecified Task Descriptions Of Objects and Regions",
|
| 21 |
+
"text": "In traditional Object Goal Navigation, users specify distinct goal categories that can be automatically evaluated by the system and do not include constraints or handle underspecified goals (e.g. reference by affordance). However, real-world outdoor environments like parks present more complex scenarios. For example, if the goal is to find a place to eat, it could refer to any bench or table, rather than a specific one. OUTDOOR embraces ambiguity, generalizing to a more nuanced and realistic navigational challenge.\nWe categorize the instruction complexity into four levels:\nLevel 1: Navigate to obj X [aka traditional object-nav]\nLevel 2: Navigate to obj X conditioned on obj Y\nLevel 3: Navigate to obj X conditioned on path P\nLevel 4: Navigate to underspecified abstraction A\nHuman intervention is essential for evaluating success across all levels from 1 to 4. While the goals in Levels 1 to 3 are specific and relatively straightforward to assess, Level 4 presents a more abstract goal. For example, a directive like \u201cFind me somewhere to take a nap?\u201d makes the evaluation more nuanced, potentially necessitating human evaluation.\nAgents start an episode from a pose and are given a linguistic goal from one of the aforementioned levels. The agent\u2019s challenge is to reconcile real-time environmental observations with its interpretation of the goal, and to understand the semantic and spatial relationships between the objects and regions present. Operating autonomously, the agent must then navigate the environment, with the path being represented as , where each action transitions to a pose .\nThe episode terminates when the agent predicts \u201cFound Goal\u201d. Agents are also limited to a maximum exploration time: ."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "III Related Works",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-A Decision Making and Planning for LLM",
|
| 33 |
+
"text": "Vanilla implementations of Large Language Models (LLMs) often fall short in decision-making and planning capabilities. To address this, several strategies have been developed. The linear reasoning approach, \u201cChain of Thoughts,\u201d enhances structured problem-solving [wei2023chainofthought], and tree-based strategies, \u201cTree of Thoughts,\u201d bring forth search-guided reasoning capabilities [yao2023tree]. To improve performance, search algorithms have also been integrated [xie2023decomposition].\nExternal planning methods have emerged [hao2023reasoning, zhao2023large, zhang2023planning, wang2023planandsolve, wang2023describe, huang2022inner, yao2023react] that leverage techniques such as Monte Carlo Tree Search (MCTS) to enhance the reasoning capacities of LLMs. While they show promise in fields like mathematics, code generation, and high-level task planning, their application in low-level path planning for robotics remains limited. A primary reason is the complexity of mapping LLM outputs to the intricate action spaces of robots, making tasks requiring detailed sequences of movements a challenge. In this context, our approach stands out by using waypoints as a natural interface for low-level path planning. This not only bridges the gap between LLM reasoning and robot actions but also ensures that our method operates in a parallel manner, offering both effectiveness and efficiency."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.2",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-B LLM as embodied agents for navigation",
|
| 39 |
+
"text": "The use of language for guiding embodied agents has a long lineage. Language-only models like BERT [devlin2018bert] can be used as scoring functions between language instructions and paths to help embodied agents navigate [majumdar2020vlnbert]. Performance on such tasks has scaled with larger models (e.g. GPT-4), which have greater aptitudes for common-sense reasoning and comprehension of world structures [zhou2023esc, chen2023train, zhou2023navgpt, shah2023lmnav, dorbala2023catshapedmug].\nShah et al [shah2023lmnav] use LLMs as a parser to extract landmarks as sub-goal nodes for robots to navigate on a graph; Chen et al [chen2023train] use an LLM as an evaluator to re-weight the waypoints generated by the frontier-based method [1997frontier]. Zhou et al [zhou2023esc] took another approach: they used several hand-designed constraints via the Probabilistic Soft Logic programming language to choose the best frontiers to explore. NavGPT [zhou2023navgpt] utilized synergizing prompt methods such as ReAct [yao2022react] with a discrete action space for the LLM to navigate.\n###figure_2### Figure Overview: The agent captures RGB images (potential frontiers). Each image is processed through a Vision Language Model (VLM) to generate a textual caption. Subsequent Rapidly-exploring Random Trees (RRT) aid the agent in envisioning possible future scenarios for each frontier. The results, combined with GPS coordinates, populate a frontier buffer. The most promising frontier is identified, and a local planner guides the agent to its location.\nExisting approaches to navigation predominantly rely on idealized indoor scene graphs, often assuming structured environments where, for example, a refrigerator is necessarily located in the kitchen or a fireplace in the living room [zhou2023esc, chen2023train, shah2023lmnav]. Alternatively, some methods leverage Google Street View data for navigation tasks [schumann2023velma]. However, these approaches fall short in capturing the nuanced complexities and granularities inherent to real-world scenarios, such as search and rescue operations or advanced domestic robotics tasks in semantically rich environments such as airports or campus buildings. In genuine outdoor settings, spatial semantics may be ambiguous or lack well-defined boundaries. Consequently, an intelligent agent must be capable of strategically predicting information about unseen space to effectively navigate and reason within these more complex contexts."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "IV Method: Reasoned Explorer",
|
| 45 |
+
"text": "Figure 3 outlines our proposed Reasoned Explorer method \u2013 an LLM reasoning technique that enables an LLM-based agent to execute OUTDOOR tasks in complex outdoor environments. We remove the perfect-depth assumption and use a dynamically expandable graph to store the map information, as illustrated in \u00a7IV-A ###reference_###. We then employ two LLMs (\u00a7IV-B ###reference_###): one as a visionary and the other as an evaluator. The visionary LLM is designed to project future agent states and potential scenarios, while the evaluator critically assesses the feasibility of achieving the goal within those states. Finally, we describe the perception and action techniques used to physically embody our method in \u00a7IV-C ###reference_### \u2013 IV-D ###reference_###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-A Graph the unknown",
|
| 51 |
+
"text": "Historically, methods for object goal navigation and VLN relied on near-perfect depth information derived from simulations to generate dense geometric occupancy maps, which subsequently informed the expansion of explorable frontiers [chen2023train, zhou2023esc, zhou2023navgpt, pmlr-v155-anderson21a]. However, these assumptions falter in real-world outdoor settings, especially without the aid of high-end depth cameras or LiDARs. Addressing this limitation, our approach introduces an adaptive topological graph to generate frontiers. This not only mitigates the need for perfect depth information but also enhances the agent\u2019s navigational capabilities.\nAs depicted in Figure 2 ###reference_###, the green circles symbolize the expanded frontiers emanating from pathpoints, which are denoted by pink circles. During each iteration, the algorithm calculates and expands frontiers from the present pathpoint. These frontiers subsequently undergo a rigorous planning and scoring phase, as elaborated in Section IV-B ###reference_###. All of these frontiers are retained in a specialized Frontier Buffer for subsequent reference.\nTo ensure that distant frontiers are penalized, yet remain viable for exploration, we introduce a sigmoid-modulated distance function:\nHere, represents the sigmoid-modulated distance for the -th frontier. The parameter dictates the sharpness of the modulation, with a larger creating a more pronounced transition around , which represents the distance where the penalty is half its maximum potential value.\nThe agent\u2019s actions are then determined by the updated score function:\nWithin this formulation, refers to the score of each frontier following the planning-scoring phase. As the agent advances in its exploration, the selected frontier \u2014 now deemed a pathpoint \u2014 is removed from the Frontier Buffer.\nThe agent persists in its exploration until it perceives a halt signal dispatched from a specialized LLM checking function.\n###figure_3###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "IV-B Reasoning about the uncertainty",
|
| 57 |
+
"text": "In the context of planning within intricate outdoor environments, direct determinations by the LLM-based method based solely on localized information can lead to suboptimal behaviors, such as inconsistency and short-sightedness. These behaviors arise not only from the inherent tendency of LLMs to hallucinate [McKenna2023SourcesOH], but also from the non-delineated nature of the outdoor environment. Such properties become particularly concerning when placing heavy reliance on singular output generations from LLMs [stochastic].\nTo fortify against these vulnerabilities and enhance decision robustness, we elected to incorporate future information using an expanding RRT strategy. By projecting multiple forward-looking imaginary branches through iterative queries of LLM_Visionary and LLM_Evaluator (Figure III-B ###reference_###), we mitigate the risks associated with single-query outputs. Our choice of RRT was informed by its intrinsic properties: in contrast to the sequential sampling process used in MCTS [zhang2023planning, hao2023reasoning], RRT provides a parallelizable framework that allows us to score and expand the imaginary nodes all at the same time.\nIn our integration of the RRT, the specifics and underlying mechanics are meticulously outlined in Algorithm 1 and visually complemented in Figure 3 ###reference_###. At its core, our adaptation hinges on the dual roles assumed by the LLM: as a visionary, denoted as LLM_Gen, and as an evaluator, represented by LLM_Eval. Their specific responsibilities and interplay will be further discussed in the following sections.\nThe evaluator, denoted as , assesses the correlation between a scene description and the provided goal objects or instructions . Its primary role is to guide the agent by offering a reference score, indicating the likelihood of achieving the goal based on the current scene. The scoring mechanism is structured on a Likert scale ranging from 1 to 5, where a score of 1 indicates a low likelihood of goal achievement, and a score of 5 signifies a high likelihood. This evaluative approach ensures that the agent can make informed decisions based on the contextual relevance of the scene to the goal.\nThe visionary, represented as , produces the next scene descriptor based on the current scene description . Its objective is to enable the agent to anticipate or predict its future waypoint. This is achieved by prompting the agent to envision what it might encounter next. Comprehensive prompt templates and further details are available on our official website.\n###figure_4### In our adaptation of the RRT, the algorithm\u2019s mechanics are determined by two hyperparameters:\nN: Dictates the action space dimension, signifying the range of feasible directions the agent can embark upon during its exploration.\nL: Denotes the length of individual simulations within the branch. This dictates the depth of the tree and how far the algorithm projects into potential future states."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.3",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-C Perceiving",
|
| 63 |
+
"text": "The agent captures images during each exploration steps. These images are processed by a Vision Language Model (VLM). For the purposes of this study, we employed Kosmos-2 [kosmos-2], a VLM fine-tuned with spatially-grounded image-text data. This model offers the distinct advantage of providing detailed object-level descriptions of scenes. Importantly, it is promptable, so we prompt it to describe not only the objects in the scene, but also the spatial relationship between objects as well as the backgrounds."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.4",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-D Action on real robot",
|
| 69 |
+
"text": "Upon waypoint determination by the scoring function detailed in Section IV-A ###reference_###, our robot employs a straightforward PID controller to traverse the path navigating between the current waypoint and the subsequent one, with the localization from on board high resolution RTK-GPS and IMU."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Experiments",
|
| 75 |
+
"text": "Comprehensive evaluations are conducted across multiple platforms: 1. The AirSim [shah2017airsim] simulation environment: A photo-realistic outdoor simulation setting that consists of different semantic distinguishable areas in Downtown West environment. 2. A real-world robotic platform: Unitree Go1, equiped with a USB camera, high resolution RTK-GPS module, and Inertial measurement unit."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.1",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "A Compute Aware Metric for LLM-based Robotic Agents",
|
| 81 |
+
"text": "In the assessment of robotic agents in real-world settings, particularly for outdoor tasks like Search and Rescue, it is vital to strike a balance between navigation efficiency, computational overhead, and travel duration. The dominant metrics in the space: Success Rate (SR) and Success Weighted by Path Length (SPL), ignore \u201ctime\u201d. Here, we specifically mean wallclock time, or the length of an experiment or episode. While always relevant in practical scenarios, the use of Large Foundation Models (e.g. over API) introduces a new computational trade-off. Specifically, the interplay between Computational Time (CT) and Travel Time (TT). An increase in computation that results in reduced travel time might lead to overall efficiency gains depending on the amount of computational time required. Colloquially, when is it faster to think before acting, versus acting on a hunch?\nThis overall efficiency versus the maximum allowed episode length is simply a normalized sum of the two components: Compute (CT) and Travel (TT) time. The value is normalized to the range [0,1], aligning it for integration with the traditional SR metric:\nis a predefined maximum acceptable time for an mission to be completed, in our experiments, = 30 minutes, and any experiment time above it is set to failure.\nWith the normalized SR and the aforementioned interaction term, we formulate the Computationally Adjusted Success Rate (CASR) as:\nThe range of CASR spans [0,1], where:\n1 signifies optimal performance, reflecting total navigational success combined with infinite speed and lightening computation.\n0, on the other hand, corresponds to a navigation failure or either of CT and TT reaching the limit.\nCASR serves dual purposes. Beyond being a metric for considering computational time, it acts as a performance indicator during optimization of model and hyperparameter selection. 
An increase in CT that doesn\u2019t correspond to a significant shift in CASR suggests that the TT remains largely unaffected by the CT changes. Further discussions on this can be found in Section VI-B ###reference_###.\nNote, while the notion of immediate inference or fast travel may seem far-fetched at first glance, most real-time models do operate at 30+ Hz, pushing CT , and setting a goal for machine learning efficiency research. The size of TT captures route efficiency and morphology choices. In a practical setting, one might opt for a UAV over a UGV, choose a wheeled vs legged UGV, or even optimize gait to further improve TT. Again, as many commodity UAVs approach speeds of 100k/h, even existing hardware can for many domains shrink TT if routing is correct. In our comparisons, UAV is compared to UAV and UGV to UGV so morphology does not affect comparisons, only routing."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.2",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Baselines",
|
| 87 |
+
"text": "We benchmarked our approach against two baseline implementations: The first, LLM-as-Eval [chen2023train], utilizes the LLM as an evaluator for re-scoring expanded frontiers [1997frontier]. The second, LLM-MCTS [hao2023reasoning], employs Monte Carlo Tree Search techniques for trajectory expansion at 10 iterations. Both baselines are reproductions of existing methods using our graph maps, devoid of depth information. This choice is informed by real-world practicalities: without advanced depth cameras, standard devices such as the Real Sense D435 exhibit subpar performance in outdoor environments."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.3",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Result",
|
| 93 |
+
"text": "###figure_5### Table I ###reference_### presents a comprehensive evaluation of our methods against the baselines on four metrics: Success Rate (SR), Oracle Success Rate (OSR), Success Weighted by Path Length (SPL) [anderson2018objectnav], and our newly introduced metric, Computationally Adjusted Success Rate (CASR). It shows:\nWe consistently outperformed naive use of LLM as evaluator and LLM-MCTS across all four metrics.\nIn addition to the performance metrics, our methodology exhibited superior time efficiency compared to LLM-MCTS shown in the CASR difference. This emphasizes the practicality and efficiency of our proposed solutions in real-world applications.\nPerformance for all methods decreased as difficulty increased from L1 to L4 of our OUTDOOR tasks, with the exception of L3. We suspect the anomalous L3 performance is attributed to human-specific path preferences that potentially reduce the search space.\nTable II ###reference_### showcases the efficacy of our methods in transferring from simulation to the real-world. For comparative analysis, the baseline LLM-as-Eval was also tested in real-world scenarios to assess its potential for achieving superior performance. The results indicate that our methods smoothly transition from simulation to real-world contexts with matching performance.\nFigure 4 ###reference_### shows that LLM-as-Eval is more like a random search style of exploration. In contrast, our method delivers a more structured exploration towards the goal."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.4",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Ablation Study",
|
| 99 |
+
"text": "In our ablation study, we aimed to assess the impact of various design choices of our Reasoned Explorer method on performance. To maintain consistency, we standardized the conditions by focusing solely on a level 1 OUTDOOR task with set to 15 minutes. Furthermore, to minimize the performance variance attributed to our perception model, we ignored the caption error in this evaluation."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.4.1",
|
| 103 |
+
"parent_section_id": "5.4",
|
| 104 |
+
"section_name": "V-D1 How many steps should we think into the future?",
|
| 105 |
+
"text": "Table III ###reference_### shows how different values affect CASR performance. At , the approach aligns with the LLM-as-Eval baseline, and we subsequently increment to 4. The optimal performance is observed at , underscoring the advantages of iterative querying with LLM visionary and LLM evaluator for OUTDOOR tasks.\nNotably, the standard deviation is minimized around and increases at the extremes. The variance at can be attributed to relying solely on LLM\u2019s single output, while the increased uncertainty at suggests that excessive querying might introduce performance variability."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.4.2",
|
| 109 |
+
"parent_section_id": "5.4",
|
| 110 |
+
"section_name": "V-D2 What LLM model should we chose?",
|
| 111 |
+
"text": "We conducted an ablation study focusing on the model selection for LLM_Visionary and LLM_Evaluator, detailed in Table IV ###reference_###. The pairing of GPT3.5 for visionary predictions and GPT4 for node scoring emerges as an optimal choice, balancing both efficiency and performance. The combination of GPT4 with GPT4 was not explored due to the rate limit constraints of the OpenAI API, and can be explored in future work. Notably, even with GPT3.5 serving both LLM_Eval and LLM_Visionary roles, wall avoidance behavior is observed. This observation could provide valuable insights for future research aiming to utilize LLM as a local planner. Another observation is that the average scores GPT3.5 as evaluator gives is 1.86 points higher than GPT4, means that GPT3.5 is more optimistic and GPT4 is more conservative in scoring."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "VI Discussions",
|
| 117 |
+
"text": ""
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6.1",
|
| 121 |
+
"parent_section_id": "6",
|
| 122 |
+
"section_name": "VI-A Obstacle Avoidance Capability",
|
| 123 |
+
"text": "Based on our experimental observations, our method, Reasoned Explorer, excels at navigating around larger obstacles, such as walls. However, its limitation emerges when confronted with smaller objects. The ability to avoid these objects hinges on the graph\u2019s edge length and the precision of the perception model. We posit that with a more advanced perception model, which can accurately determine the relative positions of objects, the method holds potential to adeptly handle smaller obstacles as well.\n###figure_6###"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "6.2",
|
| 127 |
+
"parent_section_id": "6",
|
| 128 |
+
"section_name": "VI-B Current Limitations of VLM",
|
| 129 |
+
"text": "As illustrated in Figure 5 ###reference_###, there is a significant disparity between OSR and SR, especially with harder tasks. This divergence predominantly emerges because the VLM, upon achieving its goal, often fails to acknowledge it. This misrecognition subsequently leads the agent off target. Furthermore, the VLM has a propensity to hallucinate the positions of objects. In outdoor environments, it becomes particularly challenging to provide accurate descriptions of every object. Consequently, future work may consider the direct use of embeddings in lieu of caption data. Such observations point towards the potential of improving VLM\u2019s performance, ideally bridging the gap and enabling SR to align more closely with OSR."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6.3",
|
| 133 |
+
"parent_section_id": "6",
|
| 134 |
+
"section_name": "VI-C Observation Derived from CASR",
|
| 135 |
+
"text": "The CASR metric serves as both a metric for method comparison and a tool for understanding the balance between computational time and task efficiency. Key observations are:\nPositive Correlation with CT: An increase in CT leading to a higher CASR suggests that more computation can reduce travel time (TT), enhancing efficiency.\nNegative Correlation with CT: A drop in CASR with increased CT indicates diminishing returns from additional computation (e.g. saturation).\nSteady CASR despite CT Variation: A consistent CASR, despite varying CT, indicates a balance between computation and task time, so other factors, such as motion planning dominate."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "7",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "VII Conclusions",
|
| 141 |
+
"text": "The emergence of LLMs has opened new avenues in the embodied agent domain. This paper introduced the OUTDOOR task, a pioneering approach aimed at propelling embodied agents into challenging outdoor settings.\nFurther, we introduced a novel, general, mechanism for using LLMs to reason about robot plans in unseen environments,\nand\nproposed the CASR, the first metric to assess balance between reasoning and action for embodied agents.\nOur formulation more closely mirrors how humans navigate and explore, trading off between thinking and acting to both leverage what we know in general and can see in the specific."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [],
|
| 145 |
+
"tables": {
|
| 146 |
+
"1": {
|
| 147 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S5.T1.1.1.1.2\">SR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S5.T1.1.1.1.3\">OSR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S5.T1.1.1.1.4\">SPL</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S5.T1.1.1.1.5\">CASR</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T1.1.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.2\">L1</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.3\">L2</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.4\">L3</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.5\">L4</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.6\">Avg</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.7\">L1</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.8\">L2</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.9\">L3</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.10\">L4</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.11\">Avg</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" 
id=\"S5.T1.1.2.2.12\">L1</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.13\">L2</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.14\">L3</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.15\">L4</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.16\">Avg</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.17\">L1</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.18\">L2</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.19\">L3</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.20\">L4</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.1.2.2.21\">Avg</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.1.3.1.1\">LLM-as-Eval</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.2\">0.17</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.3\">0.09</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.4\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.5\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.3.1.6\">0.06</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.7\">0.17</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.8\">0.27</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.9\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.10\">0.00</td>\n<td class=\"ltx_td 
ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.3.1.11\">0.06</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.12\">0.13</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.13\">0.04</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.14\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.15\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.3.1.16\">0.04</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.17\">0.10</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.18\">0.06</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.19\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.20\">0.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.1.3.1.21\">0.04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.1.4.2.1\">LLM-MCTS(10 iter)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.2\">0.54</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.3\">0.43</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.4\">0.59</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.2.5.1\">0.33</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.6\">0.47</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.2.7.1\">0.88</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.8\">0.76</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.9\">0.63</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" 
id=\"S5.T1.1.4.2.10\">0.69</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.11\">0.74</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.12\">0.37</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.13\">0.31</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.14\">0.38</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.2.15.1\">0.23</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.16\">0.32</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.17\">0.11</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.18\">0.09</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.19\">0.11</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T1.1.4.2.20\">0.07</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T1.1.4.2.21\">0.10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.2.1\">0.59</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.3.1\">0.49</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.4.1\">0.63</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.5\">0.32</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.6\">\n<span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.1.5.3.6.1\">0.51</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.7.1\">0.88</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.8.1\">0.82</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.9.1\">0.71</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.10.1\">0.88</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.11\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.11.1\">0.82</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.12.1\">0.44</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.13.1\">0.32</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.14.1\">0.43</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.15\">0.22</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.16\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.16.1\">0.35</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.17.1\">0.51</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center 
ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.18\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.18.1\">0.42</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.19\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.19.1\">0.46</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.20\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.20.1\">0.28</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.5.3.21\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.21.1\">0.42</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Baseline Comparison for Different Task Levels in simulation (AirSim\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">shah2017airsim</span>]</cite></figcaption>\n</figure>",
|
| 148 |
+
"capture": "TABLE I: Baseline Comparison for Different Task Levels in simulation (AirSim\u00a0[shah2017airsim]"
|
| 149 |
+
},
|
| 150 |
+
"2": {
|
| 151 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T2.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S5.T2.1.1.1.2\">Simulation (Drone)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S5.T2.1.1.1.3\">Real World (Quadruped)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T2.1.2.2.1\"></th>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T2.1.2.2.2\">SR</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T2.1.2.2.3\">OSR</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T2.1.2.2.4\">CASR</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T2.1.2.2.5\">SR</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center\" id=\"S5.T2.1.2.2.6\">OSR</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.1.2.2.7\">CASR</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.3.3.1\">LLM-as-Eval</th>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.2\">0.06</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.3\">0.06</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.4\">0.04</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.5\">0.10</td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.6\">0.20</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.7\">0.04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row 
ltx_border_bb\" id=\"S5.T2.1.4.4.1\">Ours</th>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.2.1\">0.51</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.3.1\">0.82</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.4.1\">0.42</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.5.1\">0.60</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.6.1\">0.70</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T2.1.4.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.7.1\">0.24</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Simulation and Real World Performance.\n<br class=\"ltx_break\"/>CASR decreased in Real World due to use of a quadruped.</figcaption>\n</figure>",
|
| 152 |
+
"capture": "TABLE II: Simulation and Real World Performance.\nCASR decreased in Real World due to use of a quadruped."
|
| 153 |
+
},
|
| 154 |
+
"3": {
|
| 155 |
+
"table_html": "<figure class=\"ltx_table ltx_align_floatright\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T3.1.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.1.3\"><span class=\"ltx_text\" id=\"S5.T3.1.1.3.1\" style=\"font-size:90%;\">CASR</span></th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.1.1\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T3.1.2.1.1\"><span class=\"ltx_text\" id=\"S5.T3.1.2.1.1.1\" style=\"font-size:90%;\">L = 0</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.2.1.2\"><span class=\"ltx_text\" id=\"S5.T3.1.2.1.2.1\" style=\"font-size:90%;\">0.141</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T3.1.2.1.3\"><span class=\"ltx_text\" id=\"S5.T3.1.2.1.3.1\" style=\"font-size:90%;\">0.38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.3.2.1\"><span class=\"ltx_text\" id=\"S5.T3.1.3.2.1.1\" style=\"font-size:90%;\">L = 1</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.3.2.2\"><span class=\"ltx_text\" id=\"S5.T3.1.3.2.2.1\" style=\"font-size:90%;\">0.516</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T3.1.3.2.3\"><span class=\"ltx_text\" id=\"S5.T3.1.3.2.3.1\" style=\"font-size:90%;\">0.34</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.4.3.1\"><span class=\"ltx_text\" id=\"S5.T3.1.4.3.1.1\" style=\"font-size:90%;\">L = 2</span></th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S5.T3.1.4.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.4.3.2.1\" style=\"font-size:90%;\">0.732</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T3.1.4.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.4.3.3.1\" style=\"font-size:90%;\">0.05</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.5.4.1\"><span class=\"ltx_text\" id=\"S5.T3.1.5.4.1.1\" style=\"font-size:90%;\">L = 3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.4.2\"><span class=\"ltx_text\" id=\"S5.T3.1.5.4.2.1\" style=\"font-size:90%;\">0.644</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T3.1.5.4.3\"><span class=\"ltx_text\" id=\"S5.T3.1.5.4.3.1\" style=\"font-size:90%;\">0.07</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T3.1.6.5.1\"><span class=\"ltx_text\" id=\"S5.T3.1.6.5.1.1\" style=\"font-size:90%;\">L = 4</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.6.5.2\"><span class=\"ltx_text\" id=\"S5.T3.1.6.5.2.1\" style=\"font-size:90%;\">0.520</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T3.1.6.5.3\"><span class=\"ltx_text\" id=\"S5.T3.1.6.5.3.1\" style=\"font-size:90%;\">0.34</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Choice of </figcaption>\n</figure>",
|
| 156 |
+
"capture": "TABLE III: Choice of "
|
| 157 |
+
},
|
| 158 |
+
"4": {
|
| 159 |
+
"table_html": "<figure class=\"ltx_table ltx_align_floatright\" id=\"S5.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1.1\">\n<th class=\"ltx_td ltx_nopad_r ltx_align_left ltx_th ltx_th_column\" id=\"S5.T4.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T4.1.1.1.1.1\" style=\"font-size:90%;\">Vis</span></th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_left ltx_th ltx_th_column\" id=\"S5.T4.1.1.1.2\"><span class=\"ltx_text\" id=\"S5.T4.1.1.1.2.1\" style=\"font-size:90%;\">+ Eval</span></th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T4.1.1.1.3\"><span class=\"ltx_text\" id=\"S5.T4.1.1.1.3.1\" style=\"font-size:90%;\">CASR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.2.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_tt\" id=\"S5.T4.1.2.1.1\"><span class=\"ltx_text\" id=\"S5.T4.1.2.1.1.1\" style=\"font-size:90%;\">GPT3.5</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_left ltx_border_tt\" id=\"S5.T4.1.2.1.2\"><span class=\"ltx_text\" id=\"S5.T4.1.2.1.2.1\" style=\"font-size:90%;\">+ 3.5</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T4.1.2.1.3\"><span class=\"ltx_text\" id=\"S5.T4.1.2.1.3.1\" style=\"font-size:90%;\">0.449</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.3.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T4.1.3.2.1\"><span class=\"ltx_text\" id=\"S5.T4.1.3.2.1.1\" style=\"font-size:90%;\">GPT3.5</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_left\" id=\"S5.T4.1.3.2.2\"><span class=\"ltx_text\" id=\"S5.T4.1.3.2.2.1\" style=\"font-size:90%;\">+ 4</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T4.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.3.2.3.1\" 
style=\"font-size:90%;\">0.732</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.4.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T4.1.4.3.1\"><span class=\"ltx_text\" id=\"S5.T4.1.4.3.1.1\" style=\"font-size:90%;\">GPT4</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_align_left ltx_border_bb\" id=\"S5.T4.1.4.3.2\"><span class=\"ltx_text\" id=\"S5.T4.1.4.3.2.1\" style=\"font-size:90%;\">+ 4</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T4.1.4.3.3\"><span class=\"ltx_text\" id=\"S5.T4.1.4.3.3.1\" style=\"font-size:90%;\">-</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Choice of Model</figcaption>\n</figure>",
|
| 160 |
+
"capture": "TABLE IV: Choice of Model"
|
| 161 |
+
}
|
| 162 |
+
},
|
| 163 |
+
"image_paths": {
|
| 164 |
+
"1": {
|
| 165 |
+
"figure_path": "2309.10103v2_figure_1.png",
|
| 166 |
+
"caption": "Figure 1: Above are example queries at varying levels of complexity and a representative scene in our OUTDOOR task.",
|
| 167 |
+
"url": "http://arxiv.org/html/2309.10103v2/extracted/5894009/imgs/Figure_Task.png"
|
| 168 |
+
},
|
| 169 |
+
"2": {
|
| 170 |
+
"figure_path": "2309.10103v2_figure_2.png",
|
| 171 |
+
"caption": "Figure 2: The left image illustrates the expansion process where, at each step, N\ud835\udc41Nitalic_N nodes are expanded (with N=3\ud835\udc413N=3italic_N = 3 as depicted). The right image shows the agent\u2019s decision-making process with distance cost at each step.",
|
| 172 |
+
"url": "http://arxiv.org/html/2309.10103v2/extracted/5894009/imgs/Figure_3.png"
|
| 173 |
+
},
|
| 174 |
+
"3": {
|
| 175 |
+
"figure_path": "2309.10103v2_figure_3.png",
|
| 176 |
+
"caption": "Figure 3: The left image illustrates the expansion process where, at each step, N\ud835\udc41Nitalic_N nodes are expanded (with N=3\ud835\udc413N=3italic_N = 3 as depicted). The right image shows the agent\u2019s decision-making process with distance cost at each step.",
|
| 177 |
+
"url": "http://arxiv.org/html/2309.10103v2/extracted/5894009/imgs/Figure_RRT.png"
|
| 178 |
+
},
|
| 179 |
+
"4": {
|
| 180 |
+
"figure_path": "2309.10103v2_figure_4.png",
|
| 181 |
+
"caption": "Figure 4: Comparative Trajectories of LLM-as-Eval (left) and Reasoned Explorer (right). Green nodes represent the chosen path, while red nodes highlight the frontiers.",
|
| 182 |
+
"url": "http://arxiv.org/html/2309.10103v2/extracted/5894009/imgs/Figure_result.png"
|
| 183 |
+
},
|
| 184 |
+
"5": {
|
| 185 |
+
"figure_path": "2309.10103v2_figure_5.png",
|
| 186 |
+
"caption": "Figure 5: Comparison between SR and OSR for Reasoned Explorer",
|
| 187 |
+
"url": "http://arxiv.org/html/2309.10103v2/extracted/5894009/imgs/Figure_7.png"
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
"validation": true,
|
| 191 |
+
"references": [],
|
| 192 |
+
"url": "http://arxiv.org/html/2309.10103v2"
|
| 193 |
+
}
|
20241001/2310.03394v3.json
ADDED
|
@@ -0,0 +1,205 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Kinodynamic Motion Planning for a Team of Multirotors Transporting a Cable-Suspended Payload in Cluttered Environments",
|
| 3 |
+
"abstract": "We propose a motion planner for cable-driven payload transportation using multiple unmanned aerial vehicles (UAVs) in an environment cluttered with obstacles. Our planner is kinodynamic, i.e., it considers the full dynamics model of the transporting system including actuation constraints. Due to the high dimensionality of the planning problem, we use a hierarchical approach where we first solve for the geometric motion using a sampling-based method with a novel sampler, followed by constrained trajectory optimization that considers the full dynamics of the system. Both planning stages consider inter-robot and robot/obstacle collisions. We demonstrate in a software-in-the-loop simulation and real flight experiments that there is a significant benefit in kinodynamic motion planning for such payload transport systems with respect to payload tracking error and energy consumption compared to the standard methods of planning for the payload alone. Notably, we observe a significantly higher success rate in scenarios where the team formation changes are needed to move through tight spaces.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Uncrewed aerial vehicles (UAVs) are ideal for tasks that involve accessing remote locations, which makes them valuable collaborators in a variety of scenarios.\nCable-driven payload transportation using multiple UAVs is well suited for\ncollaborative assistance in construction sites such as carrying tools gabellieri2018study or transporting materials.\nThe field of control methods for payload transport systems has witnessed significant advancements. In particular, control algorithms lee2017geometric\nhave been devised to solve the transport problem with stability guarantees.\nHowever, they neglect inter-robot and robot/obstacle collisions.\nConversely, alternative methods employ a nonlinear optimization framework to account for inter-robot collisions, which require intricate online and on-board computations.\nA common limitation of the current methods for payload transport systems is that they assume a provided feasible reference trajectory that can be tracked.\nHowever, generating such reference trajectories for nonlinear, high-dimensional systems in cluttered environments is a recurring challenge and still an open problem.\nTraditionally, reference trajectories have been computed using either linear interpolation, planning for simplified dynamical models, or planning only for the payload.\nHowever, when the controller attempts to track these dynamically unfeasible trajectories in cluttered environments, it is likely to fail from either motor saturation or collisions caused by high tracking errors.\nThese effects are more notable if agile maneuvers are desired, or robots with low thrust-to-weight ratio are employed.\n###figure_1### To the best of our knowledge, there is no kinodynamic motion planner for cable-suspended payload transport.\nWe show that such a planner has significant advantages over traditional motion planning methods, as it is possible to construct feasible trajectories for the entire system\u2019s state, including not only the payload but 
also the UAVs and cable states. These trajectories account for inter-robot, robot/obstacle, cable/obstacle collisions, and the actuation limits of the motors.\nOverall, tracking such trajectories with existing controllers can lead to more reliable operation (success rate), higher predictability (lower tracking error), and lower energy consumption as flight time can be reduced.\nIn this paper, we extend our previous work wahba2023efficient by proposing an offline hierarchical kinodynamic motion planner (see Fig. 1 ###reference_###) to generate feasible reference trajectories to transport a point mass with multiple aerial multirotors in environments cluttered with obstacles.\nWe use an enhanced version of our prior geometric sampling-based motion planner wahba2023payloadplanning as an initial guess for a nonlinear optimizer. The optimizer generates feasible energy-efficient reference trajectories that take\ncollisions\ninto account.\nWe evaluate the planner by tracking reference trajectories using our highly-efficient controller wahba2023efficient and compare it using a realistic software-in-the-loop (SITL) simulation and several real flights with two baselines: the geometric cable-payload planner and the payload-only planning baseline. We consider three different environments, vary the number of robots and report key metrics such as energy efficiency, tracking accuracy, and success rate."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Work",
|
| 15 |
+
"text": "Control algorithms of the payload transport system include centralized controllers, employing a cascading reactive approach with stability guarantees lee2017geometric,lee2013geometric, six2017kinematics and decentralized controllers tagliabue2019robust,tognon2018aerial. There are still practical obstacles to overcome, including the requirement to measure noisy payload accelerations and considering inter-robot collisions.\nOptimization-based controllers can directly include some constraints. One approach involves using iterative gradient-based solvers, but they are susceptible to local minima jimenez2022precise, petitti2020inertial.\nNonlinear model predictive control (NMPC) offers an alternative for payload control and collision avoidance li2023nonlinear, sun2023nonlinear. However, its high computational costs and limited scalability with multiple robots make it unsuitable for resource-constrained microcontrollers. Moreover, these methods do not directly integrate the dynamic model of the multirotors or the actuation limits of their motors into their online optimization formulation.\nThey rely only on cable tension constraints and state bounds, which are only suitable for slow-varying trajectories.\nOur previous work wahba2023efficient, wahba2023payloadplanning combines advantages of prior work by leveraging QPs that can handle inter-robot, robot/obstacle and cable/cable collision constraints. We proposed a QP-force allocation geometric controller that is executed on compute-constrained multirotors in realtime efficiently.\nOther methods employ offline motion planners manubens2013motion, de2019flexible, zhang2023if for inter-robot and robot/obstacle collision avoidance. 
These planners use sampling-based or control-based approaches, but often overlook multirotor actuation limits, state bounds, and payload distribution, potentially suggesting impractical configurations for the controller.\nFrom a broader robotics perspective, kinodynamic motion planning can rely on search or sampling li2016asymptotically,pivtoraikoKinodynamicMotionPlanning2011,webb2012kinodynamic,\nhonig2022db. Yet, these methods scale exponentially with the dimensionality of the state space, and thus fail when planning for a cable-suspended payload system.\nIn contrast, constrained trajectory optimization methods Crocoddyl, malyutaConvexOptimizationTrajectory2022a, TrajOpt,howell2019altro,pardo2016evaluating, geng2022load\nare more suitable for planning in high dimensional state spaces, with polynomial complexity on the state size.\nKinodynamic motion planning using nonlinear optimization has shown great success in different robotics fields, from sequential manipulation planning toussaint2018differentiable\nto legged locomotion ponton2018time, carpentier2018multicontact,winkler2018gait\nand flying robots foehn2017fast,geisert2016trajectory.\nHowever, optimization methods require a good initial guess of the solution trajectory, as they can only optimize the trajectory locally.\nIn our method, we combine a novel geometric sampling-based motion planner for cable-suspended payload systems with multiple multirotors with subsequent nonlinear trajectory optimization for full kinodynamic planning, resulting in a state-of-the-art integrated system."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Background",
|
| 21 |
+
"text": "This section provides necessary background for the dynamic model and the used control design, see wahba2023efficient for details."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A System Description",
|
| 27 |
+
"text": "The dynamic model of the cable-suspended payload system with multiple multirotors can be described through Lagrangian mechanics lee2017geometric,masone2016cooperative,pereira2017control,tuci2018cooperative.\nConsider a team of multirotors transporting a cable-suspended payload.\nThe payload is a point mass with mass and the cables are massless rigid rods each with length , where .\nEach UAV is modeled as a floating rigid body with mass and diagonal moment of inertia .\nThe payload state is defined by the position and velocity vectors and . While the cable states are composed of the cable unit vector directed from the multirotor towards the payload, where , and the cable angular velocity .\nMoreover, the multirotor position and velocity state vectors are and with respect to the global frame of reference. As the cables are modeled rigidly, the position of each multirotor can be computed as\nThe attitude states of the i-th multirotor is comprised of the rotation matrix and body angular velocity .\nIn summary, the configuration manifold of the presented system is defined by , where the full system state is defined by\nThe output wrench of the i-th robot is defined by the collective thrust and the torques , where . This wrench vector is linearly related to the control input motor forces of the multirotor by , where is the actuation matrix and the motor force command is .\nThe control input vector of the full system is defined as\nand the system kinematics and dynamics are\nwhere is the acceleration of the payload, is the gravitational acceleration constant, and . denotes the skew-symmetric mapping ."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Controller Overview",
|
| 33 |
+
"text": "Our control design (see Fig. 1 ###reference_###) presents an efficient optimization-based cable force allocation of a geometric controller lee2017geometric for cable-suspended payload transportation that is aware of neighboring robots to avoid collisions.\nConsider the desired control forces that track the payload reference trajectory as .\nThis method reformulates the cable forces allocation optimization problem as two consecutive quadratic programs (QPs).\nThe QPs solve for the desired cable forces , taking into account the inter-robot collisions, and track .\nThus, is tracked by the i-th multirotor with a low-level controller lee2013geometric."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "IV Approach",
|
| 39 |
+
"text": ""
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-A Problem Statement",
|
| 45 |
+
"text": "Consider the system described in Section III-A ###reference_###. The state space vector is defined by (2 ###reference_###).\nLet be a sequence of states sampled at time and be a sequence of controls applied to the system for times , where is a small timestep and the controls are constant during this timestep.\nWe denote the start state as , the goal state as and the collision-free configuration space as , which accounts for collisions between the robots, payload, and the cables as well as collisions against the environment.\nThen our goal is to transport the payload from a start to a goal state in the optimal time , which can be framed as the following optimization problem\nwhere the cost function can be set to minimize and other task objectives (e.g., energy).\nThe function\n is the time-discretized version of the dynamic model of the system and the second constraint limits the control input within a feasible control space (e.g., actuator limits).\nThe third set of constraints ensures that the motion connects the start and the goal states.\nThe last constraint ensures a collision-free path for the full state while avoiding any cable entanglements."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.2",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-B Geometric Motion Planning (Offline)",
|
| 51 |
+
"text": "Given a start state and a goal state in an environment\nwith obstacles, we propose to use a sampling-based motion planner to plan a collision-free geometric path for the following state vector\nwhere and subscript is the reference trajectory.\nConsequently, is interpolated and the first order derivatives are computed by numerical differentiation to define the full reference trajectory that can be tracked by a controller or used as an initial guess for an optimizer."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2.1",
|
| 55 |
+
"parent_section_id": "4.2",
|
| 56 |
+
"section_name": "IV-B1 State Space Representation",
|
| 57 |
+
"text": "We propose to reduce the state space size directly by using a local parametrization for the unit vector with azimuth and elevation angles, such that\nwhere and .\nAs the payload is always beneath the robots, there are no singularities in this representation.\nThus, the reduced state vector can be represented by"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2.2",
|
| 61 |
+
"parent_section_id": "4.2",
|
| 62 |
+
"section_name": "IV-B2 Cost Function",
|
| 63 |
+
"text": "Given two consecutive states as and , we use a cost function that minimizes the integral of the energy between the two states. First, let us define the as the default cable direction to carry the payload statically.\nConsider the required force magnitude to carry a unit payload mass with respect to the static case (i.e., all cables point towards ) as\nwhere is the number of cables. We assume a trapezoidal energy profile between two consecutive states. Thus, the cost function is\nwhere is a weight, and are the travel distances of the payload position and each UAV, respectively.\nThe sampler will converge to the minimum cost solution over time.\nThe accepted samples by the collision checker ensure that the current formation distributes the load quasi-statically over the cables."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.3",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "IV-B3 Sampling Strategy",
|
| 69 |
+
"text": "Sampling uniformly and i.i.d. creates two major challenges: i) the probability of sampling a configuration that is collision-free without tangling of the cables decreases exponentially with , and ii) the number of cable permutations that result in the same relative formation grows factorial in .\nTo mitigate this curse of dimensionality we propose a custom sampler, see Algorithm 1 ###reference_thm1###.\nThe sampler uses a preprocessing step to compute a set of witness cable configurations (formations) that can be reached from the given valid start state .\nDuring the actual sampling-based search, we rely on these witness formations.\nFor the preprocessing we initialize the witness set with the initial state (Algorithm 1 ###reference_thm1###).\nFor subsequent witnesses, we randomly choose a base state from the set (Algorithm 1 ###reference_thm1###) and uniformly sample a formation (Algorithm 1 ###reference_thm1###).\nThen we solve an optimal assignment problem that minimizes the sum of the distances that the UAVs would have to move to change the formation from to (Algorithm 1 ###reference_thm1###).\nFor the cost, we extract the position of the UAVs as\nwhere is given in 7 ###reference_### and is the position of the payload at the initial state which is known to be collision-free.\nThe assignment problem can be solved optimally in polynomial time, for example by using the Hungarian Method (Algorithm 1 ###reference_thm1###).\nWe re-arrange the cables (Algorithm 1 ###reference_thm1###) and add the new witness if the whole formation change motion from is collision-free (Algorithm 1 ###reference_thm1###).\nIn the online search phase, we randomly pick a witness formation from the pre-computed set, add random noise to it, and augment it with a payload position uniformly drawn from the workspace (Algorithms 1 ###reference_thm1### to 1 ###reference_thm1###).\nFor the goal we only check if the payload reached the desired state. 
We use goal biasing and sample goal states in a similar fashion as in the online search (Algorithms 1 ###reference_thm1### to 1 ###reference_thm1###), except that the payload part is the user provided desired state rather than sampled as in Algorithm 1 ###reference_thm1###.\n###figure_2### ###figure_3### ###figure_4###"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.4",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "IV-B4 Reference Trajectory (Offline)",
|
| 75 |
+
"text": "Since the geometric planner generates only , the rest of the state vector in 2 ###reference_### needs to be recovered. The payload velocity vector is computed with numerical differentiation over small time steps. The is set to , since we found that the numerical differentiation of is a noisy signal. Finally, the attitude states of the i-th multirotor (, ) are set to , respectively, where is the identity matrix."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-C Nonlinear Trajectory Optimization (Offline)",
|
| 81 |
+
"text": "The objective of the optimization step is to solve the full kinodynamic motion planning problem using the output of the geometric planner as initial guess.\nWhile the geometric planner only plans for the payload and cable states, trajectory optimization considers the full state of the systems and the motor forces expressed in 2 ###reference_### and 3 ###reference_###.\nTo generate a valid initial guess, the geometric solution is interpolated with a small time discretization ( s), setting the velocity components to zero (which empirically results in a better initial guess) and using a default orientation for the robots. For the initial guess on the controls sequence, we use the constant motor forces that are required to hover each quadrotor individually.\nThe optimization problem is formulated as a nonlinear trajectory optimization\nThe decision variables are the sequence of controls ,\nthe sequence of states and the time-duration of the intervals in the time discretization. The number of steps is defined by the initial guess. The collision distance between the robots, payload, cables and environment is computed by the signed distance function .\nIn the dynamics constraints, the continuous dynamics are now discretized using Euler integration with a time interval subject to optimization. We use the notation to highlight that\nsome components of the state space lie on manifolds (e.g. quaternions or unit vectors).\nThe dynamics of the system is highly nonlinear, and the collision constraints define a non-convex feasible set,\nresulting in a very challenging nonlinear problem. To improve the robustness and success of trajectory optimization,\nand to ensure good local convergence towards a locally optimal trajectory, we combine three terms in the objective function.\nThe term minimizes the time duration of the trajectory, with a small acting as a proximal regularization. The term \nminimizes the control effort. 
The term minimizes the acceleration of the system and is required to generate smooth trajectories and to ensure a good convergence of the solver. The coefficients are used to weigh the three terms. Together, these three terms combine the original objective of time minimization with a regularization that provides smooth gradients for the nonlinear optimizer to converge successfully.\nThe nonlinear trajectory optimization problem is solved using Differential Dynamic Programming (DDP)\nCrocoddyl, li2004iterative, which ensures that the nonlinear dynamics constraints are always fulfilled. Goal constraints, states and control bounds are included with a squared penalty in the cost using the squared penalty method nocedal1999numerical."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.4",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-D QP-Force Allocation Geometric Controller (Online)",
|
| 87 |
+
"text": "As shown in Fig. 1 ###reference_###, the QP-force allocation geometric controller tracks the generated motion plans.\nHere, we demonstrate how to compute the required reference values of the controller from the motion planner output.\nThe formulation with QPs presented in wahba2023efficient requires computing the reference cable forces , that tracks the reference trajectory of the payload states and the reference cable formations .\nSpecifically, is derived from the reference trajectory and the reference motor forces using , where denotes the tension of the -th cable.\nThe tension is calculated from the reference motor forces as\nsuch that, and represent the reference acceleration and orientation of the -th multirotor, and is the collective thrust magnitude. Differentiating the position of the multirotor from 1 ###reference_### twice yields the acceleration of the -th multirotor as\nand represents the reference payload acceleration, and are determined using 4 ###reference_###.\nHowever, in the geometric case, we have and , see Section IV-B4 ###reference_.SSS4###.\nNote that is not generally and still computed using 4 ###reference_###.\nFor the control input, we assume that each multirotor is hovering, i.e., .\nAfter computing , the cost function in the QPs can be modified as\nwhere the first term minimizes the sum of norms of the cable forces.\nThe second term minimizes the difference between the desired and the reference cable forces with as a weighting factor.\nNote that, even though in the geometric case the resulting may violate the dynamics, it is added as a soft constraint in 15 ###reference_###, thus the controller will still compute feasible to track.\nFinally, the low-level controller of each multirotor computes the motor forces that achieve these desired cable forces , thus tracking the reference trajectory wahba2023efficient."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Experimental Results",
|
| 93 |
+
"text": "To validate the performance of our method, we provide several real flight and simulation experiments.\nIn particular, we implement the sampling-based motion planner using OMPL sucan2012open, a widely used C++ library and rely on RRT* as sampling motion planning algorithm.\nFor optimization we extend Dynoplan honig2022db, an optimization-based motion planner based on Crocoddyl Crocoddyl, to include the system dynamics defined in 4 ###reference_###.\nBoth geometric and optimization-based planners rely on FCL (Flexible Collision Library) FCL for collision checking.\nWe use sympy sympy to compute the analytical Jacobians of the dynamics and to generate efficient C code."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.1",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Simulation Results",
|
| 99 |
+
"text": "For validation, we use a software-in-the-loop simulation where the flight controller code wahba2023efficient that runs on Bitcraze Crazyflie 2.1 multirotors is directly executed, together with the physics model that we implement in Dynobench honig2022db.\nDue to the agile nature of multirotors, we use a small of for both optimization and simulation.\nAll results are validated to fulfill the constraints in 5 ###reference_###.\nWe test our motion planning approach on three different scenarios, see Fig. 2 ###reference_### and the supplemental video: obstacle-free (i.e., Empty), a random forest-like environment, and a window environment, where the payload is transported through a narrow passage between two columns. The gap between each column and the origin linearly increases from to as the number of robots increases.\nFor each scenario, we evaluate five different problems increasing the number of robots, from two to six robots.\nAll scenarios for the motion planning experiments were solved on a workstation (AMD Ryzen Threadripper PRO 5975WX @ 3.6 GHz, 64 GB RAM, Ubuntu 22.04) and repeated 10 times. The runtime of the geometric motion planner was limited to ."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.1.1",
|
| 103 |
+
"parent_section_id": "5.1",
|
| 104 |
+
"section_name": "V-A1 Baseline Comparison",
|
| 105 |
+
"text": "We consider three different approaches for payload transport:\nPayload uses geometric planning for the payload alone and the resulting trajectory is tracked using the specialized payload transport controller.\nGeom uses geometric planning for the payload, cables, and UAVs as described in Section IV-B ###reference_###. The same controller can then use the augmented cost function 15 ###reference_### to track both the cables and the payload, allowing formation changes to pass through passages.\nOpt uses the proposed pipeline for kinodynamic motion planning: We first generate a geometric plan for the cable-payload system,\n(Section IV-B ###reference_###)\nand then we use the nonlinear trajectory optimization to generate feasible trajectories\n(Section IV-C ###reference_###).\nWe analyze tracking error, energy usage, and success rate in 15 different settings. The results are shown in Table I ###reference_###.\nWe consider an execution a success if i) a reference trajectory is successfully computed, ii) no collisions occurred when the controller is tracking this reference trajectory, and iii) the controller reaches the goal.\nFor the empty environment, all algorithms are successful. 
In the forest environment, the simple payload approach results in collisions since the UAVs are ignored during planning and the planned path tends to pass too close to obstacles.\nHere, the geom approach works in the majority of the tested cases.\nFor the window case, only our kinodynamic planner is highly successful, since the geometric planner often produces desired cable states that cannot be tracked accurately once the kinodynamic constraints are considered.\n###figure_5### ###figure_6### ###figure_7### The average tracking error in all settings is significantly lower for our approach opt (up to 9 times lower than the geometric solution), since the full system states are considered.\nWe also compute the expected energy consumption of the flight using a model for the Bitcraze Crazyflie 2.X multirotors 111https://bitcraze.io/documentation/repository/crazyflie-firmware/master/functional-areas/pwm-to-thrust/ ###reference_ry/crazyflie-firmware/master/functional-areas/pwm-to-thrust/###.\nHere, the energy depends linearly on the flight time as well as the forces that the propellers create.\nNot surprisingly, the energy for payload and geom is almost identical, while our method, opt, achieves a reduction of around 10 %.\nHowever, the energy is reduced by almost 50 % with iterative optimization (see Section V-A3 ###reference_.SSS3###).\nMoreover, the energy usage increases almost linearly with the number of robots, which is not surprising as multirotors require substantial energy just to stay airborne.\n###figure_8### ###figure_9###"
},
{
"section_id": "5.1.2",
"parent_section_id": "5.1",
"section_name": "V-A2 Computational Effort",
"text": "We analyze the runtime of our two planning phases, geometric planning and optimization, separately. For the geometric planner, we show that our new sampling strategy (Algorithm 1 ###reference_thm1###) compared to the uniform sampling, significantly reduces the time until a low-cost solution is reached, see Fig. 4 ###reference_###. These results are more significant for the environments with more open space (empty and window). Moreover, our results show that the sampling strategy effectively reduces the standard deviation of the cost, making it more likely that we find a high-quality solution quickly.\nFor the optimization, we are interested in the scalability with the number of robots. The number of decision variables is linear in the time horizon and the number of robots.\nTheoretically, trajectory optimization using DDP scales cubically with the state dimensionality (i.e., the number of robots) and linearly with the time horizon. However, we observe that adding robots also results in more nonlinear iterations, increasing the overall computation time, see Fig. 3 ###reference_###."
},
{
"section_id": "5.1.3",
"parent_section_id": "5.1",
"section_name": "V-A3 Iterative Optimization (Offline)",
"text": "To minimize the trajectory duration, we solve a sequence of optimization problems for decreasing values of (IV-C ###reference_###) using the solutions of each problem as initial guess for the next one.\nExamples of this approach are apparent in sequential convex programming (SCP) malyutaConvexOptimizationTrajectory2022a as well as prior motion planning techniques for multirotors wolfgang2018.\nWe observe the benefits in our application, as shown in Fig. 3 ###reference_### for the forest environment with 4 robots, the estimated energy consumption of the team is reduced by almost 50 % just by repeating the optimization ten times."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Real Flights Results",
"text": ""
},
{
"section_id": "5.2.1",
"parent_section_id": "5.2",
"section_name": "V-B1 Physical Setup",
"text": "To provide concrete validation to our simulation results, we conduct several real flight tests. For the real platform, we use multirotors of type Bitcraze Crazyflie 2.1 (CF), where we use the same flight controller code wahba2023efficient running on-board as in SITL. These are small (9 cm rotor-to-rotor) and lightweight () products that are commercially available.\nController and extended Kalman filter for state estimation run directly on-board the STM32-based flight controller (168 MHz, 192 kB RAM). For all scenarios, we use dental floss as cables with length and the payload mass is . We use magnets to connect the cables and the payload/multirotors to be easily repaired. On the host side, we use Crazyswarm2, which is based on Crazyswarm preiss2017crazyswarm but uses ROS 2 macenski2022robot to control and send commands for multiple CFs.\nIn particular, we equip each multirotor and the payload with a single reflective marker for position tracking at using an OptiTrack motion capture system in a flight space."
},
{
"section_id": "5.2.2",
"parent_section_id": "5.2",
"section_name": "V-B2 Results",
"text": "We validate the functionality and the quality of our kinodynamic planner opt against geom through especially designed environments, where the state-of-the-art motion planners fail to generate feasible trajectories for the full system that can be tracked by the controller wahba2023efficient.\nAs shown in Fig. 5 ###reference_###, we test the experiments in the window and forest environments for two and three robots. Similar to our prior work wahba2023efficient, we only fly up to three robots\ndue to the computationally constrained microcontroller.\nWe can generate solutions within a few minutes, see Table II ###reference_###, where in most cases the optimization is the slower part. In the forest case with 3 robots, the geometric planning is computationally expensive due to a high obstacle density. Note that both window and forest examples are different from the simulation results to match our flight space constraints.\nAs in the simulation evaluation, we compare the tracking error and the energy usage for ten executed flights. Moreover, we compare the success rate of each method in tracking the reference trajectory to the goal state while avoiding inter-robot or robot/obstacle collisions. As shown in Table II ###reference_###, opt is succeeding in all cases.\nHowever, when tracking the plans generated by the geometric planner we observed failures either by the team crashing into obstacles or the whole team becoming unstable due to infeasibility of the provided reference.\nIn the 3 robot forest case, such tracking failures could even be observed in simulation, explaining the success rate."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Conclusion",
"text": "We propose a hierarchical kinodynamic motion planning algorithm for the cable-suspended payload transport system.\nOur method directly considers obstacles, inter-robot collisions, the full dynamics, the actuation limits and allows us to plan feasible reference trajectories that can be tracked accurately by our controller in realtime.\nWe compare our method with the state-of-the-art baselines in multiple simulation and real experiments.\nIn all cases, we achieve higher success rates and enhance the energy consumption of the executed motions, thereby maximizing effectiveness.\nIn the future, we would like to extend our method to the rigid payload and enable realtime planning with an obstacle-aware controller."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Simulation Results.\nShown are mean values for the tracking error and energy usage of the final solution over 50 runs with a timelimit of for the geometric planner. Standard deviation is small gray. Percentages are success rates.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.1\">Environment</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_rr\" id=\"S5.T1.3.1.1.2\">Metrics</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.3\">2 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.4\">3 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.5\">4 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.6\">5 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.3.1.1.7\">6 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T1.3.1.1.8\">success [%]</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.1\" rowspan=\"6\"><span class=\"ltx_text\" id=\"S5.T1.3.2.1.1.1\">empty</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.2.1.2\">Error payload [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.2.1.3.1\">0.02</span> <span class=\"ltx_text\" id=\"S5.T1.3.2.1.3.2\" 
style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.2.1.3.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.2.1.3.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.4\">0.04 <span class=\"ltx_text\" id=\"S5.T1.3.2.1.4.1\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.2.1.4.2\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.2.1.4.2.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.5\">0.02 <span class=\"ltx_text\" id=\"S5.T1.3.2.1.5.1\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.2.1.5.2\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.2.1.5.2.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.2.1.6.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.2.1.6.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text\" id=\"S5.T1.3.2.1.6.3\" style=\"font-size:50%;\">14 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.2.1.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.3.2.1.8\">63</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.3.2.1\">Error geom [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.2.2\">0.03 <span class=\"ltx_text\" id=\"S5.T1.3.3.2.2.1\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.3.2.2.2\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.3.2.2.2.1\"> %</span></span>\n</td>\n<td 
class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.2.3\">0.05 <span class=\"ltx_text\" id=\"S5.T1.3.3.2.3.1\" style=\"font-size:50%;color:#808080;\">0.03 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.3.2.3.2\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.3.2.3.2.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.2.4\">0.03 <span class=\"ltx_text\" id=\"S5.T1.3.3.2.4.1\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text\" id=\"S5.T1.3.3.2.4.2\" style=\"font-size:50%;\">98 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.2.5\">0.02 <span class=\"ltx_text\" id=\"S5.T1.3.3.2.5.1\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text\" id=\"S5.T1.3.3.2.5.2\" style=\"font-size:50%;\">40 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.3.2.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.3.2.7\">68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.4.3.1\">Error opt [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.4.3.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.2.1\">0.02</span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.2.2\" style=\"font-size:50%;color:#808080;\">0.03 </span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.2.3\" style=\"font-size:50%;\">98 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.4.3.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.3.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.3.2\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.3.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.4.3.3.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
id=\"S5.T1.3.4.3.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.4.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.4.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.4.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.4.3.4.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.4.3.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.5.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.5.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.5.3\" style=\"font-size:50%;\">98<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.4.3.5.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.4.3.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.6.1\">0.02</span> <span class=\"ltx_text\" id=\"S5.T1.3.4.3.6.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.4.3.6.3\" style=\"font-size:50%;\">98<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.4.3.6.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.4.3.7\">99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.5.4.1\">Energy payload [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.5.4.2\">0.05 <span class=\"ltx_text\" id=\"S5.T1.3.5.4.2.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.5.4.3\">0.08 <span class=\"ltx_text\" id=\"S5.T1.3.5.4.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.5.4.4\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.5.4.4.1\" 
style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.5.4.5\">0.12 <span class=\"ltx_text\" id=\"S5.T1.3.5.4.5.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.5.4.6\">\u2014</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.5.4.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.6.5.1\">Energy geom [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.6.5.2\">0.05 <span class=\"ltx_text\" id=\"S5.T1.3.6.5.2.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.6.5.3\">0.08 <span class=\"ltx_text\" id=\"S5.T1.3.6.5.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.6.5.4\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.6.5.4.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.6.5.5\">0.12 <span class=\"ltx_text\" id=\"S5.T1.3.6.5.5.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.6.5.6\">\u2014</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.6.5.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.7.6.1\">Energy opt [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.7.6.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.7.6.2.1\">0.04</span> <span class=\"ltx_text\" id=\"S5.T1.3.7.6.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.7.6.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.7.6.3.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.7.6.3.2\" 
style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.7.6.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.7.6.4.1\">0.07</span> <span class=\"ltx_text\" id=\"S5.T1.3.7.6.4.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.7.6.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.7.6.5.1\">0.11</span> <span class=\"ltx_text\" id=\"S5.T1.3.7.6.5.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.7.6.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.7.6.6.1\">0.13</span> <span class=\"ltx_text\" id=\"S5.T1.3.7.6.6.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.7.6.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.1\" rowspan=\"6\"><span class=\"ltx_text\" id=\"S5.T1.3.8.7.1.1\">forest</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.8.7.2\">Error payload [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.3\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.8.7.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.3.8.7.8\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.9.8.1\">Error geom [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.9.8.2\">0.08 <span class=\"ltx_text\" 
id=\"S5.T1.3.9.8.2.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T1.3.9.8.2.2\" style=\"font-size:50%;\">46 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.9.8.3\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.9.8.3.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T1.3.9.8.3.2\" style=\"font-size:50%;\">66 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.9.8.4\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.9.8.4.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T1.3.9.8.4.2\" style=\"font-size:50%;\">62 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.9.8.5\">0.11 <span class=\"ltx_text\" id=\"S5.T1.3.9.8.5.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T1.3.9.8.5.2\" style=\"font-size:50%;\">16 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.9.8.6\">0.20 <span class=\"ltx_text\" id=\"S5.T1.3.9.8.6.1\" style=\"font-size:50%;color:#808080;\">0.08 </span> <span class=\"ltx_text\" id=\"S5.T1.3.9.8.6.2\" style=\"font-size:50%;\">2 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.9.8.7\">38</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.10.9.1\">Error opt [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.10.9.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.2.1\">0.02</span> <span class=\"ltx_text\" id=\"S5.T1.3.10.9.2.2\" style=\"font-size:50%;color:#808080;\">0.04 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.2.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.10.9.2.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.10.9.3\">\n<span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.3.10.9.3.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.10.9.3.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.3.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.10.9.3.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.10.9.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.4.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.10.9.4.2\" style=\"font-size:50%;color:#808080;\">0.00 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.4.3\" style=\"font-size:50%;\">96<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.10.9.4.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.10.9.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.5.1\">0.03</span> <span class=\"ltx_text\" id=\"S5.T1.3.10.9.5.2\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.5.3\" style=\"font-size:50%;\">88<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.10.9.5.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.10.9.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.6.1\">0.03</span> <span class=\"ltx_text\" id=\"S5.T1.3.10.9.6.2\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.10.9.6.3\" style=\"font-size:50%;\">94<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.10.9.6.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.10.9.7\">96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.11.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.11.10.1\">Energy payload [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.11.10.2\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r 
ltx_border_t\" id=\"S5.T1.3.11.10.3\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.11.10.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.11.10.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.11.10.6\">\u2014</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.11.10.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.12.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.12.11.1\">Energy geom [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.12.11.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.12.11.2.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.12.11.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.12.11.3\">0.08 <span class=\"ltx_text\" id=\"S5.T1.3.12.11.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.12.11.4\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.12.11.4.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.12.11.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.12.11.5.1\">0.12</span> <span class=\"ltx_text\" id=\"S5.T1.3.12.11.5.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.12.11.6\">0.15 <span class=\"ltx_text\" id=\"S5.T1.3.12.11.6.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.12.11.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.13.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.13.12.1\">Energy opt [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.13.12.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.13.12.2.1\">0.05</span> <span class=\"ltx_text\" 
id=\"S5.T1.3.13.12.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.13.12.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.13.12.3.1\">0.07</span> <span class=\"ltx_text\" id=\"S5.T1.3.13.12.3.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.13.12.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.13.12.4.1\">0.09</span> <span class=\"ltx_text\" id=\"S5.T1.3.13.12.4.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.13.12.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.13.12.5.1\">0.12</span> <span class=\"ltx_text\" id=\"S5.T1.3.13.12.5.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.13.12.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.13.12.6.1\">0.14</span> <span class=\"ltx_text\" id=\"S5.T1.3.13.12.6.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.13.12.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.14.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.1\" rowspan=\"6\"><span class=\"ltx_text\" id=\"S5.T1.3.14.13.1.1\">window</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.14.13.2\">Error payload [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.3\">0.04 <span class=\"ltx_text\" id=\"S5.T1.3.14.13.3.1\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text\" id=\"S5.T1.3.14.13.3.2\" style=\"font-size:50%;\">4 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.4\">0.06 <span class=\"ltx_text\" id=\"S5.T1.3.14.13.4.1\" style=\"font-size:50%;color:#808080;\">0.04 </span> <span 
class=\"ltx_text\" id=\"S5.T1.3.14.13.4.2\" style=\"font-size:50%;\">12 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.5\">0.04 <span class=\"ltx_text\" id=\"S5.T1.3.14.13.5.1\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text\" id=\"S5.T1.3.14.13.5.2\" style=\"font-size:50%;\">2 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.14.13.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.3.14.13.8\">4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.15.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.15.14.1\">Error geom [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.15.14.2\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.15.14.2.1\" style=\"font-size:50%;color:#808080;\">0.09 </span> <span class=\"ltx_text\" id=\"S5.T1.3.15.14.2.2\" style=\"font-size:50%;\">34 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.15.14.3\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.15.14.3.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T1.3.15.14.3.2\" style=\"font-size:50%;\">14 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.15.14.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.15.14.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.15.14.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.15.14.7\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.16.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.16.15.1\">Error opt [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.16.15.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.2.1\">0.02</span> <span class=\"ltx_text\" 
id=\"S5.T1.3.16.15.2.2\" style=\"font-size:50%;color:#808080;\">0.02 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.2.3\" style=\"font-size:50%;\">98<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.16.15.2.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.16.15.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.3.1\">0.01</span> <span class=\"ltx_text\" id=\"S5.T1.3.16.15.3.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.3.3\" style=\"font-size:50%;\">100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.16.15.3.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.16.15.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.4.1\">0.02</span> <span class=\"ltx_text\" id=\"S5.T1.3.16.15.4.2\" style=\"font-size:50%;color:#808080;\">0.01 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.4.3\" style=\"font-size:50%;\">90<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.16.15.4.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.16.15.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.5.1\">0.04</span> <span class=\"ltx_text\" id=\"S5.T1.3.16.15.5.2\" style=\"font-size:50%;color:#808080;\">0.03 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.5.3\" style=\"font-size:50%;\">54<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.16.15.5.3.1\"> %</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.16.15.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.6.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.16.15.6.2\" style=\"font-size:50%;color:#808080;\">0.03 </span> <span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.16.15.6.3\" style=\"font-size:50%;\">56<span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.3.16.15.6.3.1\"> 
%</span></span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.16.15.7\">80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.17.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S5.T1.3.17.16.1\">Energy payload [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.17.16.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.17.16.2.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.17.16.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.17.16.3\">0.08 <span class=\"ltx_text\" id=\"S5.T1.3.17.16.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.17.16.4\">0.10 <span class=\"ltx_text\" id=\"S5.T1.3.17.16.4.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.17.16.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.3.17.16.6\">\u2014</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.17.16.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.18.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.18.17.1\">Energy geom [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.18.17.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.18.17.2.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.18.17.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.18.17.3\">0.08 <span class=\"ltx_text\" id=\"S5.T1.3.18.17.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.18.17.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.18.17.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
id=\"S5.T1.3.18.17.6\">\u2014</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.18.17.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.19.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S5.T1.3.19.18.1\">Energy opt [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.19.18.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.19.18.2.1\">0.05</span> <span class=\"ltx_text\" id=\"S5.T1.3.19.18.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.19.18.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.19.18.3.1\">0.07</span> <span class=\"ltx_text\" id=\"S5.T1.3.19.18.3.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.19.18.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.19.18.4.1\">0.09</span> <span class=\"ltx_text\" id=\"S5.T1.3.19.18.4.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.19.18.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.19.18.5.1\">0.13</span> <span class=\"ltx_text\" id=\"S5.T1.3.19.18.5.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.3.19.18.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.19.18.6.1\">0.15</span> <span class=\"ltx_text\" id=\"S5.T1.3.19.18.6.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n<td class=\"ltx_td\" id=\"S5.T1.3.19.18.7\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE I: Simulation Results.\nShown are mean values for the tracking error and energy usage of the final solution over 50 runs with a timelimit of for the geometric planner. Standard deviation is small gray. Percentages are success rates.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Real Flights Results.\nShown are mean values for the tracking error, energy usage and the planning time of each approach of 10 flight experiments executed on-board of the CFs to track the reference trajectories. Standard deviation is small gray. Percentages are success rates.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.1.1.1.1\">Env.</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.1.1.1.2\">Metric</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.1.1.1.3\">2 robots</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S5.T2.1.1.1.4\">3 robots</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.1\" rowspan=\"6\"><span class=\"ltx_text\" id=\"S5.T2.1.2.1.1.1\">window</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.2\">Error geom [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.1.3\">0.062 <span class=\"ltx_text\" id=\"S5.T2.1.2.1.3.1\" style=\"font-size:50%;color:#808080;\">0.04 </span> <span class=\"ltx_text\" id=\"S5.T2.1.2.1.3.2\" style=\"font-size:50%;\">80 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.1.4\">0.121 <span class=\"ltx_text\" id=\"S5.T2.1.2.1.4.1\" style=\"font-size:50%;color:#808080;\">0.06 </span> <span class=\"ltx_text\" id=\"S5.T2.1.2.1.4.2\" style=\"font-size:50%;\">60 %</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<td 
class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.3.2.1\">Error opt [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.3.2.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.3.2.2.1\">0.056</span> <span class=\"ltx_text\" id=\"S5.T2.1.3.2.2.2\" style=\"font-size:50%;color:#808080;\">0.03 </span> <span class=\"ltx_text\" id=\"S5.T2.1.3.2.2.3\" style=\"font-size:50%;\">100 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.3.2.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.3.2.3.1\">0.085</span> <span class=\"ltx_text\" id=\"S5.T2.1.3.2.3.2\" style=\"font-size:50%;color:#808080;\">0.04 </span> <span class=\"ltx_text\" id=\"S5.T2.1.3.2.3.3\" style=\"font-size:50%;\">100 %</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.4.3.1\">Energy geom [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.4.3.2\">0.022 <span class=\"ltx_text\" id=\"S5.T2.1.4.3.2.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.4.3.3\">0.027 <span class=\"ltx_text\" id=\"S5.T2.1.4.3.3.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.5.4.1\">Energy opt [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.5.4.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.5.4.2.1\">0.016</span> <span class=\"ltx_text\" id=\"S5.T2.1.5.4.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.4.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.5.4.3.1\">0.020</span> <span class=\"ltx_text\" id=\"S5.T2.1.5.4.3.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6.5\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.6.5.1\">Plan time geom [s]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.6.5.2\">0.4 <span class=\"ltx_text\" id=\"S5.T2.1.6.5.2.1\" style=\"font-size:50%;color:#808080;\">0.42</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.6.5.3\">0.1 <span class=\"ltx_text\" id=\"S5.T2.1.6.5.3.1\" style=\"font-size:50%;color:#808080;\">0.05</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.7.6.1\">Plan time opt [s]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.7.6.2\">26.8 <span class=\"ltx_text\" id=\"S5.T2.1.7.6.2.1\" style=\"font-size:50%;color:#808080;\">9.45</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.7.6.3\">46.0 <span class=\"ltx_text\" id=\"S5.T2.1.7.6.3.1\" style=\"font-size:50%;color:#808080;\">12.5</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.1\" rowspan=\"6\"><span class=\"ltx_text\" id=\"S5.T2.1.8.7.1.1\">forest</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.2\">Error geom [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.8.7.3\">0.092 <span class=\"ltx_text\" id=\"S5.T2.1.8.7.3.1\" style=\"font-size:50%;color:#808080;\">0.08 </span> <span class=\"ltx_text\" id=\"S5.T2.1.8.7.3.2\" style=\"font-size:50%;\">60 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.8.7.4\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.9.8.1\">Error opt [m]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.9.8.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.9.8.2.1\">0.057</span> <span class=\"ltx_text\" id=\"S5.T2.1.9.8.2.2\" 
style=\"font-size:50%;color:#808080;\">0.04 </span> <span class=\"ltx_text\" id=\"S5.T2.1.9.8.2.3\" style=\"font-size:50%;\">100 %</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.9.8.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.9.8.3.1\">0.118</span> <span class=\"ltx_text\" id=\"S5.T2.1.9.8.3.2\" style=\"font-size:50%;color:#808080;\">0.05 </span> <span class=\"ltx_text\" id=\"S5.T2.1.9.8.3.3\" style=\"font-size:50%;\">100 %</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.10.9.1\">Energy geom [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.10.9.2\">0.017 <span class=\"ltx_text\" id=\"S5.T2.1.10.9.2.1\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.10.9.3\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.11.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.11.10.1\">Energy opt [Wh]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.11.10.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.11.10.2.1\">0.012</span> <span class=\"ltx_text\" id=\"S5.T2.1.11.10.2.2\" style=\"font-size:50%;color:#808080;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.11.10.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.11.10.3.1\">0.024</span> <span class=\"ltx_text\" id=\"S5.T2.1.11.10.3.2\" style=\"font-size:50%;color:#808080;\">0.01</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.12.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.12.11.1\">Plan. 
time geom [s]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.1.12.11.2\">0.3 <span class=\"ltx_text\" id=\"S5.T2.1.12.11.2.1\" style=\"font-size:50%;color:#808080;\">0.43</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.12.11.3\">168.4 <span class=\"ltx_text\" id=\"S5.T2.1.12.11.3.1\" style=\"font-size:50%;color:#808080;\">105.6</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.13.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.13.12.1\">Plan. time opt [s]</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.1.13.12.2\">22.2 <span class=\"ltx_text\" id=\"S5.T2.1.13.12.2.1\" style=\"font-size:50%;color:#808080;\">3.71</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.13.12.3\">57.8 <span class=\"ltx_text\" id=\"S5.T2.1.13.12.3.1\" style=\"font-size:50%;color:#808080;\">16.13</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE II: Real Flights Results.\nShown are mean values for the tracking error, energy usage and the planning time of each approach of 10 flight experiments executed on-board of the CFs to track the reference trajectories. Standard deviation is small gray. Percentages are success rates.\n"
}
},
"image_paths": {
"1": {
"figure_path": "2310.03394v3_figure_1.png",
"caption": "Figure 1: Highlighted in the red box is our full kinodynamic motion planning algorithm. The geometric output of a sampling-based motion planner is used to initialize an optimizer, which generates the full feasible reference trajectory of the payload \ud835\udc290rsubscript\ud835\udc29subscript0\ud835\udc5f\\mathbf{p}_{0_{r}}bold_p start_POSTSUBSCRIPT 0 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_POSTSUBSCRIPT and the cable states \ud835\udc2airsubscript\ud835\udc2asubscript\ud835\udc56\ud835\udc5f\\mathbf{q}_{i_{r}}bold_q start_POSTSUBSCRIPT italic_i start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_POSTSUBSCRIPT. One can also use our sampling-based motion planner and compute the first order derivatives of the geometric states (\ud835\udc29\u02d90r,\ud835\udf4eir)subscript\u02d9\ud835\udc29subscript0\ud835\udc5fsubscript\ud835\udf4esubscript\ud835\udc56\ud835\udc5f(\\dot{\\mathbf{p}}_{0_{r}},\\boldsymbol{\\omega}_{i_{r}})( over\u02d9 start_ARG bold_p end_ARG start_POSTSUBSCRIPT 0 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_POSTSUBSCRIPT , bold_italic_\u03c9 start_POSTSUBSCRIPT italic_i start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_POSTSUBSCRIPT ) to provide a reference trajectory. Each reference trajectory is then tracked by our controller wahba2023efficient.",
"url": "http://arxiv.org/html/2310.03394v3/x1.png"
},
"2(a)": {
"figure_path": "2310.03394v3_figure_2(a).png",
"caption": "Figure 2: Validation scenarios. From left to right: empty (3 robots), forest (4 robots), window (5 robots).\nGreen UAVs (left on each picture) show the initial state, red UAVs the desired state. The red line represents the reference trajectory, and the white line is the tracked trajectory by our controller.\nFor window, the obstacles necessitate a formation change to pass through a narrow passage.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/env_empty.png"
},
"2(b)": {
"figure_path": "2310.03394v3_figure_2(b).png",
"caption": "Figure 2: Validation scenarios. From left to right: empty (3 robots), forest (4 robots), window (5 robots).\nGreen UAVs (left on each picture) show the initial state, red UAVs the desired state. The red line represents the reference trajectory, and the white line is the tracked trajectory by our controller.\nFor window, the obstacles necessitate a formation change to pass through a narrow passage.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/env_forest.png"
},
"2(c)": {
"figure_path": "2310.03394v3_figure_2(c).png",
"caption": "Figure 2: Validation scenarios. From left to right: empty (3 robots), forest (4 robots), window (5 robots).\nGreen UAVs (left on each picture) show the initial state, red UAVs the desired state. The red line represents the reference trajectory, and the white line is the tracked trajectory by our controller.\nFor window, the obstacles necessitate a formation change to pass through a narrow passage.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/env_window.png"
},
"3(a)": {
"figure_path": "2310.03394v3_figure_3(a).png",
"caption": "Figure 3: Left: Computational effort in seconds for the optimization to compute a solution in the forest environment over different numbers of robots. Right: Solution quality (in terms of energy) when sequentially solving the kinodynamic optimization over multiple iterations.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/whiskers_forest_50trials.png"
},
"3(b)": {
"figure_path": "2310.03394v3_figure_3(b).png",
"caption": "Figure 3: Left: Computational effort in seconds for the optimization to compute a solution in the forest environment over different numbers of robots. Right: Solution quality (in terms of energy) when sequentially solving the kinodynamic optimization over multiple iterations.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/time_energy_plots_min_EDIT.png"
},
"4": {
"figure_path": "2310.03394v3_figure_4.png",
"caption": "Figure 4: Examples for the sampling-based geometric planner using different environment and sampling strategies. The plot shows the mean and standard deviation (shaded) for cost convergence over runtime (log-scale), if the success rate is over 50%percent5050\\%50 % (50 trials).",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/plot1_50trials.png"
},
"5(a)": {
"figure_path": "2310.03394v3_figure_5(a).png",
"caption": "Figure 5: Real flights validation scenarios. left: forest (3 robots), right: window (2 robots). The payload in both scenarios is modeled as a 10 gtimes10g10\\text{\\,}\\mathrm{g}start_ARG 10 end_ARG start_ARG times end_ARG start_ARG roman_g end_ARG point mass.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/3cfs_forest_edited.png"
},
"5(b)": {
"figure_path": "2310.03394v3_figure_5(b).png",
"caption": "Figure 5: Real flights validation scenarios. left: forest (3 robots), right: window (2 robots). The payload in both scenarios is modeled as a 10 gtimes10g10\\text{\\,}\\mathrm{g}start_ARG 10 end_ARG start_ARG times end_ARG start_ARG roman_g end_ARG point mass.",
"url": "http://arxiv.org/html/2310.03394v3/extracted/5892651/figs/2cfs_window.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2310.03394v3"
}
20241001/2310.04922v4.json
ADDED
@@ -0,0 +1,213 @@
{
"title": "Robust Multivariate Detection and Estimation with Fault Frequency Content Information",
"abstract": "This paper studies the problem of fault detection and estimation (FDE) for linear time-invariant (LTI) systems with a particular focus on frequency content information of faults, possibly as multiple disjoint continuum ranges, and under both disturbances and stochastic noise.\nTo ensure the worst-case fault sensitivity in the considered frequency ranges and mitigate the effects of disturbances and noise, an optimization framework incorporating a mixed performance index is developed to compute the optimal detection filter.\nMoreover, a thresholding rule is proposed to guarantee both the false alarm rate (FAR) and the fault detection rate (FDR).\nNext, shifting attention to fault estimation in specific frequency ranges, an exact reformulation of the optimal estimation filter design using the restricted performance index is derived, which is inherently non-convex. However, focusing on finite frequency samples and fixed poles, a lower bound is established via a highly tractable quadratic programming (QP) problem.\nThis lower bound together with an alternating optimization (AO) approach to the original estimation problem leads to a suboptimality gap for the overall estimation filter design.\nThe effectiveness of the proposed approaches is validated through applications of a non-minimum phase hydraulic turbine system and a multi-area power system.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "1. Introduction",
"text": "Fault diagnosis has been the focus of research in the past decades due to its critical importance in ensuring the safety and reliability of various engineering systems, such as power networks, vehicle dynamics, and aircraft systems [1 ###reference_b1###, 2 ###reference_b2###].\nTimely and accurate FDE of faults while a system is still operating in a controllable condition, can help prevent further damage and reduce losses.\nHowever, FDE performance is inevitably affected in practice by model uncertainties, disturbances, and stochastic noise, which can result in false alarms, missing detection, and large estimation errors. Hence, it is essential to consider these interferences when designing FDE methods.\nIn recent years, there also has been growing recognition of the need to address faults in specific frequency ranges.\nThis stems from the fact that many practical faults (or cyber-attack signals [3 ###reference_b3###]) exhibit distinct frequency characteristics, e.g., incipient faults in low-frequency ranges and actuator stuck faults with zero frequency [4 ###reference_b4###].\nExisting FDE methods developed for the entire frequency range can cause conservatism when dealing with these faults.\nMotivated by the above issues, this study focuses on the FDE problem in specific frequency ranges, considering both disturbances and stochastic noise.\nFault detection:\nA number of model-based fault detection methods have been developed for dynamical systems with disturbances and noise.\nThe basic idea is to design residual generators using observer-based or parity-space approaches [2 ###reference_b2###].\nThe outputs of residual generators (called residuals), that are used to indicate the occurrence of faults, should be sensitive to faults and robust to disturbances and noise, simultaneously.\nTo this end, performance indices, such as and norms are employed to measure the robustness against disturbances and noise.\nThe index, representing the worst-case fault 
sensitivity, is incorporated into the design of residual generators.\nFor instance, the authors in [5 ###reference_b5###] first proposed the observer.\nAnother residual generation method [6 ###reference_b6###] developed in the framework of differential-algebraic equations (DAE) has attracted attention these years.\nThis method can find residual generators of the possibly lowest order compared to conventional observer-based or parity-space approaches.\nMoreover, it offers much design freedom due to the ability to characterize all possible residual generators for systems represented by DAE.\nAs a result, different fault detection methods have been developed in the DAE framework, such as accounting for nonlinear terms [7 ###reference_b7###] and modeling uncertainties [8 ###reference_b8###].\nNote that the above methods all consider the entire frequency range, where conservatism exists and the index will be zero for strictly proper systems.\nThe authors in [9 ###reference_b9###] addressed this issue by introducing a weighting function to enhance the index in a specific frequency range, and further provided the existing condition of a non-zero index.\nHowever, finding an appropriate weighting function is complex.\nIn contrast, the generalized Kalman-Yakubovich-Popov (GKYP) lemma in [10 ###reference_b10###] provides a way to directly constrain the index in a frequency range.\nBased on the GKYP lemma, the authors in [4 ###reference_b4###] employed the index to design a Luenberger observer for fault detection of LTI systems with enhanced fault sensitivity in a specific frequency range.\nFurthermore, the integration of index and the GKYP lemma has been incorporated into the design of fault detection approaches for linear parameter-varying descriptor systems [11 ###reference_b11###] and nonlinear systems [12 ###reference_b12###].\nConsidering that the norm representing the peak value of a signal is more suitable for residual evaluation compared to the norm, the authors in 
[13 ###reference_b13###] chose the index to design the fault detection observer for linear descriptor systems.\nFor a more comprehensive analysis of different indices used in fault detection problems, such as , , and , see [14 ###reference_b14###].\nIn addition, the index has been investigated in the time domain as well, where fault sensitivity in a finite or infinite time horizon is maximized, see for example [15 ###reference_b15###, 16 ###reference_b16###].\nIt is worth mentioning that the aforementioned methods using the and norms typically consider disturbances or noise with bounded energy or peak values, which results in conservative diagnosis results. Moreover, the deterministic bounds are generally difficult to obtain in practical scenarios [17 ###reference_b17###].\nTherefore, exploiting the stochastic nature of these signals can be a promising alternative.\nMoreover, to our knowledge, little attention has been paid to designing residual generators for fault detection within specific frequency ranges, accounting for both disturbances and stochastic noise.\nFault estimation:\nAccurate fault estimation that provides the size and shape of faults is a fundamental task in the fault diagnosis area.\nMany model-based fault estimation methods are based on observers [18 ###reference_b18###, 19 ###reference_b19###], which generally require fault signals to be finitely differentiable.\nDifferent from observer-based methods, fault estimation filters do not require estimates of system states and assumptions regarding the derivatives of fault signals, such as the system-inversion-based fault estimation filters developed in [20 ###reference_b20###].\nHowever, the existence of a stable system-inversion-based estimation filter cannot be ensured when there are unstable zeros (i.e., in non-minimum-phase systems).\nAnother approach to designing fault estimation filters is directly minimizing the difference between the transfer function of the fault subsystem and the identity 
matrix in the optimization framework, as presented in [21 ###reference_b21###].\nOnce again, the above estimation methods are for the entire frequency range.\nThe existing methods for fault estimation in the frequency domain are primarily built on observer-based methods and the GKYP lemma.\nThe authors in [22 ###reference_b22###] designed a fault estimation observer for LTI systems, where the norm defined in a specific frequency range was employed to mitigate the effects of disturbances and faults on estimation errors.\nThe result was then extended and applied to Takagi\u2013Sugeno fuzzy systems [23 ###reference_b23###] and descriptor systems [24 ###reference_b24###].\nHowever, the design of fault estimation filters considering fault frequency content information has received considerably less attention.\nTo the best of our knowledge, only [25 ###reference_b25###] and [26 ###reference_b26###] investigated this problem.\nIn particular, the authors in [25 ###reference_b25###, Theorem 14.6] incorporated a weighting function into the optimization framework to improve fault estimation performance in a specific frequency range. 
However, as mentioned before, the selection process of a proper weighting function is complex.\nThe recent result [26 ###reference_b26###] designed the fault estimation filter represented by a rational matrix with constant inertia in the frequency region to attenuate disturbances, but it only considered fault estimation in the steady-state.\nTherefore, developing a tractable design method for fault estimation filters in the frequency domain capable of dealing with disturbances, stochastic noise, and a broader class of faults is meaningful.\nMain contributions:\nIn view of the existing results mentioned above, this study pioneers the design of FDE filters exploiting fault frequency content information in the DAE framework.\nCompared to the existing results focusing on FDE in the frequency domain, the proposed design framework offers the following key features:\n(i) it can deal with disturbances and stochastic noise and does not require assumptions on the derivatives of fault signals, thus applicable to a larger class of fault diagnosis problems;\n(ii) it produces FDE filters of the possibly lowest order compared to observer-based methods;\n(iii) it offers design flexibility by allowing for residuals of arbitrary dimensions and enabling the simultaneous design of both the numerator and denominator of FDE filters, while other fault diagnosis methods developed within the DAE framework typically design one-dimensional residuals with fixed denominators [7 ###reference_b7###, 8 ###reference_b8###];\n(iv) the design of FDE filters, which considers fault frequency content spanning multiple disjoint continuum ranges, is formulated into a unified optimization framework using the GKYP lemma. 
This approach significantly simplifies the design process of FDE filters in the frequency domain.\nNote that the derived optimization problems for filter design are inherently non-convex, for which an efficient approach is developed to approximate a suboptimal solution along with explicit performance bounds.\nThe contributions of this paper are summarized as follows:\nOptimal detection with fault frequency content:\nThe design of the fault detection filter, utilizing index in the DAE framework, is formulated as a finite optimization problem (Theorem 3.1 ###reference_Thm1###). This enables the derived filter to handle disturbances and stochastic noise while enhancing fault sensitivity across the set of disjoint continuum frequency ranges.\nThresholding with false alarm rate and fault detection rate guarantees:\nA thresholding rule that provides guarantees on FAR and FDR (Theorem 3.6 ###reference_Thm6###) is developed, which improves the current literature (e.g., [17 ###reference_b17###, 27 ###reference_b27###]) by extending the setting to multivariate residuals and ensuring FAR and FDR simultaneously.\nOptimal estimation with fault frequency content:\nShifting attention from detection to estimation, the index is replaced with the \u201crestricted\u201d norm in specific frequency ranges.\nThe fault estimation filter design is then reformulated in the DAE framework as a finite optimization problem (Theorem 4.1 ###reference_Thm1###).\nIn contrast to the existing estimation results that focus on faults represented by either step signals [28 ###reference_b28###] or polynomials [19 ###reference_b19###], this study considers a larger class of faults with frequency content containing multiple disjoint continuum ranges.\nConvex approximation with suboptimality gap:\nBy relaxing frequency ranges to finitely many samples, the estimation problem is lower bounded by a QP problem (Theorem 4.2 ###reference_Thm2###), whose solution can be approximated by a closed-form formula
(Corollary 4.3 ###reference_Thm3###).\nCombining this with an AO approach to the original estimation problem yields a suboptimality gap for the overall design with given fixed filter poles (Proposition 4.4 ###reference_Thm4###).\nThe rest of the paper is organized as follows.\nThe problem formulation is introduced in Section 2 ###reference_###. Section 3 ###reference_### presents design methods for the fault detection filter and the thresholding rule.\nIn Section 4 ###reference_###, design methods for the fault estimation filter and the derivation of the suboptimality gap are developed.\nTo improve the flow of the paper and its accessibility, some technical proofs are relegated to Section 5 ###reference_###.\nThe proposed approaches are applied to a non-minimum phase system and a multi-area power system in Section 6 ###reference_### to demonstrate their effectiveness.\nFinally, Section 7 ###reference_### concludes the paper with future directions."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "2. Model Description and Problem Statement",
"text": "Consider the following discrete-time LTI system\nwhere , , , and are the state, control input, disturbance, and measurement output, respectively.\nThe signal denotes the independent and identically distributed (i.i.d.) white noise with zero mean.\nThe signal denotes the fault.\nSystem matrices in (1 ###reference_###) are all known with appropriate dimensions. Throughout this study, our filter design is restricted to a subclass of fault signals with the following frequency content.\nThe fault signal frequency content, also referred to as the signal spectrum, is the union of the disjoint intervals where and for all .\nIn other words, the fault signal can be fully characterized in the frequency domain via where is the Discrete-Time Fourier Transform.\nThis class of fault signals is denoted by .\nThe objective of this work is to design filters that can detect and estimate faults with frequency content through the control input and the measurement .\nTo this end, we consider filters in the DAE framework and introduce the time-shift operator , i.e., .\nThen, the state-space model (1 ###reference_###) is transformed into the DAE format\nwhere is the unknown initial condition, the polynomial matrices , , and are given by\nGiven the DAE format of the system, the filter is defined as\nwhere is the residual, is a polynomial matrix with coefficients and degree .\nThe denominator is , where and is the degree of with to ensure that the filter is strictly proper. 
Note that the parameters of , i.e., and , are the filter variables to be determined.\nMultiplying (2 ###reference_###) from the left side by , the residual becomes\nwhere .\nThe right-hand side of (4 ###reference_###) indicates the input-output relations from , and to , based on which one can design such that desired mapping relations are satisfied for different diagnosis purposes.\nSubsequently, for the sake of exposition, these mapping relations are denoted as\nThe contribution of the initial condition, i.e., the last term in (4 ###reference_###), vanishes exponentially fast under appropriate stability conditions.\nAssumption 2.2 ###reference_Thm2### is commonly adopted in fault detection literature [29 ###reference_b29###, 30 ###reference_b30###].\nNext, the two problems studied in this work are presented, including (i) fault detection (Section 2.1 ###reference_###), and (ii) fault estimation (Section 2.2 ###reference_###)."
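As a minimal illustration of how a filter of the form (3) maps measured signals to a residual, the sketch below implements a SISO residual generator r(k) = (N_y(q) y(k) + N_u(q) u(k)) / a(q) as a difference equation. The function name and all coefficient values are hypothetical; the paper's actual design is multivariate, with the numerator coefficients found by optimization and the filter strictly proper.

```python
def dae_residual(num_y, num_u, den, y, u):
    """Run a SISO residual generator r(k) = (N_y(q) y + N_u(q) u) / a(q).

    num_y, num_u: numerator coefficients [b_0, b_1, ...] acting on y(k), y(k-1), ...
    den:          denominator [a_0, a_1, ..., a_d] acting on r(k), r(k-1), ...
    """
    r = []
    for k in range(len(y)):
        acc = sum(b * y[k - i] for i, b in enumerate(num_y) if k - i >= 0)
        acc += sum(b * u[k - i] for i, b in enumerate(num_u) if k - i >= 0)
        acc -= sum(a * r[k - i] for i, a in enumerate(den) if i >= 1 and k - i >= 0)
        r.append(acc / den[0])
    return r
```

With a stable denominator, the contribution of unknown initial conditions decays geometrically, consistent with the remark after (4).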
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "2.1. Problem 1: Fault detection",
"text": "In order to formally introduce the fault detection problem, the norm and index of a transfer function, e.g., , are introduced as follows.\nAssume is stable. The norm of is defined as\nand corresponds to the asymptotic variance of the output when the system is driven by the white noise with zero mean.\nThe index of in a single continuum frequency range is defined as\nwhich can also be rewritten as with denoting the minimum singular value.\nLet us look into the right-hand side of (4 ###reference_###). For fault detection problem, the residual is expected to be insensitive to , robust to , and sensitive to in . First, to decouple from , it needs to guarantee that\nIn view of the desired mapping conditions (5 ###reference_###), the design of the fault detection filter is formulated as the following optimization problem.\nConsider the system (1 ###reference_###), the filter to be designed in (3 ###reference_###), and the expression of the residual (4 ###reference_###). Given a scalar , find via the minimization program:\nThe following assumption is introduced to guarantee the feasibility of Problem 1a.\nThe pair is observable. 
For with and , the following rank condition holds\nDenote the transfer functions from to and to by and , respectively.\nIt readily follows\nif Assumption 2.5 ###reference_Thm5### holds [25 ###reference_b25###, Theorem 6.2].\nTherefore, Assumption 2.5 ###reference_Thm5### ensures simultaneously the following: (i) the disturbance can be decoupled, and (ii) the fault satisfies input observability condition in , which also indicates that there are no unstable invariant zeros in .\nThe second term is necessary for a nonzero index [9 ###reference_b9###, Lemma 5].\nNote that the fault frequency content information is incorporated into the analysis, which is derived from the classical result on the input observability condition in [32 ###reference_b32###, Theorem 3] and [25 ###reference_b25###, Corollary 14.1].\nAdditionally, a solution to Problem 1a ensures that the residual can be written as\nwhere no dependency on is present because it is decoupled.\nIn practice, the residual will oscillate around zero as a response to the noise in the absence of . In contrast, the residual will ideally be away from zero when a fault happens.\nSubsequently, let us take the average -norm of over a time interval as the evaluation function, i.e.,\nwhere . 
Given a threshold , the following fault detection logic is introduced:\nNote that false alarms and missed detections of faults are inevitable due to the random nature of noise.\nTo tackle these issues, a threshold that can provide guarantees on FAR and FDR is considered in the following problem.\nGiven the fault detection filter constructed from Problem 1a, an acceptable FAR , and a set of fault signals of interest , determine the threshold such that:\nwhere is the lower bound on FDR to be computed.\nThere are fewer results in the literature on FDR computation because different elements of multivariate fault signals may cancel out each other\u2019s contributions to the residual [3 ###reference_b3###].\nAs a result, there is no guarantee that FDR even exists.\nBy assuming that a set of faults is detectable, the authors in [25 ###reference_b25###, Section 12.1] propose a method for computing FDR in the norm-based framework.\nIn this work, the index is employed to ensure fault sensitivity, which paves the way for FDR computation in a stochastic setting."
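The evaluation function (6) and the detection logic above can be sketched as a sliding-window root-mean-square of the residual followed by a threshold test. The function names below are hypothetical; the threshold would come from the rule derived in Section 3.2.

```python
import math

def evaluate_residual(residual, window):
    """J(k): root-mean-square 2-norm of the (vector) residual over a sliding window."""
    J = []
    for k in range(window - 1, len(residual)):
        chunk = residual[k - window + 1 : k + 1]
        J.append(math.sqrt(sum(sum(x * x for x in r) for r in chunk) / window))
    return J

def detect_faults(J, threshold):
    """Alarm whenever the evaluation function exceeds the threshold."""
    return [j > threshold for j in J]
```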
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "2.2. Problem 2: Fault estimation",
"text": "In certain scenarios, it becomes essential not just to identify the occurrence of faults, but also to estimate them precisely.\nFor instance, incorporating fault estimates into fault-tolerant controllers is a common practice to counteract the effects of faults [33 ###reference_b33###].\nHere, to ensure that the residual follows fault signals within , a stable in (3 ###reference_###) is determined such that the subsequent relation holds\nwhere is an upper bound.\nThe estimation condition (8 ###reference_###) is consistent with the format of the restricted norm in a specific frequency range.\nThe restricted norm of a transfer function in a single continuum frequency range is defined as\nwhich can also be rewritten as \nwith denoting the maximum singular value.\nAs a result, based on Definition 2.7 ###reference_Thm7###, the condition (8 ###reference_###) can be equivalently written as\nAs shown in (9 ###reference_###), the transfer function is designed to approximate the identity matrix over , so that can be viewed as an estimate of if is sufficiently close to .\nThis is different from the system-inversion-based estimation approaches [35 ###reference_b35###, 29 ###reference_b29###] which require (known as the perfect estimation condition).\nWe would like to point out that the perfect estimation condition is demanding and generally impossible to achieve because it contains infinite equality constraints, especially when there are disturbances, noise, or unstable zeros.\nWith the condition (9 ###reference_###), our second problem is to design the fault estimation filter through the following optimization problem, where conditions (5a ###reference_1###) and (5b ###reference_2###) are maintained to address and , respectively.\nConsider the system (1 ###reference_###), the filter to be designed in (3 ###reference_###), and the expression of the residual (4 ###reference_###). 
Given a scalar , find via the minimization program:\nThe condition (9 ###reference_###) for fault estimation is more stringent than the condition (5c ###reference_3###) used for fault detection.\nIn particular, it suffices to let the minimum singular value of be positive for fault detection, whereas needs to be as close to as possible to obtain satisfactory estimation performance.\nAdditionally, filters that satisfy condition (9 ###reference_###) with a sufficiently small norm can provide a positive index, but the converse is not true."
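The restricted norm of Definition 2.7 can be approximated numerically for a SISO transfer function by gridding the frequency band and taking the largest magnitude (in the SISO case the maximum singular value reduces to the modulus); the estimation condition (9) then corresponds to the largest distance from 1 over the band. A sketch with hypothetical names and a uniform grid:

```python
import cmath

def freq_resp(num, den, w):
    """H(e^{jw}) for H(z) = (num[0] + num[1] z^{-1} + ...) / (den[0] + den[1] z^{-1} + ...)."""
    zinv = cmath.exp(-1j * w)
    n = sum(c * zinv ** k for k, c in enumerate(num))
    d = sum(c * zinv ** k for k, c in enumerate(den))
    return n / d

def restricted_gain(num, den, w_lo, w_hi, n_grid=400):
    """Largest |H| over [w_lo, w_hi]: the restricted H-infinity norm (SISO case)."""
    ws = [w_lo + (w_hi - w_lo) * i / (n_grid - 1) for i in range(n_grid)]
    return max(abs(freq_resp(num, den, w)) for w in ws)

def estimation_gap(num, den, w_lo, w_hi, n_grid=400):
    """Largest |H(e^{jw}) - 1| over the band: how far T_f is from the identity."""
    ws = [w_lo + (w_hi - w_lo) * i / (n_grid - 1) for i in range(n_grid)]
    return max(abs(freq_resp(num, den, w) - 1.0) for w in ws)
```

Gridding only lower-bounds the supremum; the paper's design instead enforces the bound exactly via the GKYP lemma.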
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Fault Detection: Optimal Design and Thresholding",
"text": "This section presents design methods for the fault detection filter and the thresholding rule that provides guarantees on FAR and FDR.\nTo improve the clarity of presentation, some proofs are relegated to Section 5 ###reference_###."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "3.1. Fault detection filter design",
"text": "Let us start by considering to be designed in (3 ###reference_###).\nIn , the degrees , , the residual dimension , and coefficients of and are all design parameters. For simplicity, let and be fixed, and set throughout the subsequent analysis.\nTo compute the norm and index, the mapping relations and are represented in the observable canonical forms denoted by and , respectively.\nLet denote the -th row of for and .\nThen, the matrices , and are given by\nNote that the parameters and to be determined are reformulated into , , and .\nAn advantage of such a transformation is that all the design parameters are decoupled from each other.\nThis allows us to exactly formulate the design of the fault detection filter into a bilinear optimization problem as stated in the following theorem.\nConsider the system (1 ###reference_###), the structure of the filter (3 ###reference_###), and the state-space realizations and .\nGiven the degree , , the dimension of the residual , a scalar , a sufficiently small , and the fault frequency content information , the minimization program in Problem 1a can be equivalently stated as follows\nwhere for each frequency range , the variables and with \nand .\nThe proof is relegated to Section 5.1 ###reference_###.\n\u220e\nTheorem 3.1 ###reference_Thm1### builds on the celebrated GKYP lemma [10 ###reference_b10###], which provides three reformulations depending on the desired frequency regimes (low, middle, and high-frequency; see also Lemma 5.1 ###reference_Thm1### in the proof section). 
It is worth noting that the assertion of Theorem 3.1 ###reference_Thm1### leverages only the middle-frequency part of this lemma, as it covers all the cases required in this study.\nIn addition, note that the optimization problem (11 ###reference_###) is nonlinear because of the bilinear terms in (11b ###reference_.2###), and , and their transpose in (11c ###reference_.3###).\nTo tackle this issue, the AO method is employed, which divides the decision variables in the bilinear terms into two sets and then optimizes over the two sets of variables alternately.\nOne way of division is\nwhere serves as the iteration indicator.\nThe initial values for the optimization process are derived as follows. Initially, a stable denominator, denoted by with coefficients , is chosen.\nNext, the coefficients of , i.e., , are determined by solving equation (11a ###reference_.1###) subject to the constraint to avoid the trivial solution.\nSubsequently, the initial values of and are found via (11b ###reference_.2###) and (11c ###reference_.3###), respectively.\nWith these preparations completed, the AO process can be initiated to solve for the filter.\nThe whole procedure is summarized in Algorithm 1 ###reference_###.\nSet , fault frequency ranges , the iteration indicator , and select a stable denominator\nCompute via (11a ###reference_.1###) with\nCompute and via (11b ###reference_.2###) and (11c ###reference_.3###), respectively\nSelect , a sufficiently small\nWhile , do\nWith and , compute and by solving (11 ###reference_###) over\nWith and , compute and by solving (11 ###reference_###) over\nSet\nReturn final results and\nWhen using the GKYP lemma to deal with condition (5c ###reference_3###), an auxiliary matrix is introduced to obtain the matrix inequalities in (11c ###reference_.3###).\nDifferent from previous results where is predefined [4 ###reference_b4###, 13 ###reference_b13###, 16 ###reference_b16###], it is treated as a decision variable here.\nThis is motivated by the 
potentially large number of parameters that must be determined in for systems of large scale or dimension.\nImproper selection of can result in poor indices or even render the constraints infeasible.\nMoreover, using relaxation techniques, e.g., [36 ###reference_b36###, Lemma 1], to transform (11c ###reference_.3###) into linear matrix inequalities easily leads to infeasible problems because multiple constraints restrict the feasible solution set. Therefore, the bilinear terms are retained and addressed using the AO approach.\n###figure_1### The proposed design approach enables the fault detection filter to have residuals of arbitrary dimensions.\nCompared to the results [7 ###reference_b7###, 3 ###reference_b3###, 8 ###reference_b8###] also developed in the DAE framework, which generate only one-dimensional residuals, our approach remedies two deficiencies:\nConsider a two-dimensional residual depicted in Fig. 1 ###reference_### as an example.\nThe filters in [7 ###reference_b7###, 3 ###reference_b3###, 8 ###reference_b8###] cannot detect faults that lie on the same hyperplane as the disturbance, i.e., .\nBy considering the two-dimensional residual, faults that can bypass detection only exist at the intersection of two hyperplanes. This means that our approach reduces the size of the set containing undetectable faults;\nAs indicated in [3 ###reference_b3###], different elements of faults may cancel out each other\u2019s contributions to the one-dimensional residual. Our approach circumvents this issue by ensuring fault sensitivity with a positive index."
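The AO scheme of Algorithm 1 alternates exact minimization over two sets of variables, so each subproblem is convex and the objective value is nonincreasing. The toy sketch below demonstrates only this alternation mechanism on a scalar bilinear problem with closed-form subproblems; it is not the LMI problem (11), which would require an SDP solver, and all names are hypothetical.

```python
def alternating_optimization(y0=1.0, iters=30):
    """Minimize f(x, y) = (x*y - 6)^2 + (x - 2)^2 by alternating exact
    minimization: each subproblem is quadratic (hence convex) in one
    variable while the other is frozen, mirroring the AO loop of Algorithm 1."""
    f = lambda x, y: (x * y - 6.0) ** 2 + (x - 2.0) ** 2
    x, y = 0.0, y0
    history = []
    for _ in range(iters):
        x = (6.0 * y + 2.0) / (y * y + 1.0)  # argmin over x with y fixed
        y = 6.0 / x                          # argmin over y with x fixed (x != 0)
        history.append(f(x, y))
    return x, y, history
```

Because each step is an exact minimization, the recorded objective values decrease monotonically, which is the same property that makes the AO iterations in Algorithm 1 well behaved.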
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "3.2. Thresholding rule",
"text": "With the fault detection filter constructed by solving the optimization problem (11 ###reference_###) and the residual evaluation function defined in (6 ###reference_###), the next is to determine the threshold which provides probabilistic guarantees on FAR and FDR as outlined in Problem 1b.\nTo proceed, let us first introduce the following lemma and assumption to be used hereafter.\nLet be subject to a sub-Gaussian distribution with mean and parameter , i.e.,\nwhere .\nThen, the following inequality holds\nThe measurement noise follows the i.i.d. sub-Gaussian distribution with zero mean and a time-invariant parameter .\nThe class of sub-Gaussian distributions is board, containing Gaussian, Bernoulli, and all bounded distributions. Also, the tails of sub-Gaussian distributions decrease exponentially fast from (13 ###reference_###), which is expected in many applications.\nGiven an acceptable FAR, the following theorem provides the determination method of the threshold and FDR.\nSuppose Assumption 3.5 ###reference_Thm5### holds. Consider the system (1 ###reference_###), the evaluation function in (6 ###reference_###), the fault detection filter obtained by solving (11 ###reference_###) with the derived values and , and faults of interest . Given an acceptable FAR , the probabilistic performance (7a ###reference_1###) in Problem 1b is achieved if the threshold is set as\nand, when , FDR in (7b ###reference_2###) satisfies\nThe proof is relegated to Section 5.1 ###reference_###.\n\u220e\nFrom the concentration property of sub-Gaussian distributions, the threshold in (14 ###reference_###) depends logarithmically on FAR, i.e.,. This improves the state-of-the-art results (e.g., [17 ###reference_b17###] and [25 ###reference_b25###, Section 10.2.1]), which rely on Chebyshev\u2019s inequality and result in thresholds that scale polynomially with . 
The threshold (14 ###reference_###) also extends our previous work [27 ###reference_b27###, Theorem 3.8] where the one-dimensional residual is considered. In addition, a lower bound for is derived to ensure that FDR can be achieved in (15 ###reference_###)."
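The logarithmic dependence of the threshold on the FAR can be illustrated by comparing the sub-Gaussian tail bound of Lemma 3.4 with Chebyshev's inequality. The functions below are simplified scalar stand-ins for (14) (the actual threshold also involves the filter's norm and the window length), with hypothetical names.

```python
import math

def subgaussian_threshold(sigma, far):
    """Threshold from the tail bound P(|X| > t) <= 2*exp(-t^2 / (2*sigma^2)):
    scales with sqrt(log(1/far))."""
    return sigma * math.sqrt(2.0 * math.log(2.0 / far))

def chebyshev_threshold(sigma, far):
    """Threshold from Chebyshev's inequality P(|X| > t) <= sigma^2 / t^2:
    scales polynomially, as 1/sqrt(far)."""
    return sigma / math.sqrt(far)
```

Tightening the FAR from 1e-4 to 1e-8 barely moves the sub-Gaussian threshold, while the Chebyshev-based threshold grows by two orders of magnitude.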
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Fault Estimation: Optimal Design and Suboptimality Gap",
"text": "This section presents design methods for the fault estimation filter and the derivation process of a suboptimality gap for the original estimation problem.\nTo improve the clarity of presentation, some proofs are relegated to Section 5 ###reference_###."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "4.1. Fault estimation filter design",
"text": "The formulation of the fault estimation filter is provided in (3 ###reference_###).\nBased on the desired mapping relations outlined in Problem 2, the design of the filter is formulated into a bilinear optimization problem in the following theorem.\nConsider the system (1 ###reference_###), the structure of the filter (3 ###reference_###), and the state-space realizations and . Given the filter order , , the dimension of residual , a scalar , a sufficiently small , and the fault frequency content information , the minimization program in Problem 2 can be equivalently stated as follows\nwhere for each frequency range , the variables , with and .\nIt is proved in Theorem 3.1 ###reference_Thm1### that (11a ###reference_.1###) and (11b ###reference_.2###) are equivalent to conditions (5a ###reference_1###) and (5b ###reference_2###), respectively.\nTo demonstrate the equivalence between constraints (4.1 ###reference_###) and conditions (9 ###reference_###), the state-space realization of is derived as .\nBy setting the matrix and using in Lemma 5.1 ###reference_Thm1###, the equivalence between (4.1 ###reference_###) and (9 ###reference_###) is established.\nThe proof procedure of the equivalence is similar to that of (11c ###reference_.3###) in the proof of Theorem 3.1 ###reference_Thm1###. This completes the proof.\n\u220e\nThe optimization problem in Theorem 4.1 ###reference_Thm1### can be solved using Algorithm 1 ###reference_### as well.\nHowever, the key to achieving satisfactory estimation results is to ensure that is sufficiently small.\nThis usually requires several iteration steps with Algorithm 1 ###reference_### and results in heavy computational loads when dealing with large-scale systems."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "4.2. Convex approximation with suboptimality gap",
"text": "To reduce the computational complexity, the estimation condition (9 ###reference_###) is relaxed by letting approximate the identity matrix at selected finite frequency points instead of considering all frequencies, i.e.,\nwhere . The relaxed version of Problem 2 is derived as follows.\nConsider the system (1 ###reference_###), the filter to be designed in (3 ###reference_###), and the expression of the residual (4 ###reference_###). Given a scalar , find via the minimization program:\nBefore presenting the solution to Problem 2r, let us make some clarifications on .\nFor simplicity, the poles of the filter are fixed. Specifically, roots of are selected inside the unit disk and the order is set as , so that the fault estimation filter is stable and strictly proper.\nThus, the coefficient matrices for become the only parameters to be determined.\nFor clarity, by using the multiplication rule of polynomial matrices [7 ###reference_b7###, Lemma 4.2], the transfer functions and outlined in (4 ###reference_###) are written as\nwhere\nSubsequently, the design method of the fault estimation filter with relaxed conditions depicted in Problem 2r is provided in the following theorem.\nConsider the system (1 ###reference_###), the structure of the filter (3 ###reference_###), and the reformulations of and in (18 ###reference_###). Given the order , the dimension , the stable denominator with , frequency points , and the weight , the optimization problem in Problem 2r can be reformulated as the following QP problem:\nwhere and are the real and imaginary parts of , respectively, and .\nThe proof is relegated to Section 5.2 ###reference_###.\n\u220e\nCompared to (4.1 ###reference_###), the design of the fault estimation filter presented in Problem 2r stands out for its integration of more lenient conditions, as expounded in reference (19 ###reference_###). 
Notably, this design is computationally tractable, owing to its formulation as a QP problem.\nIn addition, an approximate analytical solution to (19 ###reference_###) is given as follows.\nConsider the QP problem in (19 ###reference_###) with the norm replaced by the Frobenius norm. An approximate analytical solution to (19 ###reference_###) is:\nwhere denotes the pseudo-inverse.\nThe proof is relegated to Section 5.2 ###reference_###.\n\u220e\nIt is worth mentioning that, for a filter with given poles (fixed denominator ), a suboptimality gap for the original estimation problem stated in Problem 2 can be obtained by solving the optimization problems in Theorem 4.1 ###reference_Thm1### and Theorem 4.2 ###reference_Thm2###.\nThis result is presented in Proposition 4.4 ###reference_Thm4###.\nTo enhance readability, let us denote the optimal value of the objective function in Problem 2 as with a given denominator , i.e.,\nFurthermore, let and denote the results obtained by solving the optimization problem (4.1 ###reference_###) using the AO approach.\nLet and denote the optimal values obtained by solving the optimization problem (19 ###reference_###).\nSubsequently, the suboptimality gap for Problem 2 is presented in the next proposition.\nGiven a stable denominator , the optimal value of the objective function in Problem 2 is bounded by\nThe proof is relegated to Section 5.2 ###reference_###.\n\u220e\nIn contrast to the lower bound, which is obtained immediately by solving the optimization problem (19 ###reference_###), the upper bound derived through the AO approach generally requires multiple iterations. This can incur a substantial computational burden unless the initial value is chosen judiciously. Fortunately, the solution of the more lenient design problem in Theorem 4.2 ###reference_Thm2### can serve as the starting point. 
This initial solution provides a solid starting point for refining the upper bound in (4.1 ###reference_###) via the AO approach. The entire process is summarized in Algorithm 2 ###reference_###.\nSelect , , and a stable denominator\nSelect frequency points uniformly from the frequency range and the weight\nCompute the matrix , , and for\nFind the numerator and the bounds and by solving (19 ###reference_###)\nOutput the lower bound:\nSet as the initial condition and fix for (4.1 ###reference_###)\nOptimize the numerator by solving (4.1 ###reference_###) with the AO approach, and obtain and\nOutput the upper bound:\nThis section is closed with the following remarks on the proposed design approaches to fault estimation filters.\nThere is a trade-off between decoupling the unknown signals (consisting of the unknown state and disturbance ), suppressing the noise , and estimating the fault in (4.1 ###reference_###) and (19 ###reference_###). First, the feasible solutions to (4.1 ###reference_###) and (19 ###reference_###) lie in the left null space of , which restricts the choice of .\nSecond, increasing improves the noise suppression capability of the filter. 
However, it reduces the estimation performance and vice versa.\nThe trade-offs can, therefore, be used as a guide for selecting appropriate weights.\nWhen using the AO approach to solve the bilinear optimization problems stated in Theorem 3.1 ###reference_Thm1### and Theorem 4.1 ###reference_Thm1###, it is essential to partition the decision variables in the bilinear terms into two sets, namely and .\nWe observe that, for different optimization problems, the choice of decision variable sets greatly influences the convergence speed of the AO approach.\nIn particular, when solving the optimization problem (4.1 ###reference_###), if the decision variable sets are selected without overlap, i.e., and , it leads to a more efficient solution compared to the partition in (12 ###reference_###).\nFor non-minimum phase systems, it is reported that the optimal distance between and in the framework is [25 ###reference_b25###, Theorem 14.5], i.e., , which indicates that satisfactory fault estimation over the whole frequency range is not achievable.\nOur methods proposed in Theorem 4.1 ###reference_Thm1### and Theorem 4.2 ###reference_Thm2### can improve the estimation performance by limiting the frequency ranges.\nThis assertion is substantiated by the simulation results.\nFor disturbances that cannot be completely decoupled, and supposing that the knowledge of disturbance frequency content is available, the restricted norm can be employed to limit their impact on residuals.\nIt is observed from off-line exhaustive simulations that expanding the frequency range of disturbances does not significantly affect fault sensitivity, while the ability to suppress disturbances degrades.\nThe conservatism of the fault estimation filter design method is summarized as follows:\nTo reduce computational complexity, a selective approach is adopted for the design of fault estimation filters in (19 ###reference_###), where constraints are imposed on only a subset of 
frequency points in .\nAs a result, the estimation performance at the other frequency points in may not be guaranteed.\nHowever, as demonstrated by simulation results, the degradation of estimation performance at those points is minor.\nFor simplicity, the denominator of the transfer function is fixed in the optimization problem (19 ###reference_###), which restricts the design freedom. However, including the simultaneous design of both and would result in a much more complex optimization problem, which might not be computationally tractable."
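A SISO sketch of the relaxed, frequency-sampled design of Theorem 4.2: the numerator coefficients are fit by least squares so that N(e^{jω})/a(e^{jω}) approximates 1 at selected frequency points, stacking real and imaginary parts as in (19) and solving the normal equations directly (in the spirit of the pseudo-inverse solution of Corollary 4.3). All names are hypothetical, and the multivariate decoupling constraint (19a) is omitted.

```python
import cmath

def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_estimator_numerator(den, order, freqs):
    """Least-squares fit of numerator coefficients n_0..n_order so that
    N(e^{jw})/A(e^{jw}) ~ 1 at the sampled frequencies: real and imaginary
    parts are stacked (as in the QP reformulation) and the normal
    equations are solved directly."""
    rows, rhs = [], []
    for w in freqs:
        A = sum(a * cmath.exp(-1j * w * k) for k, a in enumerate(den))
        basis = [cmath.exp(-1j * w * k) / A for k in range(order + 1)]
        rows.append([b.real for b in basis]); rhs.append(1.0)  # Re ~ 1
        rows.append([b.imag for b in basis]); rhs.append(0.0)  # Im ~ 0
    m = order + 1
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(m)]
    return solve_linear(AtA, Atb)
```

When an exact interpolant exists (e.g., the numerator can reproduce the denominator), the least-squares fit recovers it; otherwise the fit balances the errors across the sampled points, which is exactly the conservatism discussed above.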
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "5. Technical Proofs of Main Results",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "5.1. Proofs of results in fault detection",
"text": "The following two lemmas are required for the proof of Theorem 3.1 ###reference_Thm1###.\nConsider a transfer function defined as . Given a symmetric matrix and a frequency range , the following statements are equivalent:\nThe inequality holds in the frequency range\nThere exist Hermitian matrices and with appropriate dimensions and such that\nwhere the following hold: \na. For the low frequency range , ; \nb. For the middle frequency range , , where and ; \nc. For the high frequency range , .\nFor matrices and with appropriate dimensions, the following statements are equivalent:\n, where denote the matrix satisfying ;\nThere exists a matrix such that .\nFirst, according to the multiplication rule of polynomial matrices [7 ###reference_b7###, Lemma 4.2], the constraint (11a ###reference_.1###) implies , which means that is completely decoupled from . Thus, (5a ###reference_1###) is satisfied.\nSecond, from the expression of in (4 ###reference_###), the transfer function from to is when (11a ###reference_.1###) is satisfied, and its state-space realization is denoted as .\nAccording to the classical result on norm [39 ###reference_b39###, Lemma 1], the equivalence between (11b ###reference_.2###) and (5b ###reference_2###) can be obtained directly.\nIn the last part of the proof, the equivalence between (11c ###reference_.3###) and the mapping relation (5c ###reference_3###) for a single frequency range is established.\nAccording to Lemma 5.2 ###reference_Thm2###, the first matrix inequality in (11c ###reference_.3###) is equivalent to\nwhere and . 
The above inequality can be expanded into\nRecall that the transfer function from to , denoted by , has a state-space realization given by .\nAccording to the middle-frequency case in Lemma 5.1 ###reference_Thm1###, the last equation of (22 ###reference_###) is equivalent to\nThus, it holds that for .\nThis completes the proof.\n\u220e\nThe following lemma is introduced to prove Theorem 3.6 ###reference_Thm6###.\nLet be the transfer function from to . If follows the i.i.d. sub-Gaussian distribution with zero mean and parameter , the signal is also sub-Gaussian with zero mean and the respective parameter .\nLet us first show that the given FAR is guaranteed if is determined by (14 ###reference_###) in the absence of faults.\nFrom the expression of the residual (4 ###reference_###), since is decoupled and .\nAccording to Lemma 5.3 ###reference_Thm3###, is sub-Gaussian and its parameter satisfies\nwhere (23 ###reference_###) holds by invoking Theorem 3.1 ###reference_Thm1###. Then, we have\nThe inequality (a) holds as a result of the equivalence between vector norms, i.e., . The inequality (b) holds due to the fact that where .\nThe inequality (c) is derived from the concentration inequality in Lemma 3.4 ###reference_Thm4###. And the inequality (d) is obtained according to (23 ###reference_###).\nSubstituting (14 ###reference_###) into the last inequality yields .\nThis completes the first part of the proof.\nThe second step is to demonstrate that (15 ###reference_###) holds for . Consider the residual in the presence of faults, whose expectation is .\nNote that is sub-Gaussian with the parameter as indicated above. Thus, for a positive scalar , it holds that\nwhich is equivalent to\nSince ,\nwe have\nLet . 
The above inequality becomes\nAdditionally, the following inequalities hold\nwhere the first inequality is derived from the equivalence between vector norms and the second inequality follows from the result in Theorem 3.1 ###reference_Thm1###, i.e., , and for .\nTo make sure that is positive, let\nThus, the lower bound of should satisfy .\nFinally, from inequalities (24 ###reference_###), we obtain\nThis completes the proof.\n\u220e"
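The concentration inequality used throughout this proof can be checked numerically for the Gaussian case (a zero-mean Gaussian with standard deviation σ is sub-Gaussian with parameter σ). The seeded Monte Carlo sketch below, with hypothetical names, verifies that the empirical tail stays below the bound 2·exp(−t²/(2σ²)).

```python
import math
import random

def empirical_tail(t, sigma=1.0, n=20000, seed=0):
    """Fraction of samples with |X| > t for X ~ N(0, sigma^2) (seeded, so deterministic)."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, sigma)) > t for _ in range(n)) / n

def subgaussian_bound(t, sigma=1.0):
    """Tail bound 2*exp(-t^2/(2*sigma^2)) from the concentration inequality."""
    return 2.0 * math.exp(-t * t / (2.0 * sigma * sigma))
```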
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "5.2. Proofs of results in fault estimation",
"text": "To prove Theorem 4.2 ###reference_Thm2###, the covariance matrix of the output of an LTI system driven by white noise is computed through the following lemma.\nConsider the expression of the residual in (4 ###reference_###) with the unknown signal decoupled. The noise is assumed to be i.i.d. white noise and the fault is considered to be deterministic. The covariance matrix of is given by\nLet be the impulse response of . The covariance function of denoted by for can be written as\nwhere is the covariance function of .\nBy applying the -transform on , the spectrum of denoted by is derived as\nwhere is the spectrum of .\nWhen , since is an uncorrelated sequence, we have\nwhere the inverse -transform and the fact that on the unit circle are used in the last two equations. Also, due to the derivative , it holds that\nThis completes the proof.\n\u220e\nFirst, it is demonstrated in Theorem 3.1 ###reference_Thm1### that (19a ###reference_.1###) is equivalent to condition (5a ###reference_1###).\nSecond, to show that (19b ###reference_.2###) implies the satisfaction of (5b ###reference_2###), let us recall that , where and is assumed to be deterministic. According to Lemma 5.4 ###reference_Thm4###, the covariance of satisfies\nwhere the inequality holds due to its demonstration through Taylor series expansion and comparison of terms of the same power for (defined in Lemma 3.4 ###reference_Thm4###).\nIt can be shown that for sub-Gaussian random variables, .\nAs a result, condition (5b ###reference_2###) which is introduced to suppress the effect of the noise on can be achieved by bounding the trace of . 
This also coincides with the norm.\nThe last part of the proof shows that the relaxed condition (17 ###reference_###) can be realized through (19c ###reference_.3###).\nNote that the singular values of a complex matrix are equal to those of the augmented matrix derived from .\nTherefore, constraining the -norm of the augmented matrix in (19c ###reference_.3###), which is constructed using the real and imaginary parts of , i.e., and , is equivalent to constraining .\nThis completes the proof.\n\u220e\nThe Lagrange function of (19 ###reference_###) is\nwhere with is the Lagrange multiplier. is the -th column of . According to the definition of Frobenius norm\nTaking the partial derivative of yields\nThen, setting the partial derivative to zero and considering the equality constraint (19a ###reference_.1###) leads to\nSolving this equation provides the analytical solution.\nThis completes the proof.\n\u220e\nLet us first show that the upper bound holds.\nSince the optimization problem (4.1 ###reference_###) is an exact reformulation of Problem 2, applying the AO approach to solve (4.1 ###reference_###) leads to the convergence of the objective function value to the optimal value of Problem 2.\nThus, the derived objective function value, i.e., , is an upper bound on .\nIn the second part of the proof, the satisfaction of the lower bound is proved by contradiction. Suppose that\nLet and denote the optimal solutions to\nrespectively.\nRecall the definition of the restricted norm. For all sampling frequency points , it holds that\nwhich contradicts the fact that is the optimal solution to .\nThus, we have .\nAdditionally, the constraints (5a ###reference_1###) and (5b ###reference_2###) on noise suppression and disturbance decoupling are identical in both Problem 2 and Problem 2r. As a result, the optimal objective value of Problem 2r, obtained by solving (19 ###reference_###), serves as a lower bound for . This completes the proof.\n\u220e"
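Lemma 5.4 equates the steady-state output variance of a filter driven by unit-variance white noise with the sum of squared impulse-response samples, which by Parseval's relation equals the average of |H(e^{jω})|² over the unit circle. A SISO numerical check, with hypothetical names and a truncated impulse response:

```python
import cmath

def impulse_response(num, den, n):
    """First n samples of h for H(z) = num(z^{-1}) / den(z^{-1}), with den[0] = 1."""
    h = []
    for k in range(n):
        v = num[k] if k < len(num) else 0.0
        v -= sum(den[i] * h[k - i] for i in range(1, len(den)) if k - i >= 0)
        h.append(v)
    return h

def h2_sq_time(num, den, n=400):
    """sum_k h_k^2: output variance of the filter driven by unit white noise."""
    return sum(x * x for x in impulse_response(num, den, n))

def h2_sq_freq(num, den, m=4096):
    """(1 / 2*pi) * integral of |H(e^{jw})|^2 over [0, 2*pi), by the rectangle rule."""
    total = 0.0
    for i in range(m):
        z = cmath.exp(-1j * 2.0 * cmath.pi * i / m)
        H = sum(c * z ** k for k, c in enumerate(num)) / \
            sum(c * z ** k for k, c in enumerate(den))
        total += abs(H) ** 2
    return total / m
```

For H(z) = 1 / (1 − 0.5 z⁻¹) the impulse response is 0.5^k, so both computations return 1/(1 − 0.25) = 4/3, matching the time-domain and frequency-domain expressions in the lemma's proof.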
|
| 82 |
+
},
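The lemma above relates the covariance of a residual driven by i.i.d. white noise to the impulse response of the generating LTI filter. This can be checked numerically: the steady-state output variance equals the noise variance times the sum of squared impulse-response coefficients. A minimal sketch, with illustrative filter coefficients not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stable FIR filter h, driven by i.i.d. white noise w with variance sigma2.
h = np.array([1.0, 0.5, 0.25])
sigma2 = 2.0
w = rng.normal(0.0, np.sqrt(sigma2), size=1_000_000)

# Output of the LTI system: r[k] = sum_i h[i] * w[k - i].
r = np.convolve(w, h, mode="valid")

# Theoretical steady-state variance from the lemma: sigma2 * sum_i h[i]^2.
var_theory = sigma2 * np.sum(h**2)
var_empirical = r.var()

print(round(var_theory, 3))  # 2.625
```

The empirical variance of `r` matches `var_theory` up to sampling error, consistent with the spectrum relation used in the proof.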
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "6. Simulation results",
|
| 87 |
+
"text": "The effectiveness of the proposed FDE methods is validated on a non-minimum phase hydraulic turbine system and on a multi-area power system."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6.1",
|
| 91 |
+
"parent_section_id": "6",
|
| 92 |
+
"section_name": "6.1. A hydraulic turbine system",
|
| 93 |
+
"text": "Note that non-minimum phase systems are prevalent in a wide range of practical applications, such as aerospace engineering, power systems, etc.\nThe ubiquity of non-minimum phase systems in the real world underscores the critical importance of developing fault diagnosis methods for them.\nHowever, the inherent characteristics of non-minimum phase systems, particularly their unstable inverse response behavior, pose significant challenges in fault estimation, as discussed in Remark LABEL:rem:_non_minimum.\nTo address this issue, we develop fault estimation filter design techniques that focus on specific frequency bands of interest, offering significant advantages in estimation performance compared to existing results.\nTo verify the performance, a hydraulic turbine system from [40 ###reference_b40###] is considered as follows\nwhere and are the turbine valve and the turbine speed, respectively.\nThe fault on the turbine valve is denoted as .\nThe system has an unstable zero at .\nTo facilitate diagnosis filter design, the transfer function of the hydraulic turbine system is transformed into a state-space representation and discretized with the sampling period s.\nIn addition, although discretization introduces modeling errors, their effects are negligible when the sampling interval is sufficiently small.\nIn this part, methods developed in Theorem 4.1 ###reference_Thm1### (ER, exact reformulation) and Theorem 4.2 ###reference_Thm2### (RR, relaxed reformulation) are used to estimate the fault signal in the absence of disturbances and noise.\nIn the simulation, the proposed estimation methods are compared with the UIO (unknown input observer) method [33 ###reference_b33###], the LS (least square) method [29 ###reference_b29###], and the IUIE (inversion-based unknown input estimation) method [20 ###reference_b20###]. 
The UIO, LS, and IUIE methods are all proven to be asymptotically unbiased estimation methods under certain conditions.\nThe frequency range of interest is and the fault signal is sampled from the corresponding continuous-time signal with the sampling time s here.\nFirst, a stable denominator is selected as and frequency points are chosen when using the RR method in Theorem 4.2 ###reference_Thm2### to design the fault estimation filter.\nBy solving the optimization problem (19 ###reference_###), the numerator and the optimal value are obtained.\nThen, the denominator is fixed and is used as the initial condition to design the fault estimation filter when using the ER method in Theorem 4.1 ###reference_Thm1### and Algorithm 1 ###reference_###.\nThe obtained value of the objective function is after iteration steps.\nAccording to (21 ###reference_###), the suboptimality gap is .\nFig. 2 ###reference_### presents the fault signal and its estimates obtained by different methods, while errors of fault estimates are illustrated in Fig. 3 ###reference_###.\nAs illustrated in Fig. 3 ###reference_###, both the IUIE and LS methods diverge, while the UIO method produces high estimation errors.\nIn comparison with the above methods, the proposed ER and RR methods offer better estimation performance.\nIn Fig. 4 ###reference_###, it is further demonstrated that increasing the degree of the RR filter can reduce the estimation error.\n###figure_2### ###figure_3### ###figure_4###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.2",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "6.2. Multi-area power systems",
|
| 99 |
+
"text": ""
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6.2.1",
|
| 103 |
+
"parent_section_id": "6.2",
|
| 104 |
+
"section_name": "6.2.1. System description",
|
| 105 |
+
"text": "Consider a multi-area power system described in [3 ###reference_b3###]. Suppose each area of the power system can be represented by a model with equivalent governors, turbines, and generators. Then, in area for , the dynamics of frequency can be written as\nwhere represents the equivalent inertia constant, denotes the nominal frequency, is the power base, denotes the total generated power, denotes the total tie-line power exchanges from area , denotes the deviation caused by the load, and is the deviation caused by the frequency dependency of the load.\nLet and be the number of generators and the set of areas that connect to area , respectively. The term denotes the power generated by the th generator, is the power exchange between area and , and is the maximum transfer power on the line, which is assumed to be constant. It holds that .\nFor the dynamics of , is the governor-turbine\u2019s time constant, and is the drop coefficient.\nThe term is the automatic generation control (AGC) signal and is the participating factor, i.e., .\nThe area control error signal is denoted by and is the frequency bias factor. The AGC signal in the last line of (32 ###reference_###) is an integration of with the integral gain . The parameters are provided in Table 1 ###reference_###.\nNote that different faults may happen due to the vulnerabilities of multi-area power systems. Here, the following fault scenarios are considered:\nfaults on the tie line between areas that cause deviation in frequency, i.e., ;\nfaults on the AGC part of area , i.e., ;\nfaults on the sensors of area , i.e., , where , and are the output, output matrix, and state of area , respectively. The matrix characterizes the sensors that are vulnerable.\nBased on the dynamics (32 ###reference_###) and descriptions of the faults, the state-space model of area in the presence of faults becomes\nwhere the state , is the process fault signal.\nSignal denotes noise in the system. 
The matrices can be obtained based on the dynamics (32 ###reference_###) and the vulnerable parts of area . The output matrix is a tall or square matrix with full column rank, e.g., . The matrices and indicate which signal is affected by the noise.\nStacking the state of each area, i.e., , and discretizing the system with sampling period results in the discrete-time state-space model for the whole three-area power system in the form of (1 ###reference_###). The system matrices are given by\nHere, we consider faults in the tie-line of area , the AGC part of area , and the measurement of area .\nThe corresponding faulty matrices are\nThe unknown loads are with denoting the uncertain signal. The signal is white noise with zero mean and variance .\nThe matrices and , where represents a column vector with all elements ."
|
| 106 |
+
},
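The area dynamics above are stacked and discretized with a fixed sampling period to obtain the discrete-time state-space model. Under a zero-order hold, the discrete matrices follow from the matrix exponential of an augmented matrix. A minimal sketch, using an illustrative second-order system rather than the actual power-system matrices:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms + 1):
        term = term @ M / k
        out = out + term
    return out

def zoh_discretize(A, B, Ts):
    """Zero-order-hold discretization: exp([[A, B], [0, 0]] * Ts) gives [Ad, Bd]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm_taylor(M * Ts)
    return Md[:n, :n], Md[:n, n:]

# Illustrative continuous-time system (not the power-system matrices themselves).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.01)
```

For a small sampling period, `Ad` is close to the first-order approximation `I + A*Ts`, which is why the modeling error introduced by discretization is negligible when the sampling interval is sufficiently small.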
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6.2.2",
|
| 109 |
+
"parent_section_id": "6.2",
|
| 110 |
+
"section_name": "6.2.2. Fault detection results",
|
| 111 |
+
"text": "Suppose that the frequency content of fault signals is in the fault detection problem.\nLet us consider process faults first, i.e., and , which are zero before and then become\nThe process of the fault detection task is summarized as:\nStep 1. Set the residual dimension and filter degree to and .\nNote that the dimension of the filter states is , which is smaller than that of the system .\nStep 2. Solve for the filter coefficients by using the optimization problem in Theorem 3.1 ###reference_Thm1### with the AO approach in Algorithm 1 ###reference_###, where the weight .\nStep 3. Compute the threshold for fault detection based on Theorem 3.6 ###reference_Thm6###, which is with the acceptable FAR and time interval .\nStep 4. Compare the value of the evaluation function to to render the diagnosis decision.\nThe fault detection filter developed in the DAE framework is compared with the Luenberger observer designed using fault frequency content information (LO()) [4 ###reference_b4###] and the UIO approach designed for the entire frequency range [33 ###reference_b33###].\nSince the dimensions of the residuals generated by the LO() and UIO methods are , the evaluation function in our approach is divided by for comparison, as is the threshold.\n###figure_5### ###figure_6### Fig. 5 ###reference_### presents the detection results for and .\nOne can see that the values of remain below the threshold when and exceed the threshold immediately after faults happen at . Thus, all three approaches have successfully detected the process faults, with our proposed method showing the best fault sensitivity.\nMoreover, the threshold derived using (14 ###reference_###) is found to be less conservative than the threshold derived using Chebyshev\u2019s inequality, i.e., .\nThe process of sensor fault detection is the same as above.\nThe following fault signal is employed to test the detection ability of different methods for sensor faults:\nFig. 
6 ###reference_### shows the detection results for . It can be seen that the UIO approach fails to detect the occurrence of the sensor fault as the amplitude of the fault signal is quite small.\nNonetheless, the LO() method and our proposed method, both of which consider the fault frequency information, successfully detect the fault. In addition, our method exhibits superior sensitivity to sensor faults compared to the LO() method.\n###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15###"
|
| 112 |
+
},
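The section above notes that the distribution-aware threshold is less conservative than the one obtained from Chebyshev's inequality for the same acceptable false-alarm rate. This gap is easy to illustrate numerically for a scalar residual; the Gaussian residual model and the numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05   # acceptable false-alarm rate (illustrative)
sigma = 1.0    # residual standard deviation under no fault (illustrative)

# Chebyshev: P(|r| >= t) <= sigma^2 / t^2 = alpha  =>  t = sigma / sqrt(alpha).
t_chebyshev = sigma / np.sqrt(alpha)

# Distribution-aware threshold for Gaussian residuals, estimated empirically
# as the (1 - alpha) quantile of |r| under the no-fault distribution.
r = rng.normal(0.0, sigma, size=1_000_000)
t_gaussian = np.quantile(np.abs(r), 1.0 - alpha)

print(round(t_chebyshev, 3))  # 4.472
```

Here `t_gaussian` comes out near 1.96, far below the Chebyshev bound of about 4.47, so a detector using the distribution-aware threshold flags smaller faults at the same false-alarm rate.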
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6.2.3",
|
| 115 |
+
"parent_section_id": "6.2",
|
| 116 |
+
"section_name": "6.2.3. Fault estimation results",
|
| 117 |
+
"text": "In the fault estimation part, it is supposed that the fault frequency content consists of two disjoint ranges, i.e., and . The AGC fault signal and the sensor fault signal remain unchanged with frequencies in . The tie-line fault is replaced with\nwhose frequency is in . The process of the fault estimation task is as follows:\nStep 1. Set the residual dimension and filter degree to and .\nStep 2. Solve two fault estimation filters using the ER method in Theorem 4.1 ###reference_Thm1### and the RR method in Theorem 4.2 ###reference_Thm2###, respectively.\nIn the ER method, the AO approach is employed to solve (4.1 ###reference_###).\nWhen using the RR method, select a stable denominator and some frequency points in and before solving the optimization problem (19 ###reference_###).\nStep 3. Feeding the control input and the measurement into the fault estimation filters yields estimates of fault signals.\nTo validate the performance of the proposed ER and RR methods, they are compared with the UIO, LS, and IUIE methods both in the noise-free case and in the case with noise.\nFirst, the weight is set to in the optimization problems (4.1 ###reference_###) and (19 ###reference_###) in the noise-free case.\nThe estimation results are presented in Fig. 7 ###reference_###-10 ###reference_###.\nSpecifically, Fig. 7 ###reference_###-9 ###reference_### show the estimates of the tie-line fault , the AGC fault , and the sensor fault by different methods. Since the UIO, LS, and IUIE methods all obtain unbiased estimation results with a one-step delay, estimation errors of the three methods are the same as shown in Fig. 10 ###reference_###.\nIn contrast, the proposed ER and RR methods produce smaller estimation errors than the other three methods.\nNote that although the errors are large in the initial estimation phase, they decrease quickly.\nFurthermore, Fig. 
11 ###reference_### shows the effect of the sampling number of frequency points in the RR method along with the suboptimality gap. For simplicity, a single frequency range is considered. The number of frequency points increases from to , where the new frequency point is added to the previous ones during the process. As a result, the lower bound increases monotonically because more constraints are included in (19 ###reference_###) when adding frequency points.\nIn the case with noise, the weight is set to .\nSince the effect of noise is ignored in the design of the UIO, LS, and IUIE methods, much smaller noise is considered for these three methods.\nFig. 12 ###reference_###-14 ###reference_### depict the estimates of the fault signals in the presence of noise by different methods.\nOne can see from Fig. 13 ###reference_### that the estimates of the AGC fault signal obtained by the UIO, LS, and IUIE methods are seriously corrupted by noise.\nIn contrast, thanks to noise suppression and the design over the specific frequency ranges, the ER and RR methods achieve smaller estimation errors than the other three methods under the effect of noise, as illustrated in Fig. 15 ###reference_###."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "7",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "7. Conclusions",
|
| 123 |
+
"text": "This paper studies the design methods of FDE filters in the frequency domain for LTI systems with disturbances and stochastic noise.\nBased on an integration of residual generation and norm approaches, the optimal design of FDE filters is formulated into a unified optimization framework.\nIn future work, a potential research direction is to extend the results to nonlinear systems."
|
| 124 |
+
}
|
| 125 |
+
],
|
| 126 |
+
"appendix": [],
|
| 127 |
+
"tables": {
|
| 128 |
+
"1": {
|
| 129 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Parameters of the multi-area power system.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.27.27\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.27.27.28.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S6.T1.27.27.28.1.1\">Name</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T1.27.27.28.1.2\">Values</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S6.T1.27.27.28.1.3\">Name</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.27.27.28.1.4\">Values</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S6.T1.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2.3\">60 Hz</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S6.T1.2.2.2.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.2.2.2.4\">0.0064 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.3.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.4.4.4.3\">4.41 MW/MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.4.4.4.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.4.4\">0.0045 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.6.6.6.3\">4.15 MW/MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row\" id=\"S6.T1.6.6.6.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.6.6.6.4\">0.0056 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.8.8.8.3\">3.46 MW/MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.8.8.8.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.8.8.8.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.10.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.10.10.10.3\">1500 MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.10.10.10.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.10.10.10.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.11.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.12.12.12.3\">2100 MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.12.12.12.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.12.12.12.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.14.14.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.13.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.14.14.14.3\">1700 MVA</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.14.14.14.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.14.14.14.4\">500.0064 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.16.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.15.15.15.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.16.16.16.3\">0.002 MW/Hz</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.16.16.16.2\"></th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S6.T1.16.16.16.4\">700.0045 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.18.18.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.17.17.17.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.18.18.18.3\">0.0014 MW/Hz</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.18.18.18.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.18.18.18.4\">566.6723 Hz/MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.20.20.20\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.19.19.19.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.3\">0.0018 MW/Hz</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.20.20.20.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.20.20.20.4\">0.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.23.23.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.22.22.22.2\">\n,\u00a0\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.23.23.23.4\">1/2</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.23.23.23.3\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.23.23.23.5\">1/3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.25.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.24.24.24.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.25.25.25.3\">2100 MW</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.25.25.25.2\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.25.25.25.4\">2100 MW</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.27.27.27\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S6.T1.26.26.26.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T1.27.27.27.3\">2100 MW</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" 
id=\"S6.T1.27.27.27.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T1.27.27.27.4\">1.4950</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 130 |
+
"capture": "Table 1. Parameters of the multi-area power system."
|
| 131 |
+
}
|
| 132 |
+
},
|
| 133 |
+
"image_paths": {
|
| 134 |
+
"1": {
|
| 135 |
+
"figure_path": "2310.04922v4_figure_1.png",
|
| 136 |
+
"caption": "Figure 1. Geometric illustration of the multi-dimensional residual.",
|
| 137 |
+
"url": "http://arxiv.org/html/2310.04922v4/x1.png"
|
| 138 |
+
},
|
| 139 |
+
"2": {
|
| 140 |
+
"figure_path": "2310.04922v4_figure_2.png",
|
| 141 |
+
"caption": "Figure 2. Fault and its estimates generated using different methods.\n",
|
| 142 |
+
"url": "http://arxiv.org/html/2310.04922v4/x2.png"
|
| 143 |
+
},
|
| 144 |
+
"3": {
|
| 145 |
+
"figure_path": "2310.04922v4_figure_3.png",
|
| 146 |
+
"caption": "Figure 3. Errors of fault estimates.\n",
|
| 147 |
+
"url": "http://arxiv.org/html/2310.04922v4/x3.png"
|
| 148 |
+
},
|
| 149 |
+
"4": {
|
| 150 |
+
"figure_path": "2310.04922v4_figure_4.png",
|
| 151 |
+
"caption": "Figure 4. Errors of fault estimates with different degrees.",
|
| 152 |
+
"url": "http://arxiv.org/html/2310.04922v4/x4.png"
|
| 153 |
+
},
|
| 154 |
+
"5": {
|
| 155 |
+
"figure_path": "2310.04922v4_figure_5.png",
|
| 156 |
+
"caption": "Figure 5. Detection results for fa\u2062g\u2062c2subscript\ud835\udc53\ud835\udc4e\ud835\udc54subscript\ud835\udc502f_{agc_{2}}italic_f start_POSTSUBSCRIPT italic_a italic_g italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT and ft\u2062i\u2062e12subscript\ud835\udc53\ud835\udc61\ud835\udc56subscript\ud835\udc5212f_{tie_{12}}italic_f start_POSTSUBSCRIPT italic_t italic_i italic_e start_POSTSUBSCRIPT 12 end_POSTSUBSCRIPT end_POSTSUBSCRIPT.\n",
|
| 157 |
+
"url": "http://arxiv.org/html/2310.04922v4/x5.png"
|
| 158 |
+
},
|
| 159 |
+
"6": {
|
| 160 |
+
"figure_path": "2310.04922v4_figure_6.png",
|
| 161 |
+
"caption": "Figure 6. Detection results for fy1subscript\ud835\udc53subscript\ud835\udc661f_{y_{1}}italic_f start_POSTSUBSCRIPT italic_y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT.\n",
|
| 162 |
+
"url": "http://arxiv.org/html/2310.04922v4/x6.png"
|
| 163 |
+
},
|
| 164 |
+
"7": {
|
| 165 |
+
"figure_path": "2310.04922v4_figure_7.png",
|
| 166 |
+
"caption": "Figure 7. Estimates of ft\u2062i\u2062e\u206212subscript\ud835\udc53\ud835\udc61\ud835\udc56\ud835\udc5212f_{tie{12}}italic_f start_POSTSUBSCRIPT italic_t italic_i italic_e 12 end_POSTSUBSCRIPT without \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 167 |
+
"url": "http://arxiv.org/html/2310.04922v4/x7.png"
|
| 168 |
+
},
|
| 169 |
+
"8": {
|
| 170 |
+
"figure_path": "2310.04922v4_figure_8.png",
|
| 171 |
+
"caption": "Figure 8. Estimates of fa\u2062g\u2062c2subscript\ud835\udc53\ud835\udc4e\ud835\udc54subscript\ud835\udc502f_{agc_{2}}italic_f start_POSTSUBSCRIPT italic_a italic_g italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT without \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 172 |
+
"url": "http://arxiv.org/html/2310.04922v4/x8.png"
|
| 173 |
+
},
|
| 174 |
+
"9": {
|
| 175 |
+
"figure_path": "2310.04922v4_figure_9.png",
|
| 176 |
+
"caption": "Figure 9. Estimates of fy1subscript\ud835\udc53subscript\ud835\udc661f_{y_{1}}italic_f start_POSTSUBSCRIPT italic_y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT without \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 177 |
+
"url": "http://arxiv.org/html/2310.04922v4/x9.png"
|
| 178 |
+
},
|
| 179 |
+
"10": {
|
| 180 |
+
"figure_path": "2310.04922v4_figure_10.png",
|
| 181 |
+
"caption": "Figure 10. Estimation errors without \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 182 |
+
"url": "http://arxiv.org/html/2310.04922v4/x10.png"
|
| 183 |
+
},
|
| 184 |
+
"11": {
|
| 185 |
+
"figure_path": "2310.04922v4_figure_11.png",
|
| 186 |
+
"caption": "Figure 11. Suboptimality gap with different sampling number.",
|
| 187 |
+
"url": "http://arxiv.org/html/2310.04922v4/x11.png"
|
| 188 |
+
},
|
| 189 |
+
"12": {
|
| 190 |
+
"figure_path": "2310.04922v4_figure_12.png",
|
| 191 |
+
"caption": "Figure 12. Estimates of ft\u2062i\u2062e12subscript\ud835\udc53\ud835\udc61\ud835\udc56subscript\ud835\udc5212f_{tie_{12}}italic_f start_POSTSUBSCRIPT italic_t italic_i italic_e start_POSTSUBSCRIPT 12 end_POSTSUBSCRIPT end_POSTSUBSCRIPT with \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 192 |
+
"url": "http://arxiv.org/html/2310.04922v4/x12.png"
|
| 193 |
+
},
|
| 194 |
+
"13": {
|
| 195 |
+
"figure_path": "2310.04922v4_figure_13.png",
|
| 196 |
+
"caption": "Figure 13. Estimates of fa\u2062g\u2062c2subscript\ud835\udc53\ud835\udc4e\ud835\udc54subscript\ud835\udc502f_{agc_{2}}italic_f start_POSTSUBSCRIPT italic_a italic_g italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT with \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 197 |
+
"url": "http://arxiv.org/html/2310.04922v4/x13.png"
|
| 198 |
+
},
|
| 199 |
+
"14": {
|
| 200 |
+
"figure_path": "2310.04922v4_figure_14.png",
|
| 201 |
+
"caption": "Figure 14. Estimates of fy1subscript\ud835\udc53subscript\ud835\udc661f_{y_{1}}italic_f start_POSTSUBSCRIPT italic_y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT with \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 202 |
+
"url": "http://arxiv.org/html/2310.04922v4/x14.png"
|
| 203 |
+
},
|
| 204 |
+
"15": {
|
| 205 |
+
"figure_path": "2310.04922v4_figure_15.png",
|
| 206 |
+
"caption": "Figure 15. Estimation errors with \u03c9\ud835\udf14\\omegaitalic_\u03c9.\n",
|
| 207 |
+
"url": "http://arxiv.org/html/2310.04922v4/x15.png"
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
"validation": true,
|
| 211 |
+
"references": [],
|
| 212 |
+
"url": "http://arxiv.org/html/2310.04922v4"
|
| 213 |
+
}
|
20241001/2310.06000v3.json
ADDED
|
@@ -0,0 +1,398 @@
| 1 |
+
{
|
| 2 |
+
"title": "Towards Replication-Robust Data Markets",
|
| 3 |
+
"abstract": "Despite widespread adoption of machine learning throughout industry, many firms face a common challenge: relevant datasets are typically distributed amongst market competitors that are reluctant to share information.\nRecent works propose data markets to provide monetary incentives for collaborative machine learning, where agents share features with each other and are rewarded based on their contribution to improving the predictions of others.\nThese contributions are determined by their relative Shapley value, which is computed by treating features as players and their interactions as a characteristic function game.\nHowever, in its standard form, this setup further provides an incentive for agents to replicate their data and act under multiple false identities in order to increase their own revenue and diminish that of others, restricting their use in practice.\nIn this work, we develop a replication-robust data market for supervised learning problems. We adopt Pearl\u2019s do-calculus from causal reasoning to refine the characteristic function game by differentiating between observational and interventional conditional probabilities. By doing this, we derive Shapley value-based rewards that are robust to this malicious replication by design, whilst preserving desirable market properties.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "When faced with a machine learning task, it can often be the case that a firm would benefit from using the data of others.\nFor example, rival distributors of similar goods may improve supply forecasts by sharing sales data, hoteliers could find value in data from airline companies for anticipating demand, hospitals could reduce social biases from diagnostic support systems by sharing patient details, and so forth. In this work, we consider the example of renewable energy producers that are exposed to uncertain levels of production and therefore require reliable forecasts to competitively participate in electricity markets, with their revenue a function of predictive performance. It is well-studied that, with access to distributed data, in both a geographic and ownership sense, these agents could exploit spatial and temporal correlations between sites to improve their forecasts (Tastu et al., 2013 ###reference_b38###).\nIn practice, firms may be reluctant to share information due to privacy concerns or perceived conflicts of interest. 
Whilst methods from the field of federated learning (Lalitha et al., 2018 ###reference_b23###) could indeed be used to train models on local servers without the need to centralize any data, this relies on altruistic sharing of information amongst market competitors.\nAn alternative approach is to provide incentives for data sharing\u2014recent works propose data markets (Bergemann and Bonatti, 2019 ###reference_b5###), where agents can collaborate by sharing features with each other to improve the predictions of others, without transferring any raw data between them (Pinson et al., 2022 ###reference_b33###).\nWith foundations in informational efficiency of financial markets (Hayek, 1986 ###reference_b19###), data markets have similar economic roots to prediction markets (Waggoner et al., 2015 ###reference_b40###), mechanisms designed to consolidate information with the goal of forecasting outcomes of future events (Frongillo and Waggoner, 2018 ###reference_b14###).\nWhilst prediction markets can also be used to crowdsource data for machine learning (Abernethy and Frongillo, 2011 ###reference_b1###), data owners themselves need to decide which tasks to contribute to a priori, when the relevance of their dataset is unknown.\nIn contrast, data markets serve as real-time mechanisms that match features to machine learning tasks based on their capacity to improve predictive performance.\nMarket revenue is a function of the value this brings to the task owner, and each feature owned by a distributed agent is rewarded based on its marginal contribution to the improvement."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminaries",
"text": "Throughout our work, we consider regression models to be used for forecasting; however, our setup can readily be extended to general supervised learning problems.\nWe build upon prior work on data acquisition for machine learning tasks from both strategic (Dekel et al., 2010 ###reference_b11###) and privacy-conscious (Cummings et al., 2015 ###reference_b10###) agents.\nIn particular, we characterize an owner of a regression task by their valuation for a marginal improvement in predictive performance, which sets the price of the data of the distributed agents, who in turn propose their own data as features and are eventually rewarded based on their relative marginal contributions.\nWe denote this valuation , the value of which we assume to be known. The reader is referred to Ravindranath et al. (2024 ###reference_b35###) for a recent proposal of how may be learnt in practice."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Characteristic Function",
"text": "Commonly adopted lifts can broadly be categorized as either observational or interventional, a choice which affects the characteristic function that underpins the cooperative game.\nThe former is typically found in work related to data markets (e.g., Agarwal et al., 2019 ###reference_b3###; Pinson et al., 2022 ###reference_b33###).\nThe observational lift uses the observational conditional expectation, the expectation of the loss over the conditional density of out-of-coalition features, given that those in the coalition take on their observed values, such that\nwhere denotes the out-of-coalition features.\nWe propose to instead use the interventional lift, which uses the interventional conditional expectation, where features in the coalition are manually fixed to their observed values to manipulate the data generating process, expressed mathematically using Pearl\u2019s do-calculus (Pearl, 2012 ###reference_b31###), such that\nThe difference between (2 ###reference_###) and (3 ###reference_###) is that in the latter, the dependence between out-of-coalition features and those within the coalition is broken.\nIn theory, observing would change the distribution of the out-of-coalition features if the random variables were connected through latent effects. However, by intervening on a coalition, this distribution is unaffected. To illustrate this, consider two random variables, and , with the causal relationship in Figure 1 ###reference_###.\nSuppose we observe . The observational conditional distribution describes the distribution of given that is observed to take on the value , written as . The interventional conditional distribution instead describes the distribution of given that we artificially set the value of to , denoted , obtained by assuming that is distributed by the original data generating process.\nGraphically, an intervention will remove all of the edges going into the corresponding variable.\nConsequently, we get that, but . 
This means that the distribution of under the intervention is equivalent to conditioned on , yet for , and become disconnected, hence has no effect on , which is simply sampled from its marginal distribution."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Robustness To Replication",
"text": "Although it is natural for datasets to contain some overlapping information, in our analytics market such redundancy may also arise as a result of replication.\nThe fact that data can be freely replicated differentiates it from material commodities\u2014a motive for reassessing fundamental mechanism design concepts (Aiello et al., 2001 ###reference_b4###). For example, a simple second-price auction becomes impractical unless sellers somehow limit the number of replications, which may in turn curtail revenue. In this section, we demonstrate how the observational lift provides incentives for replication, the downsides of this, and how these can be remedied by instead adopting the interventional lift.\nA replicate of the -th feature is defined as , where represents centred noise with finite variance, conditionally independent of the target given the feature.\nUnder Definition 4.1 ###reference_theorem1###, the observational lift described in (2 ###reference_###) provides a monetary incentive for support agents to replicate their data and act under multiple (false) identities.\nTo illustrate this, consider the causal graph in Figure 3 ###reference_###. Suppose that and are identical features, such that , and that each is owned by a unique support agent, and , respectively.\nBy Theorem 3.1 ###reference_theorem1###, the reward to each support agent before any replication is made will be , where, recall, is the total market revenue. Now suppose that replicates their feature times and, for ease, assume . Using the same logic, the revenues of agents and will be and , respectively. Hence a malicious agent can simply replicate their data many times so as to maximize their overall revenue and diminish that of others.\nLet denote the original feature vector augmented to include any additional replicates, with an analogous index set, . According to Agarwal et al. 
(2019 ###reference_b3###), a market is replication-robust if , where is the new revenue derived using instead.\nIn an attempt to remedy this issue, the authors in Agarwal et al. (2019 ###reference_b3###) propose Robust-Shapley, , where is a similarity metric (e.g., cosine similarity). This method penalizes similar features so as to remove the incentive for replication, satisfying Definition 4.2 ###reference_theorem2###. However, this means that not only replicated features are penalized, but also those with naturally occurring correlations. As a result, budget balance is lost, the extent of which depends on the chosen similarity metric and the value of .\nA similar result is presented in Han et al. (2023 ###reference_b18###), who consider the general set of semivalues, the class of solution concepts for submodular games to which the Shapley value belongs (Dubey et al., 1981 ###reference_b13###). The authors show that the way in which a semivalue weights coalition sizes has an effect on the resultant properties, and that the Banzhaf value (Lehrer, 1988 ###reference_b24###) is in fact replication-robust by design (i.e., with respect to Definition 4.2 ###reference_theorem2###), along with many other semivalues, albeit still penalizing naturally occurring correlations.\nThat being said, Definition 4.2 ###reference_theorem2### leaves the market susceptible to spiteful agents\u2014those willing to sacrifice their revenue in order to minimize that of others. As such, we refer to this definition as weakly robust.\nA Shapley value-based attribution policy based on the interventional lift instead yields a stricter notion of being replication-robust, such that\n.\nWith Definition 4.1 ###reference_theorem1###, each replicate in only induces an indirect effect on the target. However, from Theorem 3.1 ###reference_theorem1###, we know that the interventional lift only captures direct effects. 
Therefore, for each of the replicates, we write the marginal contribution for a single permutation as\nand therefore for each of the replicates. For the original features, any direct effects will remain unchanged, as visualized in Figure 3 ###reference_###. This leads to\nshowing that by replacing the conventional observational lift with the interventional lift, the Shapley value-based attribution policy is strictly robust to both replication and spitefulness by design.\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experimental Analysis",
"text": "We now validate our key findings on a real-world case study.\n111Our code has been made publicly available at: https://github.com/tdfalc/regression-markets ###reference_ts###\nWe use an open-source dataset to aid reproduction of our work, namely the Wind Integration National Dataset (WIND)\nToolkit, detailed in Draxl et al. (2015 ###reference_b12###).\nOur setup is a stylised electricity market where agents\u2014in our case, wind producers\u2014are required to notify the system operator of their expected electricity generation in a forward stage, one hour ahead of delivery, for which they receive a fixed price per unit. In real time, they receive a penalty for deviations from the scheduled production, thus their downstream revenue is an explicit function of forecast accuracy."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusions",
"text": "Many machine learning tasks could benefit from using the data of others; however, convincing firms to share information, even if privacy is assured, poses a considerable challenge. Rather than relying on data altruism, data markets are recognized as a promising way of providing incentives for data sharing, many of which use Shapley values to allocate rewards. Nevertheless, a number of open issues remain before such mechanisms can be used in practice, one of which is vulnerability to replication incentives, which we showed leads to undesirable reward allocations and restricts the practical viability of these markets.\nWe introduced a general framework for data markets for supervised learning problems that subsumes many of these existing proposals. We demonstrated that there are several different ways to formulate a machine learning task as a cooperative game and analysed their differences from a causal perspective. We showed that use of the observational lift to value a coalition is the source of these replication incentives, which many works have tried to remedy through penalization methods that facilitate only weak robustness. Our main contribution is an alternative algorithm for allocating rewards that instead uses interventional conditional probabilities. Our proposal is robust to replication without compromising market properties such as budget balance. This is a step towards making Shapley value-based data markets feasible in practice.\nFrom a causal perspective, the interventional lift has additional potential benefits, including reward allocations that better represent the reliance of the model on each feature, providing an incentive for timely and reliable data streams for useful features, that is, those with greater influence on predictive performance. It is also favourable with respect to computational expenditure. 
There is, of course, no free lunch, as using the interventional conditional expectation can yield undesirable rewards when features are highly correlated and the number of observations is low. Nevertheless, future work could examine the extent to which the mentioned remedies mitigate this issue, as well as their impact on the market outcomes.\nUltimately, when it comes to data valuation, the Shapley value is not without its limitations\u2014it is not generally well-defined in a machine learning context and requires strict assumptions, not to mention its computational complexity. This should also motivate future work into alternative mechanism design frameworks, for example those based on non-cooperative game theory instead."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"2": {
"figure_path": "2310.06000v3_figure_2.png",
"caption": "Figure 2: Interventions producing points outwith the data manifold. Green and red lines are level sets denoting the 0.99 quantile of the training data when features are independent and correlated.",
"url": "http://arxiv.org/html/2310.06000v3/x1.png"
},
"4(a)": {
"figure_path": "2310.06000v3_figure_4(a).png",
"caption": "(a) Observational: Revenue of a_4 increases due to indirect effects induced by the replicates.\nFigure 4: Revenue allocations for each support agent for both (a) observational and (b) interventional lifts, when agent a_4 is honest (//) and malicious (\u2218), by replicating their feature. The gray and white bars correspond to in-sample and out-of-sample market stages, respectively. The revenue split amongst replicates is depicted by the stacked bars highlighted in red.",
"url": "http://arxiv.org/html/2310.06000v3/x2.png"
},
"4(b)": {
"figure_path": "2310.06000v3_figure_4(b).png",
"caption": "(b) Interventional: Revenue of a_4 remains the same by accounting only for direct effects.\nFigure 4: Revenue allocations for each support agent for both (a) observational and (b) interventional lifts, when agent a_4 is honest (//) and malicious (\u2218), by replicating their feature. The gray and white bars correspond to in-sample and out-of-sample market stages, respectively. The revenue split amongst replicates is depicted by the stacked bars highlighted in red.",
"url": "http://arxiv.org/html/2310.06000v3/x3.png"
},
"5": {
"figure_path": "2310.06000v3_figure_5.png",
"caption": "Figure 5: Revenue allocation of agent a_4 with increasing number of replicates.",
"url": "http://arxiv.org/html/2310.06000v3/x4.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "A collaborative mechanism for crowdsourcing prediction problems.",
"author": "Jacob D Abernethy and Rafael Frongillo.",
"venue": "Advances in neural information processing systems, 24, 2011.",
"url": null
}
},
{
"2": {
"title": "Too much data: Prices and inefficiencies in data markets.",
"author": "Daron Acemoglu, Ali Makhdoumi, Azarakhsh Malekian, and Asu Ozdaglar.",
"venue": "American Economic Journal: Microeconomics, 14(4):218\u2013256, 2022.",
"url": null
}
},
{
"3": {
"title": "A marketplace for data: An algorithmic solution.",
"author": "Anish Agarwal, Munther Dahleh, and Tuhin Sarkar.",
"venue": "In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 701\u2013726, 2019.",
"url": null
}
},
{
"4": {
"title": "Priced oblivious transfer: How to sell digital goods.",
"author": "Bill Aiello, Yuval Ishai, and Omer Reingold.",
"venue": "In International Conference on the Theory and Applications of Cryptographic Techniques, pages 119\u2013135. Springer, 2001.",
"url": null
}
},
{
"5": {
"title": "Markets for information: An introduction.",
"author": "Dirk Bergemann and Alessandro Bonatti.",
"venue": "Annual Review of Economics, 11(1):85\u2013107, 2019.",
"url": null
}
},
{
"6": {
"title": "Polynomial calculation of the shapley value based on sampling.",
"author": "Javier Castro, Daniel G\u00f3mez, and Juan Tejada.",
"venue": "Computers & operations research, 36(5):1726\u20131730, 2009.",
"url": null
}
},
{
"7": {
"title": "True to the model or true to the data?, 2020.",
"author": "Hugh Chen, Joseph D Janizek, Scott Lundberg, and Su-In Lee.",
"venue": null,
"url": null
}
},
{
"8": {
"title": "Explaining a series of models by propagating shapley values.",
"author": "Hugh Chen, Scott M Lundberg, and Su-In Lee.",
"venue": "Nature Communications, 13(1):4512, 2022.",
"url": null
}
},
{
"9": {
"title": "Explaining by removing: A unified framework for model explanation.",
"author": "Ian C Covert, Scott Lundberg, and Su-In Lee.",
"venue": "The Journal of Machine Learning Research, 22(1):9477\u20139566, 2021.",
"url": null
}
},
{
"10": {
"title": "Truthful linear regression.",
"author": "Rachel Cummings, Stratis Ioannidis, and Katrina Ligett.",
"venue": "In Peter Gr\u00fcnwald, Elad Hazan, and Satyen Kale, editors, Proceedings of the 28th Conference on Learning Theory, pages 448\u2013483, Paris, France, 2015.",
"url": null
}
},
{
"11": {
"title": "Incentive compatible regression learning.",
"author": "Ofer Dekel, Felix Fischer, and Ariel D. Procaccia.",
"venue": "Journal of Computer and System Sciences, 76(8):759\u2013777, 2010.",
"url": null
}
},
{
"12": {
"title": "The wind integration national dataset (wind) toolkit.",
"author": "Caroline Draxl, Andrew Clifton, Bri-Mathias Hodge, and Jim McCaa.",
"venue": "Applied Energy, 151:355\u2013366, 2015.",
"url": null
}
},
{
"13": {
"title": "Value theory without efficiency.",
"author": "Pradeep Dubey, Abraham Neyman, and Robert James Weber.",
"venue": "Mathematics of Operations Research, 6(1):122\u2013128, 1981.",
"url": null
}
},
{
"14": {
"title": "Bounded-loss private prediction markets.",
"author": "Rafael Frongillo and Bo Waggoner.",
"venue": "Advances in Neural Information Processing Systems, 31, 2018.",
"url": null
}
},
{
"15": {
"title": "Shapley explainability on the data manifold, 2020a.",
"author": "Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, and Ilya Feige.",
"venue": null,
"url": null
}
},
{
"16": {
"title": "Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability.",
"author": "Christopher Frye, Colin Rowat, and Ilya Feige.",
"venue": "Advances in Neural Information Processing Systems, 33:1229\u20131239, 2020b.",
"url": null
}
},
{
"17": {
"title": "Data shapley: Equitable valuation of data for machine learning.",
"author": "Amirata Ghorbani and James Zou.",
"venue": "In Proceedings of the 36th International Conference on Machine Learning, pages 2242\u20132251, 09\u201315 Jun 2019.",
"url": null
}
},
{
"18": {
"title": "Replication robust payoff allocation in submodular cooperative games.",
"author": "Dongge Han, Michael Wooldridge, Alex Rogers, Olga Ohrimenko, and Sebastian Tschiatschek.",
"venue": "IEEE Transactions on Artificial Intelligence, 4(5):1114\u20131128, 2023.",
"url": null
}
},
{
"19": {
"title": "The use of knowledge in society.",
"author": "FA Hayek.",
"venue": "The Economic Nature of the Firm: A Reader, pages 66\u201371, 1986.",
"url": null
}
},
{
"20": {
"title": "Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models.",
"author": "Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen.",
"venue": "Advances in Neural Information Processing Systems, 33:4778\u20134789, 2020.",
"url": null
}
},
{
"21": {
"title": "Feature relevance quantification in explainable ai: A causal problem.",
"author": "Dominik Janzing, Lenon Minorics, and Patrick Bl\u00f6baum.",
"venue": "In International Conference on Artificial Intelligence and Statistics, pages 2907\u20132916. PMLR, 2020.",
"url": null
}
},
{
"22": {
"title": "Problems with shapley-value-based explanations as feature importance measures.",
"author": "I Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler.",
"venue": "In International Conference on Machine Learning, pages 5491\u20135500, 2020.",
"url": null
}
},
{
"23": {
"title": "Fully decentralized federated learning.",
"author": "Anusha Lalitha, Shubhanshu Shekhar, Tara Javidi, and Farinaz Koushanfar.",
"venue": "In Third workshop on bayesian deep learning (NeurIPS), volume 2, 2018.",
"url": null
}
},
{
"24": {
"title": "An axiomatization of the banzhaf value.",
"author": "Ehud Lehrer.",
"venue": "International Journal of Game Theory, 17:89\u201399, 1988.",
"url": null
}
},
{
"25": {
"title": "Absolute shapley value, 2020.",
"author": "Jinfei Liu.",
"venue": null,
"url": null
}
},
{
"26": {
"title": "A unified approach to interpreting model predictions.",
"author": "Scott M Lundberg and Su-In Lee.",
"venue": "Advances in Neural Information Processing Systems, 30, 2017.",
"url": null
}
},
{
"27": {
"title": "Generalized integrated gradients: A practical method for explaining diverse ensembles, 2019.",
"author": "John Merrill, Geoff Ward, Sean Kamkar, Jay Budzik, and Douglas Merrill.",
"venue": null,
"url": null
}
},
{
"28": {
"title": "Sampling permutations for shapley value estimation.",
"author": "Rory Mitchell, Joshua Cooper, Eibe Frank, and Geoffrey Holmes.",
"venue": "Journal of Machine Learning Research, 23(43):1\u201346, 2022.",
"url": null
}
},
{
"29": {
"title": "Collaborative machine learning markets with data-replication-robust payments, 2019.",
"author": "Olga Ohrimenko, Shruti Tople, and Sebastian Tschiatschek.",
"venue": "URL https://arxiv.org/abs/1911.09052.",
"url": null
}
},
{
"30": {
"title": "On shapley value for measuring importance of dependent inputs.",
"author": "Art B Owen and Cl\u00e9mentine Prieur.",
"venue": "SIAM/ASA Journal on Uncertainty Quantification, 5(1):986\u20131002, 2017.",
"url": null
}
},
{
"31": {
"title": "The do-calculus revisited.",
"author": "Judea Pearl.",
"venue": "In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI\u201912, page 3\u201311, Arlington, Virginia, USA, 2012. AUAI Press.",
"url": null
}
},
{
"32": {
"title": "Very-short-term probabilistic forecasting of wind power with generalized logit\u2013normal distributions.",
"author": "Pierre Pinson.",
"venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(4):555\u2013576, 2012.",
"url": null
}
},
{
"33": {
"title": "Regression markets and application to energy forecasting.",
"author": "Pierre Pinson, Liyang Han, and Jalal Kazempour.",
"venue": "TOP, 30(3):533\u2013573, 2022.",
"url": null
}
},
{
"34": {
"title": "An upper bound on the bayesian error bars for generalized linear regression.",
"author": "Cazhaow S Qazaz, Christopher KI Williams, and Christopher M Bishop.",
"venue": "In Mathematics of Neural Networks: Models, Algorithms and Applications, pages 295\u2013299. Springer, 1997.",
"url": null
}
},
{
"35": {
"title": "Data market design through deep learning.",
"author": "Sai Srivatsa Ravindranath, Yanchen Jiang, and David C Parkes.",
"venue": "Advances in Neural Information Processing Systems, 36, 2024.",
"url": null
}
},
{
"36": {
"title": "A value for n-person games.",
"author": "Lloyd S Shapley.",
"venue": "Classics in Game Theory, 69, 1997.",
"url": null
}
},
{
"37": {
"title": "The many shapley values for model explanation.",
"author": "Mukund Sundararajan and Amir Najmi.",
"venue": "In International Conference on Machine Learning, pages 9269\u20139278, 2020.",
"url": null
}
},
{
"38": {
"title": "Probabilistic forecasts of wind power generation accounting for geographically dispersed information.",
"author": "Julija Tastu, Pierre Pinson, Pierre-Julien Trombe, and Henrik Madsen.",
"venue": "IEEE Transactions on Smart Grid, 5(1):480\u2013489, 2013.",
"url": null
}
},
{
"39": {
"title": "Manifold restricted interventional shapley values.",
"author": "Muhammad Faaiz Taufiq, Patrick Bl\u00f6baum, and Lenon Minorics.",
"venue": "In International Conference on Artificial Intelligence and Statistics, pages 5079\u20135106. PMLR, 2023.",
"url": null
}
},
{
"40": {
"title": "A market framework for eliciting private data.",
"author": "Bo Waggoner, Rafael Frongillo, and Jacob D Abernethy.",
"venue": "Advances in Neural Information Processing Systems, 28, 2015.",
"url": null
}
},
{
"41": {
"title": "Efficient sampling approaches to shapley value approximation.",
"author": "Jiayao Zhang, Qiheng Sun, Jinfei Liu, Li Xiong, Jian Pei, and Kui Ren.",
"venue": "Proceedings of the ACM on Management of Data, 1(1):1\u201324, 2023.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2310.06000v3"
}
20241001/2310.06341v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2310.07867v6.json
ADDED
@@ -0,0 +1,460 @@
{
"title": "Cheap Talking Algorithms. We thank Fran\u00e7oise Forges, Alkis Georgiadis-Harris, Balazs Szentes, ChatGPT4, and seminar participants at the University of Warwick and at the 2023 Bergamo SkIO conference for their valuable comments. We are also grateful to Sophia Skenderis for her research assistance and to several anonymous reviewers. All computations have been performed using the Julia programming language. The code to replicate the results is available at https://github.com/massimilianofurlan/rl_cheap_talk.",
"abstract": "We simulate the behaviour of two independent reinforcement learning algorithms playing the Crawford and Sobel, (1982) game of strategic information transmission. We adopt memoryless algorithms to capture learning in a static game where a large population interacts anonymously. We show that sender and receiver converge to Nash equilibrium play. The level of informativeness of the sender\u2019s cheap talk decreases as the bias increases and, at intermediate levels of the bias, it matches the level predicted by the Pareto optimal equilibrium or by the second-best one. Conclusions are robust to alternative specifications of the learning hyperparameters and of the game.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Consider the classic signalling game: a sender is informed about a payoff-relevant parameter drawn from a known distribution and takes one of several possible actions; an uninformed receiver observes the sender\u2019s action but not the parameter, and makes a decision. In a landmark article, Crawford and Sobel (1982) (henceforth CS) showed that, even if the payoffs of both agents are independent of the sender\u2019s action, there are equilibria where the action transmits information about the parameter, as long as the conflict of interest about the ideal receiver\u2019s decision is not too large. By interpreting the payoff-irrelevant actions of the sender as \u201ccheap talk\u201d, CS delivers a powerful formal theory of communication. Non-committal and purely symbolic behaviour can convey information and help coordinate subsequent interactions even if rational agents do not share identical goals.\nIn this paper, we compute stationary points of memoryless independent reinforcement learning algorithms playing the CS game of information transmission. [Footnote 1: Computational techniques are necessary because finding limit points of independent learning algorithms interacting with each other is, to date, an intractable mathematical problem. Methods relying on approximation via systems of differential equations are not directly applicable to our case (see B\u00f6rgers and Sarin (1997) and Banchio and Mantegazza (2022)).] These algorithms work roughly as follows. For each of a finite set of types, the sender keeps track of a vector, which stores its current estimates of the value of taking each action given the type. The receiver, instead, holds a vector for each of the signals the sender may send. Any such vector contains the receiver\u2019s estimate of the value of each action following a given signal. In each period, the algorithms select actions following a softmax policy. Most likely, they take the highest-reward action according to their estimates, but with some probability they experiment with other actions. Such probability decays over time, depending on a hyperparameter (i.e., the temperature-decay factor). After both agents have moved, the relevant estimates are updated to account for the payoffs received. Another hyperparameter (i.e., the learning rate) establishes how much the current experience is weighted vis-a-vis the past. [Footnote 2: The reinforcement learning literature has proposed numerous learning algorithms. We chose one of the simplest forms of reinforcement learning available for our environment (Sutton and Barto, 2018).]\nWe find that a sender and a receiver playing together in repeated instances of the CS game converge to Nash equilibrium behaviour with substantial information transmission, which grows monotonically as the bias decreases. Except at levels of the bias that make perfect information transmission an equilibrium, where behaviour is more nuanced, the mutual information between the distribution of the type and that of the message chosen by the sender (i.e., the informativeness of the sender\u2019s cheap talk) matches that of the maximally informative, Pareto optimal, equilibrium or of the second most informative one. As the bias grows, the sender\u2019s strict preference for the Pareto optimal equilibrium attenuates and, when near indifference is reached, information transmission drops to the second-best equilibrium, which itself becomes the optimal one once the bias increases further.\nOur algorithms play a very large number of identical one-shot games together. However, at convergence, play is independent of past choices. By design, the algorithms are learning to play a static game, not a dynamic one. Therefore, the reader may wonder why we do not endow them with the ability to respond to past history, as this would allow for a richer set of outcomes and, quite possibly, more communication in the long run. While this would be a valid modelling choice were we interested in the emergence of bilateral communication, it is not the right approach to evaluate the emergence of language in a large population, which is the focus of our analysis and, arguably, the appropriate interpretation of equilibrium in CS. In fact, building on a standard, if not entirely uncontroversial, argument in the theory of learning in games, our results should be interpreted as arising from learning within a large population of possibly more sophisticated reinforcers interacting anonymously (see Fudenberg and Kreps (1993) and Fudenberg and Levine (1998)).\nOur main contribution is to the theory of strategic information transmission. Following pioneering work in psychology (e.g., Bush and Mosteller (1955)) and in game theory (e.g., Erev and Roth (1998)), reinforcement learning offers a model of human behaviour alternative to the traditional game-theoretic one. [Footnote 3: Erev and Roth (1998) wrote: \u201cwell-developed, cognitively informed adaptive game theory will complement conventional game theory, both as a theoretical tool and as a tool of applied economics.\u201d] However, in contrast to what was hoped for by Erev and Roth (1998), simple reinforcement learning algorithms do not fit the experimental data well in this case. In fact, over-communication relative to the most informative equilibrium is a common feature of experimental implementations of the CS game involving human subjects (see Dickhaut et al. (1995) and, especially, Cai and Wang (2006)). In this light, our results complement the existing knowledge in two ways. First, we show that information transmission in cheap talk games with conflict of interest is a robust feature of play, emerging from a different modelling approach to strategic interaction. To our knowledge, theoretical results on the convergence of reinforcement learning algorithms to informative equilibria are only available in the case where agents have aligned interests. Both Hu et al. (2011) and W\u00e4rneryd (1993) demonstrate convergence of forms of reinforcement learning algorithms to the most informative equilibrium. Second, we contribute to the large game-theoretic literature on equilibrium selection in games with information transmission. As is well known, such games possess many qualitatively different equilibria. Our main result delivers a cautionary tale regarding the consensus reached in the uniform-quadratic environment around the selection of the most informative and Pareto optimal equilibrium. In particular, in addition to convergence to the first best, we also show convergence to the second most informative equilibrium at certain levels of the bias, where such an equilibrium is also close to optimal for the sender. [Footnote 4: For an important recent contribution to the equilibrium selection literature see Chen et al. (2008).] Most closely related to our work is perhaps the evolutionary and learning approach to selection, which shows that the most informative equilibrium is the evolutionarily stable outcome of the CS game when it exists (see Blume et al. (1993)) and the limit point of the best-response dynamics (see Gordon et al. (2022); S\u00e9mirat and Forges (2024)).\nWhile experimental evidence shows that communication in cheap talk games with partial conflict of interest is achieved by humans (e.g., see Blume et al. (2020) for a survey), to our knowledge an analogous conclusion has not yet been robustly established for artificially intelligent agents (AI agents). Most of the machine learning literature has focused on games with common interest, observing that AI agents learn to communicate successfully (e.g., see Lazaridou et al. (2016), Havrylov and Titov (2017), Foerster et al. (2016)). Instead, mostly negative results have been obtained in games where agents have conflicting interests (e.g., see Cao et al. (2018)). An important exception is Noukhovitch et al. (2021). They consider a CS game played on a circle, for which an equilibrium characterisation is not available. Employing AI agents controlled by neural networks, they show that communication is achieved even when the bias of the sender is non-zero. We depart from Noukhovitch et al. (2021) by employing simple reinforcement learners and by looking at the original (discretised) CS game. Doing so allows us to compare simulation outcomes to the theoretical benchmark and establish that communication often takes place at the highest level predicted by theory even when a very simple model of learning is adopted.\nFinally, we hope that the observation that private information can be successfully communicated between AI agents will open up new questions within a growing literature in economics which, motivated by policy concerns, looks at AI agents playing various market games. Contributions to this recent literature include Calvano et al. (2020), Banchio and Skrzypacz (2022), Asker et al. (2022), Johnson et al. (2023) and Decarolis et al. (2023). [Footnote 5: The literature on market games played by AI agents was initiated by computer scientists, with early contributions including Waltman and Kaymak (2008) and Tesauro and Kephart (2002) among others.] A central theme of this research agenda is showing that AI agents learn to play strategies that deliver supra-equilibrium profits, which would be deemed implicitly collusive if played by humans.\nSince communication expands the equilibrium set in a game-theoretic sense (e.g., see Aumann and Hart (2003)), the possibility of communication raises the question of what should be expected in market games played by algorithms if collusion can be explicit. This is not a moot concern, even when a direct communication channel is not part of market design. In fact, as auction practice has shown, bidders learn to exchange information in very imaginative ways, for instance by using the last digits of their submitted bids. [Footnote 6: Bajari and Yeo (2009) suggest that in some FCC spectrum auctions bidders used this form of code-bidding to communicate their intentions and avoid competing on the same portions of the spectrum for sale.] Since we expect sophisticated AI agents to exploit all communication opportunities, our results suggest that explicit collusion between algorithms with a sufficiently large state space and a long history of interactions may be as worrisome as the implicit one uncovered by the existing literature.\nIn the next section, we present the discretised uniform-quadratic specification of the CS game and our simulation design. Section 3 presents the main results. In Section 4 we illustrate the robustness of our findings in terms of the parameters of the game. Section 5 concludes with some avenues for future work."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "RL agents playing the cheap talk game",
"text": "There are two agents, a sender and a receiver. At the outset, a type is drawn from a known uniform distribution whose support is a finite set of uniformly spaced points in an interval. The sender privately observes the realised type and sends a message to the receiver from a finite set of messages. Then, the receiver observes the message and takes an action from a finite set of uniformly spaced points. The receiver wants the action to match the type; her payoff is a quadratic loss in the distance between the two. Given the bias, the sender wants the action of the receiver to match the type shifted up by the bias; his payoff is the corresponding quadratic loss. The bias parameter measures their divergence of interest.\nFollowing CS, a (Bayes) Nash equilibrium is a family of type-conditional probability distributions over messages for the sender and a choice of action conditional on the message for the receiver, such that there is no profitable deviation by the receiver or by any type of sender. Babbling equilibria always exist, where the sender\u2019s message conveys no relevant information about the type and the receiver plays her ex-ante optimal action for messages that are sent with positive probability.\nWe let two independent reinforcement learning agents play, as sender and receiver, the discretised cheap talk game. To allow learning, the two agents play the game multiple times, up to a maximum number of periods. Both are programmed to take an action conditional on a state, first the sender and then the receiver. In each period, a state for the sender is a type drawn from the prior, independently of previous interactions. Then, the sender takes an action (a message), which represents the state for the receiver. Finally, the receiver takes an action and the agents collect their rewards. Because the underlying learning model is the same for both agents (i.e., both take an action conditional on a state), we describe it for a generic agent, with states and actions taking appropriate meaning based on who is playing.\nLet S be the finite set of possible states and A the finite set of actions, for either the sender or the receiver. Each time an agent is called to play in state s, it chooses action a following a parameterised softmax probability distribution, with probability proportional to exp(Q_t(s,a)/\u03c4_t), where Q_t(s,a) (discussed in the next paragraph) represents the agent\u2019s estimate in period t of the value of taking action a in state s. The parameter \u03c4_t, called temperature, modulates the intensity of exploration: for smaller values of \u03c4_t, the probability mass increasingly concentrates on the action(s) that are most rewarding according to Q_t. We reduce exploration at each interaction by letting the temperature decay geometrically, \u03c4_{t+1} = \u03bb\u03c4_t, where \u03bb \u2208 (0,1) is the decay rate. Hence, exploration goes to zero as t grows large.\nThe initial estimate, Q_0(s,a), is arbitrarily initialised for all state-action pairs. If the agent takes action a in state s in period t, the estimate associated with that specific state-action pair is updated iteratively according to Q_{t+1}(s,a) = Q_t(s,a) + \u03b1 (r_t - Q_t(s,a)), where the step-size parameter \u03b1, called learning rate, regulates how quickly new information replaces the old and r_t (discussed later) denotes the reward the agent obtains by playing action a in state s in period t. For all other pairs, the estimates are left unchanged. [Footnote 7: The specification we adopt agrees with Banchio and Mantegazza (2022)\u2019s definition of a reinforcer but differs from their \u03b5-greedy Q-learning in two ways. First, rather than a greedy policy, we employ softmax, which smooths out the effects of minor differences in the agents\u2019 estimates during learning. Second, because actions have no direct effect on future states, our players are not designed to estimate the long-run benefit of taking an action today. Formally, we set their discount parameter to zero.]\nIn multi-agent reinforcement learning, the rewards obtained in each period depend on the actions taken by other agents in that same period. In our case, the reward of each agent in period t depends on the pair of state and action taken by the other agent in that period: the sender\u2019s reward is his payoff at the realised type and the receiver\u2019s action, and likewise for the receiver.\nIf the distribution of rewards were to depend only on the agent\u2019s own actions, existing results would guarantee convergence of the policy to an optimal one. However, because the underlying distributions of rewards the agents face are non-stationary, convergence is not guaranteed. For this reason, we consider agents to have converged and stop the simulation if, before reaching the maximum number of interactions, the policies of both agents exhibit relative deviations in norm smaller than a fixed tolerance for a fixed number of consecutive interactions.\nPseudocode for the simulation is given in Algorithm 1."
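The learning rule described in this section (softmax action choice with a decaying temperature, followed by a one-step update of the visited estimate) can be sketched as follows. This is an illustrative Python sketch, not the authors' Julia implementation; the array sizes and hyperparameter values below are placeholders.

```python
import numpy as np

def softmax_policy(q_row, tau):
    # Boltzmann/softmax distribution over actions given estimates q_row and temperature tau
    z = (q_row - q_row.max()) / tau  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def q_update(q, state, action, reward, alpha):
    # Move the visited estimate toward the realised reward; all other entries are unchanged
    q[state, action] += alpha * (reward - q[state, action])

# One interaction for a generic agent (sender or receiver)
rng = np.random.default_rng(0)
n_states, n_actions = 6, 11              # placeholder sizes, e.g. 6 messages and 11 actions
q = np.zeros((n_states, n_actions))      # value estimates Q_t(s, a)
tau, decay, alpha = 1.0, 0.999995, 0.1   # temperature, decay rate, learning rate (illustrative)

state = rng.integers(n_states)            # state drawn by the environment
probs = softmax_policy(q[state], tau)     # softmax policy in the current state
action = rng.choice(n_actions, p=probs)   # sampled action
reward = -0.25                            # reward returned by the game (placeholder)
q_update(q, state, action, reward, alpha) # update the visited estimate
tau *= decay                              # exploration decays at each interaction
```

As the temperature decays toward zero, the softmax distribution concentrates on the argmax of the estimates, matching the description of vanishing exploration.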
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Main Results",
"text": "In this section, we discuss results from the baseline simulations, which involve a uniform prior and quadratic utility. The robustness of our findings to alternative game forms is demonstrated in the next section.\nFor our baseline case, we consider the discretised game with 6 types in the unit interval, with any two adjacent types separated by a 0.2 increment, 6 messages, and 11 uniformly spaced actions.\nWe implement algorithms for both the sender and the receiver that use the same learning rate and exploration decay rate. With these hyperparameters, the agents typically converge in less than one million periods, and the weight of any single reward in the estimates becomes negligible after sufficiently many updates. We test robustness to different hyperparameter configurations at the end of this section. The Q-matrices of the sender and of the receiver have dimensions 6\u00d76 and 6\u00d711, respectively. Their entries are initialised using uniform distributions whose lower bounds correspond to the payoffs the two agents obtain ex-ante in the babbling equilibrium.\nAdditional simulations confirmed that the initialisation of the matrices is irrelevant.\nWe study interactions for different levels of bias, taking uniformly spaced points in an interval. For each bias we run 1000 independent simulations. At the end of each simulation, if the agents\u2019 policies have converged, we record the Q-matrices at the point of convergence and compute the implied policies for the sender and the receiver. Using these policies we can compute the ex-ante expected rewards of the agents from playing the information transmission game together.\nWe also compute the mutual information between type and message, normalised by the entropy of the type. This is equivalent to the expected relative reduction in the entropy of the type from knowing the message. This metric takes value 1 if knowledge of the message implies knowledge of the type, as in a perfectly informative equilibrium. It takes value 0 when type and message are statistically independent, as in the babbling equilibrium. Other measures of informativeness do not exhibit qualitatively different behaviour. In particular, in the uniform-quadratic case, the negative of the residual variance of the sender\u2019s type given the message coincides with the ex-ante payoff of the receiver."
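For concreteness, the normalised mutual information described above can be computed directly from the prior over types and a type-conditional message distribution. A Python sketch follows (sizes and variable names are ours, not the paper's):

```python
import numpy as np

def normalised_mutual_information(p_type, sender_policy):
    # I(type; message) / H(type), with sender_policy[t, m] = Pr(message m | type t)
    joint = p_type[:, None] * sender_policy    # joint distribution p(type, message)
    p_msg = joint.sum(axis=0)                  # marginal over messages
    indep = p_type[:, None] * p_msg[None, :]   # product of the marginals
    mask = joint > 0                           # skip zero-probability cells
    mi = np.sum(joint[mask] * np.log(joint[mask] / indep[mask]))
    h_type = -np.sum(p_type[p_type > 0] * np.log(p_type[p_type > 0]))
    return mi / h_type

prior = np.ones(6) / 6            # uniform prior over 6 types
revealing = np.eye(6)             # each type sends its own message
babbling = np.ones((6, 6)) / 6    # message statistically independent of the type
```

The fully revealing policy yields a value of 1 and the babbling policy yields 0, matching the two benchmarks discussed in the text.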
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Convergence to Nash",
"text": "In our baseline case, all simulations converged according to the stated criterion and, as the next figure illustrates, most converged to a (Bayes) Nash equilibrium. The top panel of Figure 1 shows the maximum probability mass, across all states, that the policy of the sender (and that of the receiver) places on actions that are not best responses to the policy of the opponent. The bottom panel of Figure 1 displays the ex-ante gain the sender and the receiver would make by best responding to the policy of the opponent, thus measuring epsilon-equilibrium behaviour (Radner, 1980).\nExcept at a few bias levels, around those where the perfectly informative equilibrium disappears and where babbling becomes the unique equilibrium, the agents converge to an exact Nash with high precision. When they do not, their loss from not best responding is low, on average around 0.00085 at most. While an explicit discussion is omitted from the next section, we note that this result remains valid for different specifications of the game form.\nTo evaluate how the choice of hyperparameters affects this conclusion, we run our simulations of the cheap talk game over a grid of reinforcement learning hyperparameters, varying the learning rate and the exploration decay rate. The exploration decay rates are spaced so that the number of periods to converge roughly doubles at each step. [Footnote 8: In practice, the number of interactions it takes for the agents\u2019 policies to converge roughly doubles from one decay rate in the grid to the next.] In Figure 2 below, we report the frequency of simulations where agents\u2019 policies converged close to a Nash equilibrium.\nFigure 2 indicates that, while intermediate learning rates are most favourable for convergence to equilibrium, letting the agents explore more by reducing the exploration decay rate has an unambiguously positive effect on convergence to equilibrium."
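The bottom-panel diagnostic can be illustrated on the receiver's side: given a sender policy, compute the ex-ante payoff gain the receiver would obtain by best responding message by message. Below is a Python sketch of this check in the uniform-quadratic baseline (the sizes, grids, and function name are our assumptions, not the authors' code):

```python
import numpy as np

types = np.linspace(0.0, 1.0, 6)     # discretised type space (illustrative)
actions = np.linspace(0.0, 1.0, 11)  # discretised action space (illustrative)
prior = np.ones(6) / 6               # uniform prior over types

def receiver_gain(sender_policy, receiver_policy):
    # Ex-ante gain from best responding: sender_policy[t, m] = Pr(m | type t) (6x6),
    # receiver_policy[m, a] = Pr(a | m) (6x11); receiver payoff is -(a - type)^2.
    joint = prior[:, None] * sender_policy           # p(type, message)
    loss = (actions[None, :] - types[:, None]) ** 2  # squared loss, 6 types x 11 actions
    eu = -(joint.T @ loss)                           # weighted payoff of each action after each message
    current = np.sum(eu * receiver_policy)           # ex-ante payoff under the actual policy
    best = np.sum(eu.max(axis=1))                    # ex-ante payoff when best responding
    return best - current

# A babbling sender plus a receiver who always plays 0.5 are mutual best replies:
babbling = np.ones((6, 6)) / 6
play_half = np.zeros((6, 11))
play_half[:, 5] = 1.0                # actions[5] == 0.5, the ex-ante optimal action
```

A gain of (numerically) zero corresponds to exact best-response behaviour; small positive gains correspond to the epsilon-equilibrium measure plotted in the figure.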
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Equilibrium Selection",
"text": "The CS game of information transmission has multiple Nash equilibria and this remains the case in its discretised version. While babbling is the unique equilibrium when the bias is above 0.3, other equilibria that induce different levels of information transmission exist at lower biases. The existing literature does not offer a complete characterisation of such equilibria in the discretised model. Frug (2016, Proposition 2) shows that in the uniform-quadratic case the set of Pareto efficient equilibria is a singleton, payoff-wise. This equilibrium, which Frug (2016) characterises, represents an important benchmark for us and is henceforth referred to as \u201coptimal\u201d.\nOne class of equilibria, which we refer to as monotone partitional equilibria, is especially relevant for studying equilibrium selection because our simulations nearly always converge to strategies in this class. In a monotone partitional equilibrium, the sender partitions the type space into contiguous intervals and uses an (ex-ante) strategy which is measurable with respect to the partition. As we shall see, this class is never a singleton in our specifications, except in cases where babbling is the unique equilibrium. In fact, both the babbling equilibrium and the optimal equilibrium characterised in Frug (2016) are monotone partitional. [Footnote 9: Frug (2016) endows the receiver with a continuum of actions. This ensures that a single optimal action corresponds to each belief the receiver may have. Given our discretisation of the type and action spaces, the optimal action is unique as long as the strategy of the sender is monotone partitional. The optimal equilibrium constructed in Frug (2016) is also optimal in our setting because in it the strategy of the sender is partitional.] To fix ideas, the modal policies at convergence for both the sender and the receiver are illustrated in Figure 3 for some selected levels of the bias.\nHaving established that reinforcement learning dynamics predominantly lead to partitional Nash equilibria and that the game has many such equilibria exhibiting different levels of informativeness, we now study where our algorithms converge as a function of the bias.\nIn Figure 4, we compare the distribution (blue heatmap) of ex-ante payoffs arising from the simulations to the theoretical bounds provided by the babbling equilibrium (dotted line) and the optimal equilibrium (in red) for the different levels of bias in the discretised interval. In Figure 5, we show the distribution of the normalised mutual information between type and message from our simulations, for the same range of biases. Finally, in Figure 6 we identify all monotone partitional equilibria that exist in our discretised game at different levels of the bias, distinguishing them by their level of informativeness. Equilibria where the algorithms converge are highlighted in blue.\nTwo conclusions are immediate and, as we shall see in the next section, robust to different specifications of the game. First, unless babbling is the only equilibrium, our agents converge to play with substantial information transmission, which in this model is reflected both by the mutual information and by agents\u2019 payoffs above the babbling level. Second, the level of information transmission that takes place at convergence decreases monotonically as the bias increases, which seems a natural property of any equilibrium selection criterion in the information transmission game.\nAlbeit requiring a more granular observation of the data, a clear pattern also emerges at levels of the bias where neither perfect information transmission is an equilibrium nor babbling is the unique equilibrium, that is, between bias 0.1 and 0.3 in this baseline case. In particular, play converges to either the optimal equilibrium or to the second-best one, which becomes the optimal one at higher levels of the bias. Behaviour is more nuanced at lower levels of bias. In this case, as Figure 6 shows, the game exhibits many equilibria which then disappear once bias 0.1 is reached. The agents only play the perfectly informative equilibrium when the bias is nearly zero. Then, as the bias grows toward 0.1, they gradually descend into less and less informative equilibria, reaching around midway the Nash equilibrium that is optimal at bias 0.1. As the left panel of Figure 4 shows, the transition is smoother for the sender, suggesting that jumps from one equilibrium to the other take place when the sender becomes nearly indifferent between the two equilibria.\nThe behaviour described in this subsection is robust to alternative specifications of the learning hyperparameters. Figure 7 below shows the distribution of the normalised mutual information for alternative values of the learning rate and the exploration decay rate.\nIn addition to showing robustness of the results, Figure 7 highlights that letting agents explore more extensively yields outcomes that are progressively closer to the ex-ante optimal equilibrium. The same trend naturally extends to the normalised mutual information between messages and types."
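A convenient way to classify converged play, used informally in this subsection, is to test whether a sender policy is monotone partitional: each type sends a single message with probability one, and types sending the same message form contiguous blocks. A Python sketch of such a check (the function name and tolerance are ours):

```python
import numpy as np

def is_monotone_partitional(sender_policy, tol=1e-9):
    # sender_policy[t, m] = Pr(message m | type t); types are assumed ordered
    if not np.allclose(sender_policy.max(axis=1), 1.0, atol=tol):
        return False                 # some type mixes over messages
    labels = sender_policy.argmax(axis=1)
    left = set()                     # messages whose block has already ended
    for i, m in enumerate(labels):
        if i > 0 and labels[i - 1] != m:
            left.add(labels[i - 1])  # previous message's block just ended
        if m in left:
            return False             # a message is reused by non-adjacent types
    return True
```

For example, a pure strategy pooling types into contiguous intervals passes the check, while one that reuses a message for non-adjacent types, or that mixes, does not.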
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Comparative statics and Robustness",
"text": "In this section, we demonstrate that communication emerges robustly in information transmission games played by AI agents, beyond the classic uniform-quadratic case studied so far. First, we look at making the language more or less expressive than is necessary to achieve all equilibria of the game and, analogously, at making the set of actions larger or smaller. Second, we report the results of simulations obtained for a variety of alternative assumptions on the information transmission game. We consider a higher and a lower number of types, non-uniform priors, and utility functions that are not linear-quadratic. Throughout, we keep the reinforcement learning hyperparameters fixed as in our baseline configuration. We show for each case how the ex-ante expected reward of the agents is distributed over 1000 simulations."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Comparative statics on messages and actions",
"text": "In this subsection, we perform some comparative statics by changing the number of messages available to the sender and the number of actions available to the receiver.\nIn Figure 8 we endow the sender with more or fewer messages than the number required to achieve full information transmission. In particular, while keeping the number of types fixed at 6, we look at the case where only 3 messages are available and the case where 9 messages are present. Results are in line with expectations. On the one hand, increasing the number of messages does not have a tangible effect on the qualitative conclusions reached in the previous section. The sender simply learns to avoid redundant messages, and the receiver responds to them in a way that does not encourage the sender to use them. Learning becomes slower, though, because the dimensions of the Q-matrices increase. On the other hand, making the language less expressive may impede players from playing the more informative equilibria. In fact, when the constraint on messages is non-binding, behaviour is as in the baseline scenario and less noisy, because less learning is required. Instead, when the constraint is binding, sender and receiver tend to settle on the most informative equilibrium compatible with the number of messages available.\nIn Figure 9 we modify our baseline scenario by increasing or reducing the number of actions available to the receiver compared to the baseline case of 11. In particular, we consider cases with fewer and with more uniformly spaced actions in the unit interval. Both numbers are odd, which guarantees that the set of actions contains the best reply to babbling, which is 0.5. As in the case of superfluous messages, if the number of actions is large enough to include all those required in the equilibrium or more, there is no substantial change in behaviour at convergence compared to the baseline scenario. The main qualitative conclusions we reached in the baseline scenario also continue to hold when the number of actions available to the receiver is reduced. However, for a small enough number of actions, the best reply to partitional strategies may not be unique and the ex-ante optimal equilibrium, hence our benchmark, is different from the one identified by Frug (2016)."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Robustness to alternative game forms",
"text": "We now look at specifications with different numbers of types, different utility specifications and different distributions over the types.\nIn Figure 11 ###reference_### we consider simulations with and types, so that any two adjacent types are spaced and from each other, respectively. The figure confirms that all conclusions reached in the previous section extend to the case of a smaller or larger type space. However, with a large number of types, behaviour tends to be more noisy. This is explained by the relative decrease in exploration due to the change in size of the agents\u2019 Q-matrices. In general, as we keep fixed to the base case configuration, each state-action pair is on average visited more (less) often the smaller (larger) the agent\u2019s Q-matrix is. This eventually results in improving (worsening) the agent\u2019s learning.\nFigure 11 ###reference_### shows simulation outcomes with different utility specifications. We consider the case of a fourth-power loss function and the case of an absolute loss function. Both functions are concave and, together with the rest of our assumptions, this guarantees the existence of an optimal monotone partitional equilibrium. Then, the figure confirms that our main results are not dependent on the specific forms of the utility function. Both scenarios show similar results, in line with our benchmark case. While we have not run further cases because it is hard to identify the right comparator, we strongly suppose that the assumptions of concavity and upward bias are not crucial for the main result that communication will take place at the highest levels predicted by equilibrium.\nFinally, in Figure 12 ###reference_### we show outcomes for different distributions over the types; namely, a probability distribution with linearly increasing probability mass, and one with linearly decreasing probability mass. 
In this case, results also indicate that communication is roughly in line with the most optimistic theoretical benchmark. Now, however, existence of the ex-ante optimal equilibrium is no longer guaranteed. Hence, we rely on the receiver-preferred partitional equilibrium as the comparator. A surprising result is obtained in the case of the decreasing distribution. While in all our other simulations agents do better than babbling, here the receiver obtains a payoff lower than the babbling one, while the sender obtains a larger one. We find this result interesting because the sender seems able to manipulate the receiver even when, theoretically, it should not be possible, and the receiver loses out by not simply playing the ex-ante optimal action. Unfortunately, we do not have a convincing explanation for this result. We believe it might be due to the existence of multiple optimal actions for the receiver even when the sender\u2019s strategy is partitional.\n###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Conclusions",
|
| 57 |
+
"text": "We showed that simple reinforcement learning algorithms training together in the classic Crawford and Sobel, (1982 ###reference_b14###) cheap talk game engage in substantial information transmission at the level predicted by the most informative equilibrium.\nIn the uniform-quadratic version of CS, equilibria exhibit a nice structure. Both sender and receiver unambiguously benefit from more communication. This raises the question of what would happen in games with multiple equilibria that are not Pareto ranked, with some more favourable to the receiver and others to the sender. Will communication break down? Or will one of the two agents lead the other to their favourite equilibrium? While our results in Section 4 suggest that communication will persist and favour the sender, we believe extending the analysis to more general games with communication is an interesting avenue for future work.\nOur results are interpreted as arising from a large population of randomly matched agents. A natural extension of the present framework, which would require explicitly simulating the population dynamics, would be looking at how a language is learned when agents within the populations are heterogeneous (e.g., senders may have different biases) or the frequency of interactions is not driven by random matching (e.g., agents may be arranged in a network as in Skyrms, (2009 ###reference_b30###)\u2019s model of unbiased communication). Would agents still be able to learn a common language? Will there be winners and losers depending on the level of bias or the network architecture of interactions? What population structures facilitate learning?\nA more speculative next step would be to consider the interaction between humans and algorithms. Suppose we let a multitude of humans randomly interact with multiple algorithms. Will they learn a common language? Will there be more communication than in human-to-human experiments? 
Would humans be manipulated, or maybe the other way around? Human-algorithm play also raises interesting questions regarding the interaction between strategic signalling and natural language. How would reinforcement learning algorithms endowed with natural language processing abilities, such as those currently possessed by ChatGPT and other large language models, perform? Will the use of natural language result in more or less information transmission? Will human agents be more easily deceived? We think that human-AI experiments show promise well beyond the questions raised above.\nFinally, it may be worth revisiting some of the existing findings in the economics of AI agents playing market games. For instance, will the sort of code-bidding collusion described in the introduction emerge in markets played by AI agents? Since a large state space would be required to handle this sort of \u201cnon-verbal\u201d exchange, our finding that communication emerges suggests it may be worth looking at the behaviour of more complex agents, such as those endowed with deep neural networks. It would not be surprising to see collusion sustained at higher levels than those already observed with simple learning algorithms. Such a finding would suggest the need for market design to mitigate communication possibilities."
|
| 58 |
+
}
|
| 59 |
+
],
|
| 60 |
+
"appendix": [],
|
| 61 |
+
"tables": {},
|
| 62 |
+
"image_paths": {
|
| 63 |
+
"1": {
|
| 64 |
+
"figure_path": "2310.07867v6_figure_1.png",
|
| 65 |
+
"caption": "Figure 1: Top: Maximum probability mass that the policy of the sender (receiver) places on suboptimal messages (actions) across all types (and messages). Bottom: Potential ex-ante gain by a unilateral deviation across all types (and messages). Averages over 1000100010001000 simulations.\nAlso applies to subsequent figures: The ex-ante optimal equilibrium entails perfect information transmission for biases identified by the shaded grey area to the left, while babbling is the unique equilibrium for biases in the shaded grey areas to the right.",
|
| 66 |
+
"url": "http://arxiv.org/html/2310.07867v6/x1.png"
|
| 67 |
+
},
|
| 68 |
+
"2": {
|
| 69 |
+
"figure_path": "2310.07867v6_figure_2.png",
|
| 70 |
+
"caption": "Figure 2: Frequency of simulations in which both agents place at most 0.010.010.010.01 probability mass on suboptimal actions across all states, for different levels of bias in [0,0.5]00.5[0,0.5][ 0 , 0.5 ]. Each graph has bias on the horizontal axis and frequency on the vertical axis, and corresponds to a specific (\u03bb,\u03b1)\ud835\udf06\ud835\udefc(\\lambda,\\alpha)( italic_\u03bb , italic_\u03b1 ) combination of hyperparameters. Horizontal dashed lines indicate frequencies of 0 and 1.",
|
| 71 |
+
"url": "http://arxiv.org/html/2310.07867v6/x2.png"
|
| 72 |
+
},
|
| 73 |
+
"3": {
|
| 74 |
+
"figure_path": "2310.07867v6_figure_3.png",
|
| 75 |
+
"caption": "Figure 3: Heathmap of the modal policies of sender (top) and receiver (top) for different levels of bias over 1000 independent simulations. All vertical pairs of strategies correspond to exact equilibria. Randomisation over messages is with equal probability as indicated by the same colour tone. Messages to the right of the dashed line are off the equilibrium path. To find the modal policy we relabeled messages in each simulation assigning a natural number to each message such that messages with smaller numbers are associated with smaller types.",
|
| 76 |
+
"url": "http://arxiv.org/html/2310.07867v6/x3.png"
|
| 77 |
+
},
|
| 78 |
+
"4(a)": {
|
| 79 |
+
"figure_path": "2310.07867v6_figure_4(a).png",
|
| 80 |
+
"caption": "Figure 4: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. The distribution of values of 1000 simulations is shown in shades of blue. The value associated with the ex-ante optimal equilibrium is in red and the one associated with the babbling equilibrium is dotted gray.",
|
| 81 |
+
"url": "http://arxiv.org/html/2310.07867v6/x4.png"
|
| 82 |
+
},
|
| 83 |
+
"4(b)": {
|
| 84 |
+
"figure_path": "2310.07867v6_figure_4(b).png",
|
| 85 |
+
"caption": "Figure 4: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. The distribution of values of 1000 simulations is shown in shades of blue. The value associated with the ex-ante optimal equilibrium is in red and the one associated with the babbling equilibrium is dotted gray.",
|
| 86 |
+
"url": "http://arxiv.org/html/2310.07867v6/x5.png"
|
| 87 |
+
},
|
| 88 |
+
"5": {
|
| 89 |
+
"figure_path": "2310.07867v6_figure_5.png",
|
| 90 |
+
"caption": "Figure 5: Normalised mutual information between the distribution of messages induced by the sender\u2019s policy and the distribution of sender\u2019s types. The distribution over 1000 simulations is shown in shades of blue. The value associated with the optimal equilibrium is in red and the one associated with the worst equilibrium is dotted gray.\n",
|
| 91 |
+
"url": "http://arxiv.org/html/2310.07867v6/x6.png"
|
| 92 |
+
},
|
| 93 |
+
"6": {
|
| 94 |
+
"figure_path": "2310.07867v6_figure_6.png",
|
| 95 |
+
"caption": "Figure 6: Normalised mutual information of the sender\u2019s modal policy across simulations converged to an equilibrium (maximum mass on suboptimal actions across states < 0.01 for both agents). The normalised mutual information of monotone partitional equilibria that exist for a given bias is shown in grey.\n",
|
| 96 |
+
"url": "http://arxiv.org/html/2310.07867v6/x7.png"
|
| 97 |
+
},
|
| 98 |
+
"7(a)": {
|
| 99 |
+
"figure_path": "2310.07867v6_figure_7(a).png",
|
| 100 |
+
"caption": "Figure 7: Normalised mutual information for a grid of hyperparameters; as in Figure 6. Distribution of outcomes over 1000100010001000 simulations.",
|
| 101 |
+
"url": "http://arxiv.org/html/2310.07867v6/x8.png"
|
| 102 |
+
},
|
| 103 |
+
"7(b)": {
|
| 104 |
+
"figure_path": "2310.07867v6_figure_7(b).png",
|
| 105 |
+
"caption": "Figure 7: Normalised mutual information for a grid of hyperparameters; as in Figure 6. Distribution of outcomes over 1000100010001000 simulations.",
|
| 106 |
+
"url": "http://arxiv.org/html/2310.07867v6/x9.png"
|
| 107 |
+
},
|
| 108 |
+
"8(a)": {
|
| 109 |
+
"figure_path": "2310.07867v6_figure_8(a).png",
|
| 110 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 111 |
+
"url": "http://arxiv.org/html/2310.07867v6/x10.png"
|
| 112 |
+
},
|
| 113 |
+
"8(b)": {
|
| 114 |
+
"figure_path": "2310.07867v6_figure_8(b).png",
|
| 115 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 116 |
+
"url": "http://arxiv.org/html/2310.07867v6/x11.png"
|
| 117 |
+
},
|
| 118 |
+
"8(c)": {
|
| 119 |
+
"figure_path": "2310.07867v6_figure_8(c).png",
|
| 120 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 121 |
+
"url": "http://arxiv.org/html/2310.07867v6/x12.png"
|
| 122 |
+
},
|
| 123 |
+
"8(d)": {
|
| 124 |
+
"figure_path": "2310.07867v6_figure_8(d).png",
|
| 125 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 126 |
+
"url": "http://arxiv.org/html/2310.07867v6/x13.png"
|
| 127 |
+
},
|
| 128 |
+
"8(e)": {
|
| 129 |
+
"figure_path": "2310.07867v6_figure_8(e).png",
|
| 130 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 131 |
+
"url": "http://arxiv.org/html/2310.07867v6/x14.png"
|
| 132 |
+
},
|
| 133 |
+
"8(f)": {
|
| 134 |
+
"figure_path": "2310.07867v6_figure_8(f).png",
|
| 135 |
+
"caption": "Figure 8: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 messages (top) and 9999 messages (bottom).",
|
| 136 |
+
"url": "http://arxiv.org/html/2310.07867v6/x15.png"
|
| 137 |
+
},
|
| 138 |
+
"9(a)": {
|
| 139 |
+
"figure_path": "2310.07867v6_figure_9(a).png",
|
| 140 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 141 |
+
"url": "http://arxiv.org/html/2310.07867v6/x16.png"
|
| 142 |
+
},
|
| 143 |
+
"9(b)": {
|
| 144 |
+
"figure_path": "2310.07867v6_figure_9(b).png",
|
| 145 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 146 |
+
"url": "http://arxiv.org/html/2310.07867v6/x17.png"
|
| 147 |
+
},
|
| 148 |
+
"9(c)": {
|
| 149 |
+
"figure_path": "2310.07867v6_figure_9(c).png",
|
| 150 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 151 |
+
"url": "http://arxiv.org/html/2310.07867v6/x18.png"
|
| 152 |
+
},
|
| 153 |
+
"9(d)": {
|
| 154 |
+
"figure_path": "2310.07867v6_figure_9(d).png",
|
| 155 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 156 |
+
"url": "http://arxiv.org/html/2310.07867v6/x19.png"
|
| 157 |
+
},
|
| 158 |
+
"9(e)": {
|
| 159 |
+
"figure_path": "2310.07867v6_figure_9(e).png",
|
| 160 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 161 |
+
"url": "http://arxiv.org/html/2310.07867v6/x20.png"
|
| 162 |
+
},
|
| 163 |
+
"9(f)": {
|
| 164 |
+
"figure_path": "2310.07867v6_figure_9(f).png",
|
| 165 |
+
"caption": "Figure 10: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with 3333 types (top) and 9999 types (bottom).",
|
| 166 |
+
"url": "http://arxiv.org/html/2310.07867v6/x21.png"
|
| 167 |
+
},
|
| 168 |
+
"10(a)": {
|
| 169 |
+
"figure_path": "2310.07867v6_figure_10(a).png",
|
| 170 |
+
"caption": "Figure 12: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with a linearly increasing distribution (top) and linearly decreasing distribution (bottom). We use p\u2062(\u03b8k)\ud835\udc5dsubscript\ud835\udf03\ud835\udc58p(\\theta_{k})italic_p ( italic_\u03b8 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) to indicate the probability mass on the k\ud835\udc58kitalic_k-th type in \u0398={\u03b81,\u2026,\u03b8n}\u0398subscript\ud835\udf031\u2026subscript\ud835\udf03\ud835\udc5b\\Theta=\\{\\theta_{1},\\ldots,\\theta_{n}\\}roman_\u0398 = { italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_\u03b8 start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT }. There are n=6\ud835\udc5b6n=6italic_n = 6 types as in the base-case simulations.",
|
| 171 |
+
"url": "http://arxiv.org/html/2310.07867v6/x22.png"
|
| 172 |
+
},
|
| 173 |
+
"10(b)": {
|
| 174 |
+
"figure_path": "2310.07867v6_figure_10(b).png",
|
| 175 |
+
"caption": "Figure 12: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with a linearly increasing distribution (top) and linearly decreasing distribution (bottom). We use p\u2062(\u03b8k)\ud835\udc5dsubscript\ud835\udf03\ud835\udc58p(\\theta_{k})italic_p ( italic_\u03b8 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) to indicate the probability mass on the k\ud835\udc58kitalic_k-th type in \u0398={\u03b81,\u2026,\u03b8n}\u0398subscript\ud835\udf031\u2026subscript\ud835\udf03\ud835\udc5b\\Theta=\\{\\theta_{1},\\ldots,\\theta_{n}\\}roman_\u0398 = { italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_\u03b8 start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT }. There are n=6\ud835\udc5b6n=6italic_n = 6 types as in the base-case simulations.",
|
| 176 |
+
"url": "http://arxiv.org/html/2310.07867v6/x23.png"
|
| 177 |
+
},
|
| 178 |
+
"10(c)": {
|
| 179 |
+
"figure_path": "2310.07867v6_figure_10(c).png",
|
| 180 |
+
"caption": "Figure 12: Ex-ante expected reward for the sender (left) and receiver (right) for different levels of bias. Cases with a linearly increasing distribution (top) and linearly decreasing distribution (bottom). We use p\u2062(\u03b8k)\ud835\udc5dsubscript\ud835\udf03\ud835\udc58p(\\theta_{k})italic_p ( italic_\u03b8 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) to indicate the probability mass on the k\ud835\udc58kitalic_k-th type in \u0398={\u03b81,\u2026,\u03b8n}\u0398subscript\ud835\udf031\u2026subscript\ud835\udf03\ud835\udc5b\\Theta=\\{\\theta_{1},\\ldots,\\theta_{n}\\}roman_\u0398 = { italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_\u03b8 start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT }. There are n=6\ud835\udc5b6n=6italic_n = 6 types as in the base-case simulations.",
|
| 181 |
+
"url": "http://arxiv.org/html/2310.07867v6/x24.png"
|
| 182 |
+
}
|
| 183 |
+
},
|
| 184 |
+
"validation": true,
|
| 185 |
+
"references": [
|
| 186 |
+
{
|
| 187 |
+
"1": {
|
| 188 |
+
"title": "Artificial intelligence, algorithm design, and pricing.",
|
| 189 |
+
"author": "Asker, J., Fershtman, C., and Pakes, A. (2022).",
|
| 190 |
+
"venue": "AEA Papers and Proceedings, 112:452\u201356.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"2": {
|
| 196 |
+
"title": "Long cheap talk.",
|
| 197 |
+
"author": "Aumann, R. J. and Hart, S. (2003).",
|
| 198 |
+
"venue": "Econometrica, 71(6):1619\u20131660.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"3": {
|
| 204 |
+
"title": "Auction design and tacit collusion in fcc spectrum auctions.",
|
| 205 |
+
"author": "Bajari, P. and Yeo, J. (2009).",
|
| 206 |
+
"venue": "Information Economics and Policy, 21(2):90\u2013100.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"4": {
|
| 212 |
+
"title": "Adaptive algorithms and collusion via coupling.",
|
| 213 |
+
"author": "Banchio, M. and Mantegazza, G. (2022).",
|
| 214 |
+
"venue": null,
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"5": {
|
| 220 |
+
"title": "Artificial intelligence and auction design.",
|
| 221 |
+
"author": "Banchio, M. and Skrzypacz, A. (2022).",
|
| 222 |
+
"venue": null,
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"6": {
|
| 228 |
+
"title": "Evolutionary stability in games of communication.",
|
| 229 |
+
"author": "Blume, A., Kim, Y.-G., and Sobel, J. (1993).",
|
| 230 |
+
"venue": "Games and Economic Behavior, 5(4):547\u2013575.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"7": {
|
| 236 |
+
"title": "Handbook of Experimental Game Theory, chapter 13, pages\n311\u2013347.",
|
| 237 |
+
"author": "Blume, A., Lai, E. K., and Lim, W. (2020).",
|
| 238 |
+
"venue": "Edward Elgar Publishing.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"8": {
|
| 244 |
+
"title": "Learning through reinforcement and replicator dynamics.",
|
| 245 |
+
"author": "B\u00f6rgers, T. and Sarin, R. (1997).",
|
| 246 |
+
"venue": "Journal of Economic Theory, 77(1):1\u201314.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"9": {
|
| 252 |
+
"title": "Stochastic models for learning.",
|
| 253 |
+
"author": "Bush, R. R. and Mosteller, F. (1955).",
|
| 254 |
+
"venue": "John Wiley & Sons, Inc.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"10": {
|
| 260 |
+
"title": "Overcommunication in strategic information transmission games.",
|
| 261 |
+
"author": "Cai, H. and Wang, J. T.-Y. (2006).",
|
| 262 |
+
"venue": "Games and Economic Behavior, 56(1):7\u201336.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"11": {
|
| 268 |
+
"title": "Artificial intelligence, algorithmic pricing, and collusion.",
|
| 269 |
+
"author": "Calvano, E., Calzolari, G., Denicol\u00f2, V., and Pastorello, S. (2020).",
|
| 270 |
+
"venue": "American Economic Review, 110(10):3267\u201397.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"12": {
|
| 276 |
+
"title": "Emergent communication through negotiation.",
|
| 277 |
+
"author": "Cao, K., Lazaridou, A., Lanctot, M., Leibo, J. Z., Tuyls, K., and Clark, S.\n(2018).",
|
| 278 |
+
"venue": "CoRR, abs/1804.03980.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"13": {
|
| 284 |
+
"title": "Selecting cheap-talk equilibria.",
|
| 285 |
+
"author": "Chen, Y., Kartik, N., and Sobel, J. (2008).",
|
| 286 |
+
"venue": "Econometrica, 76(1):117\u2013136.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"14": {
|
| 292 |
+
"title": "Strategic information transmission.",
|
| 293 |
+
"author": "Crawford, V. P. and Sobel, J. (1982).",
|
| 294 |
+
"venue": "Econometrica, 50(6):1431\u20131451.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"15": {
|
| 300 |
+
"title": "DP18009 artificial intelligence & data obfuscation: Algorithmic\ncompetition in digital Ad auctions.",
|
| 301 |
+
"author": "Decarolis, F., Rovigatti, G., Rovigatti, M., and Shakhgildyan, K. (2023).",
|
| 302 |
+
"venue": "(mimeo).",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"16": {
|
| 308 |
+
"title": "An experimental study of strategic information transmission.",
|
| 309 |
+
"author": "Dickhaut, J. W., McCabe, K. A., and Mukherji, A. (1995).",
|
| 310 |
+
"venue": "Economic Theory, 6(3):389\u2013403.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"17": {
|
| 316 |
+
"title": "Predicting how people play games: Reinforcement learning in\nexperimental games with unique, mixed strategy equilibria.",
|
| 317 |
+
"author": "Erev, I. and Roth, A. E. (1998).",
|
| 318 |
+
"venue": "The American Economic Review, 88(4):848\u2013881.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"18": {
|
| 324 |
+
"title": "Learning to communicate with deep multi-agent reinforcement learning.",
|
| 325 |
+
"author": "Foerster, J. N., Assael, Y. M., de Freitas, N., and Whiteson, S. (2016).",
|
| 326 |
+
"venue": "CoRR, abs/1605.06676.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"19": {
|
| 332 |
+
"title": "A note on optimal cheap talk equilibria in a discrete state space.",
|
| 333 |
+
"author": "Frug, A. (2016).",
|
| 334 |
+
"venue": "Games and Economic Behavior, 99:180\u2013185.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"20": {
|
| 340 |
+
"title": "Learning mixed equilibria.",
|
| 341 |
+
"author": "Fudenberg, D. and Kreps, D. M. (1993).",
|
| 342 |
+
"venue": "Games and Economic Behavior, 5(3):320\u2013367.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"21": {
|
| 348 |
+
"title": "The Theory of Learning in Games, volume 1 of MIT Press\nBooks.",
|
| 349 |
+
"author": "Fudenberg, D. and Levine, D. K. (1998).",
|
| 350 |
+
"venue": "The MIT Press.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"22": {
|
| 356 |
+
"title": "Effective communication in cheap talk games.",
|
| 357 |
+
"author": "Gordon, S., Kartik, N., Pei-yu Lo, M., Olszewski, W., and Sobel, J. (2022).",
|
| 358 |
+
"venue": "(mimeo).",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"23": {
|
| 364 |
+
"title": "Emergence of language with multi-agent games: Learning to communicate\nwith sequences of symbols.",
|
| 365 |
+
"author": "Havrylov, S. and Titov, I. (2017).",
|
| 366 |
+
"venue": "CoRR, abs/1705.11192.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"24": {
|
| 372 |
+
"title": "Reinforcement learning in signaling game.",
|
| 373 |
+
"author": "Hu, Y., Skyrms, B., and Tarr\u00e8s, P. (2011).",
|
| 374 |
+
"venue": null,
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"25": {
|
| 380 |
+
"title": "Platform design when sellers use pricing algorithms.",
|
| 381 |
+
"author": "Johnson, J. P., Rhodes, A., and Wildenbeest, M. (2023).",
|
| 382 |
+
"venue": "Econometrica, 91(5):1841\u20131879.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"26": {
|
| 388 |
+
"title": "Multi-agent cooperation and the emergence of (natural) language.",
|
| 389 |
+
"author": "Lazaridou, A., Peysakhovich, A., and Baroni, M. (2016).",
|
| 390 |
+
"venue": "CoRR, abs/1612.07182.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"27": {
|
| 396 |
+
"title": "Emergent communication under competition.",
|
| 397 |
+
"author": "Noukhovitch, M., LaCroix, T., Lazaridou, A., and Courville, A. C. (2021).",
|
| 398 |
+
"venue": "CoRR, abs/2101.10276.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"28": {
|
| 404 |
+
"title": "Collusive behavior in noncooperative epsilon-equilibria of\noligopolies with long but finite lives.",
|
| 405 |
+
"author": "Radner, R. (1980).",
|
| 406 |
+
"venue": "Journal of Economic Theory, 22(2):136\u2013154.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"29": {
|
| 412 |
+
"title": "Converging better response dynamics in sender-receiver games.",
|
| 413 |
+
"author": "S\u00e9mirat, S. and Forges, F. (2024).",
|
| 414 |
+
"venue": "(mimeo).",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"30": {
|
| 420 |
+
"title": "Evolution of signalling systems with multiple senders and receivers.",
|
| 421 |
+
"author": "Skyrms, B. (2009).",
|
| 422 |
+
"venue": "Philosophical Transactions of the Royal Society B: Biological\nSciences, 364(1518):771\u2013779.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"31": {
|
| 428 |
+
"title": "Reinforcement Learning: An Introduction.",
|
| 429 |
+
"author": "Sutton, R. S. and Barto, A. G. (2018).",
|
| 430 |
+
"venue": "A Bradford Book, Cambridge, MA, USA.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"32": {
|
| 436 |
+
"title": "Pricing in agent economies using multi-agent q-learning.",
|
| 437 |
+
"author": "Tesauro, G. and Kephart, J. O. (2002).",
|
| 438 |
+
"venue": "Autonomous Agents and Multi-Agent Systems, 5(3):289\u2013304.",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"33": {
|
| 444 |
+
"title": "Q-learning agents in a cournot oligopoly model.",
|
| 445 |
+
"author": "Waltman, L. and Kaymak, U. (2008).",
|
| 446 |
+
"venue": "Journal of Economic Dynamics and Control, 32(10):3275\u20133293.",
|
| 447 |
+
"url": null
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"34": {
|
| 452 |
+
"title": "Cheap talk, coordination, and evolutionary stability.",
|
| 453 |
+
"author": "W\u00e4rneryd, K. (1993).",
|
| 454 |
+
"venue": "Games and Economic Behavior, 5(4):532\u2013546.",
|
| 455 |
+
"url": null
|
| 456 |
+
}
|
| 457 |
+
}
|
| 458 |
+
],
|
| 459 |
+
"url": "http://arxiv.org/html/2310.07867v6"
|
| 460 |
+
}
|
20241001/2310.12239v2.json
ADDED
|
@@ -0,0 +1,411 @@
| 1 |
+
{
|
| 2 |
+
"title": "On the Space Usage of Approximate Distance Oracles with Sub-2 StretchThis work is partially supported by Israel Science Foundation (ISF) grant no. 1926/19 and by United States - Israel Binational Science Foundation (BSF) grant no. 2018364. Contact: kopelot.biu@gmail.com, korinariel10@gmail.com, liamr@macs.biu.ac.il.",
|
| 3 |
+
"abstract": "For an undirected unweighted graph with vertices and edges, let denote the distance from to in .\nAn -stretch approximate distance oracle (ADO) for is a data structure that given returns in constant (or near constant) time a value such that , for some reals .\nIf , we say that the ADO has stretch .",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "One of the most fundamental and classic problems in algorithmic research is the task of computing distances in graphs.\nFormally, given an undirected unweighted graph , and , the distance between two vertices , denoted , is the length of a shortest path between and .\nA central problem in distance computations is the all-pairs shortest paths (APSP) problem [18 ###reference_b18###, 10 ###reference_b10###, 20 ###reference_b20###, 16 ###reference_b16###, 37 ###reference_b37###, 12 ###reference_b12###, 23 ###reference_b23###] where the objective is to compute the distances between every pair of vertices in the graph.\nA main disadvantage in handling the output of the APSP problem is that storing the distances between every pair of vertices in the graph requires space.\nAs in many other problems in computer science, the lack of space efficiency in solving the APSP problem has motivated researchers to search for a tradeoff between space and accuracy.\nAs a result, one central form of the APSP problem emerging from this line of research is constructing an approximate distance oracle where we sacrifice accuracy for space efficiency."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Main results: When can we beat stretch 2 with subquadratic space?",
|
| 15 |
+
"text": "The line of work by [28 ###reference_b28###, 29 ###reference_b29###, 31 ###reference_b31###] is a natural research path given the observation that the (conditional) lower bounds of [34 ###reference_b34###] apply only to graphs with .\nSimilarly, a natural goal, which we address in this paper, is to understand for which families of graphs can an ADO beat stretch using subquadratic space.\nIn particular, the conditional lower bound proof of P\u01cetra\u015fcu and Roditty [28 ###reference_b28###] does not apply to graphs with maximum degree of , since in such graphs the number of paths of length 2 is , and so constructing a subquadratic space -distinguisher oracle is straightforward (by explicitly storing all length 2 paths).\nThus, a natural goal, which we investigate in this paper, is to understand the relationship between the maximum degree of , denoted by , and the best possible stretch obtainable for an ADO using subquadratic space.\nTo address this question, we present an upper bound and matching lower bounds. The upper bound considers and is summarized in the following theorem.\nFor any graph , positive real constant , and positive integer for which for some real , there exists an ADO for that uses space and has a -stretch.\nFor , Theorem 1.4 ###reference_theorem4### implies a subquadratic sub-2 stretch ADO for graphs for which .\nNext, we provide a tight conditional lower bound, conditioned on Hypothesis 1.2 ###reference_theorem2###, that applies when , for all integers 222The case was proven by Thorup and Zwick [34 ###reference_b34###] to hold unconditionally. Thus, Theorem 1.5 ###reference_theorem5### focuses on ..\nThe conditional lower bound is summarized in the following Theorem.\nLet . 
Assuming Hypothesis 1.2 ###reference_theorem2###, a -stretch ADO for graphs with vertices and maximum degree must use space.\nNotice that Theorem 1.5 ###reference_theorem5### implies that the upper bound of Theorem 1.5 ###reference_theorem5### is optimal under Hypothesis 1.2 ###reference_theorem2### in two ways. First, while a subquadratic space -stretch ADO is achievable for , achieving the same stretch for graphs with requires quadratic space. Secondly, even a slight improvement to the additive error of the ADO in the upper bound, such as a -stretch subquadratic ADO for graphs with , would refute Hypothesis 1.2 ###reference_theorem2### according to Theorem 1.5 ###reference_theorem5###, since by setting , we obtain a subquadratic -stretch ADO for graphs with which contradicts Theorem 1.5 ###reference_theorem5###.\nThe tight bounds formed by Theorems 1.4 ###reference_theorem4### and 1.5 ###reference_theorem5### leave open the question of whether it is possible to achieve an ADO with a stretch and subquadratic space for positive constants . In the following theorem we rule out the possibility of such an ADO assuming Hypothesis 1.2 ###reference_theorem2###.\nLet be constants. Assuming Hypothesis 1.2 ###reference_theorem2###, a -stretch ADO for graphs with vertices and maximum degree must use space.\nNotice that Theorem 1.6 ###reference_theorem6### also holds for graphs with a maximum degree larger than since one could always add an isolated star component of at most vertices to a graph with to make arbitrarily large. Thus, if an ADO achieves certain bounds for graphs with a large it could also match those bounds for graphs with ."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Organization",
"text": "The rest of this paper is organized as follows.\nIn Section 1.3 ###reference_### we provide an overview of the main ideas used in this paper.\nIn Section 1.4 ###reference_### we survey some additional related work.\nIn Section 2 ###reference_### we provide some definitions that are used in the more technical parts of the paper.\nIn Section 3 ###reference_### we prove some useful lemmas that are used in the proof of Theorem 1.4 ###reference_theorem4###, which is described in Section 4 ###reference_### together with the construction of our new ADO.\nIn Section 5 ###reference_### we turn to the lower bounds and prove Theorem 1.5 ###reference_theorem5### and Theorem 1.6 ###reference_theorem6###.\nWe conclude our work in Section 6 ###reference_###."
},
{
"section_id": "1.3",
"parent_section_id": "1",
"section_name": "Overview of Results and Techniques",
"text": "In this section we describe an overview of the intuition and techniques used to obtain our main results.\nOur new ADO construction is based on the following intuition regarding the ADO construction of Agarwal and Godfrey [4 ###reference_b4###].\nIf we enlarge by moving to a further vertex (from ), then we would increase the likelihood of , and so the ADO would return exact distances for more pairs of vertices.\nHowever, in such a case, the quality guarantee on the stretch obtained by approximating with the shortest path from to that passes through becomes worse.\nPart of the challenge is to balance the size of which affects the usefulness of the intersections and the role of when approximating the distances.\nOur approach, intuitively, is to separate the definition of used for the approximations and the set chosen for the intersections.\nSpecifically, the definition of remains unchanged relative to (we do however change the size of ), but instead of testing whether , we use a larger set (which contains ), and test whether (or ).\nTesting whether is implemented by storing all of the distances between and vertices in .\nWe remark that one may consider the possibility of testing whether instead of testing whether , however, such an approach seems to require too much space.\nRecall that Agarwal and Godfrey [4 ###reference_b4###] obtained a -stretch.\nIn our algorithm, we choose in such a way that when then if , which ends up reducing the additive component of the stretch by at least (see 4.2 ###reference_theorem2###).\nThus, the approximation of the ADO is always at most , which is less than stretch 2 for .\nThere are two issues that need to be addressed in order to extend the reduction by P\u01cetra\u015fcu and Roditty [28 ###reference_b28###] in a way that proves Lemma 1.7 ###reference_theorem7###. 
The first is to adjust the distances so that if and only if , and otherwise, .\nThe second issue is that the degrees of vertices in need to be adjusted to be at most .\nIn order to simplify our intuitive explanation, we focus our attention to the special case where has only one element.\nOne straightforward way of dealing with the first issue is to replace vertex with a path of length , and for each that contains we add edges and .\nThus, the constructed graph would be a layered graph.\nThe number of vertices in such a graph is , and the distance between vertices in the first and last layers are for some integer .\nHowever, we still need to address the second issue of bounding the maximum degree, since and may have a very high degree corresponding to the number of sets containing .\nOn the other hand, one initial idea (that does not work) for dealing with the second issue is to replace (in the original 3 layered graph) with vertices ,\nand for each that contains we add edges and .\nNow the maximum degree of each node is constant, however, for such that , there is no path from to .\nThis idea is missing the functionality of the path which allows us to connect more than one pair of vertices from .\nOur reduction makes use of an underlying layered infrastructure graph , commonly known as the butterfly graph (see [27 ###reference_b27###, 29 ###reference_b29###]), which has the following three properties:\n\n\n(i) each layer contains vertices,\n\n(ii) there is a path of length from every vertex in the first layer to every vertex in the last layer, and\n\n(iii) the degree of every vertex is at most .\n\n\nThe layers of are numbered to .\nThe vertices in each layer are (separately) indexed with integers from to , and the construction of is based on the base representation of the these indices: for , vertices from layer are connected with vertices from layer if and only if the base representation of their corresponding indices are the same, except for possibly the \u2019th 
digit.\nSimilar to before, we denote the first layer of by and the last layer by .\nFinally, we construct a layered graph which is intuitively obtained by removing from edges touching either or for every that does not contain .\nThus, in , if then there is a path of length from to in , and otherwise, there is no path from to in .\nWe remark that in the general case, where may be larger than 1,\nwe combine for different in a special way, and so we may introduce paths from to even if . However, since the resulting graph is still a layered graph, and we are interested in paths between vertices in the first layer and vertices in the last layer, the lengths of such paths must be at least .\nThus, a -distinguisher oracle on the combined graph suffices for solving 1.1 ###reference_theorem1###.\nIn order to prove Theorem 1.6 ###reference_theorem6###, which assuming Hypothesis 1.2 ###reference_theorem2### eliminates the possibility of a subquadratic -stretch ADO, even for graphs with , we use a construction which is based on the construction we use in the proof of Theorem 1.5 ###reference_theorem5###, but with two changes:\n\n\n(i) we use so we have ,\n\n(ii) we use the edge splitting technique also used in [28 ###reference_b28###, 17 ###reference_b17###], but unlike in [28 ###reference_b28###, 17 ###reference_b17###] we only split certain edges in the graph so the lower bound on the stretch is as high as possible.\n\n\nSee Section 5 ###reference_### for more details."
},
{
"section_id": "1.3.1",
"parent_section_id": "1.3",
"section_name": "1.3.1 Upper Bound: A New ADO",
"text": "Since our new ADO is based on the ADO of Agarwal and Godfrey [4 ###reference_b4###], that has a -stretch and uses space (which simplifies the ADO of [28 ###reference_b28###]), we provide an overview of the construction of their ADO.\nThe ADO constructed by Agarwal and Godfrey [4 ###reference_b4###] uses the concept of bunches and clusters introduced by Thorup and Zwick [33 ###reference_b33###]. Following the conventions of Thorup and Zwick [33 ###reference_b33###, 34 ###reference_b34###], for a vertex and set , let be the vertex in which is closest to (breaking ties arbitrarily).\nThe bunch of with respect to is defined as .\nThe cluster of with respect to is defined as .\nWe omit from the notation when it is clear from context.\nThorup and Zwick [33 ###reference_b33###] presented an algorithm that computes a set of size such that , for every .\nThe ADO of Agarwal and Godfrey [4 ###reference_b4###] uses a hitting set of size such that for every , .\nGiven two vertices , the ADO first tests whether , and, if so, then the ADO returns the exact distance .\nThe method for testing whether the two bunches intersect is based on the observation (which follows from the definitions of bunch and cluster) that if and only if 333For , let ..\nThus, each vertex stores the exact distances to all vertices in , and now, the case of costs constant time and returns an exact distance.\nTo deal with the case of , the oracle stores the distances of pairs in , and the ADO returns the minimum of either the length of the shortest path between and passing through or the length of the shortest path between and passing through .\nThe space usage is for storing for every , and for storing the distances for all pairs in .\nOur new ADO construction is based on the following intuition regarding the ADO construction of Agarwal and Godfrey [4 ###reference_b4### ###reference_b4###].\nIf we enlarge by moving to a further vertex (from ), then we would increase the likelihood of , and so the 
ADO would return exact distances for more pairs of vertices.\nHowever, in such a case, the quality guarantee on the stretch obtained by approximating with the shortest path from to that passes through becomes worse.\nPart of the challenge is to balance the size of which affects the usefulness of the intersections and the role of when approximating the distances.\nOur approach, intuitively, is to separate the definition of used for the approximations and the set chosen for the intersections.\nSpecifically, the definition of remains unchanged relative to (we do however change the size of ), but instead of testing whether , we use a larger set (which contains ), and test whether (or ).\nTesting whether is implemented by storing all of the distances between and vertices in .\nWe remark that one may consider the possibility of testing whether instead of testing whether , however, such an approach seems to require too much space.\nRecall that Agarwal and Godfrey [4 ###reference_b4### ###reference_b4###] obtained a -stretch.\nIn our algorithm, we choose in such a way that when then if , which ends up reducing the additive component of the stretch by at least (see 4.2 ###reference_theorem2### ###reference_theorem2###).\nThus, the approximation of the ADO is always at most , which is less than stretch 2 for ."
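The bunch-and-pivot machinery described above can be sketched directly from its definition. The paper's symbols did not survive extraction, so the names below (`bunch_and_pivot`, `A`) are assumptions; the sketch also assumes a connected graph.

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source BFS distances in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def bunch_and_pivot(adj, v, A):
    """Thorup-Zwick-style bunch of v w.r.t. a sampled set A: the
    vertices strictly closer to v than v's nearest vertex in A
    (its pivot). Ties are broken arbitrarily."""
    dist = bfs_distances(adj, v)
    pivot = min(A, key=lambda a: dist[a])
    return pivot, {w for w, d in dist.items() if d < dist[pivot]}
```

The cluster is the dual object: w belongs to the cluster of a vertex exactly when that vertex belongs to w's bunch, which is what makes the constant-time intersection test described above possible.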
},
{
"section_id": "1.3.2",
"parent_section_id": "1.3",
"section_name": "1.3.2 Conditional Lower bound",
"text": "In Section 5 ###reference_### we prove the following lemma, which directly implies Theorem 1.5 ###reference_theorem5### since a -stretch ADO is also a -distinguisher oracle.\nLet . Assuming Hypothesis 1.2 ###reference_theorem2###, any -distinguisher oracle for graphs with vertices and maximum degree must use space.\nNotice that it is straightforward to construct a -distinguisher oracle for graphs with vertices and maximum degree in space by storing all pairs of vertices at distance exactly .\nThus, Lemma 1.7 ###reference_theorem7### shows that such a construction is essentially optimal.\nThere are two issues that need to be addressed in order to extend the reduction by P\u01cetra\u015fcu and Roditty [28 ###reference_b28### ###reference_b28###] in a way that proves Lemma 1.7 ###reference_theorem7### ###reference_theorem7###. The first is to adjust the distances so that if and only if , and otherwise, .\nThe second issue is that the degrees of vertices in need to be adjusted to be at most .\nIn order to simplify our intuitive explanation, we focus our attention to the special case where has only one element.\nOne straightforward way of dealing with the first issue is to replace vertex with a path of length , and for each that contains we add edges and .\nThus, the constructed graph would be a layered graph.\nThe number of vertices in such a graph is , and the distance between vertices in the first and last layers are for some integer .\nHowever, we still need to address the second issue of bounding the maximum degree, since and may have a very high degree corresponding to the number of sets containing .\nOn the other hand, one initial idea (that does not work) for dealing with the second issue is to replace (in the original 3 layered graph) with vertices ,\nand for each that contains we add edges and .\nNow the maximum degree of each node is constant, however, for such that , there is no path from to .\nThis idea is missing the functionality of the path 
which allows us to connect more than one pair of vertices from .\nOur reduction makes use of an underlying layered infrastructure graph , commonly known as the butterfly graph (see [27 ###reference_b27### ###reference_b27###, 29 ###reference_b29### ###reference_b29###]), which has the following three properties:\n\n\n(i) each layer contains vertices,\n\n(ii) there is a path of length from every vertex in the first layer to every vertex in the last layer, and\n\n(iii) the degree of every vertex is at most .\n\n\nThe layers of are numbered to .\nThe vertices in each layer are (separately) indexed with integers from to , and the construction of is based on the base representation of the these indices: for , vertices from layer are connected with vertices from layer if and only if the base representation of their corresponding indices are the same, except for possibly the \u2019th digit.\nSimilar to before, we denote the first layer of by and the last layer by .\nFinally, we construct a layered graph which is intuitively obtained by removing from edges touching either or for every that does not contain .\nThus, in , if then there is a path of length from to in , and otherwise, there is no path from to in .\nWe remark that in the general case, where may be larger than 1,\nwe combine for different in a special way, and so we may introduce paths from to even if . 
However, since the resulting graph is still a layered graph, and we are interested in paths between vertices in the first layer and vertices in the last layer, the lengths of such paths must be at least .\nThus, a -distinguisher oracle on the combined graph suffices for solving 1.1 ###reference_theorem1### ###reference_theorem1###.\nIn order to prove Theorem 1.6 ###reference_theorem6### ###reference_theorem6###, which assuming Hypothesis 1.2 ###reference_theorem2### ###reference_theorem2### eliminates the possibility of a subquadratic -stretch ADO, even for graphs with , we use a construction which is based on the construction we use in the proof of Theorem 1.5 ###reference_theorem5### ###reference_theorem5###, but with two changes:\n\n\n(i) we use so we have ,\n\n(ii) we use the edge splitting technique also used in [28 ###reference_b28### ###reference_b28###, 17 ###reference_b17### ###reference_b17###], but unlike in [28 ###reference_b28### ###reference_b28###, 17 ###reference_b17### ###reference_b17###] we only split certain edges in the graph so the lower bound on the stretch is as high as possible.\n\n\nSee Section 5 ###reference_### ###reference_### for more details."
},
{
"section_id": "1.4",
"parent_section_id": "1",
"section_name": "Additional related work",
"text": "Different aspects of Thorup and Zwick [34 ###reference_b34###] ADOs were studied since they were introduced for the first time.\nChechik [14 ###reference_b14###, 13 ###reference_b13###] reduced the query time from to , while keeping the stretch and the space unchanged. (See also [36 ###reference_b36###, 24 ###reference_b24###].)\nRoditty, Thorup, and Zwick [30 ###reference_b30###] presented a deterministic algorithm that constructs an ADO in time while keeping the stretch and the space unchanged.\nBaswana and Kavitha [9 ###reference_b9###] presented an algorithm with running time444For the query time is . For the query time is .. Baswana, Goyaland and Sen [8 ###reference_b8###] presented an time algorithm that computes a -distance oracle with space.\nSommer [32 ###reference_b32###] presented an time algorithm that computes a -distance oracle with space.\nAkav and Roditty [7 ###reference_b7###] presented the first sub-quadratic time algorithm that constructs an ADO with stretch better than . They presented an -time algorithm that constructs a ADO with space and -stretch.\nChechik and Zhang [15 ###reference_b15###] improved the result of Akav and Roditty [7 ###reference_b7###].\nAmong their results is an time algorithm that constructs an ADO with -stretch and space.\nFollowing the work by P\u01cetra\u015fcu and Roditty [28 ###reference_b28###] who constructed an ADO for unweighted graphs that uses space and returns a -stretch in time, Abraham and Gavoille [2 ###reference_b2###] extended the ADO by P\u01cetra\u015fcu and Roditty [28 ###reference_b28###] for all even stretch values, by constructing for any integer , an ADO of size with a -stretch returned in time. 
P\u01cetra\u015fcu, Roditty and Thorup [29 ###reference_b29###] focused on analyzing sparse graphs where and noted that both the ADOs by Thorup and Zwick [34 ###reference_b34###], and the ADOs by Abraham and Gavoille [2 ###reference_b2###] use a space complexity that can be described by the curve where is the stretch of the ADO and is the number of edges in the graph.\nP\u01cetra\u015fcu, Roditty and Thorup [29 ###reference_b29###] extended the curve to work for non integer stretch values .\nAlthough our research focuses on constant query time ADOs, another branch of research includes ADOs that have non constant query time [6 ###reference_b6###, 25 ###reference_b25###, 5 ###reference_b5###, 3 ###reference_b3###, 11 ###reference_b11###].\nIn the lower bound regime, the problem of constructing a -distinguisher oracle was analyzed from the perspective of time complexity as well.\nFor graphs with degree of at most , the problem of determining for each edge in the graph whether it is in a triangle in time for some was shown to be hard under either the [26 ###reference_b26###, 22 ###reference_b22###] or [35 ###reference_b35###] hypotheses.\nSince there exists a standard reduction from the problem of determining for each edge in the graph whether it is in a triangle to the problem of constructing a -distinguisher oracle (see [1 ###reference_b1###]), a -distinguisher oracle is also hard to construct in subquadratic time for graphs with degree of at most under either the or hypotheses.\nThe problem of constructing a -distinguisher oracle for a general integer was also studied in the past in terms of time complexity.\nDor, Halperin and Zwick [19 ###reference_b19###] showed that if all distances in an undirected vertex graph can be approximated with an additive error of at most in time, then Boolean matrix multiplication on matrices of size can also be performed in time.\nDor, Halperin and Zwick [19 ###reference_b19###] conclude that constructing a -distinguisher oracle for 
an integer is at least as hard as multiplying two Boolean matrices."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminaries",
"text": "Let be the distance between vertices and in the graph . The eccentricity of a vertex in a graph , denoted by , is defined as .\nThe diameter of is defined as and the radius of is defined as .\nThe eccentricity of a vertex can be thought of as the distance between and the last vertex met during a Breadth First Search (BFS) of the graph starting at .\nSince our goal is to construct an ADO that uses subquadratic space, we cannot afford to store a separate BFS tree for each vertex.\nInstead, the construction algorithm of the ADO from Theorem 1.4 ###reference_theorem4### will store only a partial BFS tree for each vertex by truncating the BFS scan after some number of vertices.\nMotivated by this notion of a truncated scan, we introduce the following generalization of eccentricity which turns out to be useful for our purposes.\nLet be the first vertices met during a BFS555 The traversal order of vertices in the same layer during the BFS execution does not matter as long as the order is consistent. starting from in the graph , i.e., the closest vertices to (excluding ).\nIf is not an integer, then let .\nFor an integer , define and .\nNotice that .\nFor any real , define to be the maximum integer for which .\nNotice that .\nDefine .\nNotice that . We omit the subscript when using the definitions above whenever is clear from context."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Useful Lemmas",
"text": "In this section we prove several useful properties of the graph attributes defined in Section 2 ###reference_### which will be used to prove the upper bound theorem in Section 4 ###reference_###.\nThe following observation and corollary address the relationship between and , and follow from the definition of BFS.\nLet be an unweighted undirected graph, with . For any and integers and such that and , either\n\n\n(i) ,\n\n(ii) , or\n\n(iii) \n.\nLet be an unweighted undirected graph, with .\nFor any and integers and such that and ,\n\n\n(i) if then ,\n\n(ii) if then , and\n\n(iii) if then \n.\nThe following useful property addresses the relationship between and for the special cases where either or .\nLet be an unweighted undirected graph, with . For any and integer such that , we have:\n\n\n(i) , and\n\n(ii) if , then \n\n.\nBy definition, is the largest integer for which .\nThus,\n\n\n(i) , and\n\n(ii) if then \n\n.\nIf , it must be that , since if we assume towards a contradiction that for some then but on the other hand, by definition of , we have , which is a contradiction.\n\u220e\nThe following lemma states that exhibits a behavior that is similar to the behavior of the distance function which cannot decrease when removing edges and vertices from .\nLet be an unweighted undirected graph, , and let be the subgraph of induced by the vertices in . For any vertex and for any integer such that , it holds that .\nGiven an integer , let and . We want to show that . By definition of , is the largest value for which .\nThus, in order to show that , it suffices to show that .\nFor any vertex pair , we have since is a subgraph of . Thus,\nThis implies that .\nBy Corollary 3.2 ###reference_theorem2###, since then , as required.\n\u220e"
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "The Logarithmic-Like Behavior of Eccentricity",
"text": "In the following lemma, which is an important ingredient in the analysis of our new ADO, we show that satisfies a logarithmic-like behavior. Specifically, .\nThe reason for this behavior is that the number of vertices in each layer of a BFS tree expands in a similar way to an exponential function.\nFor a tree-graph with minimum degree rooted at a vertex , for integers it holds that .\nSince the number of vertices in every layer of the rooted tree grows exponentially, the eccentricity grows logarithmically (in relation to ).\nUnlike in trees where the expansion of the number of vertices in every layer of a BFS can be analyzed using , for general graphs, in order to achieve a lower bound for the expansion rate of the eccentricity of the vertices, we use instead.\nLet be an unweighted undirected graph, with .\nFor any vertex and integers such that , it holds that .\nAssume towards a contradiction that . Thus, .\nLet be a BFS tree rooted at in graph .\nLet and let be the vertices in .\nFor any , where , let be the set of descendant of in and let be the graph induced by in 666It is important to note that for all scans referenced in this proof, which include a BFS procedure of starting at and BFS procedures of starting at for , we require a consistent order of scanning, i.e., that for a given , and vertices , if is scanned before in then should also be scanned before in (and vice versa).\nThis is a valid requirement since for any vertices , if and only if ..\nLet .\nWe will show that:\nLet .\nBy the definition of BFS, since , must be a descendant of some vertex , and so .\nSince is a shortest path tree rooted at and since , it must be that .\nBy definition of , . Thus, and so .\nBy definition, .\nSince is a shortest path tree, for any it must be that . Thus, .\nSince , and since , then . By Property 3.3 ###reference_theorem3###, . 
It follows that every vertex in must be included in for some , thus confirming Equation (1 ###reference_###).\nBy Property 3.3 ###reference_theorem3###, . Combining with Equation (1 ###reference_###) we have that , and so:\nNow,\nThus, .\nNotice that , since, by Property 3.3 ###reference_theorem3###, .\nTherefore,\nBy Corollary 3.2 ###reference_theorem2###, it follows that , which contradicts Property 3.3 ###reference_theorem3###.\n\u220e"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Upper Bounds",
"text": "In this section we prove Theorem 1.4 ###reference_theorem4### by introducing a new ADO which uses subquadratic space and produces a -stretch for graphs for which for a positive integer and real constant .\nThe ADO is parameterized by a parameter which quantifies the tradeoff between the space and the stretch of the ADO.\nWhen the ADO is very similar to the ADO of Agarwal and Godfrey [4 ###reference_b4###] which uses space and has a -stretch. For , the ADO uses additional space and is able to improve the stretch of the ADO for the family of graphs for which ."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "The Construction Algorithm",
"text": "The description of our construction algorithm follows the notations and definitions described in Section 1.3.1 ###reference_.SSS1###.\nThe construction begins with an algorithm of Thorup and Zwick [33 ###reference_b33###] that computes a set of size such that , for every .\nIn our case we set , thus and , for every .\nFor every vertex , the ADO explicitly stores the distances between and every vertex in for some constant to be decided later. In addition, for every vertex the ADO stores , and the distances between and every vertex in .\nA distance query between vertices and is answered as follows. If one of the following conditions holds\n\n\n(i) or ,\n\n(ii) or \n\n, then the exact distance is returned. Otherwise, the ADO returns . Notice that the query time is constant.\nIn Claim 4.1 ###reference_theorem1###, we show that the space complexity of the ADO is and in Claim 4.2 ###reference_theorem2###, we show that the ADO satisfies a -stretch.\nThe space complexity of the ADO is .\nStoring , and the distances between and every vertex in , for all vertices , uses space. As mentioned in the construction phase, for every .\nThus, storing the distances between every vertex and requires space as well, leading to an overall space complexity of . \u220e\nThe distance estimation returned by the ADO satisfies .\nNotice that since the ADO always returns a length of some path in the graph between and . It is left to show that .\nIf the exact distance is stored in the ADO then and the claim follows.\nConsider the case that the exact distance is not stored. This implies that and .\nAssume towards a contradiction that and let be a vertex such that .\nFrom the definitions of bunch and cluster, we have that if and only if .\nThus, , and since , it must be that which is a contradiction.\nThus, we have that .\nLet be a shortest path between and . 
Let be the furthest vertex from in , let be the furthest vertex from in and let be the furthest vertex from in (see Figure 1 ###reference_###).\nNotice that, by definition of and , if then and if then . Thus, we get that .\nSince , , and and are both on a shortest path between and , it must be that . Since is on a shortest path between and , it holds that , and so . Since , it follows that:\nBy Lemma 3.5 ###reference_theorem5###, . Since and , it follows from the definitions of and bunch that .\nWe have that:\nThus, . It follows that:\nNotice that by the definitions of bunch, and , it holds that . Similarly, . Thus:\n\u220e\nBy combining our ADO construction with Claims 4.1 ###reference_theorem1### and 4.2 ###reference_theorem2### we have proven the following lemma.\nFor any graph with vertices, real and constant , it is possible to construct an ADO that uses space and has a -stretch."
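The constant-time query procedure just described can be sketched as below. The concrete container names, as well as the extraction-lost notation (the per-vertex distance tables, the pivots, and the distances stored for the hitting set), are assumptions layered on the description above.

```python
def ado_query(u, v, stored, pivot, dist_from_A):
    """Query sketch: stored[x] is x's table of exact distances to the
    vertices it keeps; pivot[x] is x's nearest sampled vertex; and
    dist_from_A[a] holds exact distances from sampled vertex a to
    every vertex. An exact answer is returned when one stored table
    contains the other endpoint; otherwise the distance is
    approximated through one of the two pivots."""
    if v in stored[u]:
        return stored[u][v]
    if u in stored[v]:
        return stored[v][u]
    pu, pv = pivot[u], pivot[v]
    return min(dist_from_A[pu][u] + dist_from_A[pu][v],
               dist_from_A[pv][u] + dist_from_A[pv][v])
```

Each branch costs a constant number of hash-table lookups, matching the constant query time claimed above.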
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Proof of Main Upper Bound Theorem",
"text": "The following lemma connects and , which is the last ingredient needed for proving Theorem 1.4 ###reference_theorem4###.\nLet be an unweighted undirected graph, with . For any real such that , it holds that .\nFor any vertex and integer , cannot include more than vertices.\nSince we have that and so for any integer such that it must be that .\nBy definition, is equal to the largest integer for which .\nThus, . Since for any vertex , it follows from the definition of that .\nSetting , or , it follows that for any integer such that it must be that . Thus, .\n\u220e\nFinally, we are ready to prove Theorem 1.4 ###reference_theorem4###.\nBy Lemma 4.4 ###reference_theorem4###, . Thus, the ADO from Lemma 4.3 ###reference_theorem3### constructed for using and uses space and produces a distance estimation that satisfies .\n\u220e"
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conditional Lower Bounds",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Conditional Lower Bound on Additive Error Improvement",
"text": "As mentioned in Section 1 ###reference_###, proving Lemma 1.7 ###reference_theorem7###, which assuming Hypothesis 1.2 ###reference_theorem2###, eliminates the possibility of a subquadratic -distinguisher oracle for graphs with , directly implies Theorem 1.5 ###reference_theorem5###, since a -stretch ADO is also a -distinguisher oracle.\nGiven an instance of Problem 1.1 ###reference_theorem1###, we construct a graph with vertices and , such that a -distinguisher oracle for solves the instance of Problem 1.1 ###reference_theorem1###.\nWe begin by focusing on a layered graph , which we call the infrastructure graph.\nThe infrastructure graph has three important properties:\n\n\n(i) each layer contains vertices,\n\n(ii) there is a path of length from every vertex in the first layer to every vertex in the last layer, and\n\n(iii) the degree of every vertex is at most .\nWe then construct\nfor each a graph ,\nwhich is a subgraph of (a copy of) , by removing some of the edges between the first (last) and second (second to last) layers of in a way that expresses which sets contain and which do not.\nFinally, we construct the graph which is specialized union of all of the graphs for all , and enables solving the instance of Problem 1.1 ###reference_theorem1### by using a -distinguisher oracle on .\nThe infrastructure graph is a layered graph where each layer contains vertices, and each layer of vertices is locally indexed from to . The layers are numbered to .\nThe edges of are defined using the following labels.\nAssign a label to every vertex in which is the digit representation in base777We assume for convenience that is an integer, since otherwise, one can increase slightly without affecting the asymptotic complexities. 
of the local index (an integer between and ) of .\nThen, for every , connect from layer with from layer if and only if the digits of and the digits of all match, except for possibly the \u2019th digit.\nIt is straightforward to observe (since each digit has options) that the degree of every vertex in is , except for the vertices in the first and last layers which have degree .\nThe following claim shows that there is a path of length from every vertex in the first layer and every vertex in the last layer.\nLet be a vertex in the first layer of and let be a vertex in the last layer of .\nThen there exists a path of length from to in .\nWe describe the path of length between and .\nFor any , consider the vertex in layer of with the label of the following form: the first digits are the first digits of , and the last digits are the last digits of .\nThus, for , the edge is in since and are the same, except for possibly the -th digit.\nThe set of edges which we described form a path of length between and .\n\u220e\nWe construct by making copies of each vertex in .\nDenote the first layer of by and the last layer by .\nLet .\nThus, is the set of edges in that touch vertices in the first or last layers whose index corresponds to the index of sets that do not contain .\nWe construct by making copies of all edges in .\nThe reason for removing the edges in is so that vertices in the first and last layers of whose edges are in are not connected to any other vertex in .\nThus, for each () in the first (last) layer of , if and only if there are edges between () and the second (second to last) layer in .\nBy 5.1 ###reference_theorem1###, if then there exists a path of length between and , and otherwise, there is no path in between and .\nFinally, since is a partial copy of , the maximum degree in is .\nWe construct the layered graph by performing the following special union of for all :\nfor the \u2019th layer of is the union of the \u2019th layer of all of the graphs taken over 
all .\nThus, each of the inner layers (excluding the first and last layer of ) has vertices.\nFor the first (last) layer , instead of taking the union of all of the first (last) layers from all of the graphs, we merge them all into one layer of vertices.\nSo the \u2019th vertex in the first (last) layer of is a vertex obtained by merging the \u2019th vertex in the first (last) layer of every , for all .\nThus, the first and last layers of contain vertices each.\nSince the vertices in the first and last layer of correspond directly to the vertices and in , respectively, we treat the first layer of as and the last layer of by .\nThus, each node in has maximum degree at most\nNotice that for a set intersection query between and , if , then there exists some , and since contains as a subgraph, the distance between and is at most (and actually exactly) .\nOn the other hand, if there exists a path of length between and , then by the construction of , must be completely contained within some for some .\nBy the construction of , and specifically , the existence of in implies that and .\nSo, in such a case .\nNotice that, since is a layered graph, any path between a vertex in the first layer and a vertex in the last layer must be of length for some integer .\nThus, to answer a set intersection query, it suffices to establish whether the distance in between and is either or at least , which the -distinguisher oracle returns in constant time.\nWe conclude that a -distinguisher oracle for graphs with vertices and maximum degree also solves the instance of Problem 1.1 ###reference_theorem1### (of size ).\nThus, according to Hypothesis 1.2 ###reference_theorem2###, an ADO for graphs with vertices, for which the maximum degree is , must use space.\nWe note that the maximum degree can be reduced to by artificially adding isolated vertices to .\n\u220e"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Conditional Lower Bound on Multiplicative Error Improvement",
|
| 99 |
+
"text": "We now move on to prove Theorem 1.6 ###reference_theorem6###, which states that the multiplicative approximation of the ADO from Theorem 1.4 ###reference_theorem4### is optimal under Hypothesis 1.2 ###reference_theorem2###, even when allowing an arbitrarily large constant additive error. We first provide some intuition. Notice that by the way we constructed the graph in the proof of Lemma 1.7 ###reference_theorem7###, a path between representatives and in the case of , has to pass through some other set representative vertex in the first or last layer. In order to prove Theorem 1.6 ###reference_theorem6###, we want to construct a -stretch ADO that can distinguish the case from the case . Thus, we want to maximize the ratio between in the case that to in the case that in . To do so, we split the edges touching the first and last layers of .\nWe build upon the construction from the proof of Lemma 1.7 ###reference_theorem7### but with two changes:\n\n\n(i) we use so , and\n\n(ii) given constants we choose and split each of the edges connecting the first and second (second to last and last) layers of the graph into edges.\nThe number of vertices in the graph is now , the number of edges is and the maximum degree is .\nFor a set intersection query between and , if , then the distance between and is now . On the other hand, if , the distance must be . In the case of where the distance is , a -stretch ADO would not return a distance larger than which by our choice of is strictly smaller than , which is the smallest possible distance in the case of . Thus, a -stretch ADO could also solve Problem 1.1 ###reference_theorem1### (of size ) and so assuming Hypothesis 1.2 ###reference_theorem2### such an ADO requires .\n\u220e"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusion",
|
| 105 |
+
"text": "In this paper we provide an algorithm (Theorem 1.4 ###reference_theorem4###) and tight conditional lower bounds (Theorem 1.5 ###reference_theorem5### and Theorem 1.6 ###reference_theorem6###) for subquadratic space ADOs as a function of the maximum degree. We show that for graphs with maximum degree , it is possible to construct a subquadratic space ADO with stretch . Furthermore, we show that under Hypothesis 1.2 ###reference_theorem2###, it is impossible to improve the additive approximation of our ADO, nor is it possible to improve the multiplicative approximation to , even when allowing an arbitrarily large constant additive error."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {},
|
| 110 |
+
"image_paths": {},
|
| 111 |
+
"validation": true,
|
| 112 |
+
"references": [
|
| 113 |
+
{
|
| 114 |
+
"1": {
|
| 115 |
+
"title": "Hardness of approximation in p via short cycle removal: cycle\ndetection, distance oracles, and beyond.",
|
| 116 |
+
"author": "Amir Abboud, Karl Bringmann, Seri Khoury, and Or Zamir.",
|
| 117 |
+
"venue": "In Stefano Leonardi and Anupam Gupta, editors, STOC \u201922: 54th\nAnnual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20\n- 24, 2022, pages 1487\u20131500. ACM, 2022.",
|
| 118 |
+
"url": null
|
| 119 |
+
}
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"2": {
|
| 123 |
+
"title": "On approximate distance labels and routing schemes with affine\nstretch.",
|
| 124 |
+
"author": "Ittai Abraham and Cyril Gavoille.",
|
| 125 |
+
"venue": "In David Peleg, editor, Distributed Computing - 25th\nInternational Symposium, DISC 2011, Rome, Italy, September 20-22, 2011.\nProceedings, volume 6950 of Lecture Notes in Computer Science, pages\n404\u2013415. Springer, 2011.",
|
| 126 |
+
"url": null
|
| 127 |
+
}
|
| 128 |
+
},
|
| 129 |
+
{
|
| 130 |
+
"3": {
|
| 131 |
+
"title": "The space-stretch-time tradeoff in distance oracles.",
|
| 132 |
+
"author": "Rachit Agarwal.",
|
| 133 |
+
"venue": "In Andreas S. Schulz and Dorothea Wagner, editors, Algorithms -\nESA 2014 - 22th Annual European Symposium, Wroclaw, Poland, September 8-10,\n2014. Proceedings, volume 8737 of Lecture Notes in Computer Science,\npages 49\u201360. Springer, 2014.",
|
| 134 |
+
"url": null
|
| 135 |
+
}
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"4": {
|
| 139 |
+
"title": "Brief announcement: a simple stretch 2 distance oracle.",
|
| 140 |
+
"author": "Rachit Agarwal and Philip Brighten Godfrey.",
|
| 141 |
+
"venue": "In Panagiota Fatourou and Gadi Taubenfeld, editors, ACM\nSymposium on Principles of Distributed Computing, PODC \u201913, Montreal, QC,\nCanada, July 22-24, 2013, pages 110\u2013112. ACM, 2013.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"5": {
|
| 147 |
+
"title": "Distance oracles for stretch less than 2.",
|
| 148 |
+
"author": "Rachit Agarwal and Philip Brighten Godfrey.",
|
| 149 |
+
"venue": "In Sanjeev Khanna, editor, Proceedings of the Twenty-Fourth\nAnnual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans,\nLouisiana, USA, January 6-8, 2013, pages 526\u2013538. SIAM, 2013.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"6": {
|
| 155 |
+
"title": "Approximate distance queries and compact routing in sparse graphs.",
|
| 156 |
+
"author": "Rachit Agarwal, Philip Brighten Godfrey, and Sariel Har-Peled.",
|
| 157 |
+
"venue": "In INFOCOM 2011. 30th IEEE International Conference on\nComputer Communications, Joint Conference of the IEEE Computer and\nCommunications Societies, 10-15 April 2011, Shanghai, China, pages\n1754\u20131762. IEEE, 2011.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"7": {
|
| 163 |
+
"title": "An almost 2-approximation for all-pairs of shortest paths in\nsubquadratic time.",
|
| 164 |
+
"author": "Maor Akav and Liam Roditty.",
|
| 165 |
+
"venue": "In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM\nSymposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA,\nJanuary 5-8, 2020, pages 1\u201311. SIAM, 2020.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"8": {
|
| 171 |
+
"title": "All-pairs nearly 2-approximate shortest paths in I time.",
|
| 172 |
+
"author": "Surender Baswana, Vishrut Goyal, and Sandeep Sen.",
|
| 173 |
+
"venue": "Theor. Comput. Sci., 410(1):84\u201393, 2009.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"9": {
|
| 179 |
+
"title": "Faster algorithms for approximate distance oracles and all-pairs\nsmall stretch paths.",
|
| 180 |
+
"author": "Surender Baswana and Telikepalli Kavitha.",
|
| 181 |
+
"venue": "In 2006 47th Annual IEEE Symposium on Foundations of Computer\nScience (FOCS\u201906), pages 591\u2013602. IEEE, 2006.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"10": {
|
| 187 |
+
"title": "On a routing problem.",
|
| 188 |
+
"author": "Richard Bellman.",
|
| 189 |
+
"venue": "Quarterly of applied mathematics, 16(1):87\u201390, 1958.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"11": {
|
| 195 |
+
"title": "Improved approximate distance oracles: Bypassing the thorup-zwick\nbound in dense graphs.",
|
| 196 |
+
"author": "Davide Bil\u00f2, Shiri Chechik, Keerti Choudhary, Sarel Cohen, Tobias\nFriedrich, and Martin Schirneck.",
|
| 197 |
+
"venue": "arXiv preprint arXiv:2307.11677, 2023.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"12": {
|
| 203 |
+
"title": "More algorithms for all-pairs shortest paths in weighted graphs.",
|
| 204 |
+
"author": "Timothy M. Chan.",
|
| 205 |
+
"venue": "SIAM J. Comput., 39(5):2075\u20132089, 2010.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"13": {
|
| 211 |
+
"title": "Approximate distance oracles with constant query time.",
|
| 212 |
+
"author": "Shiri Chechik.",
|
| 213 |
+
"venue": "In David B. Shmoys, editor, Symposium on Theory of Computing,\nSTOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 654\u2013663.\nACM, 2014.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"14": {
|
| 219 |
+
"title": "Approximate distance oracles with improved bounds.",
|
| 220 |
+
"author": "Shiri Chechik.",
|
| 221 |
+
"venue": "In Proceedings of the Forty-Seventh Annual ACM on Symposium on\nTheory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages\n1\u201310, 2015.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"15": {
|
| 227 |
+
"title": "Nearly 2-approximate distance oracles in subquadratic time.",
|
| 228 |
+
"author": "Shiri Chechik and Tianyi Zhang.",
|
| 229 |
+
"venue": "In Joseph (Seffi) Naor and Niv Buchbinder, editors, Proceedings\nof the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual\nConference / Alexandria, VA, USA, January 9 - 12, 2022, pages 551\u2013580.\nSIAM, 2022.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"16": {
|
| 235 |
+
"title": "Shortest paths algorithms: Theory and experimental evaluation.",
|
| 236 |
+
"author": "Boris V. Cherkassky, Andrew V. Goldberg, and Tomasz Radzik.",
|
| 237 |
+
"venue": "Math. Program., 73:129\u2013174, 1996.",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"17": {
|
| 243 |
+
"title": "On the hardness of distance oracle for sparse graph.",
|
| 244 |
+
"author": "Hagai Cohen and Ely Porat.",
|
| 245 |
+
"venue": "CoRR, abs/1006.1117, 2010.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"18": {
|
| 251 |
+
"title": "A note on two problems in connexion with graphs.",
|
| 252 |
+
"author": "Edsger W. Dijkstra.",
|
| 253 |
+
"venue": "Numerische Mathematik, 1:269\u2013271, 1959.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"19": {
|
| 259 |
+
"title": "All-pairs almost shortest paths.",
|
| 260 |
+
"author": "Dorit Dor, Shay Halperin, and Uri Zwick.",
|
| 261 |
+
"venue": "SIAM J. Comput., 29(5):1740\u20131759, 2000.",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"20": {
|
| 267 |
+
"title": "Fibonacci heaps and their uses in improved network optimization\nalgorithms.",
|
| 268 |
+
"author": "Michael L. Fredman and Robert Endre Tarjan.",
|
| 269 |
+
"venue": "J. ACM, 34(3):596\u2013615, 1987.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"21": {
|
| 275 |
+
"title": "Conditional lower bounds for space/time tradeoffs.",
|
| 276 |
+
"author": "Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat.",
|
| 277 |
+
"venue": "In Workshop on Algorithms and Data Structures, pages 421\u2013436.\nSpringer, 2017.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"22": {
|
| 283 |
+
"title": "Higher lower bounds from the 3sum conjecture.",
|
| 284 |
+
"author": "Tsvi Kopelowitz, Seth Pettie, and Ely Porat.",
|
| 285 |
+
"venue": "In Robert Krauthgamer, editor, Proceedings of the Twenty-Seventh\nAnnual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington,\nVA, USA, January 10-12, 2016, pages 1272\u20131287. SIAM, 2016.",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"23": {
|
| 291 |
+
"title": "A survey of shortest-path algorithms.",
|
| 292 |
+
"author": "Amgad Madkour, Walid G. Aref, Faizan Ur Rehman, Mohamed Abdur Rahman, and\nSaleh M. Basalamah.",
|
| 293 |
+
"venue": "CoRR, abs/1705.02044, 2017.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"24": {
|
| 299 |
+
"title": "Ramsey partitions and proximity data structures.",
|
| 300 |
+
"author": "Manor Mendel and Assaf Naor.",
|
| 301 |
+
"venue": "In 47th Annual IEEE Symposium on Foundations of Computer\nScience (FOCS 2006), 21-24 October 2006, Berkeley, California, USA,\nProceedings, pages 109\u2013118. IEEE Computer Society, 2006.",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"25": {
|
| 307 |
+
"title": "Preprocess, set, query!",
|
| 308 |
+
"author": "Ely Porat and Liam Roditty.",
|
| 309 |
+
"venue": "Algorithmica, 67(4):516\u2013528, 2013.",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"26": {
|
| 315 |
+
"title": "Towards polynomial lower bounds for dynamic problems.",
|
| 316 |
+
"author": "Mihai Puatracscu.",
|
| 317 |
+
"venue": "In Leonard J. Schulman, editor, Proceedings of the 42nd ACM\nSymposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA,\n5-8 June 2010, pages 603\u2013610. ACM, 2010.",
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"27": {
|
| 323 |
+
"title": "Unifying the landscape of cell-probe lower bounds.",
|
| 324 |
+
"author": "Mihai Puatracscu.",
|
| 325 |
+
"venue": "SIAM J. Comput., 40(3):827\u2013847, 2011.",
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"28": {
|
| 331 |
+
"title": "Distance oracles beyond the thorup-zwick bound.",
|
| 332 |
+
"author": "Mihai Puatracscu and Liam Roditty.",
|
| 333 |
+
"venue": "In 51th Annual IEEE Symposium on Foundations of Computer\nScience, FOCS 2010, October 23-26, 2010, Las Vegas, Nevada, USA, pages\n815\u2013823. IEEE Computer Society, 2010.",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"29": {
|
| 339 |
+
"title": "A new infinity of distance oracles for sparse graphs.",
|
| 340 |
+
"author": "Mihai Puatracscu, Liam Roditty, and Mikkel Thorup.",
|
| 341 |
+
"venue": "In 53rd Annual IEEE Symposium on Foundations of Computer\nScience, FOCS 2012, New Brunswick, NJ, USA, October 20-23, 2012, pages\n738\u2013747. IEEE Computer Society, 2012.",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"30": {
|
| 347 |
+
"title": "Deterministic constructions of approximate distance oracles and\nspanners.",
|
| 348 |
+
"author": "Liam Roditty, Mikkel Thorup, and Uri Zwick.",
|
| 349 |
+
"venue": "In Lu\u00eds Caires, Giuseppe F. Italiano, Lu\u00eds Monteiro,\nCatuscia Palamidessi, and Moti Yung, editors, Automata, Languages and\nProgramming, 32nd International Colloquium, ICALP 2005, Lisbon, Portugal,\nJuly 11-15, 2005, Proceedings, volume 3580 of Lecture Notes in Computer\nScience, pages 261\u2013272. Springer, 2005.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"31": {
|
| 355 |
+
"title": "Approximate distance oracles with improved stretch for sparse graphs.",
|
| 356 |
+
"author": "Liam Roditty and Roei Tov.",
|
| 357 |
+
"venue": "Theor. Comput. Sci., 943:89\u2013101, 2023.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"32": {
|
| 363 |
+
"title": "All-pairs approximate shortest paths and distance oracle\npreprocessing.",
|
| 364 |
+
"author": "Christian Sommer.",
|
| 365 |
+
"venue": "In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and\nDavide Sangiorgi, editors, 43rd International Colloquium on Automata,\nLanguages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy,\nvolume 55 of LIPIcs, pages 55:1\u201355:13. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2016.",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"33": {
|
| 371 |
+
"title": "Compact routing schemes.",
|
| 372 |
+
"author": "Mikkel Thorup and Uri Zwick.",
|
| 373 |
+
"venue": "In Arnold L. Rosenberg, editor, Proceedings of the Thirteenth\nAnnual ACM Symposium on Parallel Algorithms and Architectures, SPAA 2001,\nHeraklion, Crete Island, Greece, July 4-6, 2001, pages 1\u201310. ACM, 2001.",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"34": {
|
| 379 |
+
"title": "Approximate distance oracles.",
|
| 380 |
+
"author": "Mikkel Thorup and Uri Zwick.",
|
| 381 |
+
"venue": "J. ACM, 52(1):1\u201324, 2005.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"35": {
|
| 387 |
+
"title": "Monochromatic triangles, triangle listing and APSP.",
|
| 388 |
+
"author": "Virginia Vassilevska Williams and Yinzhan Xu.",
|
| 389 |
+
"venue": "In Sandy Irani, editor, 61st IEEE Annual Symposium on\nFoundations of Computer Science, FOCS 2020, Durham, NC, USA, November\n16-19, 2020, pages 786\u2013797. IEEE, 2020.",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"36": {
|
| 395 |
+
"title": "Approximate distance oracles with improved query time.",
|
| 396 |
+
"author": "Christian Wulff-Nilsen.",
|
| 397 |
+
"venue": "In Encyclopedia of Algorithms, pages 94\u201397, 2016.",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"37": {
|
| 403 |
+
"title": "Exact and approximate distances in graphs - A survey.",
|
| 404 |
+
"author": "Uri Zwick.",
|
| 405 |
+
"venue": "In Friedhelm Meyer auf der Heide, editor, Algorithms - ESA\n2001, 9th Annual European Symposium, Aarhus, Denmark, August 28-31, 2001,\nProceedings, volume 2161 of Lecture Notes in Computer Science, pages\n33\u201348. Springer, 2001.",
|
| 406 |
+
"url": null
|
| 407 |
+
}
|
| 408 |
+
}
|
| 409 |
+
],
|
| 410 |
+
"url": "http://arxiv.org/html/2310.12239v2"
|
| 411 |
+
}
|
20241001/2310.12831v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2311.02262v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2311.08369v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2311.09356v3.json
ADDED
|
@@ -0,0 +1,448 @@
| 1 |
+
{
|
| 2 |
+
"title": "LePaRD: A Large-Scale Dataset of Judicial Citations to Precedent",
|
| 3 |
+
"abstract": "We present the Legal Passage Retrieval Dataset, LePaRD.\nLePaRD contains millions of examples of U.S. federal judges citing precedent in context.\nThe dataset aims to facilitate work on legal passage retrieval, a challenging practice-oriented legal retrieval and reasoning task.\nLegal passage retrieval seeks to predict relevant passages from precedential court decisions given the context of a legal argument.\nWe extensively evaluate various approaches on LePaRD, and find that classification-based retrieval appears to work best.\nOur best models only achieve a recall of 38% when trained on data corresponding to the 10,000 most-cited passages, underscoring the difficulty of legal passage retrieval.\nBy publishing LePaRD, we provide a large-scale and high quality resource to foster further research on legal passage retrieval.\nWe hope that research on this practice-oriented NLP task will help expand access to justice by reducing the burden associated with legal research via computational assistance.\nWarning: Extracts from judicial opinions may contain offensive language.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "###table_1### A third of the global population lives in a common law jurisdiction where legal arguments are based on prior decisions, known as precedent Fathally and Mariani (2008 ###reference_b7###).\nJudges and lawyers use citations to build on precedent and frequently quote passages directly from prior cases.\nThe U.S. legal system is an example of a common law system and U.S. federal courts have produced around 1.7 million published judicial opinions, giving rise to tens of millions of passages containing legal rules, standards, and explanations, which could potentially be cited in new cases.\nLawyers and judges frequently cite such passages as the basis for their arguments (a prototypical example is shown in Figure 1 ###reference_###).\nAs a result, identifying appropriate precedent relevant to a given argument represents a fundamental component of legal practice.\nThis is a complicated and time consuming endeavour: Based on the Case Law Access Project, a public repository of U.S. case law, there are almost 2 million published federal judicial opinions with an average length of around 4,500 tokens.\nThe sheer volume of passages which could potentially be cited thus adds to the complexity of legal research, which is exacerbated by subtle rules about the contexts in which a given passage is legally binding.\nWe provide the large-scale dataset, LePaRD, which can be used to develop computational retrieval methods that facilitate the retrieval of U.S. 
federal court precedent.\nLePaRD was constructed by focusing on how judges actually used precedential passages and as such it builds on millions of expert decisions.\nIn practice, highly paid attorneys spend significant time on legal research to find relevant precedent\u2014and they routinely bill up to $100 per individual search Franklin County Law Library (2023 ###reference_b8###).\nMeanwhile, in the U.S., around 90% of civil legal problems encountered by low-income individuals do not receive adequate legal help Slosar (2022 ###reference_b32###) and access to such services is also limited for small businesses Baxter (2022 ###reference_b2###).\nThus, the complexity and cost of legal research may be partially responsible for the high cost of litigation and the associated access to justice gap.\nLegal NLP promises to be a powerful equalizer in the legal profession Mahari et al. (2023b ###reference_b24###), but many areas of legal practice have been slow to adopt technologies that increase efficiencies and reduce costs for clients.\nWhile this may be partially driven by a lack of incentives and risk-aversion from legal community, legal NLP research also appears to be disconnected from the needs of legal practitioners Mahari et al. (2023b ###reference_b24###).\nThis in turn is partially driven by the lack of large-scale resources for practice-oriented legal NLP tasks.\nOften, the data needed for this type of research is proprietary and constructing legal datasets from publicly available sources requires legal expertise.\nTo help address the high costs of legal research, and the resulting access to justice issues, and to foster more legal NLP research on practice-oriented tasks, we release the Legal Passage Retrieval Dataset LePaRD.\nLePaRD represents a large set of previously cited U.S. 
federal precedent, containing millions of argument contexts and the relevant target passage.\nIn this work, we document the construction of LePaRD and describe relevant dataset statistics.\nWe also extensively evaluate various retrieval approaches from the NLP literature (see e.g., Yang et al., 2017 ###reference_b39###; Reimers and Gurevych, 2019 ###reference_b27###; Mahari, 2021 ###reference_b25###; Tay et al., 2022 ###reference_b33###), some of which have been applied to other legal information retrieval tasks (e.g., Ma et al., 2021a ###reference_b21###; Rosa et al., 2021 ###reference_b28###).\nOur most accurate method achieves a recall of 38% on the LePaRD test set, indicating that legal passage retrieval is a challenging task that requires new technical approaches.\nNo large-scale resources for legal passage retrieval exists and we address this gap by constructing and releasing LePaRD.\nLePaRD contains citations to relevant precedent paired with the contexts in which they have been cited by judges.\nWe also provide relevant meta-data, such as the court and decision year of an opinion, which may be relevant for future work on legal retrieval.\nRetrieving relevant passages with computational assistance has the potential to reduce the time and cost associated with legal research and thus to reduce the overall cost of litigation.\nIn publishing the dataset, we seek to catalyze practice-oriented legal NLP, and ultimately, and we hope that models trained on LePaRD will reduce the burden associated with legal research for litigants, judges, and lawyers, thus helping to expand access to justice."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "Retrieval of relevant legal passages or cases is a fundamental task in legal practice.\nMost existing search tools are closed-source and the usage of such tools can cost up to $100 per search (Franklin County Law Library, 2023 ###reference_b8###).\nLegal retrieval has been explored in some prior work. Mahari (2021 ###reference_b25###) introduces the legal passage retrieval task, however, no corresponding dataset was released and the paper focused on just 5,000 target passages (in contrast to 1.8 million in LePaRD).\nThis is a general problem in legal NLP where large-scale professionally annotated data sources remain proprietary111For example LexisNexis, Westlaw, and Bloomberg Law..\nMoreover, creating such resources remains costly due to the intricacies of legal language, which complicate the creation of large-scale resources without expert annotators who tend to be very costly.\nThe lack of data has in turn made it challenging for legal NLP research to focus on tasks aligned with the needs of legal practitioners.\nOther related work includes the COLIEE shared task series related to legal case retrieval (e.g., Rabelo et al., 2022 ###reference_b26###; Kim et al., 2023 ###reference_b17###).\nIn this setting, a system is given a query and has to retrieve the most related case (or statute) from a pre-defined knowledge base.\nCompared to these information retrieval tasks using synthetic queries, our dataset construction is more closely aligned with actual legal practice.\nFurthermore, the COLIEE datasets remain limited in size, containing around 4,400 cases which could potentially be retrieved222We acknowledge and greatly appreciate the continued effort in constructing and expanding the COLIEE datasets. 
They are increasing in size each year; however, we believe there is room for other, complementary larger-scale legal retrieval datasets., whereas our dataset allows us to investigate legal passage retrieval methods at scale, containing the universe of all cited legal passages in U.S. federal courts.\nThis setting more closely resembles how a practicing attorney would perform legal research.\nFinally, lexical overlap seems to play a significant role in COLIEE datasets Rosa et al. (2021 ###reference_b28###), making BM25 a strong baseline in that setting.\nIn contrast, we find that this is not the case for LePaRD.\nA growing body of work investigates legal citation prediction Dadgostari et al. (2021 ###reference_b5###); Huang et al. (2021 ###reference_b12###) or the retrieval of relevant cases given a query Sansone and Sperl\u00ed (2022 ###reference_b31###); Ma et al. (2021b ###reference_b22###); Joshi et al. (2023 ###reference_b14###).\nBased on the preceding context from a legal document, the goal in legal citation prediction is to identify the citation that supports the context in question.\nBy contrast, in legal passage retrieval, the aim is to identify a specific passage of precedent rather than a citation to a whole case (which is usually tens or even hundreds of pages long).\nWe believe there are several reasons to focus on legal passage retrieval over legal citation prediction.\nLegal citation prediction accuracy numbers seem very strong (see e.g., Huang et al., 2021 ###reference_b12###).\nWe attribute these results to the long-tailed distribution of citations and believe that models take shortcuts to determine a topic for a snippet and then return the most cited cases for these topics\u2014whereas legal passage retrieval inherently requires more involved legal reasoning.\nThis also connects to relevance in legal search, i.e., finding the appropriate target Van Opijnen and Santos (2017 ###reference_b36###).\nWe believe legal relevance is more strongly captured 
by searching for short passages, rather than predicting citations to entire cases, because a case is likely to deal with multiple independent arguments.\nSome passages may not be semantically linked to the concepts they stand for, making it difficult to identify them using lexical overlap or semantic search.333For example, the phrase \u201cplay in the joints\u201d is commonly used by courts to refer to a category of state actions that are permitted by the Establishment Clause but not required by the Free Exercise Clause of the First Amendment to the U.S. Constitution, see Locke v. Davey, 540 U.S. 712 (2004). Instead, the link is established via frequent citations.\nBy contrast, sometimes there exists an entailment relation (see e.g., Dagan et al., 2005 ###reference_b6###; Bowman et al., 2015 ###reference_b3###) between the context and the cited source passage, where the two passages are connected via legal reasoning.\nHowever, we find that this entailment in legal reasoning manifests differently in practical legal settings than in other NLP contexts.\nThus, models trained on e.g., natural language inference Bowman et al. (2015 ###reference_b3###); Reimers and Gurevych (2019 ###reference_b27###) fail to recognize such relations in LePaRD.\nHence, our specially curated dataset may better facilitate the approximation of legal reasoning by NLP models.\nFinally, from the perspective of practitioners, we believe that it is more useful to predict specific passages than citations to cases that may be hundreds of pages long."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Legal Passage Retrieval Dataset (LePaRD)",
"text": "U.S. federal courts are bound by the doctrine of Stare Decisis, which means that they must abide by past decisions.\nAs a result, judges and lawyers build their arguments on citations to precedent.\nOften these citations will be accompanied by quotations.\nWhen performing legal research, frequently cited passages of precedent are often displayed prominently by research platforms (known as \u201cheadnotes\u201d or \u201ckey cites\u201d) and serve as quasi-summaries of judicial opinions.\nIn this work, we leverage the quotations contained in judicial opinions to assemble a large dataset of precedential passages."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Case Law Access Project",
"text": "Harvard\u2019s Case Law Access Project (CAP) has scanned almost seven million published judicial opinions from U.S. federal and state courts.444https://case.law\nCAP provides access to raw opinion texts along with opinion metadata (which includes the relevant court, citations contained in the opinion, and the decision date).\nHere we focus on judicial opinions published in U.S. federal courts including the U.S. Supreme Court, 13 federal appellate courts, and 94 district courts.\nOur study focuses on the 1.7 million published federal judicial opinions contained in CAP.\n###figure_1###"
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Dataset Construction",
"text": "LePaRD is assembled by identifying quoted passages in judicial opinions, matching these passages to source opinions, and extracting the context within which the passages occur. This procedure is summarized in Figure 2 ###reference_###.\nIn general, our construction process aims to construct a large dataset that covers as many legal contexts as possible while minimizing the amount of noise introduced by e.g., OCR errors. Given the large volume of data available, we made some design decisions that removed training examples (for example, very short passages), because including these special cases led to other issues, e.g., noisier data.\nFor each CAP opinion, we retain the opinion id, opinion text, citations, court, and decision date.\nTo facilitate downstream tasks, each opinion text was split into sentences using a Roberta model Liu et al. (2019 ###reference_b19###) trained to predict sentence boundaries in legal text, using the legal sentence tokenization dataset introduced by Sanchez (2019 ###reference_b29###).\nThe model was trained using the transformer library Wolf et al. (2020 ###reference_b38###) with the standard hyper-parameters found in the Trainer library.\nNo further text preprocessing is performed.\nFor all case citations, we drop duplicated citations as well as erroneous self-citations.\nWe convert citations to case ids by mapping each possible case citation to the relevant id.\nFor example, Marbury v. Madison may be cited as \u201c1 Cranch 137\u201d, \u201c5 U.S. 137\u201d, \u201c2 L. Ed. 60\u201d, \u201cSCDB 1803-005\u201d, or \u201c1803 U.S. LEXIS 352\u201d. 
We map all of these to case_id = 12121622.\nFor each opinion, we search for text in quotation marks (either straight or left/right quotation marks) using a regular expression.\nWe retain quotations longer than five words (short quotations are harder to unambiguously match to a source and may result in duplicate training data).\nWe extract one or more sentences of \u201cpreceding context\u201d before the quotation up to a maximum of 300 words or until we reach the end of the last quotation to avoid \u201coverlapping contexts\u201d where we would have to predict multiple precedential passages from the same context.\nFor multi-sentence contexts, we impose this word limit as sentences vary drastically in length.\nWe refer to the opinions from which quotations have been extracted as \u201cdestination opinions\u201d and we seek to match these quotations to the relevant \u201csource opinion\u201d.\nBased on the previous steps, our starting point is a list of quotations and citations for each destination opinion.\nUsing the citations, we check whether each quotation appears in each of the cited opinions (using fuzzy string matching to account for OCR errors and modifications judges might make to the quotation to match verb tenses and capitalization).\nSpecifically, we match the quoted text against each sentence in the source opinion.\nThis means that source passages will always be a single sentence long, potentially excluding very long quotations.\nIn practice, we find that courts usually quote fairly short portions of longer passages (see Table 1 ###reference_###).\nTo avoid many versions of the same passage, we retain the entire passage sentence as the target (see Appendix A ###reference_### for some examples).\nIf a quoted passage is found to exist in a cited opinion, then this opinion is treated as the \u201csource\u201d of the passage.\nEach passage thus has one source but it may have many destinations (two on average, see Table 1 ###reference_###).\nWhile most of 
the unsuccessful matches are quotations that do not come from other opinions, our approach does not tend to match multi-sentence quotations or ellipsized quotations.\nLePaRD contains the preceding context, target passage, destination court, source court, destination decision date, and source decision date for each quotation that could be matched to a passage.\nUltimately, we extract and validate 3.9 million unique target passages that have appeared in approximately 14 million contexts."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Dataset Statistics",
"text": "Quotations serve several purposes in legal writing: they may be used for emphasis, refer to case documents and exhibits, introduce information from witness or expert testimony, cite supporting materials like treatises or academic publications, or they may reference precedential court opinions.\nWhile quotations and citations in judicial opinions offer several interesting avenues for legal NLP and legal passage retrieval, we focus on quotations that can be mapped to single sentences from another opinion.\nFuture work could also examine the retrieval of longer passages or move beyond quotations to general citations (many of which are associated with a \u201cpincite\u201d or page number).\nIn this section, we present several summary statistics about LePaRD (see Table 1 ###reference_###) and we highlight some key observations.\nFirst, we note that citations in judicial opinions obey a long-tailed distribution, with the top-1% accounting for 24% of all citations and 15% of all passages receiving just 1 citation.\nThis results in an inherent imbalance in the dataset, raising unique challenges for legal precedent retrieval.\nSecond, the sentence lengths vary substantially and this results in passages and contexts of varying lengths (the longest passage is over 13,000 characters long).\nThis means that many passages and contexts will be truncated by standard text retrieval approaches.\nThird, most destination opinions contain several passages (around 16 on average, but occasionally tens or hundreds).\nThis suggests that there are multiple contexts that occur within a single opinion\u2014something that will be familiar to legal practitioners.\nIn our view, this validates the approach of using local context before a quotation rather than searching for more remote context that may be less relevant (for example, many opinions will discuss factors related to jurisdiction or venue early on but these will not come up anywhere else in the opinion).\nFourth, the average 
source opinion is represented 33 times in our data.\nWhile we treat passages from the same source as separate, it appears likely that they would be conceptually linked (since the portions of an opinion that are cited tend to be somewhat novel or unique and it is uncommon, though not impossible, for there to be multiple such passages in the same opinion).\nFuture work could thus explore whether passage retrieval benefits from grouping passages by their source.\nFinally, we find that there is a tremendous amount of variance in the training data by source court.\nWe include courts to allow future users of LePaRD to narrow predictions by court in order to consider the role of binding precedent.\nHowever, it appears that for most courts, there is insufficient data to train independent models."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Results",
"text": "We observe that there is only limited lexical overlap between the context and the cited passage, reflected in rather poor performance of the BM25 retrieval. This is in strong contrast to e.g., the COLIEE shared tasks where BM25 remains one of the most competitive retrieval methods Rosa et al. (2021 ###reference_b28###).\nDeploying a pre-trained SBERT variant also seems to transfer poorly to the legal passage retrieval task.\nWe attribute this finding to the domain shift (i.e., he model was not trained on legal data), and the particular challenges of legal language and entailment present in the legal passage retrieval task.\nWe find, however, that results improve noticeably as soon as we start to fine-tune models on the LePaRD training set.\nWe see at least double the recall for dense SBERT-based retrieval after domain-specific fine-tuning.\nRecall results improve even further if we treat legal passage retrieval as a supervised classification task: Rather than seeking to embed a context and target passage close in some representation space, we assign a unique class label to each passage, and then aim to predict that label from the legal context (see e.g., Mahari, 2021 ###reference_b25###; Tay et al., 2022 ###reference_b33###).\nWe experiment with two different models, and observe that the DistilBERT model achieves the best overall performance in all settings.\nOur best performance in the 10K label setting suggests that the correct target passage would be predicted among the top 10 search results in 8 out of 10 cases.\nSurprisingly, a domain-specific LEGAL-BERT model achieves worse performance than the more generic DistilBERT model. We speculate that LEGAL-BERT has been pre-trained on vast amounts of legal text from various judicial systems\u2014and some of this pre-training data does not seem to be beneficial to retrieving relevant U.S. 
precedent.\nAlthough a supervised classification approach seems to work best in our experiments, this approach comes with major limitations.\nFirstly, updating models to accommodate new precedent requires either updating existing models or re-training them from scratch (Tay et al., 2022 ###reference_b33###).\nSecondly, LLMs have been shown to exhibit biases (Abid et al., 2021 ###reference_b1###; Lucy and Bamman, 2021 ###reference_b20###) and the resulting classification of passages in our application might potentially perpetuate these biases.\nLastly, zero- and few-shot retrieval for the long tail of the distribution will not be solved by this approach, and requires other methods, as highlighted by the inverse relationship between model performance and the passage\u2019s frequency.\nOur experiments showcase how LePaRD is a large-scale yet challenging legal retrieval dataset.\nWe believe there is ample room for improvement, for example by considering re-ranking approaches or late interactions (Khattab and Zaharia, 2020 ###reference_b16###).\nNevertheless, our experiments help us make sense of the dataset, by e.g., highlighting how there is only limited lexical overlap between context and the target passage.\nAll experiments exhibit consistent behavior across dataset splits and metrics\u2014and are intended as baselines to be used in future research involving LePaRD."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Expert Evaluation",
"text": "A legal expert (licensed attorney) reviewed 100 randomly sampled training examples.\nFor each example, the expert determined whether (1) the example was generally clean and free of errors and (2) the preceding context provided sufficient information to determine that the target passage is relevant to the context.\nBased on this evaluation, all examples were clean and free of errors other than preexisting errors stemming from the OCR\u2014we leave addressing these as an opportunity for future work.\nIn 99% of these examples, the expert determined that there was enough information in the context to determine the relevance of the target passage.\nIn the problematic case, the destination context spans two footnotes, the former a series of citations to unrelated memoranda, and the latter an explanatory footnote containing a quotation.\nDue to the CAP processing, these unrelated consecutive footnotes appear as adjacent sentences.\nFurther investigation showed that this type of explanatory footnote with a quotation is very uncommon in the data."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Discussion",
"text": "We highlight that the legal passage retrieval task is non-trivial, complicated by the long-tailed distribution of cited precedent and the sheer size of the corpus.\nBy publishing LePaRD, we aim to encourage NLP work on a set of problems that are closely aligned with the needs of the legal profession.\nMore broadly, our aim is to offer an example of how NLP can be used to broaden access to justice and to catalyze similar work in other legal domains.\nOne of the challenges of legal research is that not all case law content carries the same weight.\nOn the one hand, the structure of court systems means that precedent that is binding in one court may not be binding in another court, even if they are part of the same system (e.g., precedent from the U.S. District Court for the District of Massachusetts is not binding in the U.S District Court for the District of Oregon because these district courts are part of different judicial circuits within the U.S. federal judiciary).\nSimilarly, old precedent may be overturned and thus lawyers must be careful to cite \u201cgood law\u201d (although we find that passages tend to be cited for an average of about ten years, see Appendix B ###reference_###).\nOn the other hand, not everything that is said in a judicial opinion has the status of precedent: only the elements of a court\u2019s reasoning that are essential to the decision bind future courts while other content contained in a judicial opinion is known as obiter dictum and is not legally binding.\nAs a result, methods that focus on lexical overlap or semantic search create a large risk of retrieving content that is not binding precedent.\nLePaRD addresses these issues in two ways.\nFirst, we include the court and date associated with each precedent to facilitate the identification of precedent that is binding in a certain court and time.\nSecond, only passages that have been previously cited by judges are included in the dataset, which significantly reduces the 
probability of retrieving non-binding dicta.\nWhile we note that requiring a passage to be cited at least once restricts our dataset, we believe this limitation is far outweighed by the value of knowing that the passage has been selected for citation by a federal judge.\nOne particularly promising application of precedent prediction is its potential to serve as the basis for retrieval augmented generation using large language models (RAG).\nRAG has been put forward as a method of allowing models to generate text based on information that is not contained in the training data Lewis et al. (2020 ###reference_b18###); Karpukhin et al. (2020 ###reference_b15###); Gautier et al. (2022 ###reference_b9###).\nIn the context of legal research and writing, RAG appears to have several key advantages.\nFirst, RAG is likely to increase the correctness of citations by allowing practitioners to ensure that only real precedent is cited (i.e., reducing, though not eliminating, the risk of hallucinations), the cited precedent is relevant to the particular court, and the cited precedent remains good law (it has not been overturned).\nThe importance of this capability was highlighted by the recent Mata v. Avianca Airlines case where an attorney relied on ChatGPT to write a brief that turned out to rely on non-existent references Weiser (2023 ###reference_b37###).\nSecond, RAG is more easily updatable than fine-tuned models and thus allows case law to be quickly updated as new cases come out and old cases are overturned Mahari et al. 
(2023a ###reference_b23###).\nThird, RAG is auditable in the sense that practitioners see the basis for generated outputs, allowing them to remove any irrelevant precedent before text is generated.\nThe effective design of such systems to integrate well into lawyers\u2019 existing workflows raises interesting questions around human-computer interaction.\nWhile rules of professional responsibility related to lawyers\u2019 use of generative AI continue to evolve, some proposals highlight an attorney\u2019s \u201cduty to supervise\u201d the technologies they use Greenwood et al. (2023 ###reference_b10###), and the ability to evaluate what precedent will be used as a basis for a brief appears to be a likely prerequisite for \u201csupervising\u201d brief writing models."
},
{
"section_id": "8",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "We introduce LePaRD, a large-scale dataset for predicting a target precedential passage given a legal argument context. Legal passage retrieval is an important task for legal practitioners, and a challenging NLP retrieval task.\nFrom a legal perspective, searching for relevant case law consumes significant resources and contributes to the cost of litigation and the associated access to justice gap.\nFrom an NLP perspective, legal passage retrieval is a retrieval task with little lexical overlap between queries and targets, which makes it a particularly interesting retrieval problem.\nWe present various experiments to provide initial benchmarks and to highlight the difficulty of the legal passage retrieval task.\nThere are several approaches toward better legal precedent retrieval, some of which we outline here, and the experiments we present are intended as baselines rather than optimal solutions.\nOne example approach is to combine citation and passage retrieval to first find relevant cases and then identify specific passages within them\u2014which can be thought of as a retrieve and re-rank approach.\nAlternatively, one could also retrieve the top-N passages, and re-rank those with a more powerful re-ranker.\nWe are excited for LePaRD to serve as a large-scale resource for such experiments and other retrieval research in the legal domain."
},
{
"section_id": "9",
"parent_section_id": null,
"section_name": "Limitations",
"text": "We discussed several limitations of this work throughout the paper. In this section, we expand on some of these points, detail other limitations, and outline avenues for future work."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Data Sample",
"text": "Table 4 ###reference_### shows a sample of five training examples from LePaRD. Note how often only a small portion of a target passage is actually quoted in the destination opinions."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Further Dataset Statistics",
"text": "Here we provide some additional insights derived from LePaRD.\nIn contrast to the details provided in Section 4 ###reference_###, we will explore interdisciplinary insights that may catalyze future research.\nWe find that passages are cited for a long time after initial publication with a mean of 10 years and a maximum of over 150 years between the first and last citation (see Figure 4 ###reference_###).\nThis is relevant insofar as it highlights that a legal passage dataset will be a valuable contribution with a lasting impact for legal precedent retrieval.\nWe further observe that a majority of quotations are to passages produced by another court, especially by the U.S. Supreme Court or by appellate courts (see Figure 3 ###reference_###).\nIn particular, district courts appear to cite very little of their own precedent, which is unsurprising given that they are bound by the relevant higher courts and thus are more likely to cite precedent from these higher courts.\nThese observations provide some evidence that LePaRD represents a fairly representative sample of precedential passage usage in federal courts.\nClustering passage co-occurrence based on whether passages appear in the same destination context reveals interesting patterns (see Figure 5 ###reference_###).\nWe observe three clusters: First, a very small cluster (just two cases, Anderson v. Liberty Lobby, Inc. and Celotex Corp. v. Catrett) which pertain to summary judgement, when a judgement is entered without a full trial which happens very frequently in many different civil disputes.\nSecond, a small cluster of bankruptcy court cases, which are brought in a subset of specialized federal courts. Third, a large cluster containing all other passages.\nThis clustering highlights an alternative approach to legal passage retrieval that uses a pre-existing set of citations to predict missing ones, as explored by Huang et al. (2021 ###reference_b12###).\n###figure_2### ###figure_3### ###figure_4###"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1\">Feature</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.2.1\">Mean</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.3.1\">Std</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.4.1\">Min</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.5.1\">25%</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.6.1\">50%</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.7.1\">75%</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.8.1\">Max</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.2.1.1\">Length of passage text (chars)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.2\">275</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.3\">192</td>\n<td class=\"ltx_td ltx_align_right 
ltx_border_t\" id=\"S3.T1.1.2.1.4\">22</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.5\">167</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.6\">233</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.7\">326</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.2.1.8\">13,091</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.3.2.1\">Length of preceding context (chars)</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.2\">719</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.3\">647</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.4\">5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.5\">171</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.6\">454</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.7\">1,277</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.3.2.8\">14,275</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.4.3.1\">Training examples per passage</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.2\">3.62</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.3\">33.10</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.4\">1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.5\">1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.6\">1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.7\">3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.4.3.8\">33,439</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.5.4.1\">Training examples per destination</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.2\">16.20</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.3\">30.08</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.4\">1</td>\n<td class=\"ltx_td 
ltx_align_right\" id=\"S3.T1.1.5.4.5\">3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.6\">7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.7\">18</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.5.4.8\">2,576</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.6.5.1\">Training examples per source</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.2\">33.00</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.3\">245.88</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.4\">1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.5\">3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.6\">9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.7\">27</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.1.6.5.8\">93,918</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.1.7.6.1\">Training examples per source court</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.2\">35,432</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.3\">216,851</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.4\">1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.5\">39</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.6\">798</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.7\">3,737</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T1.1.7.6.8\">3,402,091</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Summary statistics of dataset features.</figcaption>\n</figure>",
"capture": "Table 1: Summary statistics of dataset features."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T2.1.1.1.1\">Number of cited passages</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.1.1.1.2\">Train</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.1.1.1.3\">Dev</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.1.1.1.4\">Test</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.2.1.1\">10\u2019000</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1.2\">1,732</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1.3\">95K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1.4\">95K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.1.3.2.1\">20\u2019000</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.2\">2,228K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.3\">124K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.4\">124K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T2.1.4.3.1\">50\u2019000</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.1.4.3.2\">3,149K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.1.4.3.3\">175K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.1.4.3.4\">175K</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Number of examples in different splits of <span 
class=\"ltx_text\" id=\"S5.T2.3.1\">LePaRD</span>.</figcaption>\n</figure>",
"capture": "Table 2: Number of examples in different splits of LePaRD."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.1.1.1\">Approach</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.1.2.1\">N</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" colspan=\"4\" id=\"S5.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.1.3.1\">Development Set</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"4\" id=\"S5.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.1.4.1\">Test Set</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S5.T3.1.2.2.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S5.T3.1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.3\">rc@1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.4\">rc@10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.5\">NDCG@10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T3.1.2.2.6\">MAP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.7\">rc@1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.8\">rc@10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.9\">NDCG@10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.1.2.2.10\">MAP</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" 
id=\"S5.T3.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.3.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.3.1.1.1\">BM25</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.3.1.2\">10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.3\">9.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.4\">26.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.5\">17.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.3.1.6\">14.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.7\">9.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.8\">26.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.9\">17.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1.10\">14.54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.4.2.1\">20K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.2\">7.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.3\">23.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.4\">14.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.4.2.5\">12.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.6\">8.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.7\">23.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.8\">15.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2.9\">12.41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.5.3.1\">50K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.2\">6.77</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.1.5.3.3\">19.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.4\">12.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.5.3.5\">10.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.6\">6.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.7\">19.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.8\">12.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.3.9\">10.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.6.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.6.4.1.1\">SBERT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.6.4.2\">10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.3\">6.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.4\">19.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.5\">12.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.6.4.6\">9.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.7\">6.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.8\">19.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.9\">12.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.4.10\">9.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.7.5.1\">20K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.2\">4.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.3\">16.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.4\">9.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.7.5.5\">7.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.6\">5.11</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.7\">16.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.8\">10.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.5.9\">8.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.8.6.1\">50K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.2\">4.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.3\">12.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.4\">8.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.8.6.5\">8.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.6\">4.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.7\">13.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.8\">8.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.6.9\">6.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.9.7.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.9.7.1.1\">fine-tuned SBERT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.7.2\">10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.3\">19.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.4\">61.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.5\">38.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.7.6\">31.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.7\">19.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.8\">62.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.9\">38.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.7.10\">31.7</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T3.1.10.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.10.8.1\">20K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.2\">15.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.3\">51.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.4\">31.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.10.8.5\">25.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.6\">15.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.7\">51.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.8\">31.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.8.9\">25.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.11.9.1\">50K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.2\">11.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.3\">39.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.4\">23.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.11.9.5\">18.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.6\">11.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.7\">39.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.8\">23.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.9.9\">18.92</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.12.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.12.10.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.12.10.1.1\">LEGAL-BERT Classifier</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.12.10.2\">10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.3\">35.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.4\">79.22</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.5\">56.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.12.10.6\">49.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.7\">35.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.8\">79.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.9\">57.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.12.10.10\">49.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.13.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.13.11.1\">20K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.2\">28.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.3\">68.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.4\">47.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.13.11.5\">41.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.6\">28.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.7\">68.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.8\">47.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.11.9\">41.01</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.14.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.14.12.1\">50K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.2\">19.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.3\">48.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.4\">32.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.14.12.5\">28.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.6\">19.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.7\">48.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.12.8\">32.92</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.1.14.12.9\">28.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.15.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S5.T3.1.15.13.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.15.13.1.1\">DistilBERT Classifier</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.15.13.2\">10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.3\">37.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.4\">81.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.5\">59.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.1.15.13.6\">52.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.7\">38.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.8\">81.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.9\">59.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.15.13.10\">52.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.16.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.16.14.1\">20K</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.2\">33.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.3\">75.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.4\">53.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.16.14.5\">46.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.6\">33.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.7\">74.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.8\">53.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.16.14.9\">46.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.17.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" 
id=\"S5.T3.1.17.15.1\">50K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.2\">26.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.3\">65.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.4\">45.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.1.17.15.5\">38.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.6\">26.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.7\">65.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.8\">45.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T3.1.17.15.9\">38.82</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>We run different approaches to the legal passage retrieval task on versions of LePaRD with a varying number of target passages (N). We measure the recall at <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.1\">1</span> and <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.2\">10</span>, normalized discounted cumulative gain (NDCG) at <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.7.3\">10</span>, and Mean Average Precision (MAP) for development and test sets across these baselines. The best results are obtained using classification and (relatively) few labels. Metrics are calculated using the <cite class=\"ltx_cite ltx_citemacro_citet\">Van\u00a0Gysel and de\u00a0Rijke (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.09356v3#bib.bib35\" title=\"\">2018</a>)</cite> package.</figcaption>\n</figure>",
"capture": "Table 3: We run different approaches to the legal passage retrieval task on versions of LePaRD with a varying number of target passages (N). We measure the recall at 1 and 10, normalized discounted cumulative gain (NDCG) at 10, and Mean Average Precision (MAP) for development and test sets across these baselines. The best results are obtained using classification and (relatively) few labels. Metrics are calculated using the Van\u00a0Gysel and de\u00a0Rijke (2018) package."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"A2.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.1.1.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.1.1.1.1.1.1\">Meta-Data</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A2.T4.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.1.1.2.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.1.1.2.1.1.1\">Preceding Context</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A2.T4.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.1.1.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.1.1.3.1.1.1\">Target Passage</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T4.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.2.1.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.2.1.1.1.1.1\">Destination Court:</span> E.D.N.Y \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.2.1.1.1.1.2\">Destination Date:</span> 2001-03-28 \n<br class=\"ltx_break\"/><span class=\"ltx_text 
ltx_font_bold\" id=\"A2.T4.1.2.1.1.1.1.3\">Source Court:</span> Supreme Court \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.2.1.1.1.1.4\">Source Date:</span> 1974-12-23</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.2.1.2.1.1\" style=\"width:113.8pt;\">In order to satisfy this requirement, a plaintiff must establish a \u201csufficiently close nexus between the State and the challenged action. See American Mfrs. Mut. Ins. Co. v. Sullivan, 526 U.S. 40, 50, 119 S.Ct. 977, 985, 143 L.Ed.2d 130 (1999). Alternatively, if the government has</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.2.1.3.1.1\" style=\"width:113.8pt;\">There where a private lessee, who practiced racial discrimination, leased space for a restaurant from a state parking authority in a publicly owned building, the Court held that the State had <span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.2.1.3.1.1.1\">so far insinuated itself into a position of interdependence with the restaurant that it was a joint participant in the enterprise.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.3.2.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.3.2.1.1.1.1\">Destination Court:</span> D.D.C. 
\n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.3.2.1.1.1.2\">Destination Date:</span> 2012-02-13 \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.3.2.1.1.1.3\">Source Court:</span> Supreme Court \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.3.2.1.1.1.4\">Source Date:</span> 2005-04-19</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.3.2.2.1.1\" style=\"width:113.8pt;\">He filed no opposition. That Order was also mailed to Plaintiff on Sept. 14. The Court again informed Plaintiff that he must respond on or before Sept. 30 or face dismissal. Although the notice pleading rules are</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.3.2.3.1.1\" style=\"width:113.8pt;\">We concede that ordinary pleading rules are <span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.3.2.3.1.1.1\">not meant to impose a great burden upon a plaintiff.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.4.3.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.4.3.1.1.1.1\">Destination Court:</span> 5th Circuit \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.4.3.1.1.1.2\">Destination Date:</span> 1971-10-21 \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.4.3.1.1.1.3\">Source Court:</span> Supreme Court 
\n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.4.3.1.1.1.4\">Source Date:</span> 1966-06-20</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.4.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.4.3.2.1.1\" style=\"width:113.8pt;\">That petitioners seek to commence an immediate appeal of that portion of the courts order entered on May 28, 1971. The motives of the officers bringing the charges may be corrupt, but that does not show that the state trial court will find the defendant guilty if he is innocent, or that in any other manner the defendant will be</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.4.3.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.4.3.3.1.1\" style=\"width:113.8pt;\">Against any person who is <span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.4.3.3.1.1.1\">denied or cannot enforce in the courts</span> of such State a right under any law providing for the equal civil rights of citizens of the United States, or of all persons within the jurisdiction thereof;\u201c(2) For any act under color of authority derived from any law providing for equal rights, or for refusing to do any act on the ground that it would be inconsistent with such law.</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.5.4.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.5.4.1.1.1.1\">Destination Court:</span> 9th Circuit \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" 
id=\"A2.T4.1.5.4.1.1.1.2\">Destination Date:</span> 1980-03-28 \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.5.4.1.1.1.3\">Source Court:</span> Supreme Court \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.5.4.1.1.1.4\">Source Date:</span> 1911-02-20</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.5.4.2.1.1\" style=\"width:113.8pt;\">In this case there is even a stronger possibility of recurrence since the police have not offered to discontinue the practice. Id. at 43, 65 S.Ct. at 14-15. (Citations omitted). Some might read De Funis v. Odegaard, 416 U.S. 312, 94 S.Ct. 1704, 40 L.Ed.2d 164 (1974), the equal protection challenge to the University of Washington\u2019s \u201cquota\u201d system in admissions as authority for the proposition that the W. T. 
Grant or the</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"A2.T4.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.5.4.3.1.1\" style=\"width:113.8pt;\">The questions involved in the orders of the Interstate Commerce Commission are usually continuing (as are manifestly those in the case at bar) and their consideration ought not to be, as they might be, defeated, by short term orders, <span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.5.4.3.1.1.1\">capable of repetition, yet evading review</span>, and at one time the Government and at another time the carriers have their rights determined by the Commission without a chance of r\u00e9dress.</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"A2.T4.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.6.5.1.1.1\" style=\"width:93.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.6.5.1.1.1.1\">Destination Court:</span> 11th Circuit \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.6.5.1.1.1.2\">Destination Date:</span> 2000-03-08 \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.6.5.1.1.1.3\">Source Court:</span> 10th Circuit \n<br class=\"ltx_break\"/><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.6.5.1.1.1.4\">Source Date:</span> 1994-11-22</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"A2.T4.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.6.5.2.1.1\" style=\"width:113.8pt;\">Section 1512, however, applies to attempts to prevent or influence testimony not only in federal courts but 
also before Congress, federal agencies, and insurance regulators. Moreover, \u00a7 1512(b) subsumes but is significantly broader than the provision of \u00a7 1985(2) making it illegal to</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"A2.T4.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"A2.T4.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"A2.T4.1.6.5.3.1.1\" style=\"width:113.8pt;\">Section 1985(2) creates a cause of action against those who <span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.1.6.5.3.1.1.1\">\u201cconspire to deter, by force, intimidation, or threat</span>, any party or witness\u201d from attending or testifying in a federal court.</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Sample from LePaRD. For readability, only the last few sentence of preceding context are displayed. The portion of the target passage that appears in quotations in the destination opinion is in bold.</figcaption>\n</figure>",
"capture": "Table 4: Sample from LePaRD. For readability, only the last few sentence of preceding context are displayed. The portion of the target passage that appears in quotations in the destination opinion is in bold."
}
},
"image_paths": {
"2": {
"figure_path": "2311.09356v3_figure_2.png",
"caption": "Figure 2: Schematic of how LePaRD is constructed. First, we find all quotations across all 1.7 million published federal opinions in CAP and we retain the text ahead of the quotation (\u201ccontext\u201d) and the citations to other opinions. Second, we use the citations to other opinions to check whether each quotation can be matched to a passage from a prior case. If a match was found, then a training example is constructed using the relevant preceding context and the associated target passage.",
"url": "http://arxiv.org/html/2311.09356v3/extracted/5877189/img/flowchart.png"
},
"3": {
"figure_path": "2311.09356v3_figure_3.png",
"caption": "Figure 3: Comparing citations to judicial opinions from the same court (\u201cself citation\u201d) to citations to other courts (\u201ccross cite\u201d). We find that appellate courts are most likely to cite themselves, while district courts only rarely cite their own precedent.",
"url": "http://arxiv.org/html/2311.09356v3/x2.png"
},
"4": {
"figure_path": "2311.09356v3_figure_4.png",
"caption": "Figure 4: Distribution of time in units of log days between the first and last citation of a passage in our data.",
"url": "http://arxiv.org/html/2311.09356v3/extracted/5877189/img/duration_NAACL.png"
},
"5": {
"figure_path": "2311.09356v3_figure_5.png",
"caption": "Figure 5: Hierarchical clustering of passage co-occurrence.",
"url": "http://arxiv.org/html/2311.09356v3/extracted/5877189/img/cluster_NAACL.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Persistent anti-muslim bias in large language models.",
"author": "Abubakar Abid, Maheen Farooqi, and James Zou. 2021.",
"venue": "In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics,\nand Society, pages 298\u2013306.",
"url": "https://dl.acm.org/doi/abs/10.1145/3461702.3462624"
}
},
{
"2": {
"title": "Dereliction of duty: State-bar inaction in response to America\u2019s\naccess-to-justice crisis.",
"author": "Ralph Baxter. 2022.",
"venue": "Yale Law Journal Forum, 132:228.",
"url": null
}
},
{
"3": {
"title": "A large annotated\ncorpus for learning natural language inference.",
"author": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning.\n2015.",
"venue": "In Proceedings of the 2015 Conference on Empirical Methods in\nNatural Language Processing, pages 632\u2013642, Lisbon, Portugal. Association\nfor Computational Linguistics.",
"url": "https://doi.org/10.18653/v1/D15-1075"
}
},
{
"4": {
"title": "LEGAL-BERT: The muppets straight out of law school.",
"author": "Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras,\nand Ion Androutsopoulos. 2020.",
"venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2020, pages 2898\u20132904, Online. Association for Computational\nLinguistics.",
"url": "https://doi.org/10.18653/v1/2020.findings-emnlp.261"
}
},
|
| 166 |
+
{
|
| 167 |
+
"5": {
|
| 168 |
+
"title": "Modeling law\nsearch as prediction.",
|
| 169 |
+
"author": "Faraz Dadgostari, Mauricio Guim, Peter A. Beling, Michael A. Livermore, and\nDaniel N. Rockmore. 2021.",
|
| 170 |
+
"venue": "Artificial Intelligence and Law, 29(1):3\u201334.",
|
| 171 |
+
"url": "https://doi.org/10.1007/s10506-020-09261-5"
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"6": {
|
| 176 |
+
"title": "The pascal recognising\ntextual entailment challenge.",
|
| 177 |
+
"author": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005.",
|
| 178 |
+
"venue": "In Proceedings of the First International Conference on Machine\nLearning Challenges: Evaluating Predictive Uncertainty Visual Object\nClassification, and Recognizing Textual Entailment, MLCW\u201905, page 177\u2013190,\nBerlin, Heidelberg. Springer-Verlag.",
|
| 179 |
+
"url": "https://doi.org/10.1007/11736790_9"
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"7": {
|
| 184 |
+
"title": "Percentage of the World Population, Civil Law and Common Law\nSystems. Wilson & Lafleur.",
|
| 185 |
+
"author": "Jabeur Fathally and Nicola Mariani. 2008.",
|
| 186 |
+
"venue": null,
|
| 187 |
+
"url": "http://www.juriglobe.ca/eng/syst-demo/tableau-dcivil-claw.php"
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"8": {
|
| 192 |
+
"title": "Lexis & Westlaw pricing - cost-effective electronic legal research.",
|
| 193 |
+
"author": "Franklin County Law Library. 2023.",
|
| 194 |
+
"venue": null,
|
| 195 |
+
"url": "https://fclawlib.libguides.com/costeffectivelegalresearch"
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"9": {
|
| 200 |
+
"title": "Few-shot learning with\nretrieval augmented language models.",
|
| 201 |
+
"author": "Izacard Gautier, Lewis Patrick, Lomeli Maria, Hosseini Lucas, Petroni Fabio,\nSchick Timo, Dwivedi-Yu Jane, Joulin Armand, Riedel Sebastian, and Grave\nEdouard. 2022.",
|
| 202 |
+
"venue": "arXiv preprint arXiv: 2208.03299.",
|
| 203 |
+
"url": "https://arxiv.org/abs/2208.03299"
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"10": {
|
| 208 |
+
"title": "Task force on responsible use of\ngenerative AI for law.",
|
| 209 |
+
"author": "Dazza Greenwood, Shawnna Hoffman, Olga V. Mack, Jeff Saviano, Megan Ma, and\nAileen Schultz. 2023.",
|
| 210 |
+
"venue": "MIT Computational Law Report.",
|
| 211 |
+
"url": "https://law.mit.edu/ai"
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"11": {
|
| 216 |
+
"title": "Efficient natural language\nresponse suggestion for smart reply.",
|
| 217 |
+
"author": "Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun hsuan Sung, Laszlo Lukacs,\nRuiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017.",
|
| 218 |
+
"venue": null,
|
| 219 |
+
"url": "http://arxiv.org/abs/1705.00652"
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"12": {
|
| 224 |
+
"title": "Context-aware legal\ncitation recommendation using deep learning.",
|
| 225 |
+
"author": "Zihan Huang, Charles Low, Mengqiu Teng, Hongyi Zhang, Daniel E. Ho, Mark S.\nKrass, and Matthias Grabmair. 2021.",
|
| 226 |
+
"venue": "In Proceedings of the Eighteenth International Conference on\nArtificial Intelligence and Law, ICAIL \u201921, page 79\u201388, New York, NY, USA.\nAssociation for Computing Machinery.",
|
| 227 |
+
"url": "https://doi.org/10.1145/3462757.3466066"
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"13": {
|
| 232 |
+
"title": "Billion-scale similarity search with GPUs.",
|
| 233 |
+
"author": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2019.",
|
| 234 |
+
"venue": "IEEE Transactions on Big Data, 7(3):535\u2013547.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"14": {
|
| 240 |
+
"title": "U-CREAT:\nUnsupervised case retrieval using events extrAcTion.",
|
| 241 |
+
"author": "Abhinav Joshi, Akshat Sharma, Sai Kiran Tanikella, and Ashutosh Modi. 2023.",
|
| 242 |
+
"venue": "In Proceedings of the 61st Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 13899\u201313915,\nToronto, Canada. Association for Computational Linguistics.",
|
| 243 |
+
"url": "https://doi.org/10.18653/v1/2023.acl-long.777"
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"15": {
|
| 248 |
+
"title": "Dense passage retrieval for open-domain question answering.",
|
| 249 |
+
"author": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey\nEdunov, Danqi Chen, and Wen-tau Yih. 2020.",
|
| 250 |
+
"venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 6769\u20136781.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"16": {
|
| 256 |
+
"title": "Colbert: Efficient and\neffective passage search via contextualized late interaction over bert.",
|
| 257 |
+
"author": "Omar Khattab and Matei Zaharia. 2020.",
|
| 258 |
+
"venue": null,
|
| 259 |
+
"url": "http://arxiv.org/abs/2004.12832"
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"17": {
|
| 264 |
+
"title": "Coliee 2022\nsummary: Methods for legal document retrieval and entailment.",
|
| 265 |
+
"author": "Mi-Young Kim, Juliano Rabelo, Randy Goebel, Masaharu Yoshioka, Yoshinobu Kano,\nand Ken Satoh. 2023.",
|
| 266 |
+
"venue": "In New Frontiers in Artificial Intelligence: JSAI-IsAI 2022\nWorkshop, JURISIN 2022, and JSAI 2022 International Session, Kyoto, Japan,\nJune 12\u201317, 2022, Revised Selected Papers, page 51\u201367, Berlin,\nHeidelberg. Springer-Verlag.",
|
| 267 |
+
"url": "https://doi.org/10.1007/978-3-031-29168-5_4"
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"18": {
|
| 272 |
+
"title": "Retrieval-augmented generation for knowledge-intensive NLP tasks.",
|
| 273 |
+
"author": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir\nKarpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim\nRockt\u00e4schel, et al. 2020.",
|
| 274 |
+
"venue": "Advances in Neural Information Processing Systems,\n33:9459\u20139474.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"19": {
|
| 280 |
+
"title": "Roberta: A robustly\noptimized bert pretraining approach.",
|
| 281 |
+
"author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer\nLevy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.",
|
| 282 |
+
"venue": null,
|
| 283 |
+
"url": "http://arxiv.org/abs/1907.11692"
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"20": {
|
| 288 |
+
"title": "Gender and\nrepresentation bias in GPT-3 generated stories.",
|
| 289 |
+
"author": "Li Lucy and David Bamman. 2021.",
|
| 290 |
+
"venue": "In Proceedings of the Third Workshop on Narrative\nUnderstanding, pages 48\u201355.",
|
| 291 |
+
"url": "https://aclanthology.org/2021.nuse-1.5/"
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"21": {
|
| 296 |
+
"title": "Retrieving legal cases from a large-scale candidate corpus.",
|
| 297 |
+
"author": "Yixiao Ma, Yunqiu Shao, Bulou Liu, Yiqun Liu, M. Zhang, Shaoping Ma, and\nyiqunliu. 2021a.",
|
| 298 |
+
"venue": null,
|
| 299 |
+
"url": "https://api.semanticscholar.org/CorpusID:239772834"
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"22": {
|
| 304 |
+
"title": "Lecard: A legal case\nretrieval dataset for chinese law system.",
|
| 305 |
+
"author": "Yixiao Ma, Yunqiu Shao, Yueyue Wu, Yiqun Liu, Ruizhe Zhang, Min Zhang, and\nShaoping Ma. 2021b.",
|
| 306 |
+
"venue": "In Proceedings of the 44th International ACM SIGIR Conference\non Research and Development in Information Retrieval, SIGIR \u201921, page\n2342\u20132348, New York, NY, USA. Association for Computing Machinery.",
|
| 307 |
+
"url": "https://doi.org/10.1145/3404835.3463250"
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"23": {
|
| 312 |
+
"title": "Transparency by design for large language models.",
|
| 313 |
+
"author": "Robert Mahari, Tobin South, and Alex Pentland. 2023a.",
|
| 314 |
+
"venue": "Computational Legal Futures, Network Law Review.",
|
| 315 |
+
"url": "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4468791"
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"24": {
|
| 320 |
+
"title": "The law\nand NLP: Bridging disciplinary disconnects.",
|
| 321 |
+
"author": "Robert Mahari, Dominik Stammbach, Elliott Ash, and Alex Pentland.\n2023b.",
|
| 322 |
+
"venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2023, pages 3445\u20133454, Singapore. Association for Computational\nLinguistics.",
|
| 323 |
+
"url": "https://doi.org/10.18653/v1/2023.findings-emnlp.224"
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"25": {
|
| 328 |
+
"title": "AutoLAW: Augmented legal\nreasoning through legal precedent prediction.",
|
| 329 |
+
"author": "Robert Zev Mahari. 2021.",
|
| 330 |
+
"venue": null,
|
| 331 |
+
"url": "http://arxiv.org/abs/2106.16034"
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"26": {
|
| 336 |
+
"title": "Overview and\ndiscussion of the competition on legal information extraction/entailment\n(coliee) 2021.",
|
| 337 |
+
"author": "Juliano Rabelo, Randy Goebel, Mi-Young Kim, Yoshinobu Kano, Masaharu Yoshioka,\nand Ken Satoh. 2022.",
|
| 338 |
+
"venue": "The Review of Socionetwork Strategies, 16(1):111\u2013133.",
|
| 339 |
+
"url": "https://doi.org/10.1007/s12626-022-00105-z"
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"27": {
|
| 344 |
+
"title": "Sentence-BERT: Sentence\nembeddings using siamese BERT-Networks.",
|
| 345 |
+
"author": "Nils Reimers and Iryna Gurevych. 2019.",
|
| 346 |
+
"venue": "In Proceedings of the 2019 Conference on Empirical Methods in\nNatural Language Processing. Association for Computational Linguistics.",
|
| 347 |
+
"url": "https://arxiv.org/abs/1908.10084"
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"28": {
|
| 352 |
+
"title": "Yes, BM25 is a strong\nbaseline for legal case retrieval.",
|
| 353 |
+
"author": "Guilherme Moraes Rosa, Ruan Chaves Rodrigues, Roberto Lotufo, and Rodrigo\nNogueira. 2021.",
|
| 354 |
+
"venue": null,
|
| 355 |
+
"url": "http://arxiv.org/abs/2105.05686"
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"29": {
|
| 360 |
+
"title": "Sentence boundary detection in legal text.",
|
| 361 |
+
"author": "George Sanchez. 2019.",
|
| 362 |
+
"venue": "In Proceedings of the natural legal language processing\nworkshop 2019, pages 31\u201338.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"30": {
|
| 368 |
+
"title": "Distilbert, a distilled\nversion of bert: smaller, faster, cheaper and lighter.",
|
| 369 |
+
"author": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020.",
|
| 370 |
+
"venue": null,
|
| 371 |
+
"url": "http://arxiv.org/abs/1910.01108"
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"31": {
|
| 376 |
+
"title": "Legal information retrieval systems: State-of-the-art and open issues.",
|
| 377 |
+
"author": "Carlo Sansone and Giancarlo Sperl\u00ed. 2022.",
|
| 378 |
+
"venue": "Information Systems, 106:101967.",
|
| 379 |
+
"url": "https://doi.org/https://doi.org/10.1016/j.is.2021.101967"
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"32": {
|
| 384 |
+
"title": "The justice gap: The unmet civil legal needs of low-income\nAmericans.",
|
| 385 |
+
"author": "Mary C. Slosar. 2022.",
|
| 386 |
+
"venue": "Technical report, Legal Services Corporation.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"33": {
|
| 392 |
+
"title": "Transformer memory as a\ndifferentiable search index.",
|
| 393 |
+
"author": "Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta,\nZhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, and\nDonald Metzler. 2022.",
|
| 394 |
+
"venue": null,
|
| 395 |
+
"url": "http://arxiv.org/abs/2202.06991"
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"34": {
|
| 400 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models.",
|
| 401 |
+
"author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine\nBabaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale,\net al. 2023.",
|
| 402 |
+
"venue": "arXiv preprint arXiv:2307.09288.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"35": {
|
| 408 |
+
"title": "Pytrec_eval: An extremely fast python interface to trec_eval.",
|
| 409 |
+
"author": "Christophe Van Gysel and Maarten de Rijke. 2018.",
|
| 410 |
+
"venue": "In SIGIR. ACM.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"36": {
|
| 416 |
+
"title": "On the concept of relevance in legal information retrieval.",
|
| 417 |
+
"author": "Marc Van Opijnen and Cristiana Santos. 2017.",
|
| 418 |
+
"venue": "Artificial Intelligence and Law, 25:65\u201387.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"37": {
|
| 424 |
+
"title": "Here\u2019s what happens when your lawyer uses ChatGPT.",
|
| 425 |
+
"author": "Benjamin Weiser. 2023.",
|
| 426 |
+
"venue": "The New York Times.",
|
| 427 |
+
"url": "https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html"
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"38": {
|
| 432 |
+
"title": "Huggingface\u2019s transformers:\nState-of-the-art natural language processing.",
|
| 433 |
+
"author": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue,\nAnthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe\nDavison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien\nPlu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest,\nand Alexander M. Rush. 2020.",
|
| 434 |
+
"venue": null,
|
| 435 |
+
"url": "http://arxiv.org/abs/1910.03771"
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"39": {
|
| 440 |
+
"title": "Anserini: Enabling\nthe use of lucene for information retrieval research.",
|
| 441 |
+
"author": "Peilin Yang, Hui Fang, and Jimmy Lin. 2017.",
|
| 442 |
+
"venue": "In Proceedings of the 40th International ACM SIGIR Conference\non Research and Development in Information Retrieval, SIGIR \u201917, page\n1253\u20131256, New York, NY, USA. Association for Computing Machinery.",
|
| 443 |
+
"url": "https://doi.org/10.1145/3077136.3080721"
|
| 444 |
+
}
|
| 445 |
+
}
|
| 446 |
+
],
|
| 447 |
+
"url": "http://arxiv.org/html/2311.09356v3"
|
| 448 |
+
}
|
20241001/2311.10122v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2312.01314v2.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2312.05492v6.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2312.06908v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2312.07783v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20241001/2312.08255v4.json
ADDED
@@ -0,0 +1,559 @@
| 1 |
+
{
|
| 2 |
+
"title": "OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods",
|
| 3 |
+
"abstract": "Optical coherence tomography (OCT) is a non-invasive imaging technique with extensive clinical applications in ophthalmology. OCT enables the visualization of the retinal layers, playing a vital role in the early detection and monitoring of retinal diseases. OCT uses the principle of light wave interference to create detailed images of the retinal microstructures, making it a valuable tool for diagnosing ocular conditions. This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology. The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The images were acquired with an Optovue Avanti RTVue XR using raster scanning protocols with dynamic scan length and image resolution. Each retinal b-scan was acquired by centering on the fovea and interpreted and cataloged by an experienced retinal specialist. In this work, we applied Deep Learning classification techniques to this new open-access dataset.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Background & Summary",
|
| 9 |
+
"text": "Optical coherence tomography (OCT) is a non-invasive imaging modality that is of great importance in clinical ophthalmology [1 ###reference_b1###, 2 ###reference_b2###]. OCT is one of the most widely used, rapidly developing medical imaging technologies. Today, visualization of the neural tissue is not limited to the macular area as it was at the beginning of OCT [3 ###reference_b3###] but also to the vascular structures as well [4 ###reference_b4###]. OCT imaging of the retina was first proposed by Huang et al. [5 ###reference_b5###] in 1991. OCT utilizes the basic principle of low coherent light interferometry to detect the backscattered near-infrared light to reconstruct the depth profile of the biological tissue sample. The relatively low resolution of the first OCT devices has been gradually improved so that the image quality is now able to resolve more subtle changes in retinal morphology. Numerous studies have shown that OCT can be used in monitoring and confirming many common and sight-threatening ocular conditions, such as glaucoma [6 ###reference_b6###], diabetic retinopathy [7 ###reference_b7###], and age-related macular degeneration [8 ###reference_b8###].\nIn this work, we present a new open-access OCT dataset for\nImage-Based Deep Learning Methods (OCTDL) comprising over 2000 OCT images labeled according to various pathological conditions. The OCTDL dataset includes macular raster scans of Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID) with the following pathological conditions: Macular Neovascular membranes (MNV), Disorganization of Retinal Inner Layers (DRIL), drusen, Macular Edema (ME), and Macular Hole (MH). 
We also analyzed OCT scans from existing public datasets and applied Deep Learning (DL) classification methods to these as well as to the OCTDL dataset and with combinations of the OCTDL dataset and publicly available datasets. Table 1 ###reference_### lists a comparative analysis of published OCT datasets: Kermany [9 ###reference_b9###] dataset, published in 2019, remains the most extensive in terms of the number of OCT images. The second largest OCT image open-access dataset is provided in our new dataset, OCTDL, which is described in this work. The most represented diseases in the published datasets are AMD (more than ten times), DME (more than three times), and central serous chorioretinopathy (CSC) (more than three times). The most common equipment used for capturing OCT images was the Heidelberg Engineering Spectralis and Zeiss Cirrus systems, as these OCT systems provide high-resolution and wide-spectrum eye images for diagnosing various ocular conditions."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Open-access datasets",
|
| 15 |
+
"text": "The RETOUCH [10 ###reference_b10###] dataset was sourced from the retinal OCT fluid challenge of MICCAI 2017. This dataset features 70 OCT volumes labeled for retinal fluid types \u2014 intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelial detachment (PED), related to ME secondary to AMD and RVO. The training data incorporated varying volumes from different OCT systems (Cirrus, Triton, Spectralis) labeled for different types of fluid manually by experienced human graders. The B-scans were annotated at the Medical University of Vienna and Radboud University Medical Center. The RETOUCH dataset is widely utilized in multiple studies related to retinal fluid classification and segmentation [11 ###reference_b11###].\nThe University of Minnesota (UMN) [12 ###reference_b12###] dataset comprises 600 OCT B-scan images from exudative AMD subjects. Each subject\u2019s data includes approximately 100 B-scans, with the most significant area containing fluid chosen for exporting. The dataset includes manual annotation of IRF, SRF, and PED regions, enabling validation of segmentation algorithms. Challenges include a large number of fluid regions, making segmentation a complex task.\nThe OPTIMA [13 ###reference_b13###] dataset, derived from the MICCAI 2015 cyst segmentation challenge, provides 30 macular volumes collected from different ophthalmic OCT devices: Cirrus, Spectralis, Topcon, and Nidek. This dataset is primarily used for IRF segmentation and was annotated by experienced human graders. The dataset was split into training and testing subsets with the macular scans. The challenge with this dataset is the precise localization of IRF segmentation areas contained in the volumes obtained from different devices.\nThe Duke [14 ###reference_b14###] dataset is a public dataset provided by Duke University, featuring 110 annotated OCT B-scans from patients with severe DME. 
The scans are annotated with eight retinal layer boundaries, aiding the training and testing of segmentation algorithms. Special attention was given to anonymity, enabling public access to the dataset.\nThe healthy controls multiple sclerosis (HCMS) [15 ###reference_b15###] dataset, provided by the Johns Hopkins University, contains OCT scans of 35 subjects featuring both healthy and multiple sclerosis subjects. The scans are annotated to limited semantic fluid regions, with additional preprocessing required to validate segmentation performance.\nThe Kermany [9 ###reference_b9###] dataset, with 207130 OCT B-scan images, was constructed to categorize conditions including choroidal neovascularization (CNV), DME, drusen, and normal. Annotations were done by tiered graders, enabling an extensive dataset for retinal fluid labels in maculopathies.\nThe open-access OCTID [16 ###reference_b16###] dataset comprises more than 500 high-resolution OCT images categorized across distinct pathological conditions. The dataset encompasses normal, MH, AMD, Central Serous Retinopathy (CSR), and Diabetic Retinopathy (DR). The dataset images are from raster scans, with a 2mm scan length and a resolution of 512x1024 pixels. Moreover, 25 normal OCT images are supplemented with precise delineations for accurate OCT image segmentation evaluation. The dataset serves as a valuable resource for early diagnosis and monitoring of retinal diseases.\nThe OCTDL [17 ###reference_b17###] dataset, reported here, comprises 2064 images categorized into various diseases and eye conditions. These high-resolution OCT B-scans allow the visualization of the retinal layers centered on the fovea, the posterior vitreous body, and the choroidal vessels. This large open-access dataset is provided to aid in the diagnosing and monitoring of retinal diseases. 
The dataset was released for research and algorithm development, and it offers fully labeled images to advance automatic processing and early disease detection. Updates are planned for ongoing enhancement with additional clinical populations and samples."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "1.2",
|
| 19 |
+
"parent_section_id": "1",
|
| 20 |
+
"section_name": "Limited access datasets",
|
| 21 |
+
"text": "Schlegl et al. [19 ###reference_b19###] dataset contains 1200 OCT B-scan volumes associated with AMD, DME, and Retinal Vein Occlusions, segmented by two experienced retinal specialists, to enable quantification of macular fluid in these conditions.\nGao et al. [22 ###reference_b22###] provides 52 B-scan volumes that of Central Serous Chorioretinopathy (CSC). Their work introduced a deep learning model, double-branched and area-constraint fully convolutional networks (DA-FCN), which provides substantial high performance in segmenting subretinal fluid.\nLee et al. [18 ###reference_b18###] dataset features 1289 B-scan images, which were provided to aid in the automated segmentation of ME using a convolutional neural network (CNN) to demonstrate high concordance between machine learning and expert human segmentation of the OCT scans.\nRao et al. [23 ###reference_b23###] OCT dataset consists of 150 macular volumes for retinal fluid segmentation that were used to study the effects of signal noise and motion artifacts in segmenting sub-retinal fluid.\nYang et al. [24 ###reference_b24###] dataset has 103 OCT volumes that were used for the automatic assessment of neurosensory retinal detachment and introduced the residual multiple pyramid pooling network (RMPPNet) to address segmentation challenges in Spectral Domain OCT images.\nBao et al. [25 ###reference_b25###] dataset comprised 240 B-scans for PED segmentation. The attention multi-scale network (AM-Net) architecture was used to address the uneven sizes of PED and achieved accurate segmentation in the OCT-B scans.\nPawan et al. [26 ###reference_b26###] dataset of 25 macular volumes aimed at segmenting SRF from central serous chorioretinopathy (CSCR) OCT images, and employed an enhanced SegCaps architecture, termed DRIP-Caps that provided an advanced alternative to existing models in segmentation of fluid in CSCR.\nHu et al. 
[21 ###reference_b21###] dataset comprised 70 training, 15 testing, and 15 cases containing 126 scans each to segment SRF and PED lesions, using deep neural networks together with Atrous Spatial Pyramid Pooling (ASPP).\nVenhuizen et al. [20 ###reference_b20###] collected 221 OCT volumes (6158 B-scans) to segment intraretinal cystoid fluid (IRC) using a neural network cascade that significantly boosted performance by incorporating prior anatomical information."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Methods",
|
| 27 |
+
"text": "The B-scan OCT images were acquired using a raster scanning protocol with dynamic scan length and image resolution and obtained with an Optovue Avanti RTVue XR. Each retinal scan was taken after centering the scan area over the macular fossa (fovea) and further interpreted and cataloged by an experienced retinal specialist. Axial and transverse resolutions were 5 m and 15 m, respectively. A superluminescent diode (SLD) with a wavelength of 840 nm served as the optical source. A beam of light directed toward the tissues forms an interference pattern with back-reflected light from the retina. This occurs due to the interaction of waves reflected from the tissue surface and waves that have traveled deeper into the tissue. The back-reflected waves travel back to the beam splitter, where interference occurs. The interference fringes are detected by a detector that records the phase difference between the back-reflected waves. By measuring the difference in the time delay of interference fringes as a function of depth in the tissue, a 2D image of the internal structures of the retina is created. This method produces detailed, high-resolution images of the eye\u2019s internal structures. Each image pixel\u2019s light intensity corresponds to the wave reflected from a certain depth. Grey scale images are formed based on different intensities of reflected light from various retina structures supra- and underlying tissues. Fig.1 ###reference_### shows an OCT image of a healthy normal retina of the fovea with retinal and choroidal structures. In Fig.1 ###reference_###, darker areas (hyporeflective: 2, 8, 9, 16) may correspond to places where light is absorbed or scattered, and lighter (hyperreflective: 1, 3, 13, 14, 15) areas to places where back reflection occurs. 
Thus, the grey scale images visualize tissue structures and layers based on their optical properties and differences in the intensity of light reflected from different depths.\nThe dataset labeling procedure for this study was performed in several steps:\nAssigning a group of 7 medical students for initial image labeling. Each student was trained in retinal pathology detection. Students performed independent labeling of an entire dataset. Where disagreement occurred, a discussion on the differences in their labels was undertaken until consensus agreement on each case. Patients with ambiguous diagnoses were screened out for further peer review.\nTwo experienced clinical specialists (A.S. and A.K.) then performed independent labeling with any disagreements resolved through consensus agreement for each case.\nThe head of the clinic experts (A.N.) confirmed the final diagnosis for all patients.\nStudents performed labeling on at most 100 images per session and experienced experts on at most 200 images per session. Sessions were limited to one per day to prevent fatigue and to sustain concentration.\nIn this section, we provide a brief description of each of the disease groups.\n###figure_1###"
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Epiretinal Membrane",
"text": "ERMs can develop idiopathically or secondary to intraocular surgery or inflammation, and are characterized by the proliferation of glial tissue on the retina\u2019s inner surface in the macular area, Fig.7 ###reference_###. Pathologic connective tissue overgrowth results in epiretinal fibrosis (fibrosis of the inner border membrane, epiretinal membrane). Clinically, the disease is characterized by thickening and wrinkling of the inner limiting membrane, sometimes called cellophane retinopathy because of its appearance on fundus examination[41 ###reference_b41###].\n###figure_2### As the ERM matures, vitreo-retinal traction can deform the retina, reduce visual acuity, cause metamorphopsia, and lead to macular tears and holes. In such cases, there is an irreversible loss of visual function without timely surgical intervention requiring an ERM peel[42 ###reference_b42###].\nThe study was approved by the ethics committee of Ural Federal University Named after the First President of Russia B. N. Yeltsin (Conclusion No. 1, dated 1 February 2023). Informed written consent was obtained from all subjects involved in the study."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Data records",
"text": "The OCTDL dataset is available at Mendeley [17 ###reference_b17###]. The final release contains 2064 images of 821 patients. All images are stored in JPG format in separate folders corresponding to the disease labels. Each file\u2019s name consists of the disease label, the patient ID, and a sequence number. Thus, the file path looks like \u2019/OCTDL/[label]/[label]_[patient_id]_[n].jpg\u2019. An additional file, \u2019OCTDL_labels.csv\u2019, contains the following columns:\n\u2019file_name\u2019, \u2019disease\u2019, \u2019subcategory\u2019, \u2019condition\u2019, \u2019patient_id\u2019, \u2019eye\u2019, \u2019sex\u2019, \u2019year\u2019, \u2019image_width\u2019, and \u2019image_height\u2019. Table 2 ###reference_### shows the distribution of images in the dataset. Data were collected in Yekaterinburg, Russia, from patients aged 20 to 93 years, with a male-to-female ratio of 3:2 and a mean age of 63 years. Age, sex, and eye (right (OD) or left (OS)) are given for the images for which this information was available for publication."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Technical Validation",
"text": "In this work, we tested the performance of the DL architectures VGG16 [43 ###reference_b43###] and ResNet50 [44 ###reference_b44###] on our dataset (OCTDL). VGG16 and ResNet50 are well-established and widely recognized convolutional neural networks (CNNs). They have been extensively studied and benchmarked on various OCT datasets [45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###]. Therefore, we can establish a strong baseline for the OCTDL dataset\u2019s performance using these architectures. VGG and ResNet are considered classical architectures; however, they still perform remarkably well on many image classification problems [48 ###reference_b48###, 49 ###reference_b49###, 50 ###reference_b50###].\nVGG16 is a 16-layer, relatively extensive DL network with 138 million parameters. However, the simplicity of the VGG16 architecture is its main attraction. VGG16 has 13 convolutional layers and three fully connected layers, each followed by a ReLU activation function, five max pooling operations, and a softmax activation function.\nResNet was based on the VGG neural networks; however, a ResNet has fewer filters and is less complex than a VGG. Using shortcut connections, ResNet provided a novel way to use more convolutional layers without running into the vanishing gradient problem [51 ###reference_b51###]. A shortcut connection skips some layers, converting a regular network into a residual network. ResNet50 is a 50-layer CNN that consists of 48 convolutional layers, one max pooling layer, and one average pooling layer.\nThe OCTDL dataset was randomly split into training, validation, and test subsets in the proportion of 60:10:30 at the patient level, so that images from one patient appear in only one of the subsets. For all experiments, we used the cross-entropy loss function and the Adaptive Moment Estimation (ADAM) optimizer with a learning rate of 0.0005. 
For data augmentation, we used random crop, horizontal and vertical flips, rotation, translation, and Gaussian blur.\nWe can navigate from the disease to the corresponding pathological condition(s) using a CSV file with labels for each image. This is necessary, for example, to combine different available datasets. Thus, for our experiments, we combined OCTDL with the OCTID and Kermany datasets. DME is a particular case of DR, and MH is a particular case of VID, so we can combine them into one category for classification purposes. Drusen and MNV are the early and late stages of AMD, respectively. The OCTDL and OCTID datasets were mixed and randomly split into subsets. For Kermany, we used OCTDL as the test subset.\nThe following presents the results of training neural networks exclusively on our dataset and of combining our dataset with the OCTID and Kermany datasets to solve the classification problem. Confusion matrices for training ResNet50 and VGG16 on our proposed dataset are presented in Fig.8 ###reference_###. As metrics, we used Accuracy (ACC), F1-score, Area Under the Curve (AUC), Precision (P), and Recall (R). Table 3 ###reference_### summarizes the results of the experiments.\n###figure_3### The class-wise balanced accuracy across all categories within our dataset approached 0.979, with the highest accuracy observed for AMD at 0.963 and the lowest for RVO at 0.633. Similarly, the class-wise recall demonstrated a comparable pattern, with AMD exhibiting the highest value at 0.975 and RVO displaying the weakest at 0.652. Concatenation of multiple datasets yielded favorable outcomes: this approach augmented the variety of diseases within open datasets and enabled the training of neural networks using images acquired from different OCT systems. 
This strategy holds the potential to bolster long-term reliability and enhance overall classification accuracy.\nFurther potential applications of the OCTDL dataset include the automated segmentation of OCT image layers, for which manual segmentation will also be performed. Labels with pathological conditions are also available in the OCTDL dataset for every image. Training on both disease and pathological condition labels with further voting ensembles could also increase classification accuracy. Semi- and unsupervised anomaly detection [52 ###reference_b52###] has also been tested for some diseases and is a promising direction for the development of Artificial Intelligence (AI) in OCT.\nThe results show that the new OCTDL dataset may be used to support and expand the application of AI in ophthalmology [53 ###reference_b53###]. The dataset will be extended and will become more balanced with respect to rare conditions, including inherited retinal dystrophies and retinopathy of prematurity, which may assist in diagnosing and managing these and related sight-threatening conditions [54 ###reference_b54###]."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Code availability",
"text": "The code used to generate the results in this paper is available at https://github.com/MikhailKulyabin/OCTDL ###reference_###."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Author contributions statement",
"text": "Data collection, A.N., A.S., A.K.; conceptualization, M.K., A.Z. and A.N.; software, M.K.; writing\u2014original draft preparation, M.K., A.N., V.B. and M.R.; writing\u2014review and editing, V.B., M.R. and P.C.; supervision, A.M., S.K., A.B. All authors have read and agreed to the published version of the manuscript."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Competing interests",
"text": "The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparative analysis of published OCT datasets.</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"Sx1.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"Sx1.T1.1.1.1.1\">Year</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx1.T1.1.1.1.2\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx1.T1.1.1.1.3\">Dataset Size</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx1.T1.1.1.1.4\">Equipment Used</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx1.T1.1.1.1.5\">Labels</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx1.T1.1.1.1.6\">Access</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx1.T1.1.2.1.1\">2015</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx1.T1.1.2.1.2\">Duke <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx1.T1.1.2.1.3\">110 B-scan images</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx1.T1.1.2.1.4\">Not specified</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx1.T1.1.2.1.5\">Severe DME</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx1.T1.1.2.1.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.3.2.1\">2016</th>\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx1.T1.1.3.2.2\">OPTIMA <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib13\" title=\"\">13</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.3.2.3\">30 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.3.2.4\">Cirrus, Topcon</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.3.2.5\">IRF</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.3.2.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.4.3.1\">2017</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.4.3.2\">Lee <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib18\" title=\"\">18</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.4.3.3\">1289 B-scan images</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.4.3.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.4.3.5\">ME</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.4.3.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.5.4.1\">2017</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.5.4.2\">UMN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.5.4.3\">600 B-scan images</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.5.4.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.5.4.5\">Exudative AMD</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.5.4.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.6.5.1\">2018</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.6.5.2\">Kermany <cite 
class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib9\" title=\"\">9</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.6.5.3\">207130 B-scan images</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.6.5.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.6.5.5\">CNV, DME, Drusen, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.6.5.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.7.6.1\">2018</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.7.6.2\">Schlegl <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.7.6.3\">1200 B-scan volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.7.6.4\">Cirrus, Topcon</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.7.6.5\">AMD, DME, RVO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.7.6.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.8.7.1\">2018</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.8.7.2\">OCTID <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib16\" title=\"\">16</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.8.7.3\">500 B-scan images</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.8.7.4\">Not specified</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.8.7.5\">MH, AMD, CSR, DR, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.8.7.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.9.8.1\">2018</th>\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx1.T1.1.9.8.2\">Venhuizen <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib20\" title=\"\">20</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.9.8.3\">221 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.9.8.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.9.8.5\">AMD</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.9.8.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.10.9.1\">2019</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.10.9.2\">Hu <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib21\" title=\"\">21</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.10.9.3\">100 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.10.9.4\">Not specified</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.10.9.5\">SRF, PED</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.10.9.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.11.10.1\">2019</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.11.10.2\">RETOUCH <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib10\" title=\"\">10</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.11.10.3\">70 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.11.10.4\">Cirrus, Triton</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.11.10.5\">AMD, RVO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.11.10.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.12.11.1\">2019</th>\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx1.T1.1.12.11.2\">HCMS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib15\" title=\"\">15</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.12.11.3\">35 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.12.11.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.12.11.5\">Healthy Controls, MS</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.12.11.6\">open</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.13.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.13.12.1\">2019</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.13.12.2\">Gao <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib22\" title=\"\">22</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.13.12.3\">52 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.13.12.4\">Spectralis</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.13.12.5\">CSC</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.13.12.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.14.13.1\">2019</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.14.13.2\">Rao <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib23\" title=\"\">23</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.14.13.3\">150 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.14.13.4\">Cirrus</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.14.13.5\">Sub-retinal fluid segmentation</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.14.13.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.15.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.15.14.1\">2020</th>\n<td class=\"ltx_td 
ltx_align_left\" id=\"Sx1.T1.1.15.14.2\">Yang <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib24\" title=\"\">24</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.15.14.3\">103 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.15.14.4\">Cirrus</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.15.14.5\">CSC</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.15.14.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.16.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.16.15.1\">2020</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.16.15.2\">Bao <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib25\" title=\"\">25</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.16.15.3\">240 B-scan images</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.16.15.4\">Not specified</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.16.15.5\">AMD, PED</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.16.15.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.17.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx1.T1.1.17.16.1\">2021</th>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.17.16.2\">Pawan <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib26\" title=\"\">26</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.17.16.3\">25 volumes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.17.16.4\">Cirrus</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.17.16.5\">CSC</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx1.T1.1.17.16.6\">limited</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.18.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx1.T1.1.18.17.1\">2023</th>\n<td class=\"ltx_td 
ltx_align_left ltx_border_bb\" id=\"Sx1.T1.1.18.17.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.18.17.2.1\">OCTDL</span> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08255v4#bib.bib17\" title=\"\">17</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx1.T1.1.18.17.3\">2064 B-scan images</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx1.T1.1.18.17.4\">Optovue Avanti</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx1.T1.1.18.17.5\">AMD, DME, ERM, NO, RAO, RVO, VID</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx1.T1.1.18.17.6\">open</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Comparative analysis of published OCT datasets."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Dataset distribution by a corresponding disease.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T2.1.1.1.1\">Disease</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T2.1.1.1.2\">Label</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T2.1.1.1.3\">Number of Scans</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T2.1.1.1.4\">Number of Patients</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T2.1.2.1.1\">Age-related Macular Degeneration</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T2.1.2.1.2\">AMD</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T2.1.2.1.3\">1231</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T2.1.2.1.4\">421</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.3.2.1\">Diabetic Macular Edema</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.3.2.2\">DME</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.3.2.3\">147</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.3.2.4\">107</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.4.3.1\">Epiretinal Membrane</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.4.3.2\">ERM</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.4.3.3\">155</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.4.3.4\">71</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx3.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.5.4.1\">Normal</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.5.4.2\">NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.5.4.3\">332</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.5.4.4\">110</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.6.5.1\">Retinal Artery Occlusion</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.6.5.2\">RAO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.6.5.3\">22</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.6.5.4\">11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.7.6.1\">Retinal Vein Occlusion</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.7.6.2\">RVO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.7.6.3\">101</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.7.6.4\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.8.7.1\">Vitreomacular Interface Disease</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.8.7.2\">VID</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.8.7.3\">76</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T2.1.8.7.4\">51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"Sx3.T2.1.9.8.1\">Total</td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"Sx3.T2.1.9.8.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"Sx3.T2.1.9.8.3\">2064</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"Sx3.T2.1.9.8.4\">821</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: Dataset distribution by a corresponding disease."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Resulting metrics on different combinations of datasets.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.2\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.3\">Labels</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.4\">ACC</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.5\">F1</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.6\">AUC</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.7\">P</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T3.1.1.1.8\">R</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.1\">ResNet50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.2.1.2.1\">OCTDL</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.3\">AMD, DME, ERM, NO, RAO, RVO, VID</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.4\">0.846</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.5\">0.866</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.6\">0.988</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" 
id=\"Sx4.T3.1.2.1.7\">0.898</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T3.1.2.1.8\">0.846</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.1\">VGG16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.3.2.2.1\">OCTDL</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.3\">AMD, DME, ERM, NO, RAO, RVO, VID</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.4\">0.859</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.5\">0.869</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.6\">0.977</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.7\">0.888</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.3.2.8\">0.859</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.1\">ResNet50</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.2\">OCTID</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.3\">AMD, CSR, DR, MH, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.4\">0.923</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.5\">0.927</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.6\">0.979</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.7\">0.932</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.4.3.8\">0.923</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.1\">VGG16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.2\">OCTID</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.3\">AMD, CSR, DR, MH, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.4\">0.932</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.5\">0.933</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.6\">0.970</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx4.T3.1.5.4.7\">0.939</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.5.4.8\">0.932</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.1\">ResNet50</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.2\">Kermany</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.3\">CNV, DME, Drusen, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.4\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.5\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.6\">0.999</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.7\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.6.5.8\">0.998</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.1\">VGG16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.2\">Kermany</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.3\">CNV, DME, Drusen, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.4\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.5\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.6\">0.999</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.7\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.7.6.8\">0.998</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.1\">ResNet50</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.2\">OCTID + <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.8.7.2.1\">OCTDL</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.3\">AMD, DR, MH, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.4\">0.957</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.5\">0.955</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.6\">0.996</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.7\">0.954</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.8.7.8\">0.957</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.9.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.1\">VGG16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.2\">OCTID + <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.9.8.2.1\">OCTDL</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.3\">AMD, DR, MH, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.4\">0.975</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.5\">0.977</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.6\">0.998</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.7\">0.979</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.9.8.8\">0.975</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.10.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.1\">ResNet50</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.2\">Kermany + <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.10.9.2.1\">OCTDL</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.3\">CNV, DME, Drusen, NO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.4\">0.833</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.5\">0.805</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.6\">0.963</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.7\">0.823</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.1.10.9.8\">0.833</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.11.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.1\">VGG16</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.2\">Kermany + <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.11.10.2.1\">OCTDL</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.3\">CNV, DME, Drusen, NO</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.4\">0.818</td>\n<td class=\"ltx_td 
ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.5\">0.798</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.6\">0.966</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.7\">0.823</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T3.1.11.10.8\">0.818</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 78 |
+
"capture": "Table 3: Resulting metrics on different combinations of datasets."
|
| 79 |
+
}
|
| 80 |
+
},
|
| 81 |
+
"image_paths": {
|
| 82 |
+
"1": {
|
| 83 |
+
"figure_path": "2312.08255v4_figure_1.png",
|
| 84 |
+
"caption": "Figure 1: Structure of the posterior segment of the eye as visualized with OCT B-scan and labelled accordingly from inner to outer retina. 1 - Posterior Hyaloid Membrane; 2 - preretinal space; 3 - retinal nerve fiber layer and inner limiting membrane; 4 - ganglion cell layer; 5 - inner plexiform layer; 6 - inner nuclear layer; 7 - outer plexiform layer; 8 - outer nuclear layer; 9 - Henle\u2019s nerve fiber layer; 10 - external limiting membrane; 11 - myoid zone of the photoreceptors; 12 - ellipsoid zone of the photoreceptors; 13 - outer segments of the photoreceptors; 14 - interdigitation zone of the photoreceptors; 15 - retinal pigment epithelium and Bruch\u2019s membrane; 16 - choriocapillarises.",
|
| 85 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_1.png"
|
| 86 |
+
},
|
| 87 |
+
"2": {
|
| 88 |
+
"figure_path": "2312.08255v4_figure_2.png",
|
| 89 |
+
"caption": "Figure 2: Age-related Macular Degeneration (AMD). Initial stage (a) with an arrow indicating a solitary hard drusen deposit on Bruch\u2019s membrane below the basolateral membrane of the retinal pigment epithelium; Intermediate stage (b) with medium-sized cuticular drusen which gives a ribbon-like or saw-tooth pattern of hyperreflectivity on OCT indicated by the arrow; Intermediate stage (c) with drusenoid detachment of retinal pigment epithelium with hyporeflective subretinal space filled with fluid and the retinal pigment epithelium detached from Bruch\u2019s membrane.",
|
| 90 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_2.png"
|
| 91 |
+
},
|
| 92 |
+
"3": {
|
| 93 |
+
"figure_path": "2312.08255v4_figure_3.png",
|
| 94 |
+
"caption": "Figure 3: Age-related Macular Degeneration (AMD). Markers (a): 1 - outer retinal tubulation or cystic spaces; 2 - Subretinal fibrosis causing distortion of the macular and hyporeflectivity of the underlying choroid. Types of fluid (b): 1 - subretinal fluid; 2 - intraretinal fluid; 3 - sub-retinal pigment epithelial fluid accumulation.",
|
| 95 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_3.png"
|
| 96 |
+
},
|
| 97 |
+
"4": {
|
| 98 |
+
"figure_path": "2312.08255v4_figure_4.png",
|
| 99 |
+
"caption": "Figure 4: (a) Signs of Diabetic Macular Edema (DME): 1 - Hard exudates (HE), 2 - Intraretinal fluid (IRF), 3 - Hyperreflective foci; (b) Disorganization of retinal inner layers (DRIL).",
|
| 100 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_4.png"
|
| 101 |
+
},
|
| 102 |
+
"5": {
|
| 103 |
+
"figure_path": "2312.08255v4_figure_5.png",
|
| 104 |
+
"caption": "Figure 5: Retinal Vein Occlusion (RVO). Cystic macular edema in central retinal vein thrombosis. (a): 1 - Intraretinal fluid (IRF), 2 - hyperreflectivity of the inner retinal layers; Signs of Retinal Artery Occlusion (RAO) (a): 1 - Increased hyperreflectivity of the inner retina following ischemia, 2 - prominent middle limiting membrane (p-MLM).",
|
| 105 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_5.png"
|
| 106 |
+
},
|
| 107 |
+
"6": {
|
| 108 |
+
"figure_path": "2312.08255v4_figure_6.png",
|
| 109 |
+
"caption": "Figure 6: Vitreomacular Interface Disease (VID). Vitreomacular traction syndrome (a): 1 - Posterior hyaloid membrane, 2 - Vitreomacular adhesion zone, 3 - Emerging neurosensory retinal defect; Retinal interface disorder (b): 1 - intraretinal fluid (IRF), 2 - Edges of the tear, 3 - detached posterior hyaloid membrane; Lamellar tear (c).",
|
| 110 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_6.png"
|
| 111 |
+
},
|
| 112 |
+
"7": {
|
| 113 |
+
"figure_path": "2312.08255v4_figure_7.png",
|
| 114 |
+
"caption": "Figure 7: VID by the epiretinal membrane (a); ERM with foveola deformity and Ectopia (b): 1 - ERM, 2 - Ectopia.",
|
| 115 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_7.png"
|
| 116 |
+
},
|
| 117 |
+
"8": {
|
| 118 |
+
"figure_path": "2312.08255v4_figure_8.png",
|
| 119 |
+
"caption": "Figure 8: Confusion matrices of ResNet50 (a) and VGG16 (b) models, trained on the OCTDL dataset.",
|
| 120 |
+
"url": "http://arxiv.org/html/2312.08255v4/extracted/5893953/images/figure_8.png"
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
"validation": true,
|
| 124 |
+
"references": [
|
| 125 |
+
{
|
| 126 |
+
"1": {
|
| 127 |
+
"title": "Handbook of Retinal OCT: Optical Coherence Tomography E-Book (Elsevier Health Sciences, 2021).",
|
| 128 |
+
"author": "Duker, J. S., Waheed, N. K. & Goldman, D.",
|
| 129 |
+
"venue": null,
|
| 130 |
+
"url": null
|
| 131 |
+
}
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"2": {
|
| 135 |
+
"title": "Oct and oct angiography update: Clinical application to age-related macular degeneration, central serous chorioretinopathy, macular telangiectasia, and diabetic retinopathy.",
|
| 136 |
+
"author": "Zhang, L., Van Dijk, E. H., Borrelli, E., Fragiotta, S. & Breazzano, M. P.",
|
| 137 |
+
"venue": "\\JournalTitleDiagnostics 13, 232 (2023).",
|
| 138 |
+
"url": null
|
| 139 |
+
}
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"3": {
|
| 143 |
+
"title": "Practical handbook of OCT (JP Medical Ltd, 2012).",
|
| 144 |
+
"author": "Lumbroso, B. & Rispoli, M.",
|
| 145 |
+
"venue": null,
|
| 146 |
+
"url": null
|
| 147 |
+
}
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"4": {
|
| 151 |
+
"title": "Optical coherence tomography angiography in primary eye care.",
|
| 152 |
+
"author": "Coffey, A. M. et al.",
|
| 153 |
+
"venue": "\\JournalTitleClinical and Experimental Optometry 104, 3\u201313 (2021).",
|
| 154 |
+
"url": null
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"5": {
|
| 159 |
+
"title": "Optical coherence tomography.",
|
| 160 |
+
"author": "Huang, D. et al.",
|
| 161 |
+
"venue": "\\JournalTitlescience 254, 1178\u20131181 (1991).",
|
| 162 |
+
"url": null
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"6": {
|
| 167 |
+
"title": "Optical coherence tomography and glaucoma.",
|
| 168 |
+
"author": "Geevarghese, A., Wollstein, G., Ishikawa, H. & Schuman, J. S.",
|
| 169 |
+
"venue": "\\JournalTitleAnnual review of vision science 7, 693\u2013726 (2021).",
|
| 170 |
+
"url": null
|
| 171 |
+
}
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"7": {
|
| 175 |
+
"title": "Diabetic retinopathy and diabetic macular oedema pathways and management: Uk consensus working group.",
|
| 176 |
+
"author": "Amoaku, W. M. et al.",
|
| 177 |
+
"venue": "\\JournalTitleEye 34, 1\u201351 (2020).",
|
| 178 |
+
"url": null
|
| 179 |
+
}
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"8": {
|
| 183 |
+
"title": "Retinal progression biomarkers of early and intermediate age-related macular degeneration.",
|
| 184 |
+
"author": "Flores, R., Carneiro, ., Tenreiro, S. & Seabra, M. C.",
|
| 185 |
+
"venue": "\\JournalTitleLife 12, 36 (2021).",
|
| 186 |
+
"url": null
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"9": {
|
| 191 |
+
"title": "Identifying medical diagnoses and treatable diseases by image-based deep learning.",
|
| 192 |
+
"author": "Kermany, D. S. et al.",
|
| 193 |
+
"venue": "\\JournalTitlecell 172, 1122\u20131131 (2018).",
|
| 194 |
+
"url": null
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"10": {
|
| 199 |
+
"title": "Retouch: The retinal oct fluid detection and segmentation benchmark and challenge.",
|
| 200 |
+
"author": "Bogunovi\u0107, H. et al.",
|
| 201 |
+
"venue": "\\JournalTitleIEEE transactions on medical imaging 38, 1858\u20131874 (2019).",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"11": {
|
| 207 |
+
"title": "Retifluidnet: A self-adaptive and multi-attention deep convolutional network for retinal oct fluid segmentation.",
|
| 208 |
+
"author": "Rasti, R., Biglari, A., Rezapourian, M., Yang, Z. & Farsiu, S.",
|
| 209 |
+
"venue": "\\JournalTitleIEEE Transactions on Medical Imaging (2022).",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"12": {
|
| 215 |
+
"title": "Fully automated segmentation of fluid/cyst regions in optical coherence tomography images with diabetic macular edema using neutrosophic sets and graph algorithms.",
|
| 216 |
+
"author": "Rashno, A. et al.",
|
| 217 |
+
"venue": "\\JournalTitleIEEE Transactions on Biomedical Engineering 65, 989\u20131001 (2017).",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"13": {
|
| 223 |
+
"title": "Multivendor spectral-domain optical coherence tomography dataset, observer annotation performance evaluation, and standardized evaluation framework for intraretinal cystoid fluid segmentation.",
|
| 224 |
+
"author": "Wu, J. et al.",
|
| 225 |
+
"venue": "\\JournalTitleJournal of Ophthalmology 2016 (2016).",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"14": {
|
| 231 |
+
"title": "Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema.",
|
| 232 |
+
"author": "Chiu, S. J. et al.",
|
| 233 |
+
"venue": "\\JournalTitleBiomedical optics express 6, 1172\u20131194 (2015).",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"15": {
|
| 239 |
+
"title": "Retinal layer parcellation of optical coherence tomography images: Data resource for multiple sclerosis and healthy controls.",
|
| 240 |
+
"author": "He, Y. et al.",
|
| 241 |
+
"venue": "\\JournalTitleData in brief 22, 601\u2013604 (2019).",
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"16": {
|
| 247 |
+
"title": "Octid: Optical coherence tomography image database.",
|
| 248 |
+
"author": "Gholami, P., Roy, P., Parthasarathy, M. K. & Lakshminarayanan, V.",
|
| 249 |
+
"venue": "\\JournalTitleComputers & Electrical Engineering 81, 106532 (2020).",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"17": {
|
| 255 |
+
"title": "Octdl: Optical coherence tomography dataset for image-based deep learning methods, https://doi.org/10.17632/sncdhf53xc (2023).",
|
| 256 |
+
"author": "Kulyabin, M. et al.",
|
| 257 |
+
"venue": null,
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"18": {
|
| 263 |
+
"title": "Deep-learning based, automated segmentation of macular edema in optical coherence tomography.",
|
| 264 |
+
"author": "Lee, C. S. et al.",
|
| 265 |
+
"venue": "\\JournalTitleBiomedical optics express 8, 3440\u20133448 (2017).",
|
| 266 |
+
"url": null
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"19": {
|
| 271 |
+
"title": "Fully automated detection and quantification of macular fluid in oct using deep learning.",
|
| 272 |
+
"author": "Schlegl, T. et al.",
|
| 273 |
+
"venue": "\\JournalTitleOphthalmology 125, 549\u2013558 (2018).",
|
| 274 |
+
"url": null
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"20": {
|
| 279 |
+
"title": "Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography.",
|
| 280 |
+
"author": "Venhuizen, F. G. et al.",
|
| 281 |
+
"venue": "\\JournalTitleBiomedical optics express 9, 1545\u20131569 (2018).",
|
| 282 |
+
"url": null
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"21": {
|
| 287 |
+
"title": "Automated segmentation of macular edema in oct using deep neural networks.",
|
| 288 |
+
"author": "Hu, J., Chen, Y. & Yi, Z.",
|
| 289 |
+
"venue": "\\JournalTitleMedical image analysis 55, 216\u2013227 (2019).",
|
| 290 |
+
"url": null
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"22": {
|
| 295 |
+
"title": "Double-branched and area-constraint fully convolutional networks for automated serous retinal detachment segmentation in sd-oct images.",
|
| 296 |
+
"author": "Gao, K. et al.",
|
| 297 |
+
"venue": "\\JournalTitleComputer methods and programs in biomedicine 176, 69\u201380 (2019).",
|
| 298 |
+
"url": null
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"23": {
|
| 303 |
+
"title": "Deep learning based sub-retinal fluid segmentation in central serous chorioretinopathy optical coherence tomography scans.",
|
| 304 |
+
"author": "Rao, T. N., Girish, G., Kothari, A. R. & Rajan, J.",
|
| 305 |
+
"venue": "In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 978\u2013981 (IEEE, 2019).",
|
| 306 |
+
"url": null
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"24": {
|
| 311 |
+
"title": "Rmppnet: residual multiple pyramid pooling network for subretinal fluid segmentation in sd-oct images.",
|
| 312 |
+
"author": "Yang, J. et al.",
|
| 313 |
+
"venue": "\\JournalTitleOSA Continuum 3, 1751\u20131769 (2020).",
|
| 314 |
+
"url": null
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"25": {
|
| 319 |
+
"title": "Attention multi-scale network for pigment epithelial detachment segmentation in oct images.",
|
| 320 |
+
"author": "Bao, D., Cheng, X., Zhu, W., Shi, F. & Chen, X.",
|
| 321 |
+
"venue": "In Medical Imaging 2020: Image Processing, vol. 11313, 793\u2013798 (SPIE, 2020).",
|
| 322 |
+
"url": null
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"26": {
|
| 327 |
+
"title": "Capsule network\u2013based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy.",
|
| 328 |
+
"author": "Pawan, S. et al.",
|
| 329 |
+
"venue": "\\JournalTitleMedical & Biological Engineering & Computing 59, 1245\u20131259 (2021).",
|
| 330 |
+
"url": null
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"27": {
|
| 335 |
+
"title": "Real-world outcomes of faricimab in patients with previously treated neovascular age-related macular degeneration.",
|
| 336 |
+
"author": "Pandit, S. A. et al.",
|
| 337 |
+
"venue": "\\JournalTitleOphthalmology Retina (2023).",
|
| 338 |
+
"url": null
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"28": {
|
| 343 |
+
"title": "Age-related macular degeneration.",
|
| 344 |
+
"author": "Thomas, C. J., Mirza, R. G. & Gill, M. K.",
|
| 345 |
+
"venue": "\\JournalTitleMedical Clinics 105, 473\u2013491 (2021).",
|
| 346 |
+
"url": null
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"29": {
|
| 351 |
+
"title": "A systematic review of clinical practice guidelines for age-related macular degeneration.",
|
| 352 |
+
"author": "Han, X. et al.",
|
| 353 |
+
"venue": "\\JournalTitleOphthalmic Epidemiology 30, 213\u2013220 (2023).",
|
| 354 |
+
"url": null
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"30": {
|
| 359 |
+
"title": "Clinical manifestations of cuticular drusen: current perspectives.",
|
| 360 |
+
"author": "Fragiotta, S., Fern\u00e1ndez-Avellaneda, P., Breazzano, M. P. & Scuderi, G.",
|
| 361 |
+
"venue": "\\JournalTitleClinical Ophthalmology 3877\u20133887 (2021).",
|
| 362 |
+
"url": null
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"31": {
|
| 367 |
+
"title": "Incidence and risk of advanced age-related macular degeneration in eyes with drusenoid pigment epithelial detachment.",
|
| 368 |
+
"author": "Shijo, T. et al.",
|
| 369 |
+
"venue": "\\JournalTitleScientific Reports 12, 4715 (2022).",
|
| 370 |
+
"url": null
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"32": {
|
| 375 |
+
"title": "Ion channels in the rpe.",
|
| 376 |
+
"author": "Wimmers, S., Karl, M. O. & Strauss, O.",
|
| 377 |
+
"venue": "\\JournalTitleProgress in retinal and eye research 26, 263\u2013301 (2007).",
|
| 378 |
+
"url": null
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"33": {
|
| 383 |
+
"title": "Diabetic macular edema: evidence-based management.",
|
| 384 |
+
"author": "Browning, D. J., Stewart, M. W. & Lee, C.",
|
| 385 |
+
"venue": "\\JournalTitleIndian journal of ophthalmology 66, 1736 (2018).",
|
| 386 |
+
"url": null
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"34": {
|
| 391 |
+
"title": "Hyperreflective dots on oct as a predictor of treatment outcome in diabetic macular edema: a systematic review.",
|
| 392 |
+
"author": "Huang, H., Jansonius, N. M., Chen, H. & Los, L. I.",
|
| 393 |
+
"venue": "\\JournalTitleOphthalmology Retina 6, 814\u2013827 (2022).",
|
| 394 |
+
"url": null
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"35": {
|
| 399 |
+
"title": "Optical coherence tomography (angiography) biomarkers in the assessment and monitoring of diabetic macular edema.",
|
| 400 |
+
"author": "Suciu, C.-I., Suciu, V.-I., Nicoara, S.-D. et al.",
|
| 401 |
+
"venue": "\\JournalTitleJournal of Diabetes Research 2020 (2020).",
|
| 402 |
+
"url": null
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"36": {
|
| 407 |
+
"title": "Anatomic biomarkers of macular edema associated with retinal vein occlusion.",
|
| 408 |
+
"author": "Ciulla, T. A. et al.",
|
| 409 |
+
"venue": "\\JournalTitleOphthalmology Retina 6, 1206\u20131220 (2022).",
|
| 410 |
+
"url": null
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"37": {
|
| 415 |
+
"title": "Predictors of visual acuity outcomes after anti\u2013vascular endothelial growth factor treatment for macular edema secondary to central retinal vein occlusion.",
|
| 416 |
+
"author": "Sen, P. et al.",
|
| 417 |
+
"venue": "\\JournalTitleOphthalmology Retina 5, 1115\u20131124 (2021).",
|
| 418 |
+
"url": null
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"38": {
|
| 423 |
+
"title": "Retinal oct findings in acute central retinal artery occlusion of varying severity at different disease stages\u2013a retrospective, observational study.",
|
| 424 |
+
"author": "Mangla, R. et al.",
|
| 425 |
+
"venue": "\\JournalTitleInternational Journal of Retina and Vitreous 9, 1\u201310 (2023).",
|
| 426 |
+
"url": null
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"39": {
|
| 431 |
+
"title": "The international vitreomacular traction study group classification of vitreomacular adhesion, traction, and macular hole.",
|
| 432 |
+
"author": "Duker, J. S. et al.",
|
| 433 |
+
"venue": "\\JournalTitleOphthalmology 120, 2611\u20132619 (2013).",
|
| 434 |
+
"url": null
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"40": {
|
| 439 |
+
"title": "Macular hole closure patterns: an updated classification.",
|
| 440 |
+
"author": "Rossi, T. et al.",
|
| 441 |
+
"venue": "\\JournalTitleGraefe\u2019s Archive for Clinical and Experimental Ophthalmology 258, 2629\u20132638 (2020).",
|
| 442 |
+
"url": null
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"41": {
|
| 447 |
+
"title": "Correlation between new oct parameters and metamorphopsia in advanced stages of epiretinal membranes.",
|
| 448 |
+
"author": "Alkabes, M. et al.",
|
| 449 |
+
"venue": "\\JournalTitleActa Ophthalmologica 98, 780\u2013786 (2020).",
|
| 450 |
+
"url": null
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"42": {
|
| 455 |
+
"title": "Idiopathic epiretinal membrane: progression and timing of surgery.",
|
| 456 |
+
"author": "Chua, P. Y., Sandinha, M. T. & Steel, D. H.",
|
| 457 |
+
"venue": "\\JournalTitleEye 36, 495\u2013503 (2022).",
|
| 458 |
+
"url": null
|
| 459 |
+
}
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"43": {
|
| 463 |
+
"title": "Very deep convolutional networks for large-scale image recognition.",
|
| 464 |
+
"author": "Simonyan, K. & Zisserman, A.",
|
| 465 |
+
"venue": "\\JournalTitlearXiv 1409.1556 (2014).",
|
| 466 |
+
"url": null
|
| 467 |
+
}
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"44": {
|
| 471 |
+
"title": "Deep residual learning for image recognition.",
|
| 472 |
+
"author": "He, K., Zhang, X., Ren, S. & Sun, J.",
|
| 473 |
+
"venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770\u2013778, 10.1109/CVPR.2016.90 (2016).",
|
| 474 |
+
"url": null
|
| 475 |
+
}
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"45": {
|
| 479 |
+
"title": "Classification of retinal oct images using deep learning.",
|
| 480 |
+
"author": "Subramanian, M., Shanmugavadivel, K., Naren, O. S., Premkumar, K. & Rankish, K.",
|
| 481 |
+
"venue": "In 2022 International Conference on Computer Communication and Informatics (ICCCI), 1\u20137 (IEEE, 2022).",
|
| 482 |
+
"url": null
|
| 483 |
+
}
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"46": {
|
| 487 |
+
"title": "Oct-based deep-learning models for the identification of retinal key signs.",
|
| 488 |
+
"author": "Leandro, I. et al.",
|
| 489 |
+
"venue": "\\JournalTitleScientific Reports 13, 14628 (2023).",
|
| 490 |
+
"url": null
|
| 491 |
+
}
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"47": {
|
| 495 |
+
"title": "Deep learning for quality assessment of retinal oct images.",
|
| 496 |
+
"author": "Wang, J. et al.",
|
| 497 |
+
"venue": "\\JournalTitleBiomedical optics express 10, 6057\u20136072 (2019).",
|
| 498 |
+
"url": null
|
| 499 |
+
}
|
| 500 |
+
},
|
| 501 |
+
{
|
| 502 |
+
"48": {
|
| 503 |
+
"title": "A deep transfer convolutional neural network framework for eeg signal classification.",
|
| 504 |
+
"author": "Xu, G. et al.",
|
| 505 |
+
"venue": "\\JournalTitleIEEE Access 7, 112767\u2013112776, 10.1109/ACCESS.2019.2930958 (2019).",
|
| 506 |
+
"url": null
|
| 507 |
+
}
|
| 508 |
+
},
|
| 509 |
+
{
|
| 510 |
+
"49": {
|
| 511 |
+
"title": "A skin cancer classification method based on discrete wavelet down-sampling feature reconstruction.",
|
| 512 |
+
"author": "Wu, Q.-e., Yu, Y. & Zhang, X.",
|
| 513 |
+
"venue": "\\JournalTitleElectronics 12, 10.3390/electronics12092103 (2023).",
|
| 514 |
+
"url": null
|
| 515 |
+
}
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"50": {
|
| 519 |
+
"title": "Deep transfer learning for the multilabel classification of chest x-ray images.",
|
| 520 |
+
"author": "Huang, G.-H. et al.",
|
| 521 |
+
"venue": "\\JournalTitleDiagnostics 12, 10.3390/diagnostics12061457 (2022).",
|
| 522 |
+
"url": null
|
| 523 |
+
}
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"51": {
|
| 527 |
+
"title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions.",
|
| 528 |
+
"author": "Hochreiter, S.",
|
| 529 |
+
"venue": "\\JournalTitleInternational Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6, 107\u2013116 (1998).",
|
| 530 |
+
"url": null
|
| 531 |
+
}
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"52": {
|
| 535 |
+
"title": "A multi-scale anomaly detection framework for retinal oct images based on the bayesian neural network.",
|
| 536 |
+
"author": "Mou, L., Liang, L., Gao, Z. & Wang, X.",
|
| 537 |
+
"venue": "\\JournalTitleBiomedical Signal Processing and Control 75, 103619, https://doi.org/10.1016/j.bspc.2022.103619 (2022).",
|
| 538 |
+
"url": null
|
| 539 |
+
}
|
| 540 |
+
},
|
| 541 |
+
{
|
| 542 |
+
"53": {
|
| 543 |
+
"title": "The current state of artificial intelligence in ophthalmology.",
|
| 544 |
+
"author": "Kapoor, R., Walters, S. P. & Al-Aswad, L. A.",
|
| 545 |
+
"venue": "\\JournalTitleSurvey of ophthalmology 64, 233\u2013240 (2019).",
|
| 546 |
+
"url": null
|
| 547 |
+
}
|
| 548 |
+
},
|
| 549 |
+
{
|
| 550 |
+
"54": {
|
| 551 |
+
"title": "Artificial intelligence in retinal disease: clinical application, challenges, and future directions.",
|
| 552 |
+
"author": "Daich Varela, M. et al.",
|
| 553 |
+
"venue": "\\JournalTitleGraefe\u2019s Archive for Clinical and Experimental Ophthalmology 1\u201315 (2023).",
|
| 554 |
+
"url": null
|
| 555 |
+
}
|
| 556 |
+
}
|
| 557 |
+
],
|
| 558 |
+
"url": "http://arxiv.org/html/2312.08255v4"
|
| 559 |
+
}
|
20241001/2312.08367v4.json
ADDED
|
@@ -0,0 +1,185 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "ViLA: Efficient Video-Language Alignment for Video Question Answering",
|
| 3 |
+
"abstract": "We propose an efficient Video-Language\nAlignment (ViLA) network.\nOur ViLA model addresses both efficient frame sampling and effective cross-modal alignment in a unified way.\nIn our ViLA network, we design a new learnable text-guided Frame-Prompter together with a cross-modal distillation (QFormer-Distiller) module.\nPre-trained large image-language models have shown promising results on problems such as visual question answering (VQA).\nHowever,\nhow to efficiently and effectively sample video frames when\nadapting a pre-trained large image-language model to video-language alignment is still the major challenge.\nCompared with prior work, our ViLA model demonstrates the capability of selecting key frames with critical contents, thus improving the video-language alignment accuracy while reducing the inference latency (+3.3% on NExT-QA Temporal with 3.0\u00d7 speed up).\nOverall, our ViLA network outperforms the state-of-the-art methods on the video question-answering benchmarks:\n+4.6% on STAR Interaction, +2.2% on STAR average with 3.0\u00d7 speed up; our 2-frame model outperforms the 4-frame SeViLA on the VLEP dataset with 4.2\u00d7 speed-up. Code will be available at https://github.com/xijun-cs/ViLA.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "\u201cIf a picture is worth thousands of words, what is a video worth?\u201d [36 ###reference_b36###] Video watching has become a new social norm. Statistics show YouTube has approximately million daily active users, based all over the world. Visitors spend on average minutes per day on YouTube. An average of close to 1 million hours of video are streamed by YouTube users each and every minute.\nAs video data continue to grow through internet viewing, video information retrieval becomes more demanding.\nVideo data has tremendous capacity to store a vast variety of useful information.\nCompared to image question answering (Q&A) problem, video QA is more challenging due to one extra temporal dimension.\nHow to efficiently sample relevant frames from a video with the computing resource constraint remains a long-standing problem in video QA research.\n###figure_1### Recent advances in pre-trained large-scale language models [6 ###reference_b6###, 46 ###reference_b46###] have greatly boosted the performance of the vision-language models, especially on cross-modality tasks.\nMany state-of-the-art image-language models [8 ###reference_b8###, 1 ###reference_b1###, 27 ###reference_b27###, 49 ###reference_b49###] leverage pre-trained LLMs.\nThese models [10 ###reference_b10###, 60 ###reference_b60###] achieve excellent performance on visual-language tasks such as image captioning [10 ###reference_b10###], visual question answering [60 ###reference_b60###] and more.\nInherently, many video-language models [65 ###reference_b65###, 33 ###reference_b33###] are built from these pre-trained image-language models.\nThese image-based video-language models treat a video as a series of multi-channel images sampled randomly or uniformly [27 ###reference_b27###].\nWhile this strategy works well for short videos,\nfor long videos or videos with non-uniform information distribution, random or uniform frame sampling may miss critical information.\nWhen it comes to video-language 
alignment, frame sampling efficiency and effectiveness go hand-in-hand.\nOne needs to not only reduce the number of sampled frames but also select frames that are most related to the input question.\nPrevious work such as SeViLA [65 ###reference_b65###] trains a separate keyframe localizer, which is not friendly for real-time inference and introduces more parameters to the whole model.\nBesides video representation, cross-modality alignment while leveraging LLMs is another challenge.\nThe critical problem lies in how to efficiently transfer video information to the LLM\u2019s input domain.\n###figure_2### Main Contributions: To address these challenges, we propose a new network, ViLA.\nCompared to the state-of-the-art video-language models [65 ###reference_b65###, 10 ###reference_b10###, 27 ###reference_b27###], ViLA consists of a new Frame-Prompter together with a QFormer-Distiller.\nOur Frame-Prompter learns to select the most important frames influenced by the corresponding question text and supervised by the VQA loss. Meanwhile, the Frame-Prompter is meticulously designed to stay lightweight so as to be efficient.\nTo effectively and efficiently transfer video information to the LLM input domain, we add a new distillation on top of the QFormer, named QFormer-Distiller.\nThe QFormer is the cross-modal query-visual Transformer proposed in previous BLIP models [10 ###reference_b10###, 27 ###reference_b27###, 28 ###reference_b28###] for cross-modal fusion.\nWe train our Frame-Prompter and QFormer-Distiller end-to-end.\nThe cross-modal temporal distiller teaches a smaller (i.e. fewer frames) QFormer.\nTo the best of our knowledge, this work is the first to propose a Frame-Prompter and a distiller on top of the cross-modal alignment for video-language learning with pre-trained LLMs.\nWe validate our ViLA model on the Video Question Answering benchmark datasets.\nThis includes NExT-QA [56 ###reference_b56###], STAR [54 ###reference_b54###], How2QA [31 ###reference_b31###], TVQA [25 ###reference_b25###] and VLEP [26 ###reference_b26###].\nOur work outperforms previous strong SOTA methods\nacross all the benchmarks,\nwhile reducing inference latency.\nCompared with the SOTA video-language model SeViLA [65 ###reference_b65###], we reduce a significant amount () of training parameters and inference latency ( speed up), while improving the accuracy.\nWe also performed an ablation study on our text-guided Frame-Prompter and QFormer-Distiller. As shown in Table 4 ###reference_###, both the text-guided Frame-Prompter and the QFormer-Distiller play critical roles in making our method effective.\nTo sum up, the key novelty is a new text-guided Frame-Prompter and question-relevant QFormer-Distiller, trained end-to-end. The Frame-Prompter enhances the efficiency, while the latter bolsters the effectiveness. Together they optimize the selection of frames for video-language alignment learning."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Visual-Language Alignment",
|
| 21 |
+
"text": "Vision-Language cross-modal pre-training has greatly improved over the past couple of years.\nVarious network architectures and pre-training objectives have been proposed for different downstream tasks, including the dual-encoder architecture with image-text contrastive learning [42 ###reference_b42###], the fusion-encoder architecture with image-text matching [29 ###reference_b29###], and the unified transformer architecture with masked language modeling [50 ###reference_b50###].\nThese methods, along with others, focus on the ability to find image-text affinity [61 ###reference_b61###], correlation [4 ###reference_b4###], and/or completion [64 ###reference_b64###], and need to pre-train the model end-to-end.\nTo address the incompatibility with pre-trained unimodal models such as LLMs [6 ###reference_b6###], recent works [27 ###reference_b27###] proposed to train a QFormer to bridge the domain gap between two frozen pre-trained models.\nInspired by its flexibility, more downstream tasks and applications have been proposed, including instruction-based image generation [55 ###reference_b55###] and image question-answering [60 ###reference_b60###].\nWhile most of the previous works focus on image-text alignment, very few have discussed the extension to videos until most recently, when temporal modeling started to be included for better reasoning capabilities.\nHiTeA [62 ###reference_b62###] jointly trains pairs in long and short views to capture temporal relations between moments and events.\nSmaug [32 ###reference_b32###] introduces sparse image patch masks to reduce pre-training costs and improve cross-modal alignment.\nEgoVLPv2 [40 ###reference_b40###] proposes cross-attention between the backbone encoders to improve both the pre-training efficiency and the downstream task performance.\nOur method can leverage the pre-trained model during video-language alignment, which naturally provides reasoning capability with less training cost.\nDue to the high cost of large video dataset collection, many works leverage successful pre-trained image models and transfer the knowledge to video tasks. Previous works such as [3 ###reference_b3###, 5 ###reference_b5###, 18 ###reference_b18###, 30 ###reference_b30###] utilize a pretrained ViT [12 ###reference_b12###] and aggregate the temporal image feature sequence using transformer blocks to adapt to video understanding tasks. For Video-Language tasks, many works turn to large-scale Vision-Language models as the starting point, such as CLIP [42 ###reference_b42###] and BLIP [28 ###reference_b28###]. Many works choose to adapt a pre-trained CLIP model for the text-to-video retrieval task, by either augmenting the frame-level spatial features with temporal information [57 ###reference_b57###, 11 ###reference_b11###], or manipulating the cross-modal similarity calculation to get better video-language alignment [35 ###reference_b35###, 34 ###reference_b34###, 14 ###reference_b14###, 53 ###reference_b53###]. Other works [39 ###reference_b39###, 41 ###reference_b41###] focus on parameter-efficient fine-tuning for video tasks by inserting trainable temporal modules into the pre-trained transformer architecture while keeping the rest of the model frozen. Recent work SeViLA [65 ###reference_b65###] proposes a language-aware frame localizer to sample relevant key frames from videos. In this paper, we propose a trainable text-guided Frame-Prompter and a QFormer-Distiller module, which help focus more on the important temporal and spatial information from the 2D frames.\nThese techniques help to efficiently bridge the gap between image-language and video-language learning.\nOne major downstream task of Video-Language pre-training is Video Question Answering (VQA).\nEarly works [2 ###reference_b2###, 22 ###reference_b22###, 63 ###reference_b63###] often rely on human-annotated datasets, while recent works [58 ###reference_b58###, 67 ###reference_b67###, 66 ###reference_b66###] make better use of large-scale data from the public.\nAlong with the quick advances in VQA methods, many benchmark datasets have also been introduced to standardize the model performance comparison, including NExT-QA [56 ###reference_b56###], STAR [54 ###reference_b54###], How2QA [31 ###reference_b31###], TVQA [25 ###reference_b25###], and VLEP [26 ###reference_b26###].\nWe benchmark our network mainly on the Video Question Answering task."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Knowledge Distillation",
"text": "One of our key component is cross-modal distillation.\nKnowledge Distillation [19 ###reference_b19###, 69 ###reference_b69###, 51 ###reference_b51###] is original proposed for small and efficient models to mimic the softened class distributions, features of large teachers.\nFor multi-modalities, researchers explore how to utilize the prior knowledge between different modalities [17 ###reference_b17###, 44 ###reference_b44###, 45 ###reference_b45###].\nOn video domain, knowledge distillation has been used for efficient video inference [37 ###reference_b37###, 23 ###reference_b23###], video captioning [38 ###reference_b38###], video question answering [43 ###reference_b43###].\nOut of the supervised learning methods mentioned above, the idea of knowledge distillation has also been leveraged in many self-supervised methods for self-supervised video representation learning [48 ###reference_b48###]."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Frame Selection for Video QA",
"text": "Early approaches in Video QA relied heavily on dense sampling methods, where frames are extracted at a fixed interval throughout the video. While straightforward, such methods can lead to excessive computational costs and memory requirements without significantly improving performance. Zhang et al. [68 ###reference_b68###] proposed a more selective strategy, using attention mechanisms to identify key frames that are more likely to contain information relevant to the question. Following this, adaptive frame sampling methods [13 ###reference_b13###, 7 ###reference_b7###] have gained popularity. These methods aim to dynamically select frames based on the content\u2019s relevance to the question, thus optimizing the trade-off between computational efficiency and answer accuracy [13 ###reference_b13###]."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Method",
"text": "Our ViLA model tackles the following challenges in large-scale Video-Language learning: how to sample question related frames and how to efficiently transfer video information for pre-trained frozen LLMs."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Model Architecture",
"text": "As illustrated in Fig. 2 ###reference_###, ViLA has four components: a pre-trained frozen large scale visual encoder , a Frame-Prompter , a QFormer-Distiller (Querying Alignment Transformer [28 ###reference_b28###, 27 ###reference_b27###, 10 ###reference_b10###] with distillation) to fuse and extract text-conditioned visual information, and a pre-trained frozen LLM.\nWe train our Frame-Prompter, QFormer-Distiller and two other frozen modules end-to-end.\nOur training objective includes a distillation loss and a task loss .\nFor the multi-choice Video Question Answer task, the task loss is the cross-entropy."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Text-guided Frame-Prompter Learning",
"text": "###figure_3### For the four-dimensional video data, it\u2019s impractical and non-efficient to input all frames into a visual encoder model.\nHere we design a text-guided Frame-Prompter for efficient and effective frame sampling.\nIt is designed to learn attending to fewer but more question-related frames.\nWe start from the uniformly sampled video frames , is frame number.\nAs shown in Fig. 3 ###reference_###, these raw frames first go through the pre-trained visual encoder ,\nwhere is the visual feature extracted from raw frames and is the number of frames.\nWe perform mean-pooling on the channels per-patch. The Frame-Prompter input shape is , B batch size, T temporal, as the frame feature dimension. Mean-pool averages over the channel. After mean-pool, the feature dimension becomes . Then we divide the feature into S segments , with as the segment feature dimension. Then the segmented features goes through FC layer to project to dimension, ready for the gumbel-softmax computation. For free-form frame sampling Fig. 3 ###reference_###(b), after mean-pool, we use FC layer to transfer to .\nThen we encode the mean-pooled visual features onto an embedding using a fully-connected (FC) layer with a layer normalization (LN),\nwhere , are the learnable weights and is the layer normalization.\nOur Frame-Prompter is a differentiable frame selecter. This allows the text-supervised gradients from the QFormer to guide the learning our frame selecter via backpropagation.\nThe differentiability is achieved through the Gumbel-Softmax [21 ###reference_b21###].\nWe have two choices (shown in Fig. 
3 ###reference_### a and b) for frame selection before the Frame-Prompter: a) strategic sampling (like: segmented uniform sampling) and b) free-form sampling.\nWe choose option (a) to force diverse temporal sampling.\nWe first convert a segment of concatenated frames feature into a categorical distribution through the softmax operation,\nwhere is the frame number of one segment.\nThen we compute our segment mask using the segment probability and the is sampled from the distribution,\nwhere is the one-hot encoding operation.\nDuring training, we replace the argmax with the softmax for differentiability,\nwhere is a tunable temperature. Full mask is . We apply mask to input frames and then conduct CrossAttention to obtain text guidance by\nwhere is the text information. This CrossAttention can help the Frame-Prompter learn to choose rich text frames. Finally, we use the task loss below to supervise the learning and choose the frame selection to answer our question:"
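The segment-wise Gumbel-Softmax selection described above can be sketched as below. This is a minimal PyTorch illustration under our own assumed shapes; `F.gumbel_softmax` with `hard=True` is used as a stand-in for the paper's softmax-for-argmax replacement (straight-through estimator).

```python
import torch
import torch.nn.functional as F

def select_frames(seg_logits, tau=1.0):
    """Differentiable segment-wise frame selection (illustrative sketch).

    seg_logits: (B, S, Fr) one relevance logit per frame, per segment.
    Returns a (B, S, Fr) mask that is one-hot per segment in the forward
    pass while keeping softmax gradients (straight-through Gumbel-Softmax),
    so the discrete selection stays trainable end-to-end.
    """
    return F.gumbel_softmax(seg_logits, tau=tau, hard=True, dim=-1)

B, S, Fr = 2, 4, 8                      # batch, segments, frames per segment (assumed)
logits = torch.randn(B, S, Fr, requires_grad=True)
mask = select_frames(logits, tau=0.5)   # tau is annealed toward 0.01 in the paper
assert torch.allclose(mask.sum(-1), torch.ones(B, S))  # one frame per segment

# gradients reach the logits despite the discrete selection
(mask * torch.randn_like(mask)).sum().backward()
assert logits.grad is not None
```

Lowering `tau` sharpens the relaxed distribution toward a true argmax, which matches the paper's schedule of annealing the temperature during training and fixing it at a small value for testing.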
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Cross-Modal Distillation",
"text": "In this section, we present in detail our cross-modal distillation module,\ndesigned to selectively transfer video information ready for LLM consumption.\nVideo-language alignment performance, unlike image-language alignment, is closely related the selected frames.\nMeanwhile, to leverage powerful pre-trained LLMs,\nwe need to transform the selected visual information to the LLM\u2019s input domain.\nWe adopt the QFormer proposed in the BLIP models [28 ###reference_b28###, 27 ###reference_b27###, 10 ###reference_b10###] for the cross-modal transformation learning.\nThe original QFormer is designed for image-text fusion, and we add a distillation module to make it efficient for video-text fusion.\nLike the traditional distillation, our QFormer-Distiller includes a teacher-QFormer and a student-QFormer.\nWe train the teacher-QFormer first (without the student-QFormer) on a wider receptive field.\nThe student-teacher learning mechanism further encourages both the QFormer and the Frame-Prompter to learn to attend to the most relevant visual information given the input question text.\nWe study how QFormer-Distiller affects Frame-Prompter (Sec. 4.3.1 ###reference_.SSS1###).\nAnd we find our FP+QFD 4-frames(Acc.74.4%) model\u2019s key-frame selection overlaps 92.3% with the key-frame selected by the FP 8-frames(Acc.74.1%) model.\nSpecifically, during student learning process, both the teacher-QFormer and the student-QFormer will take in the video and question text .\nThe concatenated tokenized video and question text will go through the QFormer cross-attention layers:\nwhere is a text tokenizer, is the visual tokenizer, is the learnable query token.\nWe utilize a decoder to transform the student-QFormer output, ensuring consistency and recoverability to the teacher\u2019s feature. This supervision bolsters the effectiveness and performance of the model with fewer frames. 
Then the distillation objective is defined by:\nWe carefully design the decoder to be a simple Fully Connected (FC) layer with a layer normalization.\nWe show through an ablation study (Sec. 4.3.2 ###reference_.SSS2###) that this combination works the best.\nMeanwhile, our FP+QFD 4-frames boosts accuracy by 1.8% compared with FP 4-frames model and the key-frame selected overlaps 56.8% with that of the FP 4-frames model.\nThis helps verify our QFD\u2019s ability to both boost performance and enhance the key-frame selection."
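The student-side decoder and distillation objective described above can be sketched as follows. This is an illustrative PyTorch stand-in, not the paper's code: the FC + LayerNorm decoder follows the design the text describes, while the MSE criterion and all dimensions are our assumptions.

```python
import torch
import torch.nn as nn

class QFormerDistillHead(nn.Module):
    """Sketch of the distillation head: a single FC layer followed by
    LayerNorm maps student query outputs toward the teacher feature space,
    and the loss penalizes their distance (MSE assumed here)."""

    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(student_dim, teacher_dim),  # FC projection
            nn.LayerNorm(teacher_dim),            # stabilizes training
        )
        self.criterion = nn.MSELoss()

    def forward(self, student_queries, teacher_queries):
        decoded = self.decoder(student_queries)
        return self.criterion(decoded, teacher_queries)

B, Q, Ds, Dt = 2, 32, 768, 768   # batch, query tokens, dims (hypothetical)
head = QFormerDistillHead(Ds, Dt)
loss = head(torch.randn(B, Q, Ds), torch.randn(B, Q, Dt))
assert loss.dim() == 0           # scalar distillation loss
```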
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Implementation Settings",
"text": "For training, we conduct experiments with 8 \u00d7 40GB A100 GPUs. All the models in our experiments are trained using AdamW with cosine learning scheduler. For all the experiments, we use ViT-G(1B) [15 ###reference_b15###] as the visual encoder and Flan-T5 XL (3B) [9 ###reference_b9###] as large language model. For all the datasets, we use accuracy of choosing the right answer as the metric, and test on the validation dataset. For the temperature in the Frame-Prompter, it changes from 1 to 0.01 by and is set as 0.01 during testing. And all inference (Infer.) time is evaluated on a single A100 GPU with batch size 4. Please refer supplementary for more training details."
},
{
"section_id": "4.1.1",
"parent_section_id": "4.1",
"section_name": "4.1.1 Benchmark:",
"text": "To demonstrate the effectiveness of our proposed method on the video QA task,\nwe compare our algorithm with the state-of-the-art (SOTA) methods on 5 datasets including 1 on video event prediction:\n1) NExT-QA [56 ###reference_b56###], a multi-choice VideoQA benchmark,\nwith 3 types of questions: Causal (Why, How), Temporal (Previous/Next, Present), and Description (Binary, Location, Count and Other).\n2) STAR[54 ###reference_b54###], a multi-choice VideoQA benchmark for Situated Reasoning.\nIt has four kinds of questions: Interaction, Sequence, Prediction, and Feasibility.\n3) How2QA [31 ###reference_b31###], a multi-choice VideoQA benchmark with 44k QA pairs for 22k 60-second clips selected from 9035 videos.\n4) TVQA [25 ###reference_b25###] is a large-scale video QA dataset\nwith 152K questions along with 21k video clips from 460 hours of video.\n5) VLEP [26 ###reference_b26###] is a video event prediction benchmark,\nwith 28,726 future event prediction cases.\nFollowing SeViLA [65 ###reference_b65###], we formulate this task as a multi-choice Question Answering."
},
{
"section_id": "4.1.2",
"parent_section_id": "4.1",
"section_name": "4.1.2 Baselines:",
"text": "We compare the performance of our ViLA model with\nseveral recent prominent models in the field.\nThese include SeViLA [65 ###reference_b65###], BLIP-2 [27 ###reference_b27###], and InternVideo [52 ###reference_b52###], within the context of fine-tuning scenarios.\nFor fair comparison with SeViLA and BLIP-2, we employ the Vision Transformer-Global (ViT-G) as the visual encoder and Flan-T5-XL models as the Large Language Model."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Results",
"text": "Here we show both quantitative and qualitative (Sec. 4.2.2 ###reference_.SSS2###) comparison results between our ViLA and SOTA methods on Video QA and Video Event Prediction task (Sec. 4.2.1 ###reference_.SSS1###).\nTogether we present an in-depth analysis on the results.\nThen we conduct ablation study (Sec. 4.3 ###reference_###) on our proposed Frame-Prompter and QFormer-Distiller module and the design choices within each module."
},
{
"section_id": "4.2.1",
"parent_section_id": "4.2",
"section_name": "4.2.1 Comparison Results on Video QA and Video Event Prediction:",
"text": "Overall, we demonstrate that the cross-modal key-frame selection matters.\nOur ViLA model out-performs strong SOTA methods across the 4 Video QA benchmark datasets and 1 Video Event Prediction while keeping the inference latency low.\nEspecially, we highlight that our models\u2019 performance stands out on temporal (including Causal, Interaction, Prediction) type of questions, NExT-QA Temporal (+3.3%, speed up), NExT-QA Causal(+1.7%, speed up), STAR Interaction (+4.6%, speed up), STAR Prediction (+2.8%, speed up), How2QA (+0.3%, speed up), VLEP (+0.7% with speed up, +0.3% with speed up) and TVQA (+1.8%, speed up).\nOn the NExT-QA [56 ###reference_b56###] dataset, compared with the SOTA SeViLA [65 ###reference_b65###] on 4-frame and 8-frame setting, our ViLA improves by 1.0% on average accuracy while achieving a speed up (see Table 1 ###reference_###).\n###table_1### We also test our ViLA model on the STAR [54 ###reference_b54###] dataset.\nThis dataset is challenging due to its diversified type of questions.\nAs shown in Table 2 ###reference_###, our ViLA model outperforms the several strong SOTA models on average by 2.2% with speed up.\nEspecially on the STAR Interaction and Prediction type of questions, we outperform SeViLA [65 ###reference_b65###] by 4.6% and 2.8%. This result further demonstrate the importance of key-frame selection. 
And our model\u2019s advantages in extracting temporal and causal related key-frames.\nWe further test our model on the longer and larger-scale Video QA benchmark datasets: TVQA [25 ###reference_b25###] and How2QA [31 ###reference_b31###], as shown in Table 3 ###reference_###.\nOn the TVQA [25 ###reference_b25###] dataset, our ViLA outperforms the SOTA method by 1.8% at the 4-frame setting.\nOn How2QA, our ViLA improvement is 0.3% with speed up compared with SeViLA [65 ###reference_b65###].\nThis is partially due to the limited 32-frame teacher-QFormer training.\nOn the other hand, compared with Blip-2 [27 ###reference_b27###], ViLA outperforms by 1.7%.\nThis difference again shows the key-frames selected by our ViLA model aligns better with the input question compared with uniform sampling.\nTo test our algorithm\u2019s capabilities, particularly for event prediction, we conduct an additional series of evaluations on VLEP [26 ###reference_b26###] dataset.\nAt the 4-frame setting, ViLA improves over SeViLA [65 ###reference_b65###] by 0.7% with speed up. it\u2019s noteworthy that ours 2-frames out-perform SeViLA 4-frames on the VLEP dataset by 0.3% with speed up (Table 3 ###reference_###).\n###figure_4###"
},
{
"section_id": "4.2.2",
"parent_section_id": "4.2",
"section_name": "4.2.2 Qualitative Results:",
"text": "We qualitatively compare the key-frames selected by our ViLA with the ones from SeViLA [65 ###reference_b65###] on different type of questions.\nAs shown in Figure 4 ###reference_###, our network select more question-related key-frames across different question types (including Causal, Temporal and Description or Factual).\nIn the first (col 1 row 1) example in Figure 4 ###reference_###, our ViLA locates the frames that visibly show the \u201ccar and the dirt\u201d, but the frames selected by SeViLA focuses on the \u201croad\u201d.\nIn the second (col 2 row 1) example, we locate the frames with the \u201cthe man not on the bull at the end\u201d, but the frames selected by SeViLA focuses on the \u201cman on the bull at the end\u201d.\nAnd in the fourth (col 2 row 2) example, we locate 3 frames with the \u201cfour people\u201d according to the question, but none of the frames selected by SeViLA shows \u201cfour people\u201d.\nFor Temporal type of questions, our ViLA is also capable of selecting frames that are closer to the action specified in the question.\nIn the seventh (col 1 row 4) and eighth (col 2 row 4) example in Figure 4 ###reference_###, we locate key-frames around the \u201cthe men on stage\u201d and the \u201cblack dog\u201d (vs. SeViLA has 2 frames focusing on the \u201cbrown dog\u201d).\nIn addition, we qualitatively check the key-frames selected though our QFormer-Distiller.\nWe show in Figure 5 ###reference_### training our QFormer-Distiller helps select the most question related frames.\nIn the second example, we select frames that shows the \u201cdeep into the whole\u201d.\nAnd in the third example, out of the 16 frames, we pick up the frames that have both the \u201cman in grey and the stick falling\u201d."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Ablation Study",
"text": "Here we demonstrate the effectiveness of each proposed module: the text-guided Frame-Prompter and the question-relevant QFormer-Distiller.\nWe also validate the decoder design choice within the QFormer-Distiller.\n###figure_5###"
},
{
"section_id": "4.3.1",
"parent_section_id": "4.3",
"section_name": "4.3.1 Frame-Prompter and QFormer-Distiller Ablation:",
"text": "###table_2### Here we ablate our new QFormer-Distiller (QFD) and Frame-Prompter (FP).\nTo summarize, our QFormer-Distiller and our Frame-Prompter each contributes 50% to the overall performance improvement across the 4 Video QA benchmarks.\nSpecifically, our QFormer-Distiller boost performance by 2.9% and our Frame-Prompter by 1.6% on the STAR [54 ###reference_b54###] dataset, as shown in Table 4 ###reference_###. This shows both modules are critical to achieving the desirable performance.\nWe further explore how QFormer-Distiller (QFD) affect Frame-Prompter (FP). We find our FP+QFD 4-frames(Acc.74.4%) model\u2019s key-frame selection overlaps 92.3% with the key-frame selected by the FP 8-frames(Acc.74.1%) model.\nMeanwhile, our FP+QFD 4-frames boosts accuracy by 1.8% compared with FP 4-frames model and the key-frame selected overlaps 56.8% with that of the FP 4-frames model.\nThis helps verify our QFD\u2019s ability to both boost performance and enhance the key-frame selection."
},
{
"section_id": "4.3.2",
"parent_section_id": "4.3",
"section_name": "4.3.2 QFormer-Distiller Decoder Ablation:",
"text": "We study the design choice of the decoder of the QFormer-Distiller.\nOne of the key components is the decoder before computing the distillation loss.\nDesign choices for this component includes the number of Fully Connected layer (FC) and where to put a layer normalization (LN).\nFrom Table 5 ###reference_###, we show that the simple FC with a LN (after FC) works the best.\n###table_3### The FC layer provides the necessary computational structure, while the LN layer stabilizes the learning process. This configuration balances the effectiveness and operational efficiency, making it a well-suited choice for our method."
},
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "More Discussion",
"text": "More recent works [24 ###reference_b24###, 65 ###reference_b65###]\nhave shown powerful networks such as LLMs learns most of the data distribution during the pre-training stage.\nThis is one of the major reason why prompt-learning is very effective.\nOur work is inspired from the prompt-learning.\nWe hope to leverage LLM through proper visual prompting without affecting the generalization ability of the LLM, and this won\u2019t affect LLMs\u2019 original ability on language tasks.\nHowever, for the VQA task itself, optimal performance in this task often necessitates training the Language Model (LLM). Therefore, we conduct an ablation study on NExT-QA that fine-tunes the LLM by using LoRA during the training. Our ViLA achieves 75.1% average accuracy with only 4 frames. This demonstrates that our ViLA has a strong adaptation ability.\nFurthermore, to evaluate the generalization of our ViLA, we replace Flan-T5 with llama on NExT-QA, baseline with llama (68.6%) vs. ViLA with llama ( 72.7%), which shows that ViLA can adapt to different LLMs."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion, Limitation and Future Work",
"text": "How do we properly ingest visual information to LLMs to utilize its capability effectively in cross-modal tasks?\nIn this work, we present a new ViLA network with a new text-guided Frame-Prompter to smartly sample important frames, together with a cross-modal temporal distillation (QFormer-Distiller) for efficient and effective video-language alignment.\nFrom our experiments, our ViLA outperforms SOTA methods on four video question answering benchmarks and one video event prediction benchmark,\nespecially on the temporal and interaction type of questions. We demonstrate that cross-modal keyframe selection is key to successful video-language alignment task execution.\nDue to resource constraints, we only evaluate on LLMs with the number of parameters not larger than 13 billions.\nWe plan to continue research on the design of our Frame-Prompter, especially on video-language alignment for long video segments."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.3\" style=\"width:433.6pt;height:200.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-119.8pt,55.4pt) scale(0.644098678394766,0.644098678394766) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.3.3\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.2\"># Frames</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.3\">Temporal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.4\">Causal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.5\">Description</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.6\">Average</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.7\">\n<span class=\"ltx_text\" id=\"S4.T1.3.3.4.7.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.3.3.4.7.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.3.3.4.7.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.3.3.4.7.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.3.3.4.7.2.1.1.1\">Intro.</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.3.3.4.7.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.3.3.4.7.2.1.2.1\">Param.</span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T1.3.3.4.7.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.4.8\">\n<span class=\"ltx_text\" id=\"S4.T1.3.3.4.8.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.3.3.4.8.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.3.3.4.8.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.3.3.4.8.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" 
id=\"S4.T1.3.3.4.8.2.1.1.1\">Infer. Time</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.3.3.4.8.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.3.3.4.8.2.1.2.1\">(ms/video)</span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T1.3.3.4.8.3\"></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.1\">Just Ask\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib58\" title=\"\">58</a>]</cite> (ICCV2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.2\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.3\">51.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.4\">49.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.5\">63.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.6\">52.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.7\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.1\">All-in-One\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib47\" title=\"\">47</a>]</cite> (CVPR2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.2\">32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.3\">48.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.4\">48.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.5\">63.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.6\">50.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.6.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.7\">\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.1\">MIST\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib16\" title=\"\">16</a>]</cite> (CVPR2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.2\">32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.3\">56.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.4\">54.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.5\">66.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.6\">57.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.7.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.1\">HiTeA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib62\" title=\"\">62</a>]</cite> (ICCV2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.2\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.3\">58.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.4\">62.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.5\">75.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.6\">63.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.8.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.1\">InternVideo\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib52\" title=\"\">52</a>]</cite> (Dec 2022)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.2\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.3\">58.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.4\">62.5</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.3.3.9.5\">75.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.6\">63.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.9.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.10.1.1\">ViLA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.10.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.3\">66.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.4\">69.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.5\">78.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.6\">70.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.7\">188M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.10.8\">64</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.1\">BLIP-2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib27\" title=\"\">27</a>]</cite> (ICML2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.2\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.3\">67.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.4\">70.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.5\">79.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.6\">71.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.11.8\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.12.1.1\">ViLA (Ours)</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.12.2.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.3\">70.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.4\">71.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.5\">79.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.6\">72.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.7\">188M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.12.8\">72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.1\">SeViLA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib65\" title=\"\">65</a>]</cite> (NeurIPS2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.2\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.3\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.13.3.1\" style=\"border-color: #BF8040;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.13.3.1.1\" style=\"background-color:#FFFFFF;\">67.7</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.4\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.13.4.1\" style=\"border-color: #BF8040;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.13.4.1.1\" style=\"background-color:#FFFFFF;\">72.1</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.5\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.13.5.1\" style=\"border-color: #BF8040;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.13.5.1.1\" style=\"background-color:#FFFFFF;\">82.2</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.6\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" 
id=\"S4.T1.3.3.13.6.1\" style=\"border-color: #BF8040;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.13.6.1.1\" style=\"background-color:#FFFFFF;\">73.4</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.7\">376M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.13.8\">301</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.1\">SeViLA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib65\" title=\"\">65</a>]</cite> (NeurIPS2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.2\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.3\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.14.3.1\" style=\"border-color: #0000FF;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.14.3.1.1\" style=\"background-color:#FFFFFF;\">67.0</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.4\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.14.4.1\" style=\"border-color: #0000FF;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.14.4.1.1\" style=\"background-color:#FFFFFF;\">73.8</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.5\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.14.5.1\" style=\"background-color:#FFFFFF;border-color: #0000FF;\">81.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.6\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.14.6.1\" style=\"border-color: #0000FF;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.14.6.1.1\" style=\"background-color:#FFFFFF;\">73.8</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.14.7\">376M</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.3.3.14.8\">306</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1\">ViLA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.3\">4 (8 to 4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.1.1.1.4.1\" style=\"background-color:#FFFFFF;border-color: #BF8040;\">71.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.5\">72.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.1.1.1.6.1\" style=\"background-color:#FFFFFF;border-color: #BF8040;\">82.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.7\">74.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.8\">188M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\">99 (3.04)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.2.2.2.1\">ViLA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.3\">4 (32 to 4)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.4\">70.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.5\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.2.2.2.5.1\" style=\"background-color:#FFFFFF;border-color: #BF8040;\">73.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.6\">82.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.7\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" 
id=\"S4.T1.2.2.2.7.1\" style=\"background-color:#FFFFFF;border-color: #BF8040;\">74.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.8\">188M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.1\">208 ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.3.2.1\">ViLA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.4\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.3.4.1\" style=\"background-color:#FFFFFF;border-color: #0000FF;\">71.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.5\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.3.5.1\" style=\"background-color:#FFFFFF;border-color: #0000FF;\">73.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.6\"><span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.3.6.1\" style=\"border-color: #0000FF;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.3.3.3.6.1.1\" style=\"background-color:#FFFFFF;\">81.4</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.7\"><span class=\"ltx_text ltx_font_bold ltx_framed ltx_framed_rectangle\" id=\"S4.T1.3.3.3.7.1\" style=\"background-color:#FFFFFF;border-color: #0000FF;\">74.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.8\">188M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3.1\">227 ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.15.1.1\">ViLA+LoRA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.2\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T1.3.3.15.2.1\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.3\">71.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.4\">74.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.5\">80.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.6\">75.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.7\">188M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.15.8\">208</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.16.1.1\">ViLA (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.2\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.16.3.1\">71.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.16.4.1\">75.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.5\">82.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.16.6.1\">75.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.7\">188M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.3.16.8\">-</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.15.2.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.1\" style=\"font-size:90%;\">Comparison Results on NExT-QA dataset.<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.1\"> Here we measure the accuracy of choosing the right 
answer.\nEspecially on Temporal and Causal type of questions, our ViLA (using only 4 frames) improves </span>3.3%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.2\"> and </span>1.7%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.3\"> respectively, compared with SeViLA.\nWe use </span>bold-face<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.4\"> font to indicate the best results and <span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T1.5.1.4.1\">underline</span> on the second best using the same number of frames (<span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.5.1.4.2\" style=\"background-color:#FFFFFF;border-color: #BF8040;\">brown box</span> for 4 frames and <span class=\"ltx_text ltx_framed ltx_framed_rectangle\" id=\"S4.T1.5.1.4.3\" style=\"background-color:#FFFFFF;border-color: #0000FF;\">blue box</span> for 8 frames).\nViLA using 2-frames only out-performs BLIP-2 using 4-frames by </span>1.3%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.5\">. ViLA also achieves upto </span>3.04<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.5.1.6\"> speedup. It needs to be noted that our ViLA achieves 75.1% average accuracy with only 4 frames when we finetune LLM with LoRA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib20\" title=\"\">20</a>]</cite>.\n</span></span></figcaption>\n</figure>",
"capture": "Table 1: Comparison Results on the NExT-QA dataset. Here we measure the accuracy of choosing the right answer.\nOn the Temporal and Causal question types in particular, our ViLA (using only 4 frames) improves by 3.3% and 1.7% respectively, compared with SeViLA.\nWe use bold-face font to indicate the best results and underline for the second best using the same number of frames (brown box for 4 frames and blue box for 8 frames).\nViLA using only 2 frames outperforms BLIP-2 using 4 frames by 1.3%. ViLA also achieves up to a 3.04x speedup. Notably, our ViLA achieves 75.1% average accuracy with only 4 frames when we finetune the LLM with LoRA\u00a0[20].\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:433.6pt;height:125.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-103.1pt,29.9pt) scale(0.67771009380885,0.67771009380885) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.1\">Method (Frames Number)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.2\">Interaction</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.3\">Sequence</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.4\">Prediction</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.5\">Feasibility</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.6\">Avg.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.1.2.7\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.2.7.1\"></span> <span class=\"ltx_text\" id=\"S4.T2.1.1.2.7.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.1.1.2.7.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.1.1.2.7.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.2.7.2.1.1.1\">Infer. 
Time</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.1.1.2.7.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.2.7.2.1.2.1\">(ms/video)</span></span>\n</span></span><span class=\"ltx_text\" id=\"S4.T2.1.1.2.7.3\"></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.1\">Flamingo-9B 4-shot \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib1\" title=\"\">1</a>]</cite> (30) (NeurIPS2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.6\">42.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.1\">All-in-One\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib47\" title=\"\">47</a>]</cite> (32) (CVPR2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.2\">47.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3\">50.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.4\">47.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.5\">44.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.6\">47.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.1\">MIST\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2312.08367v4#bib.bib16\" title=\"\">16</a>]</cite> (32) (CVPR2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.2\">55.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.3\">54.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4\">54.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.5\">44.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.6\">51.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.1\">InternVideo\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib52\" title=\"\">52</a>]</cite> (8) (Dec 2022)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.2\">62.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.3\">65.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.4\">54.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.5\">51.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.6\">58.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.1\">BLIP-2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib27\" title=\"\">27</a>]</cite> (4) (ICML2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.2\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.1.1.7.2.1\">65.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.3\">69.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.4\">59.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.5\">54.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.6\">62.0</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.1.1.7.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.8.1.1\">ViLA (2) (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.2\">65.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.3\">65.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.4\">62.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.5\">58.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.6\">62.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.7\">72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.1\">SeViLA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib65\" title=\"\">65</a>]</cite> (4) (NeurIPS2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.2\">63.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.9.3.1\">70.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.1.1.9.4.1\">63.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.9.5.1\">62.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.1.1.9.6.1\">64.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.9.7\">301</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.2\">ViLA (4) (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.3.1\">70.0</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.4.1\">70.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.5.1\">65.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.1.1.1.6.1\">62.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.7.1\">67.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.1.1\">99 ()</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.7.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.1\" style=\"font-size:90%;\">Comparison Results on STAR Video QA benchmark.<span class=\"ltx_text ltx_font_medium\" id=\"S4.T2.3.1.1\">\nFor Interaction type of question, our ViLA improved </span>4.6%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T2.3.1.2\">.\nOn average, our ViLA out-performs the SOTA method by 2.2% when using 4 frames with </span>3.04<span class=\"ltx_text ltx_font_medium\" id=\"S4.T2.3.1.3\"> speed up.\nNote that our ViLA using 2-frames out-performs BLIP-2 using 4-frames.\n</span></span></figcaption>\n</figure>",
"capture": "Table 2: Comparison Results on the STAR Video QA benchmark.\nFor the Interaction question type, our ViLA improves by 4.6%.\nOn average, our ViLA outperforms the SOTA method by 2.2% when using 4 frames, with a 3.04x speedup.\nNote that our ViLA using 2 frames outperforms BLIP-2 using 4 frames.\n"
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.10\">\n<tr class=\"ltx_tr\" id=\"S4.T3.10.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.10.1.1\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.10.1.2\">Frames Numbers</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.10.1.3\">How2QA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.10.1.4\">VLEP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.10.1.5\">TVQA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.2.1\">FrozenBiLM \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib59\" title=\"\">59</a>]</cite> (NeurIPS2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.2.2\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.2.3\">81.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.2.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.2.5\">57.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.3.1\">InternVideo \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib52\" title=\"\">52</a>]</cite> (Dec 2022)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.3.2\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.3.3\">79.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.3.4\">63.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.3.5\">57.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.4.1\">BLIP-2 \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2312.08367v4#bib.bib27\" title=\"\">27</a>]</cite> (ICML2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.4.2\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.4.3\">82.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.4.4\">67.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.4.5\">54.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.5.1\">SeViLA \u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08367v4#bib.bib65\" title=\"\">65</a>]</cite> (NeurIPS2023)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.5.2\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.5.3\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T3.10.5.3.1\">83.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.5.4\">68.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.5.5\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T3.10.5.5.1\">61.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.1\">ViLA (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.10.6.2.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.3\">82.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T3.10.6.4.1\">69.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.6.5\">60.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.10.7.1\">ViLA (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.10.7.2\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S4.T3.10.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.10.7.3.1\">83.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.10.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.10.7.4.1\">69.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.10.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.10.7.5.1\">63.4</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.13.5.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.4\" style=\"font-size:90%;\">Comparison Results on How2QA, VLEP, and TVQA Video QA Benchmarks.<span class=\"ltx_text ltx_font_medium\" id=\"S4.T3.8.4.4\"> ViLA improves performance over SeViLA by </span>1.8%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T3.8.4.5\"> with </span>3.04<span class=\"ltx_text ltx_font_medium\" id=\"S4.T3.8.4.3\"> speed up on TVQA dataset, 0.7% with speed up on VLEP dataset, and 0.3% with speed up on How2QA dataset at 4 frames setting. Ours 2-frames out-perform SeViLA 4-frames on VLEP by 0.3% with speed up.</span></span></figcaption>\n</figure>",
"capture": "Table 3: Comparison Results on the How2QA, VLEP, and TVQA Video QA Benchmarks. In the 4-frame setting, ViLA improves performance over SeViLA by 1.8% with a 3.04x speedup on the TVQA dataset, by 0.7% with a speedup on the VLEP dataset, and by 0.3% with a speedup on the How2QA dataset. Our 2-frame ViLA outperforms 4-frame SeViLA on VLEP by 0.3% with a speedup."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.2.1.1\">Components</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.2.1.2\">STAR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.2.1.3\">VLEP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.2.1.4\">TVQA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.2.1.5\">NExT-QA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.1\">base (BLIP-2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.2\">62.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.3\">67.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.4\">54.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.5\">71.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.3.1\">base+QFormer-Distiller</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.3.2\">64.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.3.3\">68.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.3.4\">62.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.3.5\">73.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.1\">base+Frame-Prompter</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.2\">65.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.3\">68.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.4\">62.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.5\">73.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S4.T4.2.5.1\">base+QFormer-Distiller+Frame-Prompter</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.5.2\">66.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.5.3\">69.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.5.4\">63.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.5.5\">74.4</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.6.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.7.2\" style=\"font-size:90%;\">Frame-Prompter and QFormer-Distiller Ablation Results.<span class=\"ltx_text ltx_font_medium\" id=\"S4.T4.7.2.1\"> Across all four VideoQA datasets, we observe that both Text-aware Frame-Prompter and cross-modal QFormer-Distiller contribute significantly to our final performance.\nWe highlight that on STAR, adding our QFormer-Distiller improves the accuracy by </span>2.9%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T4.7.2.2\">.\nOur Frame-Prompter further boost the accuracy by </span>1.6%<span class=\"ltx_text ltx_font_medium\" id=\"S4.T4.7.2.3\">.\n</span></span></figcaption>\n</figure>",
"capture": "Table 4: Frame-Prompter and QFormer-Distiller Ablation Results. Across all four VideoQA datasets, we observe that both the text-aware Frame-Prompter and the cross-modal QFormer-Distiller contribute significantly to our final performance.\nWe highlight that on STAR, adding our QFormer-Distiller improves accuracy by 2.9%.\nOur Frame-Prompter further boosts accuracy by 1.6%.\n"
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T5.2\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.2.1.1\">Frame Prompter Decoder</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.2.1.2\">Temporal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.2.1.3\">Causal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.2.1.4\">Description</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.2.1.5\">Average</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.1\">FC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.2\">68.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.3\">70.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.4\">79.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.2.5\">72.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.3.1\">FC+LN</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.3.2\">70.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.3.3\">73.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.3.4\">82.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.3.5\">74.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.4.1\">FC+LN+GELU+FC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.4.2\">69.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.4.3\">73.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.4.4\">81.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.4.5\">74.1</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span 
class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.5.2\" style=\"font-size:90%;\">QFormer-Distiller Decoder Ablation on NExT-QA.<span class=\"ltx_text ltx_font_medium\" id=\"S4.T5.5.2.1\"> We find that a simple Fully Connected layer (FC) with a Layer Normalization (LN) works best across\nTemporal, Causal, Description. It is efficient and effective. GELU is activation function.</span></span></figcaption>\n</figure>",
"capture": "Table 5: QFormer-Distiller Decoder Ablation on NExT-QA. We find that a simple fully connected layer (FC) with layer normalization (LN) works best across the Temporal, Causal, and Description question types. It is both efficient and effective. GELU is the activation function."
}
},
"image_paths": {
"1": {
"figure_path": "2312.08367v4_figure_1.png",
"caption": "Figure 1: Our efficient Vision-Language Alignment (ViLA) model, built on frame prompting and distillation, contains two new modules: a text-guided Frame-Prompter and a cross-modal QFormer-Distiller. It learns to extract the most question-related frames while keeping inference latency low.",
"url": "http://arxiv.org/html/2312.08367v4/x1.png"
},
"2": {
|
| 162 |
+
"figure_path": "2312.08367v4_figure_2.png",
|
| 163 |
+
"caption": "Figure 2: Model Overview. Our ViLA model includes 4 sub-modules: the visual encoder, text-supervised Frame-Prompter (FP), QFormer-Distiller (QFD), and a LLM. We encode the video frames through a frozen visual encoder. Then we train the Teacher-QFormer using all the frame features.\nAfter that, we train the Student-QFormer and Frame-Prompter end-to-end.\nUnlike the Teacher-QFormer, our Student-QFormer is trained with masked frames features from a text-supervised Frame-Prompter.\nFinally, the input question text and QFormer transformed visual features go through a frozen large language model to generate the answer. Our network supports both leveraging LLM through proper visual prompting without affecting the original LLM (Frozen) ability on language tasks and finetuning LLMs(LoRA) simultaneously to get optimal performance on specific tasks.",
|
| 164 |
+
"url": "http://arxiv.org/html/2312.08367v4/x2.png"
|
| 165 |
+
},
|
| 166 |
+
"3": {
|
| 167 |
+
"figure_path": "2312.08367v4_figure_3.png",
|
| 168 |
+
"caption": "Figure 3: Text-guided Frame-Prompter. Here we show the details of our learnable text-guided Frame-Prompter. We design a learnable Frame-Prompter to sample the most text query-related frames, with two design choices (a and b).\nWe choose design (a) for diversified temporal sampling.\nWe first encode the mean-pooled segment features.\nWe then apply the Gumbel Softmax to compute the segment mask to guarantee the differentiability.\nThe selected frame embeddings then go through the QFormer-Distiller.\nHere B means the batch size, T means the frame number, and N\u00d7C means the frame feature sequences.\nThe Frame-Prompter is learned with the text-supervised gradient. When VQA loss is applied, the input question text-related gradient further flows to the Frame-Prompter. The question text-related gradient guides the Frame-Prompter to select the most critical frames.",
|
| 169 |
+
"url": "http://arxiv.org/html/2312.08367v4/x3.png"
|
| 170 |
+
},
|
| 171 |
+
"4": {
|
| 172 |
+
"figure_path": "2312.08367v4_figure_4.png",
|
| 173 |
+
"caption": "Figure 4: Key-frame Selection Comparison Results (select 4 frames from 32 frames).\nWe compare frames selected by our ViLA with those from the SOTA SeViLA [65] method.\nAcross different types of questions, especially the Causal and Temporal questions, keyframes selected by our network are more relevant and better related to the question.",
|
| 174 |
+
"url": "http://arxiv.org/html/2312.08367v4/x4.png"
|
| 175 |
+
},
|
| 176 |
+
"5": {
|
| 177 |
+
"figure_path": "2312.08367v4_figure_5.png",
|
| 178 |
+
"caption": "Figure 5: QFormer-Distiller Results Visualization. Here we visualize the keyframes selected after cross-modal distillation.\nAfter distillation, we can select the most question-relevant frames even from 16 frames.",
|
| 179 |
+
"url": "http://arxiv.org/html/2312.08367v4/x5.png"
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
"validation": true,
|
| 183 |
+
"references": [],
|
| 184 |
+
"url": "http://arxiv.org/html/2312.08367v4"
|
| 185 |
+
}
|
20241001/2312.08887v4.json
ADDED
|
@@ -0,0 +1,224 @@
| 1 |
+
{
|
| 2 |
+
"title": "SpeedUpNet: A Plug-and-Play Adapter Network for Accelerating Text-to-Image Diffusion Models",
|
| 3 |
+
"abstract": "Text-to-image diffusion models (SD) exhibit significant advancements while requiring extensive computational resources.\nExisting acceleration methods usually require extensive training and are not universally applicable.\nLCM-LoRA, trainable once for diverse models, offers universality but rarely considers ensuring the consistency of generated content before and after acceleration.\nThis paper proposes SpeedUpNet (SUN), an innovative acceleration module, to address the challenges of universality and consistency.\nExploiting the role of cross-attention layers in U-Net for SD models, we introduce an adapter specifically designed for these layers, quantifying the offset in image generation caused by negative prompts relative to positive prompts.\nThis learned offset demonstrates stability across a range of models, enhancing SUN\u2019s universality.\nTo improve output consistency, we propose a Multi-Step Consistency (MSC) loss, which stabilizes the offset and ensures fidelity in accelerated content.\nExperiments on SD v1.5 show that SUN leads to an overall speedup of more than 10 times compared to the baseline 25-step DPM-solver++, and offers two extra advantages: (1) training-free integration into various fine-tuned Stable-Diffusion models and (2) state-of-the-art FIDs of the generated data set before and after acceleration guided by random combinations of positive and negative prompts.\nCode is available (project page: https://williechai.github.io/speedup-plugin-for-stable-diffusions.github.io).",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In recent years, significant advancements have been made in the field of generative models, particularly in text-to-image generation, with Denoising Diffusion Probabilistic Models (DDPMs) [3 ###reference_b3###] playing a crucial role.\nTo further enhance the generation quality of text-to-image diffusion models, classifier-free guidance (CFG) [4 ###reference_b4###] is widely used in large-scale generative frameworks [14 ###reference_b14###] [20 ###reference_b20###] [19 ###reference_b19###] [21 ###reference_b21###].\nHowever, the iterative sampling procedure for diffusion models costs extensive computational resources, and CFG doubles the inference latency because it demands one diffusion process for the positive prompt and another for the negative.\nBased on the above problems, many efforts have been made on the topic of fast sampling and distillation of diffusion models.\nAdvanced sampling strategies [25 ###reference_b25###] [7 ###reference_b7###] [8 ###reference_b8###] significantly decrease the diffusion steps from several hundreds to around 25 without training.\nStructural pruning [6 ###reference_b6###] proposes that a smaller \u201cstudent\u201d model can be trained to mimic the output of the \u201cteacher\u201d model.\nTo reduce the inference steps of the diffusion model, Progressive distillation [22 ###reference_b22###] and Consistency Models [26 ###reference_b26###] learn to iteratively reduce the sampling steps.\nGuided-Distill [12 ###reference_b12###] and Latent Consistency Models (LCM) [9 ###reference_b9###] [10 ###reference_b10###] extend the above methods to text-to-image diffusion models, where the CFG process is particularly considered in their distillation processes.\nThese methods have been shown to produce high-quality images in less than 4 sampling steps, but they still have several limitations in practicality. 
First, existing distillation methods require fine-tuning of the entire diffusion network and corresponding training data, which makes them difficult to apply to a new pre-trained model.\nSecond, efficient finetuning methods that use LoRA [10 ###reference_b10###] may result in significant visual differences\nbetween the images generated before and after acceleration.\nThis is also accompanied by the inaccuracy of the evaluation metrics, since the selected dataset (LAION-5B or MSCOCO) for FID and CLIP-score can be very different in terms of data distribution from the dataset used for training the stylized SD model.\nAdditionally, input from negative prompts is often simplified or discarded during acceleration, which weakens the adjustability of the accelerated models.\n###figure_1### To address these limitations, we propose a novel and universal acceleration adapter called SpeedUpNet (SUN).\nOnce trained on a base Stable Diffusion (SD) [20 ###reference_b20###] model, SUN can be easily plugged into various fine-tuned SD models (such as different stylized models) to significantly improve inference efficiency while maintaining content consistency and negative prompt control.\nIn particular, SUN is implemented in a teacher-student distillation framework, where the student has the same architecture as the teacher model except for an additional adapter network.\nDuring the training, only the adapter of the student, which consists of several cross-attention layers, is optimized with the other parameters frozen.\nThe adapter network takes the negative prompt embedding as an extra input to the diffusion model, allowing for CFG-like effects in one inference.\nThe SUN adapter consists of several cross-attention operations to calculate the offset of the negative prompt relative to the positive prompt on each of the attention layers in the U-Net.\nAs depicted in Fig.1 ###reference_###, the offset between negative and positive text embeddings, which is usually utilized to improve image 
quality,\nis noted to be a variable associated with text inputs and is unrelated to the model\u2019s style.\nAs a result, the trained adapter network can be generalized to other stylized T2I diffusion models.\nAdditionally, SUN introduces a Multi-Step Consistency (MSC) loss to ensure a harmonious balance between reducing inference steps and maintaining consistency in the generated output.\nDifferent from existing methods that gradually change the inverse diffusion trajectory to a new one for one-step generation, such as LCM [9 ###reference_b9###] and Guided-Distill [12 ###reference_b12###],\nMSC divides the original dense trajectory into a few (e.g. 4) stages, with each stage being approached by an accelerated inference. By mapping the output of each stage to the point of the original trajectory, this method avoids cumulative errors during acceleration, thus maintaining the consistency of the output image.\nConsequently, SUN significantly reduces the number of inference steps to just 4 and eliminates the need for CFG, which leads to an overall speedup of more than 10 times for SD models compared to the 25-step dpm-solver++.\nTo sum up, our contributions are as follows:\nFirst, we propose a novel and universal acceleration module called SpeedUpNet (SUN), which can be seamlessly integrated into different fine-tuned SD models without training, once it is trained on a base SD model.\nSecond, we propose a method that supports classifier-free guidance distillation with controllable negative prompts and utilizes Multi-Step Consistency (MSC) loss to enhance content consistency between the generated outputs before and after acceleration.\nThird, experimental results demonstrate SUN achieves a remarkable speedup of over 10 times on diffusion models. 
SUN fits various style models as well as generation tasks (including Inpainting [11 ###reference_b11###], Image-to-Image and ControlNet [29 ###reference_b29###]) without extra training, and achieves better results compared to existing SOTA methods."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Diffusion Models and Classifier-free Guidance",
|
| 21 |
+
"text": "Diffusion Models have achieved great success in image generation ([3 ###reference_b3###][25 ###reference_b25###][15 ###reference_b15###][18 ###reference_b18###]).\nClassifier-free guidance (CFG [4 ###reference_b4###]) is a technique for improving the sample quality of text-to-image diffusion models,\nwhich has been applied in models such as GLIDE [14 ###reference_b14###], Stable Diffusion [20 ###reference_b20###] and DALL-E 2 [19 ###reference_b19###].\nIt incorporates a guidance weight that balances the trade-off between sample quality and diversity during the generation process.\nHowever, it should be noted that this approach increases the computational load due to the requirement of evaluating both conditional and unconditional (positive and negative prompts) models at each sampling step, thus necessitating optimization strategies to improve speed."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Accelerating Diffusion Models",
|
| 27 |
+
"text": "The advanced sampling strategies (including DDIM [25 ###reference_b25###], DPM-Solver [7 ###reference_b7###] and DPM-Solver++ [8 ###reference_b8###]) significantly decrease the number of diffusion steps from several hundreds to around 25.\nOn structural pruning,\nBK-SDM [6 ###reference_b6###] introduces different types of efficient diffusion models and proposes various distillation strategies.\nOn step distillation,\nProgressive Distillation (PD) [22 ###reference_b22###] and Guided-Distill [12 ###reference_b12###] propose progressive distillation methods, where a student model can generate high-quality images with only 2 diffusion steps.\nAdditionally, Consistency Models (CM) [26 ###reference_b26###] generate images in a single step by utilizing consistency mapping derived from ODE trajectories.\nBased on CM, Latent Consistency Models (LCM) [9 ###reference_b9###] is proposed for accelerating text-to-image synthesis tasks.\nSome recent studies, such as UFOGen [27 ###reference_b27###] and ADD [23 ###reference_b23###], use adversarial techniques to obtain high-quality images in fewer steps.\nBy incorporating LoRA into the distillation process of LCM, without fine-tuning the entire network, LCM-LoRA [10 ###reference_b10###] achieves a reduction in the memory overhead of distillation,\nas well as the ability to accelerate diverse models and tasks.\n###figure_2###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "HyperNetwork and Adapter Methods",
|
| 33 |
+
"text": "These approaches primarily focus on fine-tuning pre-existing models for specific tasks without extensive retraining.\nHyperNetworks [2 ###reference_b2###], with the aim of training a small recurrent neural network to influence the weights of a larger one,\nhave found their way into adjusting the behavior of GANs and diffusion models.\nTo retrofit existing models with new capabilities, adapters have been shown effective in vision-language tasks and text-to-image generation.\nControlNet [29 ###reference_b29###] tailors SD output by conditioning. T2I-adapters [13 ###reference_b13###] offers fine-tuned control over attributes such as color and style.\nIP-Adapter [28 ###reference_b28###], which is an efficient and lightweight adapter, enables image prompt capability for pretrained text-to-image diffusion models."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Method",
|
| 39 |
+
"text": ""
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Problem Formulation",
|
| 45 |
+
"text": "The latent diffusion process can be inferred by optimizing the subsequent equation:\nwhere symbolizes the noisy latent representation of an image, is the corresponding prompt, represents the text encoder transforming to a conditional embedding, and symbolizes a time step, sampled from a uniform distribution . The noise adheres to a standard Gaussian distribution, i.e., . During the inference process, two texts, a positive prompt and a negative prompt , are applied as conditions of two independent diffusion steps:\nwhere , , and represent the positive noise, negative noise, and the final noise, respectively. This process requires two forward passes through the model to compute the final noise, leading to potential computational inefficiency.\nIn this study, we propose a strategy to predict the final noise in a single forward pass.\nwhere is a decoupled network with and , which can be optimized independently."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Negative-positive Offset Learning",
|
| 51 |
+
"text": "The overall framework and details of our method are illustrated in Figure 2 ###reference_###.\nDuring training, the negative text embedding is fed into the SUN adapter, which consists of several cross-attention blocks.\nThe SUN adapter interacts with the original U-Net through the cross-attention blocks.\nDuring inference, the SUN adapter can be directly plugged into any fine-tuned SDs.\nTo embody the interaction amongst , and the proposed decoupled network , a cross-attention mechanism is integrated. The original interaction between and can be expressed as follows:\nwhere refers to the multi-head self-attention operation, , , are the query, key, and value matrices of the attention operation. , , respectively represent the weight matrices of the trainable linear projection layers.\nTo insert the negative text embedding, a new attention operation is added for the decoupled network :\nwhere is shared from Equation 4 ###reference_###,\nand , represent the key and value of the negative text embedding.\n###figure_3### The computation of the final feature, denoted as Z, is critical to the overall interaction between , , and as it encapsulates the negative impact of the text . This is demonstrated through a subtraction operation:\nwhere is a function called Attention Normalization, which is needed to balance the contributions from the positive and negative text prompts and to regulate the scale of the feature vectors. It is defined as:\nwhere is a function that computes the magnitude of a vector, providing an objective measure of the contribution from each feature. The parameters and are learnable weights that allow the model to adaptively control the strength of influence of the negative prompt. Attention Normalization helps to improve the generalization of SUN, which we describe in the experiments section."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.3",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Multi-Step Consistency (MSC) Distillation",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.3.1",
|
| 61 |
+
"parent_section_id": "3.3",
|
| 62 |
+
"section_name": "3.3.1 Vanilla CFG Distillation",
|
| 63 |
+
"text": ".\nTo imitate the behavior of a classifier-free diffusion model, one of the objectives is to encourage the output of the student to resemble the prediction by classifier-free guidance:\nwhere is the conditional embedding of the positive prompt, represents the conditional embedding of the negative prompt, and is the final noise given by the teacher\u2019s CFG.\nIt is worth noticing that there are two differences from the original CFG-Distill [12 ###reference_b12###] method.\nFirst, the original SD model is frozen and only the parameters of the adapter network are optimized.\nSecond, various negative prompts are used in training instead of a fixed empty prompt to ensure that the model is still controlled by negative prompts when producing content.\nThese changes make the optimization goal closer to the inference procedure, and make the adapter more versatile."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3.2",
|
| 67 |
+
"parent_section_id": "3.3",
|
| 68 |
+
"section_name": "3.3.2 Multi-step Consistency Loss",
|
| 69 |
+
"text": ".\nIn order to further improve the model to sample high-quality and consistent images in fewer steps, we use optimized step-distillation and add MSC loss on this basis to reduce the gap between the student and the teacher.\nAs the SUN adapter has already accepted negative prompts as input, there is no need for pre-distillation to remove CFG as done in Guided-Distill [12 ###reference_b12###]. To maintain a stable teacher, we also choose not to progressively distill it multiple times like PD [22 ###reference_b22###].\nGiven the noisy input at time and the teacher\u2019s sampling process from time to by N steps in continuous time space,\nthe objective for the student network is to obtain the same diffusion state at time in one step.\nTo perform the sampling process in the continuous time space, we divide the time to into segments, and get as the time interval for each inference process.\nFor in and let ,\nthe ideal noisy sample at time can be obtained by iteratively inferencing the teacher network via classifier-free guidance:\nIn order for the model to generate from in one step, the network should predict\n approximately. According to DDIM updating rule, we have\nThe corresponding step-distillation loss is calculated as\nIt is important to note that there is a discrepancy between the output of the student network and the teacher network, and this discrepancy will accumulate with iterative sampling, leading to inaccurate results.\nTo address the issue, we introduce the MSC loss to rectify the step-distillation loss (as shown in Fig. 3 ###reference_###). When selecting the value of , we randomly replace it with the student\u2019s output from the previous moment with a probability of . 
Regardless of whether the input is sampled from the teacher\u2019s\nsampling process or the student\u2019s, the student is forced to generate to ensure that the next moment follows the original trajectory without deviation:\nwhere\nConsidering with CFG-distill loss, the overall optimization target is\nThe entire training process proceeds as shown in Algorithm 1 ###reference_thm1###."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Experiments",
|
| 75 |
+
"text": ""
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Details of Implementation",
|
| 81 |
+
"text": ""
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.1.1",
|
| 85 |
+
"parent_section_id": "4.1",
|
| 86 |
+
"section_name": "4.1.1 Dataset",
|
| 87 |
+
"text": ".\nTo train the proposed network,\nwe use LAION-Aesthetics-6+, which is a subset of\nLAION-5B [24 ###reference_b24###]\ncontaining 12M text-image pairs with predicted aesthetics scores higher than 6.\nEach sample from the original dataset includes one prompt (deemed a positive prompt) and a corresponding image.\nSubsequently, we leverage two distinct strategies to collect negative prompts: (1) extracting negative prompts from AIGC websites such as PromptHero [17 ###reference_b17###];\n(2) utilizing large language models to generate a negative counterpart for the positive prompt.\nWe then split every negative prompt at commas into phrases, resulting in a total of 832 distinct phrases.\nDuring the training, in order to generate a negative prompt, we uniformly sample 0 to 100 phrases from all the phrases, and then join them into a complete prompt (e.g. \"watermark, blurry, ugly, bad anatomy, bad hands, error, missing fingers\")."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.1.2",
|
| 91 |
+
"parent_section_id": "4.1",
|
| 92 |
+
"section_name": "4.1.2 Configuration for Experiments",
|
| 93 |
+
"text": ".\nWe use the widely-used Stable Diffusion v1.5 [20 ###reference_b20###] as the base model. In our SUN adapter, we incorporate 16 cross-attention modules that are trainable during the distillation, resulting in a total parameter count of 18.5 M.\nOur method is implemented based on the Diffusers library [5 ###reference_b5###] and PyTorch [16 ###reference_b16###]. The training is launched on a single machine with 4 A100 GPUs for approximately 5k steps using a batch size of 32.\nUtilizing acceleration libraries allows the model to be trained on a single machine with 8 V100 GPUs for around 20k steps with a batch size of 8. The results derived from both machine configurations are competitive.\nWe utilize the AdamW optimizer, maintaining a constant learning rate of 0.0001 and a weight decay of 0.01. The training process involves resizing the image\u2019s shortest side to 512, followed by a 512\u00d7512 center crop.\nFor the MSC loss, is set to 1.0, is set to 0.25, and is set to 0.1."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.1.3",
|
| 97 |
+
"parent_section_id": "4.1",
|
| 98 |
+
"section_name": "4.1.3 Baselines and Evaluation",
|
| 99 |
+
"text": ". \nFor training-free methods, we use the DDIM [25 ###reference_b25###], DPM-Solver [7 ###reference_b7###], and DPM-Solver++ [8 ###reference_b8###] schedulers.\nFor training-requiring methods, we compare with Guided-Distill [12 ###reference_b12###] and LCM [9 ###reference_b9###]. Since there has been no open-sourced training-required method before, we reproduce Guided-Distill following the paper, on our dataset configurations.\nSince our method also belongs to the adaptation-free methods that require training only once and can be used with other pre-trained models, we compare it to LCM-LoRA [10 ###reference_b10###].\nFollowing previous works, we test on the LAION-Aesthetics-6+ dataset.\nWe use FID and CLIP scores to evaluate the performances, where we generate 30K images using\n10K text prompts of the test set with 3 random seeds.\n###figure_4###"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.2",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "Qualitative Results",
|
| 105 |
+
"text": "Without further training, we insert SUN pre-trained on SD v1.5 into different popular diffusion models from CIVITAI [1 ###reference_b1###], including Anything v5, Realistic Vision v5.1, Toon You, and Fancy Pet. These models all use the same network structure and noise prediction as SD v1.5.\nTo display the results, we mainly compare our method with LCM-LoRA, which is also a training-free acceleration method. Additionally, for Guided-Distill, which requires training, we perform further training on each model for comparison. We regard the results of DPM-solver++ (25 steps) as ground truths. By reducing the number of inference steps of each method, we compare the difference between its generated results and the ground truth.\nAs shown in Fig. 4 ###reference_###, DPM-solver++ produces significantly poorer quality images when using a smaller number of steps (e.g. 4, 8 steps). LCM-LoRA can enhance the quality of images in the above situation without training, but the generated images may vary significantly from the ground truth. In contrast, SUN not only produces high-quality images but also generates results consistent with the ground truth at different choices of sampling steps. This reflects that SUN, as a universal acceleration module, is more versatile when plugged into new models.\nCompared with the training-hungry method, SUN also has advantages in content consistency, which shows that the MSC objective plays a role in reducing student-teacher discrepancies. At the same time, since SUN only trains the adapter parameters, it preserves the image style and quality of the original model.\n###table_1### ###table_2###"
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.3",
|
| 109 |
+
"parent_section_id": "4",
|
| 110 |
+
"section_name": "Quantitative Evaluation",
|
| 111 |
+
"text": "We first use SD v1.5 for testing because all distillation-based acceleration methods are trained on SD v1.5.\nAs shown in Tab. 1 ###reference_###, SUN contains the smallest number of parameters among all distillation methods,\nmaking it more efficient in training and better able to reduce the risk of overfitting.\nThe quality of the generated images is evaluated mainly by using a standard test set as the reference. SUN is a competitive method in distribution difference (FID) and semantic consistency (CLIP score), and it achieves the best results when compared to other methods with the same parameter magnitude.\nFurthermore, we evaluate the quantitative results of SUN as a universal acceleration add-on and compare it with existing techniques.\nWe tested three different models that have already been fine-tuned on specialized datasets.\nAs the styles of the new models are diverse, there is no standard reference set, such as LAION-5B or MSCOCO, to evaluate FIDs.\nTo better reflect the consistency of the generated images before and after acceleration, for testing a pretrained diffusion model, we use the 25-step DPM-Solver++ to generate 30k samples using the same prompts as in Sec. 4.1.3 ###reference_.SSS3###, and then take them as the reference for computing FID. As shown in Tab. 2 ###reference_###, SUN is demonstrated to surpass other acceleration methods on all models, and is therefore a preferable acceleration method.\n###table_3### ###table_4### As an important supplement, we test the time consumption of each acceleration method on different hardware platforms (Tab. 3 ###reference_###). SUN is faster than the baseline (DPM-solver++ 25 steps) by more than 10x in terms of U-Net time consumption, and is faster than LCM-LoRA due to the advantage of parameter quantity.\n###figure_5### ###figure_6### ###figure_7###"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.4",
|
| 115 |
+
"parent_section_id": "4",
|
| 116 |
+
"section_name": "Other Results",
|
| 117 |
+
"text": ""
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.4.1",
|
| 121 |
+
"parent_section_id": "4.4",
|
| 122 |
+
"section_name": "4.4.1 Alter the Negative Prompt.",
|
| 123 |
+
"text": "Since the ability to modify negative prompts is essential for creators in image creation,\nwe further use different negative prompts for one positive prompt. The experimental results (Fig. 5 ###reference_###) show that SUN effectively learns the content of the negative prompt rather than fitting a specific style, achieving the same effect as CFG."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "4.4.2",
|
| 127 |
+
"parent_section_id": "4.4",
|
| 128 |
+
"section_name": "4.4.2 Image-to-Image and Inpainting.",
|
| 129 |
+
"text": "Besides text-to-image generation, SUN can also be used as a plug-in to accelerate image-to-image as well as inpainting diffusion models.\nAs shown in Fig. 6(a) ###reference_sf1###, without any training on the target model, SUN is able to generate results comparable to the original model with only 4 steps."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "4.4.3",
|
| 133 |
+
"parent_section_id": "4.4",
|
| 134 |
+
"section_name": "4.4.3 ControlNets.",
|
| 135 |
+
"text": "Additional structure control is a popular application for text-to-image diffusion models.\nAs our SUN does not change the original network structure, it is fully compatible with\nexisting controllable tools (as shown in Fig. 6(b) ###reference_sf2###).\n###figure_8### ###figure_9### ###figure_10###"
},
{
"section_id": "4.5",
"parent_section_id": "4",
"section_name": "Ablation Study",
"text": "In our research, we carried out ablation studies to evaluate the two key methodological contributions of Sec. 3 ###reference_###. Figure 7 ###reference_### demonstrates that MSC is crucial for ensuring that the model generates consistent content, whether in very few or many steps.\nAs shown in Fig. 8 ###reference_###, Attention Normalization further reduces the degree to which SUN fits the base model and helps achieve high-quality generation on different pre-trained models.\nAdditionally, we conduct ablation studies (Fig. 9 ###reference_### and Tab. 4 ###reference_###) to assess the impact of training-strategy parameters on the results."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "In this work, we introduced SpeedUpNet (SUN), a novel and universal Stable-Diffusion acceleration module that, once trained on a base Stable-Diffusion model, can be seamlessly integrated into different fine-tuned Stable-Diffusion models without further training. SUN employs an adapter for the cross-attention layers in U-Net, together with a Multi-Step Consistency (MSC) loss. This approach is specifically designed to quantify and stabilize the offset in image generation caused by negative prompts relative to positive prompts.\nOur empirical evaluations demonstrate that SUN significantly reduces the number of inference steps to just 4 and eliminates the need for classifier-free guidance, which leads to a speedup of over 10 times compared to the baseline 25-step DPM-Solver++, while preserving both quality and generation consistency during acceleration. Moreover, SUN is compatible with other generation tasks such as Inpainting [11 ###reference_b11###] and Image-to-Image generation, enabling the use of controllable tools like ControlNet [29 ###reference_b29###]."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.2.1\" style=\"font-size:90%;\">\nQuantitative results on LAION-Aesthetic-6+ dataset.\nWith training only a few parameters on cross-attention, SUN achieves the best FID/CLIP scores above the existing adapting-free methods, the results are also competitive with SOTA methods that require finetuning the entire diffusion model.\nGuidance scale is 8.0, resolution is .\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.4\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.4.2.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1\" style=\"font-size:90%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.4.2.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.1\" style=\"font-size:90%;\">Params</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.4.2.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.4.2.5.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.2.5.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.4.2.5.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.4.2.5.1.2.1.1.1\">Adapting</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.4.2.5.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.4.2.5.1.2.1.2.1\">free</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.4.2.5.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T1.3.1.1\">\n<span 
class=\"ltx_text\" id=\"S4.T1.3.1.1.1\" style=\"font-size:90%;\">FID </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T1.4.2.2\">\n<span class=\"ltx_text\" id=\"S4.T1.4.2.2.1\" style=\"font-size:90%;\">CLIP score </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.1\"><span class=\"ltx_text\" id=\"S4.T1.4.3.1.1\" style=\"font-size:90%;\">4-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.2\"><span class=\"ltx_text\" id=\"S4.T1.4.3.2.1\" style=\"font-size:90%;\">8-step</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.3.3\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.1\" style=\"font-size:90%;\">12-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.4\"><span class=\"ltx_text\" id=\"S4.T1.4.3.4.1\" style=\"font-size:90%;\">4-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.5\"><span class=\"ltx_text\" id=\"S4.T1.4.3.5.1\" style=\"font-size:90%;\">8-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.6\"><span class=\"ltx_text\" id=\"S4.T1.4.3.6.1\" style=\"font-size:90%;\">12-step</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.1\">\n<span class=\"ltx_text\" id=\"S4.T1.4.4.1.1\" style=\"font-size:90%;\">DDIM </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.4.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib25\" title=\"\">25</a><span class=\"ltx_text\" id=\"S4.T1.4.4.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.2\"><span class=\"ltx_text\" id=\"S4.T1.4.4.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T1.4.4.3\"><span class=\"ltx_text\" id=\"S4.T1.4.4.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.1\" style=\"font-size:90%;\">22.38</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.5\"><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1\" style=\"font-size:90%;\">13.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.6\"><span class=\"ltx_text\" id=\"S4.T1.4.4.6.1\" style=\"font-size:90%;\">12.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.7\"><span class=\"ltx_text\" id=\"S4.T1.4.4.7.1\" style=\"font-size:90%;\">0.258</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.8\"><span class=\"ltx_text\" id=\"S4.T1.4.4.8.1\" style=\"font-size:90%;\">0.292</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.9\"><span class=\"ltx_text\" id=\"S4.T1.4.4.9.1\" style=\"font-size:90%;\">0.315</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.4.5.1\">\n<span class=\"ltx_text\" id=\"S4.T1.4.5.1.1\" style=\"font-size:90%;\">DPM++ </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.5.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S4.T1.4.5.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.5.2\"><span class=\"ltx_text\" id=\"S4.T1.4.5.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.5.3\"><span class=\"ltx_text\" id=\"S4.T1.4.5.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.5.4\"><span class=\"ltx_text\" 
id=\"S4.T1.4.5.4.1\" style=\"font-size:90%;\">18.43</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.5.5\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.1\" style=\"font-size:90%;\">12.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.5.6\"><span class=\"ltx_text\" id=\"S4.T1.4.5.6.1\" style=\"font-size:90%;\">12.03</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.5.7\"><span class=\"ltx_text\" id=\"S4.T1.4.5.7.1\" style=\"font-size:90%;\">0.266</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.5.8\"><span class=\"ltx_text\" id=\"S4.T1.4.5.8.1\" style=\"font-size:90%;\">0.295</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.5.9.1\" style=\"font-size:90%;\">0.336</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.4.6.1\">\n<span class=\"ltx_text\" id=\"S4.T1.4.6.1.1\" style=\"font-size:90%;\">Guided-Distill </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.6.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib12\" title=\"\">12</a><span class=\"ltx_text\" id=\"S4.T1.4.6.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.6.2\"><span class=\"ltx_text\" id=\"S4.T1.4.6.2.1\" style=\"font-size:90%;\">860M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.6.3\"><span class=\"ltx_text\" id=\"S4.T1.4.6.3.1\" style=\"font-size:90%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.6.4\"><span class=\"ltx_text\" id=\"S4.T1.4.6.4.1\" style=\"font-size:90%;\">15.12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.6.5\"><span class=\"ltx_text\" id=\"S4.T1.4.6.5.1\" style=\"font-size:90%;\">13.89</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S4.T1.4.6.6\"><span class=\"ltx_text\" id=\"S4.T1.4.6.6.1\" style=\"font-size:90%;\">12.44</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.6.7\"><span class=\"ltx_text\" id=\"S4.T1.4.6.7.1\" style=\"font-size:90%;\">0.272</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.6.8\"><span class=\"ltx_text\" id=\"S4.T1.4.6.8.1\" style=\"font-size:90%;\">0.281</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.6.9\"><span class=\"ltx_text\" id=\"S4.T1.4.6.9.1\" style=\"font-size:90%;\">0.314</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.4.7.1\">\n<span class=\"ltx_text\" id=\"S4.T1.4.7.1.1\" style=\"font-size:90%;\">LCM </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.7.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib9\" title=\"\">9</a><span class=\"ltx_text\" id=\"S4.T1.4.7.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.7.2\"><span class=\"ltx_text\" id=\"S4.T1.4.7.2.1\" style=\"font-size:90%;\">860M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.7.3\"><span class=\"ltx_text\" id=\"S4.T1.4.7.3.1\" style=\"font-size:90%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.7.4.1\" style=\"font-size:90%;\">11.10</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.7.5.1\" style=\"font-size:90%;\">11.84</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.7.6\"><span class=\"ltx_text\" id=\"S4.T1.4.7.6.1\" style=\"font-size:90%;\">12.02</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.7.7\"><span class=\"ltx_text\" id=\"S4.T1.4.7.7.1\" 
style=\"font-size:90%;\">0.286</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.7.8\"><span class=\"ltx_text\" id=\"S4.T1.4.7.8.1\" style=\"font-size:90%;\">0.288</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.7.9\"><span class=\"ltx_text\" id=\"S4.T1.4.7.9.1\" style=\"font-size:90%;\">0.320</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.4.8.1\">\n<span class=\"ltx_text\" id=\"S4.T1.4.8.1.1\" style=\"font-size:90%;\">LCM-Lora </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.8.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib10\" title=\"\">10</a><span class=\"ltx_text\" id=\"S4.T1.4.8.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.8.2\"><span class=\"ltx_text\" id=\"S4.T1.4.8.2.1\" style=\"font-size:90%;\">67.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.8.3\"><span class=\"ltx_text\" id=\"S4.T1.4.8.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.8.4\"><span class=\"ltx_text\" id=\"S4.T1.4.8.4.1\" style=\"font-size:90%;\">16.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.8.5\"><span class=\"ltx_text\" id=\"S4.T1.4.8.5.1\" style=\"font-size:90%;\">14.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.4.8.6\"><span class=\"ltx_text\" id=\"S4.T1.4.8.6.1\" style=\"font-size:90%;\">13.11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.8.7\"><span class=\"ltx_text\" id=\"S4.T1.4.8.7.1\" style=\"font-size:90%;\">0.271</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.8.8\"><span class=\"ltx_text\" id=\"S4.T1.4.8.8.1\" style=\"font-size:90%;\">0.277</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.8.9\"><span 
class=\"ltx_text\" id=\"S4.T1.4.8.9.1\" style=\"font-size:90%;\">0.319</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T1.4.9.1\"><span class=\"ltx_text\" id=\"S4.T1.4.9.1.1\" style=\"font-size:90%;\">SUN (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.4.9.2\"><span class=\"ltx_text\" id=\"S4.T1.4.9.2.1\" style=\"font-size:90%;\">18.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.4.9.3\"><span class=\"ltx_text\" id=\"S4.T1.4.9.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.4.9.4\"><span class=\"ltx_text\" id=\"S4.T1.4.9.4.1\" style=\"font-size:90%;\">13.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.4.9.5\"><span class=\"ltx_text\" id=\"S4.T1.4.9.5.1\" style=\"font-size:90%;\">12.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.4.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.9.6.1\" style=\"font-size:90%;\">11.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.4.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.9.7.1\" style=\"font-size:90%;\">0.288</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.4.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.9.8.1\" style=\"font-size:90%;\">0.297</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.4.9.9\"><span class=\"ltx_text\" id=\"S4.T1.4.9.9.1\" style=\"font-size:90%;\">0.328</span></td>\n</tr>\n</table>\n</figure>",
"capture": "Table 1: \nQuantitative results on LAION-Aesthetic-6+ dataset.\nWith training only a few parameters on cross-attention, SUN achieves the best FID/CLIP scores above the existing adapting-free methods, the results are also competitive with SOTA methods that require finetuning the entire diffusion model.\nGuidance scale is 8.0, resolution is .\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.7.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S4.T2.2.1\" style=\"font-size:90%;\">\nQuantitative results on knowledge distillation FID with various pretrained diffusion models.\nEach generated samples set is compared to the corresponding ground truth set generated by 25-step-DPMSolver++ scheduler (using prompts from LAION-Aesthetic-6+).\nSUN significantly surpasses baselines in 4, 8, and 12 steps, demonstrating its ability to seamlessly switch to other diffusion models without any training.\nGuidance scale is 8.0, resolution is .\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.5\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T2.5.3.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.5.3.4.1\" style=\"font-size:90%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.5.3.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.5.3.5.1\" style=\"font-size:90%;\">Params</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.5.3.6\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.5.3.6.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.6.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T2.5.3.6.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.5.3.6.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.5.3.6.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.5.3.6.1.2.1.1.1\">Training</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.5.3.6.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.5.3.6.1.2.1.2.1\">free</span></span>\n</span></span> <span class=\"ltx_text\" 
id=\"S4.T2.5.3.6.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T2.3.1.1\">\n<span class=\"ltx_text\" id=\"S4.T2.3.1.1.1\" style=\"font-size:90%;\">Rea v5.1 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T2.4.2.2\">\n<span class=\"ltx_text\" id=\"S4.T2.4.2.2.1\" style=\"font-size:90%;\">RevA </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T2.5.3.3\">\n<span class=\"ltx_text\" id=\"S4.T2.5.3.3.1\" style=\"font-size:90%;\">Any v5 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.1\"><span class=\"ltx_text\" id=\"S4.T2.5.4.1.1\" style=\"font-size:90%;\">4-step</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.4.2\"><span class=\"ltx_text\" id=\"S4.T2.5.4.2.1\" style=\"font-size:90%;\">8-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.1\" style=\"font-size:90%;\">4-step</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.4.4\"><span class=\"ltx_text\" id=\"S4.T2.5.4.4.1\" style=\"font-size:90%;\">8-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.5\"><span class=\"ltx_text\" id=\"S4.T2.5.4.5.1\" style=\"font-size:90%;\">4-step</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.6\"><span class=\"ltx_text\" id=\"S4.T2.5.4.6.1\" style=\"font-size:90%;\">8-step</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.1\">\n<span class=\"ltx_text\" id=\"S4.T2.5.5.1.1\" style=\"font-size:90%;\">DDIM </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.5.5.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib25\" 
title=\"\">25</a><span class=\"ltx_text\" id=\"S4.T2.5.5.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.2\"><span class=\"ltx_text\" id=\"S4.T2.5.5.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.3\"><span class=\"ltx_text\" id=\"S4.T2.5.5.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.4\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.1\" style=\"font-size:90%;\">25.32</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.5\"><span class=\"ltx_text\" id=\"S4.T2.5.5.5.1\" style=\"font-size:90%;\">21.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.6\"><span class=\"ltx_text\" id=\"S4.T2.5.5.6.1\" style=\"font-size:90%;\">27.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.7\"><span class=\"ltx_text\" id=\"S4.T2.5.5.7.1\" style=\"font-size:90%;\">22.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.8\"><span class=\"ltx_text\" id=\"S4.T2.5.5.8.1\" style=\"font-size:90%;\">29.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.9\"><span class=\"ltx_text\" id=\"S4.T2.5.5.9.1\" style=\"font-size:90%;\">23.39</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.5.6.1\">\n<span class=\"ltx_text\" id=\"S4.T2.5.6.1.1\" style=\"font-size:90%;\">DPM-Solver++ </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.5.6.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S4.T2.5.6.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S4.T2.5.6.2\"><span class=\"ltx_text\" id=\"S4.T2.5.6.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.6.3\"><span class=\"ltx_text\" id=\"S4.T2.5.6.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.4\"><span class=\"ltx_text\" id=\"S4.T2.5.6.4.1\" style=\"font-size:90%;\">24.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.6.5\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.1\" style=\"font-size:90%;\">21.12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.6\"><span class=\"ltx_text\" id=\"S4.T2.5.6.6.1\" style=\"font-size:90%;\">26.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.6.7\"><span class=\"ltx_text\" id=\"S4.T2.5.6.7.1\" style=\"font-size:90%;\">21.38</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.8\"><span class=\"ltx_text\" id=\"S4.T2.5.6.8.1\" style=\"font-size:90%;\">29.06</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.9\"><span class=\"ltx_text\" id=\"S4.T2.5.6.9.1\" style=\"font-size:90%;\">22.75</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.5.7.1\">\n<span class=\"ltx_text\" id=\"S4.T2.5.7.1.1\" style=\"font-size:90%;\">Guided-Distill </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.5.7.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib12\" title=\"\">12</a><span class=\"ltx_text\" id=\"S4.T2.5.7.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.7.2\"><span class=\"ltx_text\" id=\"S4.T2.5.7.2.1\" style=\"font-size:90%;\">860M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.7.3\"><span class=\"ltx_text\" id=\"S4.T2.5.7.3.1\" 
style=\"font-size:90%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.7.4\"><span class=\"ltx_text\" id=\"S4.T2.5.7.4.1\" style=\"font-size:90%;\">20.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.7.5\"><span class=\"ltx_text\" id=\"S4.T2.5.7.5.1\" style=\"font-size:90%;\">16.33</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.7.6\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.1\" style=\"font-size:90%;\">22.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.7.7\"><span class=\"ltx_text\" id=\"S4.T2.5.7.7.1\" style=\"font-size:90%;\">17.23</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.7.8\"><span class=\"ltx_text\" id=\"S4.T2.5.7.8.1\" style=\"font-size:90%;\">25.57</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.7.9\"><span class=\"ltx_text\" id=\"S4.T2.5.7.9.1\" style=\"font-size:90%;\">18.49</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.5.8.1\">\n<span class=\"ltx_text\" id=\"S4.T2.5.8.1.1\" style=\"font-size:90%;\">LCM-Lora </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.5.8.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.08887v4#bib.bib10\" title=\"\">10</a><span class=\"ltx_text\" id=\"S4.T2.5.8.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.8.2\"><span class=\"ltx_text\" id=\"S4.T2.5.8.2.1\" style=\"font-size:90%;\">67.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.8.3\"><span class=\"ltx_text\" id=\"S4.T2.5.8.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.8.4\"><span class=\"ltx_text\" id=\"S4.T2.5.8.4.1\" style=\"font-size:90%;\">21.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T2.5.8.5\"><span class=\"ltx_text\" id=\"S4.T2.5.8.5.1\" style=\"font-size:90%;\">17.42</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.8.6\"><span class=\"ltx_text\" id=\"S4.T2.5.8.6.1\" style=\"font-size:90%;\">23.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.8.7\"><span class=\"ltx_text\" id=\"S4.T2.5.8.7.1\" style=\"font-size:90%;\">18.11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.8.8\"><span class=\"ltx_text\" id=\"S4.T2.5.8.8.1\" style=\"font-size:90%;\">26.34</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.8.9\"><span class=\"ltx_text\" id=\"S4.T2.5.8.9.1\" style=\"font-size:90%;\">19.77</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T2.5.9.1\"><span class=\"ltx_text\" id=\"S4.T2.5.9.1.1\" style=\"font-size:90%;\">SUN (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.5.9.2\"><span class=\"ltx_text\" id=\"S4.T2.5.9.2.1\" style=\"font-size:90%;\">18.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.5.9.3\"><span class=\"ltx_text\" id=\"S4.T2.5.9.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.5.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.4.1\" style=\"font-size:90%;\">19.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.5.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.5.1\" style=\"font-size:90%;\">15.73</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.5.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.6.1\" style=\"font-size:90%;\">20.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.5.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.7.1\" 
style=\"font-size:90%;\">16.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.5.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.8.1\" style=\"font-size:90%;\">22.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.5.9.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.9.9.1\" style=\"font-size:90%;\">16.17</span></td>\n</tr>\n</table>\n</figure>",
"capture": "Table 2: \nQuantitative results on knowledge distillation FID with various pretrained diffusion models.\nEach generated samples set is compared to the corresponding ground truth set generated by 25-step-DPMSolver++ scheduler (using prompts from LAION-Aesthetic-6+).\nSUN significantly surpasses baselines in 4, 8, and 12 steps, demonstrating its ability to seamlessly switch to other diffusion models without any training.\nGuidance scale is 8.0, resolution is .\n"
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.5.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.2.1\" style=\"font-size:90%;\">\nTime consumption for an image (seconds) using Diffusers Pipeline. <span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.2.1.1\">Non batch parallel</span> puts positive prompts and negative prompts into two batches for the inference process.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.6\">\n<tr class=\"ltx_tr\" id=\"S4.T3.6.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T3.6.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.6.1.1.1\" style=\"font-size:90%;\">Method (steps)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.1.2\"><span class=\"ltx_text\" id=\"S4.T3.6.1.2.1\" style=\"font-size:90%;\">V100 (FP32)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T3.6.1.3\"><span class=\"ltx_text\" id=\"S4.T3.6.1.3.1\" style=\"font-size:90%;\">M1Pro (FP16)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.2.1\"><span class=\"ltx_text\" id=\"S4.T3.6.2.1.1\" style=\"font-size:90%;\">pipeline</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.2.2\"><span class=\"ltx_text\" id=\"S4.T3.6.2.2.1\" style=\"font-size:90%;\">unet</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.2.3\"><span class=\"ltx_text\" id=\"S4.T3.6.2.3.1\" style=\"font-size:90%;\">pipeline</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.2.4\"><span class=\"ltx_text\" id=\"S4.T3.6.2.4.1\" style=\"font-size:90%;\">unet</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.3\">\n<td class=\"ltx_td ltx_align_left 
ltx_border_r ltx_border_t\" id=\"S4.T3.6.3.1\"><span class=\"ltx_text\" id=\"S4.T3.6.3.1.1\" style=\"font-size:90%;\">DPM-Solver++ (25)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.3.2\"><span class=\"ltx_text\" id=\"S4.T3.6.3.2.1\" style=\"font-size:90%;\">3.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.6.3.3\"><span class=\"ltx_text\" id=\"S4.T3.6.3.3.1\" style=\"font-size:90%;\">3.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.3.4\"><span class=\"ltx_text\" id=\"S4.T3.6.3.4.1\" style=\"font-size:90%;\">21.24</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.3.5\"><span class=\"ltx_text\" id=\"S4.T3.6.3.5.1\" style=\"font-size:90%;\">20.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.6.4.1\"><span class=\"ltx_text\" id=\"S4.T3.6.4.1.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T3.6.4.1.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T3.6.4.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.6.4.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.6.4.1.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.6.4.1.1.2.1.1.1\">DPM-Solver++ (25)</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.6.4.1.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.6.4.1.1.2.1.2.1\">(<span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.6.4.1.1.2.1.2.1.1\">non batch parallel)</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.6.4.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.4.2\"><span class=\"ltx_text\" id=\"S4.T3.6.4.2.1\" style=\"font-size:90%;\">3.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.4.3\"><span class=\"ltx_text\" id=\"S4.T3.6.4.3.1\" style=\"font-size:90%;\">3.42</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T3.6.4.4\"><span class=\"ltx_text\" id=\"S4.T3.6.4.4.1\" style=\"font-size:90%;\">22.21</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.4.5\"><span class=\"ltx_text\" id=\"S4.T3.6.4.5.1\" style=\"font-size:90%;\">21.07</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.6.5.1\"><span class=\"ltx_text\" id=\"S4.T3.6.5.1.1\" style=\"font-size:90%;\">DPM-Solver++ (4)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.5.2\"><span class=\"ltx_text\" id=\"S4.T3.6.5.2.1\" style=\"font-size:90%;\">0.684</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.6.5.3\"><span class=\"ltx_text\" id=\"S4.T3.6.5.3.1\" style=\"font-size:90%;\">0.420</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.5.4\"><span class=\"ltx_text\" id=\"S4.T3.6.5.4.1\" style=\"font-size:90%;\">3.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.5.5\"><span class=\"ltx_text\" id=\"S4.T3.6.5.5.1\" style=\"font-size:90%;\">2.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.6.6.1\"><span class=\"ltx_text\" id=\"S4.T3.6.6.1.1\" style=\"font-size:90%;\">Guided-Distill (4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.6.2\"><span class=\"ltx_text\" id=\"S4.T3.6.6.2.1\" style=\"font-size:90%;\">0.459</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.6.6.3\"><span class=\"ltx_text\" id=\"S4.T3.6.6.3.1\" style=\"font-size:90%;\">0.243</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.6.4\"><span class=\"ltx_text\" id=\"S4.T3.6.6.4.1\" style=\"font-size:90%;\">2.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.6.5\"><span class=\"ltx_text\" id=\"S4.T3.6.6.5.1\" style=\"font-size:90%;\">1.55</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T3.6.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.6.7.1\"><span class=\"ltx_text\" id=\"S4.T3.6.7.1.1\" style=\"font-size:90%;\">LCM-LoRA (4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.7.2\"><span class=\"ltx_text\" id=\"S4.T3.6.7.2.1\" style=\"font-size:90%;\">0.521</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.6.7.3\"><span class=\"ltx_text\" id=\"S4.T3.6.7.3.1\" style=\"font-size:90%;\">0.317</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.7.4\"><span class=\"ltx_text\" id=\"S4.T3.6.7.4.1\" style=\"font-size:90%;\">2.56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.6.7.5\"><span class=\"ltx_text\" id=\"S4.T3.6.7.5.1\" style=\"font-size:90%;\">1.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T3.6.8.1\"><span class=\"ltx_text\" id=\"S4.T3.6.8.1.1\" style=\"font-size:90%;\">SUN (Ours) (4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.6.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.2.1\" style=\"font-size:90%;\">0.485</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T3.6.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.3.1\" style=\"font-size:90%;\">0.274</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.6.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.4.1\" style=\"font-size:90%;\">2.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.6.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.8.5.1\" style=\"font-size:90%;\">1.62</span></td>\n</tr>\n</table>\n</figure>",
|
| 162 |
+
"capture": "Table 3: \nTime consumption for an image (seconds) using Diffusers Pipeline. Non batch parallel puts positive prompts and negative prompts into two batches for the inference process.\n"
|
| 163 |
+
},
|
| 164 |
+
"4": {
|
| 165 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.5.2.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S4.T4.2.1\" style=\"font-size:90%;\">\nAblative study of hyperparameter in Multi-step Consistency loss.\nEvaluated by FID, using a excessively large value makes training difficult.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.3\">\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T4.3.1.1\">\n<span class=\"ltx_text\" id=\"S4.T4.3.1.1.1\" style=\"font-size:90%;\">(MSC)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.3.1.2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.1\" style=\"font-size:90%;\">Rea v5.1(4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.3.1.3\"><span class=\"ltx_text\" id=\"S4.T4.3.1.3.1\" style=\"font-size:90%;\">(8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.3.1.4\"><span class=\"ltx_text\" id=\"S4.T4.3.1.4.1\" style=\"font-size:90%;\">Any v5(4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.3.1.5\"><span class=\"ltx_text\" id=\"S4.T4.3.1.5.1\" style=\"font-size:90%;\">(8)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.3.2.1\"><span class=\"ltx_text\" id=\"S4.T4.3.2.1.1\" style=\"font-size:90%;\">0.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.2.2\"><span class=\"ltx_text\" id=\"S4.T4.3.2.2.1\" style=\"font-size:90%;\">22.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.2.3\"><span class=\"ltx_text\" id=\"S4.T4.3.2.3.1\" style=\"font-size:90%;\">18.77</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T4.3.2.4\"><span class=\"ltx_text\" id=\"S4.T4.3.2.4.1\" style=\"font-size:90%;\">25.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.2.5\"><span class=\"ltx_text\" id=\"S4.T4.3.2.5.1\" style=\"font-size:90%;\">20.98</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T4.3.3.1\"><span class=\"ltx_text\" id=\"S4.T4.3.3.1.1\" style=\"font-size:90%;\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.2.1\" style=\"font-size:90%;\">19.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.3.1\" style=\"font-size:90%;\">15.73</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.4.1\" style=\"font-size:90%;\">22.52</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.5.1\" style=\"font-size:90%;\">16.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T4.3.4.1\"><span class=\"ltx_text\" id=\"S4.T4.3.4.1.1\" style=\"font-size:90%;\">0.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.4.2\"><span class=\"ltx_text\" id=\"S4.T4.3.4.2.1\" style=\"font-size:90%;\">20.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.3.4.3\"><span class=\"ltx_text\" id=\"S4.T4.3.4.3.1\" style=\"font-size:90%;\">17.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.4.4\"><span class=\"ltx_text\" id=\"S4.T4.3.4.4.1\" style=\"font-size:90%;\">24.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.4.5\"><span class=\"ltx_text\" id=\"S4.T4.3.4.5.1\" 
style=\"font-size:90%;\">16.69</span></td>\n</tr>\n</table>\n</figure>",
|
| 166 |
+
"capture": "Table 4: \nAblative study of the hyperparameter in the Multi-step Consistency loss.\nEvaluated by FID; using an excessively large value makes training difficult.\n"
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
"image_paths": {
|
| 170 |
+
"1": {
|
| 171 |
+
"figure_path": "2312.08887v4_figure_1.png",
|
| 172 |
+
"caption": "Figure 1: \nVisualization of the offset between positive and negative guidance.\nWhile finetuned SD models can generate images of very different styles, the subtraction of predictions guided by positive and negative text (the offset) is relatively consistent across different SD models.",
|
| 173 |
+
"url": "http://arxiv.org/html/2312.08887v4/x1.png"
|
| 174 |
+
},
|
| 175 |
+
"2": {
|
| 176 |
+
"figure_path": "2312.08887v4_figure_2.png",
|
| 177 |
+
"caption": "Figure 2: The overall framework of the proposed SUN.\nThe SUN adapter, which consists of several cross-attention (CA) blocks, is introduced to process and understand the negative prompt.\nEach CA block of SUN is placed alongside a block of the original U-Net.\nEach block introduces a new K matrix and a V matrix, while sharing the Q with the original U-Net.\nThe Attention Normalization technique is proposed for stability.",
|
| 178 |
+
"url": "http://arxiv.org/html/2312.08887v4/x2.png"
|
| 179 |
+
},
|
| 180 |
+
"3": {
|
| 181 |
+
"figure_path": "2312.08887v4_figure_3.png",
|
| 182 |
+
"caption": "Figure 3: \nAn illustration of Multi-step Consistency (MSC). When distilling a faster student model, a teacher-student discrepancy exists and gradually accumulates, causing the content generated by the student to become inconsistent with the teacher\u2019s (from the same noise). Built on the step-distillation method, MSC trains the student to approach the teacher\u2019s trajectory even when errors occur, thus ensuring consistency in multi-step sampling.",
|
| 183 |
+
"url": "http://arxiv.org/html/2312.08887v4/x3.png"
|
| 184 |
+
},
|
| 185 |
+
"4": {
|
| 186 |
+
"figure_path": "2312.08887v4_figure_4.png",
|
| 187 |
+
"caption": "Figure 4: Generation comparisons with different SOTA methods across different numbers of diffusion steps. The proposed SUN can produce high-quality images with only a few steps. In addition, the proposed SUN achieves the highest consistency with the ground truth using only 4 steps.",
|
| 188 |
+
"url": "http://arxiv.org/html/2312.08887v4/x4.png"
|
| 189 |
+
},
|
| 190 |
+
"5": {
|
| 191 |
+
"figure_path": "2312.08887v4_figure_5.png",
|
| 192 |
+
"caption": "Figure 5: The proposed SUN maintains the controllability of negative prompts while eliminating the need for CFG.",
|
| 193 |
+
"url": "http://arxiv.org/html/2312.08887v4/x5.png"
|
| 194 |
+
},
|
| 195 |
+
"6(a)": {
|
| 196 |
+
"figure_path": "2312.08887v4_figure_6(a).png",
|
| 197 |
+
"caption": "(a) Image-to-image and inpainting.\nFigure 6: Without extra training, SUN can accelerate other image-generation tasks, such as inpainting and image-to-image generation. SUN is also compatible with ControlNet.",
|
| 198 |
+
"url": "http://arxiv.org/html/2312.08887v4/x6.png"
|
| 199 |
+
},
|
| 200 |
+
"6(b)": {
|
| 201 |
+
"figure_path": "2312.08887v4_figure_6(b).png",
|
| 202 |
+
"caption": "(b) ControlNet.\nFigure 6: Without extra training, SUN can accelerate other image-generation tasks, such as inpainting and image-to-image generation. SUN is also compatible with ControlNet.",
|
| 203 |
+
"url": "http://arxiv.org/html/2312.08887v4/x7.png"
|
| 204 |
+
},
|
| 205 |
+
"7": {
|
| 206 |
+
"figure_path": "2312.08887v4_figure_7.png",
|
| 207 |
+
"caption": "Figure 7: Ablation on our proposed Multi-step Consistency loss.\nAdding MSC allows the generation of samples with consistent content across 4, 8, or more steps.",
|
| 208 |
+
"url": "http://arxiv.org/html/2312.08887v4/x8.png"
|
| 209 |
+
},
|
| 210 |
+
"8": {
|
| 211 |
+
"figure_path": "2312.08887v4_figure_8.png",
|
| 212 |
+
"caption": "Figure 8: Ablation on our proposed Attention Normalization.\nIt enables SUN, as a pluggable module, to generate stably across different pre-trained diffusion models (Realistic Vision V5.1 and Rev Animated).",
|
| 213 |
+
"url": "http://arxiv.org/html/2312.08887v4/x9.png"
|
| 214 |
+
},
|
| 215 |
+
"9": {
|
| 216 |
+
"figure_path": "2312.08887v4_figure_9.png",
|
| 217 |
+
"caption": "Figure 9: Ablative study of \u0394 in the training strategy (4 steps).\n\u0394 = 0.25 achieves better performance in quality and consistency.",
|
| 218 |
+
"url": "http://arxiv.org/html/2312.08887v4/x10.png"
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
"validation": true,
|
| 222 |
+
"references": [],
|
| 223 |
+
"url": "http://arxiv.org/html/2312.08887v4"
|
| 224 |
+
}
|
20241001/2312.17397v2.json
ADDED
|
@@ -0,0 +1,195 @@
| 1 |
+
{
|
| 2 |
+
"title": "Classifier-free graph diffusion for molecular property targeting",
|
| 3 |
+
"abstract": "This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we argue that its assumptions apply poorly to the chemical domain. Based on this insight, we propose a classifier-free (CF) variant of DiGress, called FreeGress, which works by directly injecting the conditioning information into the training process. CF guidance is convenient given its less stringent assumptions, and it does not require training an auxiliary property regressor, thus halving the number of trainable parameters in the model. We empirically show that our model yields significant improvements in Mean Absolute Error with respect to DiGress on property-targeting tasks on the QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve the chemical validity of generated samples, based on the observation that certain chemical properties, such as molecular weight, correlate with the number of atoms in the molecule.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Generating molecules with desired chemical properties is crucial to enable fast candidate screening in the early stages of drug [4 ###reference_b4###] or novel materials [14 ###reference_b14###] development. Although the advent of deep learning has achieved remarkable accomplishments in computational chemistry, this strategic endeavor remains an area of active research. In the past, much of the researchers\u2019 focus has been on the property optimization task, which is based on modifying a generated molecule such that it acquires some solicited chemical property [11 ###reference_b11###]. However, driven by the success of conditional generative models [16 ###reference_b16###], an equally important \u2013 though slightly different \u2013 task has emerged recently: property targeting, that is, to natively generate molecules which satisfy pre-specified chemical desiderata [21 ###reference_b21###].\nDenoising Diffusion Probabilistic Models (DDPMs) [9 ###reference_b9###] have been shown to achieve state-of-the-art performance in conditional generation, for example to synthesize images from user-provided textual guidance [20 ###reference_b20###]. Due to their flexibility and excellent performance, DDPMs have been extended to the chemical domain, being applied to tasks such as distribution learning, property optimization, and property targeting with promising results [19 ###reference_b19###].\nDiGress [24 ###reference_b24###] is one of the first successful applications of DDPMs to molecular generation. Under the hood, DiGress is based on a discrete diffusion process [2 ###reference_b2###] which gradually applies noise to a molecular graph according to a transition matrix, while denoising is performed by a graph transformer network [6 ###reference_b6###]. The most interesting feature of DiGress is the possibility to perform conditional generation for property targeting through classifier-based (CB) guidance [5 ###reference_b5###]. 
Loosely speaking, CB guidance requires to train a separate classifier to predict the conditional information from noisy samples, and to inject the resulting gradients back into the reverse process to bias the generative process. While successful to some extent, CB guidance has been shown to be inherently limited by i) the necessity of training a separate property predictor, which defies the purpose of having a single conditional generative model in the first place, and ii) the fact that the gradients of the property predictor are not always informative and may lead the generative process astray. Due to these limitations, classifier-free (CF) guidance for DDPMs is often preferred. The idea behind CF guidance is to directly incorporate the conditioning vector as input to train the conditional DDPM. With respect to CB guidance, CF guidance has been shown to enable more stable training and better generative performance in general [10 ###reference_b10###].\nWith DiGress for molecular generation specifically, CB guidance is further limited by the fact that the auxiliary model is a regressor whose predictions (the chemical properties) are assumed to be normally distributed even for noisy graphs which are chemically invalid.\nThis motivates our first contribution, which consists of the development and implementation of CF guidance for DiGress, called FreeGress.\nExperimentally, we show that switching from CB to CF guidance is beneficial to improve at property targeting. In particular, we evaluated FreeGress against DiGress on the QM9 [17 ###reference_b17###] and ZINC-250k [8 ###reference_b8###] datasets, where we queried the models to generate molecules with properties as close as possible to a target specification. Comparing the mean absolute error between the target properties and the properties of the generated molecules, FreeGress significantly outperformed DiGress, with improvements up to in the most favourable case. 
Besides improving performance, FreeGress does not require an auxiliary property regressor, reducing the number of trainable parameters. Furthermore, guided by the observation that certain chemical properties relate to the molecular graph size (the number of atoms in the molecule), we also propose to improve the generative process by first learning the probability of sampling a certain number of nodes given the target property, and then using samples from this distribution to constrain the size of the graph to be generated. Through experiments, we show that this simple method allows to generate more chemically valid graphs without sacrificing performance. Our code and supplementary material are available at https://github.com/Asduffo/FreeGress ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background and related works",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Deep generative models for molecules",
|
| 21 |
+
"text": "The first approach to use a deep generative model to produce unseen molecules was proposed by [8 ###reference_b8###]. Essentially, the model is a VAE where both the encoder and decoder are recurrent neural networks trained to reconstruct SMILES strings.\nOther methods related to this generative flavour use the alternative SELFIES language instead, which is deemed more robust than SMILES since every possible character sequence defines a valid molecule.\nA different branch of methods focuses on generating the molecular graph directly.\nFor graph-based models, property optimization has been performed with techniques ranging from simple hill climbing in latent space (guided by a property predictor) to iterative reinforcement learning approaches which assign higher rewards to chemically appealing molecules [25 ###reference_b25###].\nLately, DDPMs that generate the molecular graph have started to be used for distribution learning as well as property optimization tasks."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Denoising Diffusion Probabilistic Models",
|
| 27 |
+
"text": "DDPMs consist of an untrained forward process and a parameterized reverse process . The former iteratively perturbs the initial data point to transform it into an akin to Gaussian noise, while the latter is trained to incrementally remove the noise from until is restored. Generation from a trained DDPM amounts to sampling from an isotropic Gaussian and iteratively applying the denoising model for steps until a new sample is obtained.\nConditioned generation in DDPMs is achieved by injecting a guidance vector, or guide , to obtain a conditioned reverse process . DDPMs defined as above are not suited for graphs, since Gaussian noising does not work effectively on discrete distributions."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.2.1",
|
| 31 |
+
"parent_section_id": "2.2",
|
| 32 |
+
"section_name": "2.2.1 Classifier-based guidance",
|
| 33 |
+
"text": "CB guidance [5 ###reference_b5###] refactors the reverse process as , where employs an auxiliary classifier trained to predict the guide from a noisy version of the input. A scaled version of the classifier gradient is used to magnify the conditioning signal. CB guidance is limited by the fact that it requires training the auxiliary classifier, without the possibility of exploiting pre-trained models; moreover, since only a few parts of the noisy are actually useful to predict , the resulting gradient could yield undesirable directions in input space."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2.2",
|
| 37 |
+
"parent_section_id": "2.2",
|
| 38 |
+
"section_name": "2.2.2 Classifier-free guidance",
|
| 39 |
+
"text": "Differently from the CB approach, CF guidance [10 ###reference_b10###] jointly optimizes , the conditioned model, and , the unconditioned model. During sampling, the reverse process is computed as the barycentric combination of the conditioned and unconditioned predictions with weight ."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "DiGress: Denoising Diffusion for Graphs",
|
| 45 |
+
"text": "Here we briefly describe DiGress [24 ###reference_b24###], a discrete DDPM for graphs which is the starting point of this study.\nIn the forward process, DiGress injects noise into the data at time-step by multiplying it with a transition matrix, which, loosely speaking, specifies the probability of transitioning from one node type to another. The forward process is applied until the training graph is completely corrupted.\nThe reverse process implemented by DiGress is specified as , which is factored (by assuming node and edge independence) as a product of conditionals of the individual nodes and edges given the current graph. In turn, each node conditional is obtained by marginalizing over the node types (resp. edge types for the edge conditionals). Specifically, the node (resp. edge) marginal is computed by predicting the true node (resp. edge) types from their noisy intermediates.\nFor conditioned generation, DiGress employs CB guidance. Specifically, the conditioned reverse process is formulated as:\nwhere , is a property regressor, and is a hyper-parameter that magnifies the conditional gradient."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.3.1",
|
| 49 |
+
"parent_section_id": "2.3",
|
| 50 |
+
"section_name": "2.3.1 Limitations",
|
| 51 |
+
"text": "While the development of unconditioned DiGress appears intuitive, the use of CB guidance for the conditional variant leads to design choices and assumptions that are unrealistic in the chemical domain. Firstly, CB guidance relies on an external predictor that learns chemical properties from noisy graphs, which implies that chemically invalid molecules are unnaturally related to valid ones by being assigned the same properties. Secondly, the distribution learned by the auxiliary property regressor is assumed to be Normal with mean , approximated by by minimizing a mean squared error objective. This assumption is unsupported for most denoising steps (where usually represents an invalid molecule), since it implies a distribution of chemical properties for an object that does not even exist in chemical space. Moreover, even if two molecules with graphs and are both valid and similar, their chemical properties can be drastically different, since molecular property landscapes are in general non-smooth [1 ###reference_b1###], again violating the normality assumption. This discussion motivates our intention to develop CF guidance for DiGress, with the objective of making the conditional process simpler and, at the same time, appropriate for the chemical context where it is applied.\n###figure_1### ###figure_2### ###figure_3###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Classifier-Free Graph Diffusion",
|
| 57 |
+
"text": "This section presents our first contribution, a classifier-free DDPM for conditioned molecular generation named FreeGress. A high-level summary of the proposed model is sketched in Figure 1 ###reference_###."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.1",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Conditioning on the number of nodes",
|
| 63 |
+
"text": "A general limitation of diffusion models is that the size of a sample (for graphs, the number of nodes) cannot change during the denoising process. Some studies consider the possibility of inserting and removing elements from a generated one-dimensional sequence [12 ###reference_b12###] but, to the best of our knowledge, there are no similar works for graphs. In the original implementation of DiGress, the number of nodes of the generated graph is sampled from the marginal distribution computed from the training and validation sets. While this is not an issue for unconditioned sampling, we observed that for conditioned sampling a certain property (e.g., molecular weight) might be featured only by molecules with a specific number of atoms. To solve this problem, we propose that the number of nodes of the graph to be generated is sampled from , which is parametrized as a neural network with two hidden layers and a softmax output layer. Here, the idea is to exploit the guide even before generation starts, by providing a graph size that is correlated with the requested property."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Experiments",
|
| 69 |
+
"text": "Here, we detail the experimental analysis to evaluate FreeGress on property targeting tasks by generating compounds that meet pre-specified properties."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.1",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Datasets",
|
| 75 |
+
"text": "Our experiments were conducted on QM9, a dataset of 133k small molecules (up to 9 heavy atoms) which was also part of the evaluation of DiGress, and ZINC-250k, a collection of 250k drug-like molecules selected from the ZINC dataset. For the latter, the molecules were first preprocessed by removing stereochemistry information. We also removed rarely occurring non-neutrally-charged atoms, leaving only N+ and O-; these were treated as standalone atom types instead of their neutrally-charged counterparts. The final dataset size was consequently reduced to 228k molecules. Further statistics about the datasets are provided in the Supplementary Section 1."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.2",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Metrics and targets",
|
| 81 |
+
"text": "All the experiments were performed using the following setup. After training each model, we randomly sampled 100 molecules from the dataset and computed the desired target properties as vectors , which were used as conditioning vectors. Then, we used each vector 10 times to perform conditioned generation, for a total of 1000 generated molecules for each model. We computed the desired properties on the generated samples (using packages such as RDKit [13 ###reference_b13###] and psi4 [23 ###reference_b23###]) and compared the results with the properties of the original samples. The metric chosen for the comparison is the Mean Absolute Error (MAE) between the target properties and the properties of the generated molecules:\nwhere are the target properties of the -th molecule from the dataset and is the target property of the -th molecule generated using the properties of the -th molecule as guide.\nThe targeted properties were the dipole moment and the Highest Occupied Molecular Orbital (HOMO) energy for QM9; for ZINC-250k, we targeted the Log-Partition coefficient (LogP) and the Quantitative Estimation of Drug-likeness (QED). The proposed node inference method was evaluated on the ZINC-250k dataset targeting Molecular Weight (MW)."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.3",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Experimental details",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.3.1",
|
| 91 |
+
"parent_section_id": "4.3",
|
| 92 |
+
"section_name": "4.3.1 QM9 for /HOMO targeting.",
|
| 93 |
+
"text": "We trained and compared a set of FreeGress variants against a set of DiGress models, including an unconditional variant as a baseline. To the best of our knowledge, DiGress is the only model that allows for property targeting at the time of writing.\nFreeGress instances were trained with , and with to study the effect of conditional dropout on the generative process.\nWe have also experimented with variants with additional features and ; in this case, was excluded as it did not improve on the former in preliminary trials. For DiGress, we evaluated the performance using . To ensure a fair comparison, we trained both DiGress and FreeGress with matching architectural designs: specifically, we used 5 graph transformer layers; embedding size of 256 for and 128 for both E and ; 8 attention heads; , 1200 training epochs with batch size 512; Amsgrad optimizer [18 ###reference_b18###] with learning rate of and weight decay of ; . Importantly, since FreeGress does not require an auxiliary regressor, all the variants are trained with half as many parameters as DiGress (4.6 million vs. 9.2 million, approximately). A single training run required up to 12 hours on an nVidia V100 with 16GB of VRAM."
},
{
"section_id": "4.3.2",
"parent_section_id": "4.3",
"section_name": "4.3.2 ZINC-250k for logP/QED targeting.",
"text": "For these experiments we used a slightly different setup. Specifically, we trained from scratch both DiGress and FreeGress with 12 layers, 1000 training epochs, batch size 256, and doubling the embedding size of , while keeping the other hyper-parameters the same. We used larger models since ZINC-250k includes bigger (in terms of number of atoms) and more diverse molecules. In this case, we did not perform experiments with additional features since they increased training time by a prohibitive amount. FreeGress variants required approximately 16 million parameters, while DiGress used approximately 32 million including the property regressor. Training the various models required approximately 5 days each on an nVidia A100 with 80GB of VRAM."
},
{
"section_id": "4.3.3",
"parent_section_id": "4.3",
"section_name": "4.3.3 ZINC-250k for MW targeting.",
"text": "These experiments evaluate the node inference method proposed in Section 3.1 ###reference_###. As such, we targeted MW since it trivially depends on the number of nodes. The setup is similar to the logP/QED experiments, with some slight differences. In this case only, we have managed to train FreeGress with additional features and ( was excluded since it did not bring significant improvements in early trials). The node inference model was parameterized as a neural network with 2 layers with 512 units each and ReLU activations."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Results",
"text": ""
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusions",
"text": "We introduced FreeGress, a discrete DDPM for graphs with CF guidance, which allows to generate molecules complying with a pre-specified set of desired chemical properties. Through extensive experiments, we have shown that CF guidance allows to generate better (more tailored to the specification) molecules than CB guidance without sacrificing chemical validity. Additionally, we have implemented a form of learned node inference, and shown that inferring the number of nodes from the guide helps the generation whenever the molecular size is connected to the target property.\nIn future studies, we intend to tackle some of the current limitations of FreeGress to expand its applicability. A first research direction is to improve chemical validity designing the forward and reverse processes to use conditional transition matrices, thus constraining the intermediate noising and denoising steps to happen in valid chemical space. Another promising direction is to work with molecular fragments rather than atoms, as it would likely reduce the opportunities to generate an invalid intermediate structure. We intend to validate these findings also in the more general setting of arbitrary graph generation."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.123.2.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.2.1\" style=\"font-size:90%;\">Results on the QM9 dataset. First two columns are single conditioning, last column is multiple conditioning on and HOMO.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.121\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.4.2.3\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T1.3.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T1.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.2.4.1\">HOMO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T1.4.2.2\">\n+<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.2.2.1\">HOMO</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.8\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.10.8.7\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.3.1.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.4.2.1\">Val. </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.5.3.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.6.4.1\">Val. 
</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.9.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.7.5.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.10.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.10.8.6.1\">Val. </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.16.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.16.14.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.16.14.7.1\">Unconditional</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.11.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.12.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.13.11.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.14.12.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.15.13.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.16.14.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.121.120.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.121.120.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.121.120.1.1.1\">DiGress</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.120.1.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.23.21\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.17.15.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.16.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.19.17.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.20.18.4\"></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.21.19.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.22.20.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.23.21.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.28\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.24.22.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.25.23.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.26.24.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.27.25.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.28.26.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.29.27.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.30.28.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.37.35\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.31.29.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.32.30.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.33.31.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.34.32.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.35.33.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.36.34.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.37.35.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.44.42\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.38.36.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.39.37.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.40.38.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.41.39.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.42.40.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.43.41.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.44.42.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.121.121.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.121.121.2.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" 
id=\"S4.T1.121.121.2.1.1\">FreeGress</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.121.2.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.51.49\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.45.43.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.46.44.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.47.45.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.48.46.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.49.47.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.50.48.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.51.49.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.58.56\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.52.50.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.53.51.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.54.52.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.55.53.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.56.54.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.57.55.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.58.56.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.65.63\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.59.57.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.60.58.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.61.59.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.62.60.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.63.61.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.64.62.6\"></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.65.63.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.72.70\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.66.64.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.67.65.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.68.66.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.69.67.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.70.68.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.71.69.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.72.70.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.79.77\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.73.71.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.74.72.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.75.73.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.76.74.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.77.75.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.78.76.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.79.77.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.86.84\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.80.78.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.81.79.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.82.80.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.83.81.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.84.82.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.85.83.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.86.84.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.93.91\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.87.85.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.88.86.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.89.87.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.90.88.4\"></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.91.89.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.92.90.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.93.91.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.121.122.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" colspan=\"3\" id=\"S4.T1.121.122.3.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S4.T1.121.122.3.1.1\">FreeGress<span class=\"ltx_text ltx_font_upright\" id=\"S4.T1.121.122.3.1.1.1\">+extra features</span></span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.122.3.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.122.3.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.122.3.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.121.122.3.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.100.98\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.94.92.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.95.93.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.96.94.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.97.95.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.98.96.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.99.97.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.100.98.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.107.105\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.101.99.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.102.100.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.103.101.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.104.102.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.105.103.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.106.104.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.107.105.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.114.112\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"S4.T1.108.106.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.109.107.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.110.108.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.111.109.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.112.110.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.113.111.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.114.112.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.121.119\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.115.113.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.116.114.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.117.115.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.118.116.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.119.117.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.120.118.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.121.119.7\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Results on the QM9 dataset. First two columns are single conditioning, last column is multiple conditioning on and HOMO.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.91.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.92.2\" style=\"font-size:90%;\">Results on the ZINC-250k dataset. First two columns are single conditioning, last column is multiple conditioning on LogP and QED.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.89\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.89.90.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T2.89.90.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.89.90.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.89.90.1.2.1\">LogP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.89.90.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.89.90.1.3.1\">QED</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.89.90.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.89.90.1.4.1\">LogP+QED</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T2.6.6.7\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.2.1\">Val. </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.3.3.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.4.1\">Val. 
</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.6.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.1\">Val. </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.12.12.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.12.7.1\">Unconditional</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.89.91.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.89.91.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.89.91.2.1.1\">DiGress</span></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.91.2.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.19.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T2.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.18.18.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.19.19.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.26.26\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.20.20.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.22.22.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.23.23.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.24.24.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.25.25.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.26.26.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.33.33\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.27.27.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.28.28.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.29.29.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.30.30.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.31.31.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.32.32.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.33.33.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.34.34.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.35.35.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.36.36.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.37.37.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.38.38.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.39.39.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.40.40.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.89.92.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.89.92.3.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T2.89.92.3.1.1\">FreeGress</span></th>\n<td 
class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.89.92.3.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.47.47\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.41.41.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.42.42.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.43.43.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.44.44.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.45.45.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.46.46.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.47.47.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.54.54\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.48.48.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.49.49.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.50.50.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.51.51.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.52.52.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.53.53.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.54.54.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.61.61\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.55.55.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.56.56.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.57.57.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.58.58.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.59.59.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.60.60.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.61.61.7\"></td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.T2.68.68\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.62.62.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.63.63.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.64.64.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.65.65.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.66.66.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.67.67.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.68.68.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.75.75\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.69.69.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.70.70.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.71.71.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.72.72.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.73.73.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.74.74.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.75.75.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.82.82\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.76.76.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.77.77.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.78.78.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.79.79.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.80.80.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.81.81.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.82.82.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.89.89\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.83.83.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.84.84.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.85.85.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.86.86.4\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_bb\" id=\"S5.T2.87.87.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.88.88.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.89.89.7\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: Results on the ZINC-250k dataset. First two columns are single conditioning, last column is multiple conditioning on LogP and QED."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.60.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.2.1\" style=\"font-size:90%;\">Results of applying the proposed node inference method to <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T3.2.1.1\">FreeGress</span> while targeting Molecular Weight on the ZINC-250k dataset. The last column reports the improvement against a \u201cno inference\u201d <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T3.2.1.2\">FreeGress</span> variant.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.56\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T3.3.1.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T3.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.3.1\">No inference</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T3.3.1.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T3.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.4.1\">Improvement</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.9.7\">\n<td class=\"ltx_td\" id=\"S5.T3.9.7.7\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.4.2.1.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.3.2.1\">Val. 
</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.6.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.4.3.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.7.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.7.5.4.1\">Val. </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.8.6.5.1\">MAE </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.9.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.9.7.6.1\">Val. </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.56.55.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.56.55.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.56.55.1.1.1\">DiGress</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.55.1.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.13.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.10.8.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.11.9.2\">\n p m 6.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.12.10.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.13.11.4\">\n p m 2.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.13.11.5\">38.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.13.11.6\">-71.5%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.13.11.7\">-53.1%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.16.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.14.12.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.15.13.2\">\n p m 6.05</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.16.14.4\">79.3%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.16.14.3\">\n p m 2.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.16.14.5\">38.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.16.14.6\">-65.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.16.14.7\">-51.6%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.19.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.17.15.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.18.16.2\">\n p m 6.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.19.17.4\">80.2%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.19.17.3\">\n p m 3.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.19.17.5\">41.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.19.17.6\">-66.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.19.17.7\">-48.6%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.22.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.20.18.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.21.19.2\">\n p m 6.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.22.20.4\">77.8%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.22.20.3\">\n p m 2.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.22.20.5\">40.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.22.20.6\">-66.8%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.22.20.7\">-48.1%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.56.56.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.56.56.2.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T3.56.56.2.1.1\">FreeGress</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.56.2.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.56.2.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.56.2.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.56.2.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.56.2.6\"></td>\n<td class=\"ltx_td 
ltx_border_t\" id=\"S5.T3.56.56.2.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.25.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.23.21.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.24.22.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.25.23.4\">54.5%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.25.23.3\">\n p m 1.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.25.23.5\">75.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.25.23.6\">+2.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.25.23.7\">+38.9%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.28.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.26.24.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.27.25.2\">\n p m 2.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.28.26.4\">60.9%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.28.26.3\">\n p m 0.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.28.26.5\">81.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.28.26.6\">-44.9%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.28.26.7\">+33.7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.31.29\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.29.27.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.30.28.2\">\n p m 2.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.31.29.4\">50.8%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.31.29.3\">\n p m 1.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.31.29.5\">79.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.31.29.6\">-21.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.31.29.7\">+56.9%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.34.32\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.32.30.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.33.31.2\">\n p m 1.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.34.32.4\">46.5%</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T3.34.32.3\">\n p m 1.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.34.32.5\">66.6%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.34.32.6\">-13.9%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.34.32.7\">+43.2%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.37.35\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.35.33.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.36.34.2\">\n p m 5.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.37.35.4\">61.3%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.37.35.3\">\n p m 1.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.37.35.5\">81.3%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.37.35.6\">-33.9%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.37.35.7\">+32.6%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.40.38\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.38.36.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.39.37.2\">\n p m 3.19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.40.38.4\">53.9%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.40.38.3\">\n p m 1.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.40.38.5\">78.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.40.38.6\">-2.3%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.40.38.7\">+45.5%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.43.41\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.41.39.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.42.40.2\">\n p m 4.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.43.41.4\">48.4%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.43.41.3\">\n p m 2.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.43.41.5\">75.0%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.43.41.6\">-8.5%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.43.41.7\">+54.9%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.56.57.3\">\n<td class=\"ltx_td ltx_align_left 
ltx_border_t\" colspan=\"3\" id=\"S5.T3.56.57.3.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T3.56.57.3.1.1\">FreeGress<span class=\"ltx_text ltx_font_upright\" id=\"S5.T3.56.57.3.1.1.1\">+extra features</span></span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.57.3.2\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.57.3.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.57.3.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.56.57.3.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.47.45\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.44.42.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.45.43.2\">\n p m 4.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.47.45.5\">60.1%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.46.44.3\">\n p m 0.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.47.45.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.47.45.6\">-91.85%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.47.45.7\">+43.25%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.50.48\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.48.46.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.49.47.2\">\n p m 9.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.50.48.4\">82.1%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.50.48.3\">\n p m 0.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.50.48.5\">83.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.50.48.6\">-91.87%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.50.48.7\">+1.95%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.53.51\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.51.49.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.52.50.2\">\n p m 11.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.53.51.4\">73.3%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.53.51.3\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.53.51.5\">84.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.53.51.6\">-93.08%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.53.51.7\">+15.55%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.56.54\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.54.52.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.55.53.2\">\n p m 13.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.56.54.4\">63.2%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.56.54.3\">\n p m 0.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.56.54.5\">81.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.56.54.6\">-92.97%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.56.54.7\">+28.48%</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 132 |
+
"capture": "Table 3: Results of applying the proposed node inference method to FreeGress while targeting Molecular Weight on the ZINC-250k dataset. The last column reports the improvement against a \u201cno inference\u201d FreeGress variant.\n"
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
"image_paths": {
|
| 136 |
+
"1(a)": {
|
| 137 |
+
"figure_path": "2312.17397v2_figure_1(a).png",
|
| 138 |
+
"caption": "Figure 1: A depiction of FreeGress. The forward process, which gradually corrupts a molecule into a random graph, goes from left to the right. The reverse process, which denoises the original graph, goes from right to left. Note that the reverse process allows for a conditioning vector \ud835\udc9a\ud835\udc9a\\bm{y}bold_italic_y and a number of nodes n\ud835\udc5bnitalic_n sampled from a trained neural network p\u03b5subscript\ud835\udc5d\ud835\udf00p_{\\varepsilon}italic_p start_POSTSUBSCRIPT italic_\u03b5 end_POSTSUBSCRIPT.",
|
| 139 |
+
"url": "http://arxiv.org/html/2312.17397v2/"
|
| 140 |
+
},
|
| 141 |
+
"1(b)": {
|
| 142 |
+
"figure_path": "2312.17397v2_figure_1(b).png",
|
| 143 |
+
"caption": "Figure 1: A depiction of FreeGress. The forward process, which gradually corrupts a molecule into a random graph, goes from left to the right. The reverse process, which denoises the original graph, goes from right to left. Note that the reverse process allows for a conditioning vector \ud835\udc9a\ud835\udc9a\\bm{y}bold_italic_y and a number of nodes n\ud835\udc5bnitalic_n sampled from a trained neural network p\u03b5subscript\ud835\udc5d\ud835\udf00p_{\\varepsilon}italic_p start_POSTSUBSCRIPT italic_\u03b5 end_POSTSUBSCRIPT.",
|
| 144 |
+
"url": "http://arxiv.org/html/2312.17397v2/"
|
| 145 |
+
},
|
| 146 |
+
"1(c)": {
|
| 147 |
+
"figure_path": "2312.17397v2_figure_1(c).png",
|
| 148 |
+
"caption": "Figure 1: A depiction of FreeGress. The forward process, which gradually corrupts a molecule into a random graph, goes from left to the right. The reverse process, which denoises the original graph, goes from right to left. Note that the reverse process allows for a conditioning vector \ud835\udc9a\ud835\udc9a\\bm{y}bold_italic_y and a number of nodes n\ud835\udc5bnitalic_n sampled from a trained neural network p\u03b5subscript\ud835\udc5d\ud835\udf00p_{\\varepsilon}italic_p start_POSTSUBSCRIPT italic_\u03b5 end_POSTSUBSCRIPT.",
|
| 149 |
+
"url": "http://arxiv.org/html/2312.17397v2/"
|
| 150 |
+
},
|
| 151 |
+
"3(a)": {
|
| 152 |
+
"figure_path": "2312.17397v2_figure_3(a).png",
|
| 153 |
+
"caption": "Input \u03bc\ud835\udf07\\muitalic_\u03bc: 0.0603\nEst. \u03bc\ud835\udf07\\muitalic_\u03bc: 0.0463\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 154 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/mu/mu_tgt00603_est00463.png"
|
| 155 |
+
},
|
| 156 |
+
"3(b)": {
|
| 157 |
+
"figure_path": "2312.17397v2_figure_3(b).png",
|
| 158 |
+
"caption": "Input \u03bc\ud835\udf07\\muitalic_\u03bc: 4.2338\nEst. \u03bc\ud835\udf07\\muitalic_\u03bc: 4.1238\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 159 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/mu/mu_tgt42338_est41238.png"
|
| 160 |
+
},
|
| 161 |
+
"3(c)": {
|
| 162 |
+
"figure_path": "2312.17397v2_figure_3(c).png",
|
| 163 |
+
"caption": "Input HOMO: -6.8083\nEst. HOMO: -6.8604\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 164 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/homo/HOMO_tgtm68083_estm68604.png"
|
| 165 |
+
},
|
| 166 |
+
"3(d)": {
|
| 167 |
+
"figure_path": "2312.17397v2_figure_3(d).png",
|
| 168 |
+
"caption": "Input HOMO: -7.4559\nEst. HOMO: -7.4632\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 169 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/homo/HOMO_tgtm74559_estm74632.png"
|
| 170 |
+
},
|
| 171 |
+
"3(e)": {
|
| 172 |
+
"figure_path": "2312.17397v2_figure_3(e).png",
|
| 173 |
+
"caption": "Input LogP: -0.9834\nEst. LogP: -0.9246\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 174 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/logp/logp_tgtm09834_estm09246.png"
|
| 175 |
+
},
|
| 176 |
+
"3(f)": {
|
| 177 |
+
"figure_path": "2312.17397v2_figure_3(f).png",
|
| 178 |
+
"caption": "Input QED: 0.5034\nEst. QED: 0.5297\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 179 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/qed/qed_tgt05034_est05297.png"
|
| 180 |
+
},
|
| 181 |
+
"3(g)": {
|
| 182 |
+
"figure_path": "2312.17397v2_figure_3(g).png",
|
| 183 |
+
"caption": "Input MW: 159.05\nEst. MW: 155.03\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 184 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/mw/mw_tgt15905_est15503.png"
|
| 185 |
+
},
|
| 186 |
+
"3(h)": {
|
| 187 |
+
"figure_path": "2312.17397v2_figure_3(h).png",
|
| 188 |
+
"caption": "Input MW: 401.13\nEst. MW: 405.10\nFigure 3: Curated molecules from the QM9 (top row) and ZINC-250k (bottom row). The input conditioning value and the one estimated (Est.) after generation are displayed below each molecule.",
|
| 189 |
+
"url": "http://arxiv.org/html/2312.17397v2/extracted/5892967/figures/molecules/mw/mw_tgt40113_est40510.png"
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
"validation": true,
|
| 193 |
+
"references": [],
|
| 194 |
+
"url": "http://arxiv.org/html/2312.17397v2"
|
| 195 |
+
}
|
20241001/2401.00416v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241001/2401.01643v3.json
ADDED
|
@@ -0,0 +1,220 @@
| 1 |
+
{
|
| 2 |
+
"title": "S3Net: Innovating Stereo Matching and Semantic Segmentation with a Single-Branch Semantic Stereo Network in Satellite Epipolar Imagery",
|
| 3 |
+
"abstract": "Stereo matching and semantic segmentation are significant tasks in binocular satellite 3D reconstruction. However, previous studies primarily view these as independent parallel tasks, lacking an integrated multitask learning framework. This work introduces a solution, the Single-branch Semantic Stereo Network (S3Net), which innovatively combines semantic segmentation and stereo matching using Self-Fuse and Mutual-Fuse modules. Unlike preceding methods that utilize semantic or disparity information independently, our method identifies and leverages the intrinsic link between these two tasks, leading to a more accurate understanding of semantic information and disparity estimation. Comparative testing on the US3D dataset proves the effectiveness of our S3Net. Our model improves the mIoU in semantic segmentation from 61.38 to 67.39, and reduces the D1-Error and average endpoint error (EPE) in disparity estimation from 10.051 to 9.579 and 1.439 to 1.403 respectively, surpassing existing competitive methods. Our codes are available at: https://github.com/CVEO/S3Net.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Stereo matching, also known as disparity estimation, uses corrected epipolar images (binocular images) to determine depth information for 3D reconstruction and environmental perception. This is achieved by calculating the horizontal pixel offset of tie-points [1 ###reference_b1###]. Various deep networks for image disparity estimation have achieved desirable results on RGB images, thanks to the rapid development of deep learning. However, these methods are susceptible to the data distribution of binocular images, which may result in training instability and confusion in disparity estimation. This limits their application in binocular or multi-view stereo satellite images [2 ###reference_b2###].\nTo address this issue, recent research has combined semantic segmentation and stereo matching tasks on satellite epipolar images. This has led to a new paradigm called satellite semantic stereo [2 ###reference_b2###]. Semantic features of each pixel can effectively tackle issues such as blurred object disparity boundaries in disparity estimation. Meanwhile, disparity networks can help distinguish foreground and background, addressing a recurring challenge in semantic segmentation. Despite these advancements, most research treats stereo matching and semantic segmentation as separate tasks or focuses on improving their accuracy independently [3 ###reference_b3###], leading to inadequate utilization of their close connection.\nIn this study, we introduce the end-to-end Single-branch Semantic Stereo Network (S3Net), a novel approach that unifies semantic segmentation and disparity estimation to leverage the inherent correlation between semantic content and disparity. In doing so, it captures their inherent connection, thus improving semantic understanding and disparity accuracy. This closely coupled multi-task learning allows for a better understanding of complex scenes, consequently boosting robustness and generalizability."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Methodology",
|
| 15 |
+
"text": "###figure_1### The overall architecture of S3Net is shown in Fig.1 ###reference_###. Unlike traditional designs, our network uses a single branch configuration. It starts with the Disparity-Classification Spatial Feature Extraction Module (DCSFEM) that extracts features from the left and right images, generating a 4D cost volume containing semantic and disparity information. The Mutual-Fuse Module (MFM) then processes this volume, integrating disparity and semantic information. Finally, subjecting the cost volume to both trilinear and bilinear upsampling strategies results in two outputs at the original resolution: a disparity map and a pixel-level classification map."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Disparity-Classification Spatial Feature Extraction Module (DCSFEM)",
|
| 21 |
+
"text": "We design a weight-sharing DCSFEM to merge semantic and disparity tasks, extracting features from both left and right images. This module consists of disparity and semantic feature extraction, using multi-scale and sequence processing strategies respectively. Both processes undergo four times downsampling. We introduce a Self-Fuse Module (SFM, see 2.3 ###reference_###) for multi-scale disparity features, and concatenate the results with semantic features for synergy. The multi-scale features of the image pairs are then stacked to form a 4D cost volume."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Cost Volume",
|
| 27 |
+
"text": "Unlike traditional cost volume stacking methods such as PSMNet [4 ###reference_b4###] and S2Net [3 ###reference_b3###], we employ a selective approach towards stacking the multi-scale image features from both the left and right images, after they are processed through DCSFEM. The resultant structure forms a 4D cost volume (represented as ) with dimensions corresponding to the height, width, number of disparities, and number of feature maps , which inherently includes an array of rich disparity and semantic features. The topmost layer of disparity in this 4D cost volume is reserved for semantic information, whereas the successive layers encapsulate disparity information from multi-scale features."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Self-Fuse Module (SFM)",
|
| 33 |
+
"text": "To enhance the network\u2019s ability to handle noise interference in images and fully excavate intermediate layer information to more comprehensively characterize significant features in images, we have constructed an adaptive SFM module. This module can be divided into 2D and 3D types. Taking the 2D type as an example, it processes the input features through a dual-branch method, each branch applying a similar 2D convolution but with different weight parameters. The output features of the two branches are multiplied element-by-element according to their respective channels, and then output after the same operation. This module allows the network to adaptively control the information flow and achieve dynamic regulation and filtering on all feature information, thereby improving the network\u2019s expressive ability and learning efficiency, making the network more resistant to interference."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Mutual-Fuse Module (MFM)",
|
| 39 |
+
"text": "This subsection details the use of 3D convolution operations in the MFM module to process three cost volumes (cost1, cost2, cost3) and output processed volumes. A total of three rounds need to be processed. In the first round, only cost1 (the initial cost volume) is inputted. The module begins with 3D SFM (mentioned in 2.3 ###reference_###) processing on cost1, enabling the network to self-adjust information flow and capture practical information. This is followed by disparity dimension isolation, facilitating the fusion of semantic and disparity features. In order to better refine and integrate cost-volume information, we downsample the fused features and generate cost2 and cost3 at different stages via skip-connection, which serve as the input cost2 and cost3 for the next round. Finally, through upsampling, we restore the original shape and connect to the semantic layer as the input cost1 for the next round. After three rounds with different weights, we take the final cost1 as the input for subsequent processing."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Experiments",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.1",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Experimental settings",
|
| 51 |
+
"text": "We adopted the US3D dataset [2 ###reference_b2###] for training and evaluation in this study. The dataset includes 4292 stereo image pairs of size , each with a classification and disparity map. We cropped 3500 images to for training, used 338 for validation, and 454 for the test.\nWe adopted mIoU as the evaluation metric for semantic segmentation, EPE and D1-Error as the evaluation metrics for disparity estimation, and mIoU-3 [2 ###reference_b2###] as the evaluation metric for considering both disparity and semantic segmentation performance.\nWe implemented our method based on the PyTorch 1.8.1 framework. When training these models, we set the batch size as four. All methods employed in this experiment were trained and tested on a workstation with Nvidia Tesla V100 16-GB GPUs."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Ablation Study",
|
| 57 |
+
"text": "We evaluated our network\u2019s key modules (SFM, DCSFEM, and MFM) on the US3D dataset. We tested them separately, maintaining consistent dataset distribution. Our three-part evaluation included testing dual tasks without SFM, analyzing the disparity module in DCSFEM and MFM, and assessing the semantic module in DCSFEM and MFM. Table 1 ###reference_### shows improved dual task accuracy when SFM supports the integrated modules in DCSFEM and MFM."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.3",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Comparative Analysis with Other Methods",
|
| 63 |
+
"text": ""
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3.1",
|
| 67 |
+
"parent_section_id": "3.3",
|
| 68 |
+
"section_name": "3.3.1 Compared Methods",
|
| 69 |
+
"text": "Our analysis primarily involves two different aspects: For the purpose of disparity estimation comparison, we conduct an exhaustive evaluation of currently superior algorithms such as PSMNet [4 ###reference_b4###], GwcNet [5 ###reference_b5###], GANet [6 ###reference_b6###], CFNet [7 ###reference_b7###], and S2Net [3 ###reference_b3###]; In evaluating the task of semantic segmentation, we have selected advanced segmentation algorithms including SegFormer [8 ###reference_b8###], PSPNet [9 ###reference_b9###], SDFCNv2 [10 ###reference_b10###], and HRNetV2 [11 ###reference_b11###]."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.3.2",
|
| 73 |
+
"parent_section_id": "3.3",
|
| 74 |
+
"section_name": "3.3.2 Stereo Matching task",
|
| 75 |
+
"text": "As shown in Table 2 ###reference_###, our proposed S3Net significantly outperformed other methods, demonstrating lower D1-Error (9.579) and EPE (1.403). As shown in Fig.2 ###reference_###, although PSMNet and S2Net both showed good results, our method presented more detailed and accurate disparity details, especially at object edges and in areas with rich textures. As shown in the red box in the Fig.2 ###reference_###, our method better reflected the outline of the building and the edge information of the water."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.3.3",
|
| 79 |
+
"parent_section_id": "3.3",
|
| 80 |
+
"section_name": "3.3.3 Semantic Segmentation task",
|
| 81 |
+
"text": "According to the Table 3 ###reference_###, our S3Net demonstrates outstanding performance across different categories and performs better in specific scenarios (such as Water and Bridge). As shown in Fig.3 ###reference_###, although the PSPNet and HRNetV2 respectively show good results on water bodies and buildings, our method presents clearer contours for categories such as buildings, trees, and water.\n###figure_2### ###figure_3###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Conclusion",
|
| 87 |
+
"text": "In the research, we introduce a novel multitask learning framework called the (S3Net) to simultaneously infer disparity maps and classification maps. The uniqueness of our method stems from capitalizing on the strong correlation between these tasks, effectively integrating them via self-fusion and mutual fusion modules for mutual enhancement. Notably, the evaluation results obtained from the US3D dataset and the comparison with other models affirm the feasibility and exceptional performance of our task framework. In the future, we hope to extend the results of this study to applications in multiview stereo matching and 3D reconstruction of multi-sensor data, and further expand the experimentation of this method in various imagery scenarios."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Acknowledgement",
|
| 93 |
+
"text": "This research was funded by the National Natural Science Foundation of China (No.42101346), the China Postdoctoral Science Foundation (No.2020M680109), and the Wuhan East Lake High-tech Development Zone Program of Unveiling and Commanding (No.2023KJB212)."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [],
|
| 97 |
+
"tables": {
|
| 98 |
+
"1": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.17.1.1\">Table 1</span>: </span>Results of Ablation Study. (DM and SM represent the disparity module and semantic module in DCSFEM, DCV and SCV represent the disparity cost volume and semantic cost volume in MFM)</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.15\" style=\"width:433.6pt;height:143.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(53.7pt,-17.8pt) scale(1.32898838052018,1.32898838052018) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.15.15\">\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.15.16.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.15.15.16.1.1\">SFM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.15.15.16.2\">DCSFEM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.15.15.16.3\">MFM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.15.16.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.15.15.16.4.1\">mIoU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.15.16.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.15.15.16.5.1\">mIoU-3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.15.16.6\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.15.15.16.6.1\">D1-Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.15.16.7\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.15.15.16.7.1\">EPE</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.15.15.17.1\">DM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.15.15.17.2\">SM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.15.15.17.3\">DCV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.15.15.17.4\">SCV</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.6\">64.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.7\">62.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.8\">10.443</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4.9\">1.483</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.8\">11.391</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.7.9\">1.567</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.8.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.5\">-</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S3.T1.10.10.10.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.6\">52.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.7\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.8\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.10.9\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.11.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.12.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.13.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.14.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.15.15.15.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.15.15.15.6\">67.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.15.15.15.7\">66.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.15.15.15.8\">9.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.15.15.15.9\">1.403</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 100 |
+
"capture": "Table 1: Results of Ablation Study. (DM and SM represent the disparity module and semantic module in DCSFEM, DCV and SCV represent the disparity cost volume and semantic cost volume in MFM)"
|
| 101 |
+
},
|
| 102 |
+
"2": {
|
| 103 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.1.1\">Table 2</span>: </span>Results of stereo matching on the US3D test set</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.1\" style=\"width:433.6pt;height:75.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(61.5pt,-10.7pt) scale(1.39602022731312,1.39602022731312) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.1.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.2\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.3\">PSMNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.4\">GwcNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.5\">GANet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.6\">CFNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.1\">S<sup class=\"ltx_sup\" id=\"S3.T2.1.1.1.1.1\">2</sup>Net</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.7\">Ours</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.1\">D1-Error</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.2\">11.872</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.3\">11.387</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.4\">10.876</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.5\">11.024</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.6\">10.051</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.7\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T2.1.1.2.7.1\">9.579</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.1\">EPE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.2\">1.695</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.3\">1.618</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.4\">1.526</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.5\">1.57</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.6\">1.439</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.3.7.1\">1.403</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 104 |
+
"capture": "Table 2: Results of stereo matching on the US3D test set"
|
| 105 |
+
},
|
| 106 |
+
"3": {
|
| 107 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.1.1\">Table 3</span>: </span>Results of semantic segmentation on the US3D test set</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.3\" style=\"width:433.6pt;height:179.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(64.4pt,-26.6pt) scale(1.42222068299996,1.42222068299996) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.3.1\">\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.1\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.2\">SDFCNv2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.3\">SegFormer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.4\">PSPNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.5\">HRNetV2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.1.1.6\">Ours</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.1\">Ground</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.2\">79.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.3\">80.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.4\">78.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.5\">80.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.3.1.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.2.6.1\">81.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.3.1\">Tree</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T3.3.1.3.2\">64.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.3.3\">64.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.3.4\">59.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.3.5\">65.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.3.6.1\">66.39</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.1\">Building</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.2\">68.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.3\">71.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.4\">69.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.5\">71.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.4.6.1\">73.45</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.1\">Water</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.2\">65.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.3\">59.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.4\">68.82</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.5\">68.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.5.6.1\">79.23</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.1\">Bridge</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.2\">14.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.3\">25.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.4\">27.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.5\">20.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.1.6.6\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T3.3.1.6.6.1\">35.96</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.1\">mIoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.2\">58.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.3\">60.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.4\">60.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.5\">61.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.3.1.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.7.6.1\">67.39</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 108 |
+
"capture": "Table 3: Results of semantic segmentation on the US3D test set"
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
"image_paths": {
|
| 112 |
+
"1": {
|
| 113 |
+
"figure_path": "2401.01643v3_figure_1.png",
|
| 114 |
+
"caption": "Fig. 1: Framework of the Single-branch Semantic Stereo Network (S3Net).",
|
| 115 |
+
"url": "http://arxiv.org/html/2401.01643v3/x1.png"
|
| 116 |
+
},
|
| 117 |
+
"2": {
|
| 118 |
+
"figure_path": "2401.01643v3_figure_2.png",
|
| 119 |
+
"caption": "Fig. 2: The comparison of S3Net with other methods in disparity estimation tasks on the US3D dataset.",
|
| 120 |
+
"url": "http://arxiv.org/html/2401.01643v3/x2.png"
|
| 121 |
+
},
|
| 122 |
+
"3": {
|
| 123 |
+
"figure_path": "2401.01643v3_figure_3.png",
|
| 124 |
+
"caption": "Fig. 3: The comparison of S3Net with other methods in semantic segmentation tasks on the US3D dataset.",
|
| 125 |
+
"url": "http://arxiv.org/html/2401.01643v3/x3.png"
|
| 126 |
+
}
|
| 127 |
+
},
|
| 128 |
+
"validation": true,
|
| 129 |
+
"references": [
|
| 130 |
+
{
|
| 131 |
+
"1": {
|
| 132 |
+
"title": "\u201cA linear pushbroom satellite image epipolar resampling method for digital surface model generation,\u201d",
|
| 133 |
+
"author": "Puyun Liao, Guanzhou Chen, Xiaodong Zhang, Kun Zhu, Yuanfu Gong, Tong Wang, Xianwei Li, and Haobo Yang,",
|
| 134 |
+
"venue": "ISPRS Journal of Photogrammetry and Remote Sensing, vol. 190, pp. 56\u201368, 2022.",
|
| 135 |
+
"url": null
|
| 136 |
+
}
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"2": {
|
| 140 |
+
"title": "\u201cSemantic stereo for incidental satellite images,\u201d",
|
| 141 |
+
"author": "Marc Bosch, Kevin Foster, Gordon Christie, Sean Wang, Gregory D Hager, and Myron Brown,",
|
| 142 |
+
"venue": "in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019, pp. 1524\u20131532.",
|
| 143 |
+
"url": null
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"3": {
|
| 148 |
+
"title": "\u201cS2net: A multitask learning network for semantic stereo of satellite image pairs,\u201d",
|
| 149 |
+
"author": "Puyun Liao, Xiaodong Zhang, Guanzhou Chen, Tong Wang, Xianwei Li, Haobo Yang, Wenlin Zhou, Chanjuan He, and Qing Wang,",
|
| 150 |
+
"venue": "IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1\u201313, 2024.",
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"4": {
|
| 156 |
+
"title": "\u201cPyramid stereo matching network,\u201d",
|
| 157 |
+
"author": "Jia-Ren Chang and Yong-Sheng Chen,",
|
| 158 |
+
"venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5410\u20135418.",
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"5": {
|
| 164 |
+
"title": "\u201cGroup-wise correlation stereo network,\u201d",
|
| 165 |
+
"author": "Xiaoyang Guo, Kai Yang, Wukui Yang, Xiaogang Wang, and Hongsheng Li,",
|
| 166 |
+
"venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 3273\u20133282.",
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"6": {
|
| 172 |
+
"title": "\u201cGa-net: Guided aggregation net for end-to-end stereo matching,\u201d",
|
| 173 |
+
"author": "Feihu Zhang, Victor Prisacariu, Ruigang Yang, and Philip HS Torr,",
|
| 174 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 185\u2013194.",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"7": {
|
| 180 |
+
"title": "\u201cCfnet: Cascade and fused cost volume for robust stereo matching,\u201d",
|
| 181 |
+
"author": "Zhelun Shen, Yuchao Dai, and Zhibo Rao,",
|
| 182 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 13906\u201313915.",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"8": {
|
| 188 |
+
"title": "\u201cSegformer: Simple and efficient design for semantic segmentation with transformers,\u201d",
|
| 189 |
+
"author": "Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo,",
|
| 190 |
+
"venue": "Advances in Neural Information Processing Systems, vol. 34, pp. 12077\u201312090, 2021.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"9": {
|
| 196 |
+
"title": "\u201cPyramid scene parsing network,\u201d",
|
| 197 |
+
"author": "Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia,",
|
| 198 |
+
"venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2881\u20132890.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"10": {
|
| 204 |
+
"title": "\u201cSdfcnv2: An improved fcn framework for remote sensing images semantic segmentation,\u201d",
|
| 205 |
+
"author": "Guanzhou Chen, Xiaoliang Tan, Beibei Guo, Kun Zhu, Puyun Liao, Tong Wang, Qing Wang, and Xiaodong Zhang,",
|
| 206 |
+
"venue": "Remote Sensing, vol. 13, no. 23, pp. 4902, 2021.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"11": {
|
| 212 |
+
"title": "\u201cHigh-resolution representations for labeling pixels and regions,\u201d",
|
| 213 |
+
"author": "Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, and Jingdong Wang,",
|
| 214 |
+
"venue": "arXiv preprint arXiv:1904.04514, 2019.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
}
|
| 218 |
+
],
|
| 219 |
+
"url": "http://arxiv.org/html/2401.01643v3"
|
| 220 |
+
}
|
20241001/2401.04978v2.json
ADDED
|
@@ -0,0 +1,549 @@
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Closed-Form Interpretation of Neural Network Classifiers with Symbolic Gradients",
|
| 3 |
+
"abstract": "I introduce a unified framework for finding a closed-form interpretation of any single neuron in an artificial neural network. Using this framework I demonstrate how to interpret neural network classifiers to reveal closed-form expressions of the concepts encoded in their decision boundaries. In contrast to neural network-based regression, for classification, it is in general impossible to express the neural network in the form of a symbolic equation even if the neural network itself bases its classification on a quantity that can be written as a closed-form equation. The interpretation framework is based on embedding trained neural networks into an equivalence class of functions that encode the same concept. I interpret these neural networks by finding an intersection between the equivalence class and human-readable equations defined by a symbolic search space. The approach is not limited to classifiers or full neural networks and can be applied to arbitrary neurons in hidden layers or latent spaces.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Artificial neural networks have become the most powerful tool in a wide range of machine learning topics such as image classification, speech recognition, or natural language processing. In many fields, artificial neural networks have achieved better than human performance, and the number of these fields is consistently growing. The larger and more powerful the artificial neural networks become the more elusive is their decision-making to us humans. The black-box nature of these models makes it very challenging to comprehend their decision-making processes.\nInterpreting artificial neural networks is a central endeavor in many applications. As artificial neural networks are entering domains where a detailed understanding of the decision-making process is crucial, it becomes increasingly important to interpret their inner workings. These fields contain safety-critical, medical [1 ###reference_b1###, 2 ###reference_b2###] applications where human lives could be in danger or the legislative domains [3 ###reference_b3###, 4 ###reference_b4###] in which each decision needs to be supported by law. Self-driving cars are in danger of causing accidents because of a lack of understanding of the underlying machine learning algorithms [5 ###reference_b5###]. Further, in many scientific applications [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###] the primary utility of a model is to help scientists understand an underlying phenomenon much more than making accurate predictions.\nThere has been a lot of progress in identifying which input features contribute the most to a certain decision [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]. Using these approaches it is possible to identify where danger lingers in an image of a traffic situation or find malignant cancer cells in MRI scans. 
However, from this understanding of low-level features, it is still a long road to develop a comprehensive understanding of the concepts an artificial neural network learns in order to make its predictions. The review [18 ###reference_b18###] provides an overview of explainable artificial intelligence, covering methods for understanding, visualizing, and interpreting deep learning models.\nThere are many examples, especially in artificial scientific discovery, where a low-level interpretation is not enough. In many scientific fields, it is imperative to gain a deep understanding of the concept that is underlying neural network predictions. Some of the most central scientific tasks are a) understanding the dynamics of systems b) explaining new phases of matter and c) finding conserved quantities and symmetry invariants. In recent years, many scientists have developed methods to approach these problems. There are methods for the automated discovery of the dynamics of a system [19 ###reference_b19###], symbolic regression for conserved quantities [20 ###reference_b20###], and support vector machines that map out phase diagrams [21 ###reference_b21###, 22 ###reference_b22###]. These methods are inherently interpretable. Similarly, much more powerful neural network approaches have been developed to address these problems, which tend to be more successful in learning the underlying concepts but conceal these concepts from a human scientist. However, there have been publications that report successful strategies to discover the underlying concepts from neural networks. These include understanding the equations behind a dynamical system [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], extracting phase signatures from a neural network [9 ###reference_b9###, 10 ###reference_b10###] or revealing conserved quantities and symmetry invariants with neural networks [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###]. 
It is important to note that many of the above examples are tackled by neural network classifiers while most symbolic interpretation literature focuses on regression problems.\nTo address the problem of interpreting neural network classifiers, it is crucial to acknowledge that concepts encoded in an artificial neural network are stored in a highly convoluted and elusive manner. A neural network does not encode them in a human-readable form, but through complex interactions of thousands if not millions of neurons/perceptrons. In other words, even if the neural network learns a concept that could be put into a human-readable form, a neural network hides this concept through complex and highly nonlinear transformations.\nThis observation leads to the initial idea behind the interpretation framework proposed in this manuscript. Is it possible to extract a human-readable concept that is learned by a neural network by getting rid of the elusive transformation that conceals it?\nThe current paper provides an answer to this question by embedding the neural network into an equivalence class of functions that learn the same concept. A closed-form expression of this concept can be recovered by finding the intersection of this equivalence class with the space of human-readable functions defined through a symbolic search space.\nTo the best of my knowledge, there are no successful approaches that manage to find closed-form expressions of the high-level concepts learned by arbitrary neurons within a neural network. 
Much research has been done to find symbolic solutions to regression problems of which some can be extended to classification problems.\nThe most closely related works that also embed the neural network into another function space are about symbolic metamodels[23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###], where the authors reproduce the output of an artificial neural network by minimizing the difference between the neural network output and so-called Meijer G-functions. However, this approach has so far only been successfully employed for low-dimensional regression problems.\nIt was proposed to train a neural network to map the weights of a neural network to a feature vector representing symbolic functions [26 ###reference_b26###].\nWhile their training efficiency and accuracy cannot compete with artificial neural networks, it is possible to directly employ symbolic regression approaches to obtain a symbolic solution that replicates the behaviour of a neural network. Many of these algorithms can be extended to classification problems through margin or hinge loss functions. Common symbolic regression libraries that mostly employ genetic algorithms include Eureqa [20 ###reference_b20###], Operon C++[27 ###reference_b27###], PySINDy [28 ###reference_b28###], Feyn[29 ###reference_b29###], Gene-pool Optimal Mixing Evolutionary Algorithm [30 ###reference_b30###], GPLearn [31 ###reference_b31###] and PySR[32 ###reference_b32###].\nThere are also symbolic regression algorithms that employ neural networks, for example, EQL with embedded symbolic layers[33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###], deep symbolic regression uses recurrent neural networks [36 ###reference_b36###], and symbolic regression with transformers[37 ###reference_b37###, 38 ###reference_b38###]. 
Further, AI Feynman uses neural networks to simplify expressions [39 ###reference_b39###].\nAn overview of interpretable scientific discovery with symbolic Regression can be found in[40 ###reference_b40###].\nIt is important to note that the process of replicating the output of a neural network faces a fundamental flaw: there is no guarantee that this procedure captures the same features that a neural network learns to solve a specific problem. Hence, replicating the output of a neural network is different from interpreting a neural network. This observation is demonstrated in section 4.3 ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Framework Overview",
|
| 15 |
+
"text": "###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Interpreting an Artificial Neural Network",
|
| 21 |
+
"text": "Interpreting an artificial neural network means formulating a mapping between an abstract mechanism or encoding into a domain that a human can understand. Human understanding is a vaguely defined concept since human capabilities range from simply perceiving common objects in the real world to expert knowledge in their respective field. I suggest categorizing interpretations along the following four different attributes, which is more fine-grained than the categories proposed in [18 ###reference_b18###]:\nA) Mechanistic vs. Functional: Mechanistic interpretation is concerned with explaining the mechanisms the neural network employs to solve a problem or to implement a function. Functional interpretation is about understanding the concepts by which a neural network relates the input to the output.\nB) Local vs. Global: Local interpretation is about understanding which elements of a certain data point influence the prediction, while global interpretation explains what features are relevant for a learned concept in general.\nC) Verify vs. Discover: A machine learning practitioner might have a set of hypotheses of concepts in mind that a neural network might learn in order to solve a given task. It is often easy to verify or falsify if these concepts are included in the features learned by an artificial neural network. However, discovering an unknown concept without a list of potential hypotheses is in general very hard.\nD) Low-Level vs. High-Level Features: The neurons of the network jointly implement a complex nonlinear mapping from the input to the output through potentially several hidden layers. The global concept that must be interpreted is usually represented by a neuron in the final layer. 
Low-level features that are close to the input are usually rather interpretable, while concepts in the final layers or some latent space are in general very abstract.\nInterpreting an artificial neural network through the framework presented in this paper falls into the category of a functional, global, discovering, high-level interpretation."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Equivalence Class of Functions Containing the Same Information",
|
| 27 |
+
"text": "The goal of the interpretation framework outlined in this manuscript is to find the intersection of the equivalence class of functions that contain the same information as the output neuron (or any other neuron) and the set of human-readable equations, see fig. 1 ###reference_###b.\nFor this purpose, it is important to define what it means that a certain neuron in an artificial neural network learns to encode a certain quantity. This neuron can be seen as a function f(x) that depends on the input x. f contains the full information about a certain quantity q(x) if q can be faithfully reconstructed from f. Conversely, if f only contains information from q it is possible to reconstruct f from the knowledge of q. In mathematical terms that means that there exists an invertible function g such that f(x) = g(q(x)).\nFor the purpose of this work, f, q \u2208 C\u00b9(M, \u211d) are assumed to be continuously differentiable functions, where C\u00b9(M, \u211d) denotes the vector space of differentiable functions from M to \u211d. Further, M \u2282 \u211d\u207f is the data manifold which is required to be compact and simply connected. By employing these definitions it is possible to define an equivalence set of all functions that extract the same scalar-valued information from the data:\nF_q = { f \u2208 C\u00b9(M, \u211d) | f = g \u2218 q, g \u2208 C\u00b9(\u211d, \u211d) invertible }.\nIf f is the output of an artificial neural network, this defines an equivalence relation between all functions that base their decision on the same quantity q. Several different neural networks might encode the same function because, in the context of this section, it does not matter how the output is calculated mechanistically. All realizations of a neural network that is symmetric under permutations or other changes in its weights would still be functionally equivalent and thus the same function within the equivalence class F_q.\nSince it is computationally very inefficient to verify whether two functions or neural networks belong to the equivalence class F_q, according to the above definition, it is convenient to reformulate it. Taking the gradient of a function f = g \u2218 q with respect to the input x yields\n\u2207f(x) = g\u2032(q(x)) \u2207q(x).\nSince g is invertible, g is strictly monotonic, thus g\u2032(q(x)) \u2260 0, hence the gradients of f and q are always parallel at every x \u2208 M.\nUsing this property, I define the equivalence set H_q, with q \u2208 C\u00b9(M, \u211d), which will later form the basis for the neural network interpretation algorithm:\nH_q = { f \u2208 C\u00b9(M, \u211d) | \u2207f(x)/\u2016\u2207f(x)\u2016 = \u00b1 \u2207q(x)/\u2016\u2207q(x)\u2016 for all x \u2208 M }.\nHere \u2016\u00b7\u2016 is the Euclidean norm used to normalize the gradients to unit length. The equivalence classes F_q and H_q satisfy the definition of equivalence classes: a) Reflexivity: f \u223c f, b) Symmetry: f \u223c h implies h \u223c f, c) Transitivity: if f \u223c h and h \u223c k then f \u223c k. Trivially, if f \u2208 F_q then f \u2208 H_q. It can be proven by using the above definitions that F_q = H_q."
Taking the gradient of a function with respect to the input yields\nSince is invertible, is strictly monotonic, thus , hence the gradients of and are always parallel at every .\nUsing this property, I define the equivalence set , with , which will later form the basis for the neural network interpretation algorithm:\nHere is the Euclidean norm used to normalize the gradients to unit length. The equivalence classes and satisfy the definition of equivalence classes: a) Reflexivity: , b) Symmetry: implies , c) Transitivity: if and then . Trivially, if then . It can be proven by using the above definitions that ."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Equivalence of Equivalence Classes",
"text": ""
},
{
"section_id": "2.3.1",
"parent_section_id": "2.3",
"section_name": "2.3.1 Proof",
"text": "Let be continuously differentiable functions ( is the vector space of differentiable functions from to ) and be the data manifold which is required to be compact and simply connected. Then where\nand\nProof: One can see that for each function , hence .\nIt remains to be shown that for each function . It is possible to explicitly construct the function that maps between and . Defining through\nleads to an integrable because a) the images of and are compact, thus maps between compact subsets of and b) is continuous. For any simply connected I can define the -curve , thus a variable transformation within the calculation of the contour integral yields:\nNote that since I base the integral on the product instead of , I avoid problems arising from . Similarly, one can prove the existence of such that and thus is invertible. Having explicitly constructed proves ."
},
{
"section_id": "2.3.2",
"parent_section_id": "2.3",
"section_name": "2.3.2 Assumptions",
"text": "In practical machine learning applications, not all assumptions from the prior section that ensure hold true. However, even then can provide a good approximation that allows for the retrieval of the function that a neuron encodes.\nA machine learning data set is bounded since it is finite. However, if there is a divergence in the function that the machine learning model is supposed to approximate, the data set might not be closed and thus not compact. A data set might not be simply connected, especially if it is in the form of categorical data or images.\nA neural network classifier, if successfully trained, tends to approximate a categorical output, which is neither continuous nor differentiable. However, this binary output is typically an approximation mediated by sigmoid or softmax activation functions, which indeed are continuously differentiable. Still, interpreting artificial neural networks with the framework introduced in this paper suffers from numerical artifacts if a gradient is taken from a network that contains sigmoid or softmax activation functions. For this reason, I suggest avoiding these activation functions in the design of hidden layers and removing them from the output neuron during the interpretation process (the same argument holds true for tanh or related activation functions).\nThe above definitions of equivalence classes could be extended to piecewise functions. This function set contains many artificial neural networks that include piecewise differentiable activation functions like . However, this causes problems when evaluating derivatives close to . In practice, one can observe that piecewise activation functions lead to computational artifacts when calculating gradients. Hence, I suggest using as the preferred activation function in hidden layers."
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "Extracting Symbolic Concepts Encoded in Neural Networks",
"text": "A neural network classifier can be interpreted in practice by finding a representative symbolic function that lives on the intersection between human-readable functions and all functions that contain the same information as a trained neural network, see fig. 1 ###reference_###b. For this purpose, I employ a genetic algorithm to find a function represented by a symbolic tree whose normalized gradients approximate the normalized gradients of a latent neural network model . Here is defined by removing the final activation function . Through this procedure it is possible to find an element belonging to the equivalence class 4 ###reference_### in human-readable form.\nWhile the experiments here focus on interpreting binary classifiers, it is possible to apply the framework to interpret multi-class classifiers or even neurons contained in the hidden layers of neural networks. The interpretation framework contains three algorithmic steps summarized in algorithm 1 ###reference_###: 1) training an artificial neural network, 2) extracting gradients of the output neuron with respect to the input and 3) performing symbolic search to find a symbolic function whose normalized gradients approximate the normalized gradients from the desired neuron of an artificial neural network."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Methods",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Artificial Neural Network",
"text": "An artificial neural network, fig. 1 ###reference_###a, is a connected graph containing nodes representing neurons, otherwise known as perceptrons, and connections representing weighted inputs that are supplied to these nodes. A basic feedforward neural network can be represented mathematically as follows:\nA neural network is applied to an input data point represented through an embedding in an -dimensional space of real numbers. The input gets processed through a number of layers containing weight matrices , bias vectors and non-linear activation functions\nto produce some output\nThe parameters of an artificial neural network are trained on a data set representing the data manifold through gradient descent/backpropagation to minimize an objective/loss/error function such that on all data points on the data manifold."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Symbolic Search",
"text": ""
},
{
"section_id": "3.2.1",
"parent_section_id": "3.2",
"section_name": "3.2.1 Symbolic Search Space",
"text": "An interpretable representation of a neural network is a description that humans can make sense of. Each single neuron can be described as a simple linear mapping together with an activation function that can be expressed in terms of a short equation. However, the complexity that arises from combining several thousand or even millions of these neurons conceals any meaning of the function that is approximated by a neural network. In order to discuss the requirements for interpretable representations in the language of mathematics, let me outline three criteria that are important for comparing representations of equations.\n1) Building blocks: In the realm of mathematics, human-readable equations are typically written in terms of numbers (integers, real numbers, complex numbers), binary (,,) operations, or unary operations that summarize complex operations or arise from the solutions of ubiquitous equations (,,,,).\n2) Complexity: Humans can understand short combinations of the above elementary constituents of equations, however, combining a large number of them prevents interpretability. Hence, to formulate complex objects, it is helpful to summarize and simplify repeating processes in equations to a shorter form. For example, the Lagrangian of the standard model of particle physics [41 ###reference_b41###] fits on a coffee mug in its most simple formulation. However, each element in the equation represents a collection of more complex objects that are conveniently grouped together.\n3) Context: Pricing financial derivatives with the Black-Scholes equation [42 ###reference_b42###] requires a firm understanding of the assumptions made to derive it. Identifying something as a Lagrangian or a conserved quantity means associating a certain functional purpose or properties to an equation. 
For example, reading the standard model Lagrangian in its full glory is a futile exercise, unless the reader has a deep understanding of what role Lagrangians play in the larger framework of quantum field theory. The same can be said about much simpler equations like the de Broglie wavelength (-Planck constant, -momentum) [43 ###reference_b43###], which is arguably one of the simplest physics equations. This equation is meaningful because of the context of its formulation: it is instrumental in associating a quantum mechanical wavelength to massive objects.\nThroughout this paper, I base the set of human-readable functions on symbolic trees, see fig. 1 ###reference_###c. These trees can be constructed to contain a user-defined set of elementary operations. Symbolic trees can also be associated with a complexity measure that increases the larger the tree grows, leading to several solutions along the Pareto front. Hence, it is possible to address human readability according to aspects 1) and 2). Within the scope of this paper, it is impossible to address the question of the context that gives the equation meaning, as discussed in 3). In recent years, progress in artificial scientific discovery has made it possible to train neural networks to optimize loss functions based on concepts that imply a context, instead of optimizing for regression targets [9 ###reference_b9###, 11 ###reference_b11###, 13 ###reference_b13###, 12 ###reference_b12###]. The interpretation framework developed in the current manuscript is uniquely suitable to address the interpretation in these cases."
},
{
"section_id": "3.2.2",
"parent_section_id": "3.2",
"section_name": "3.2.2 Symbolic Search Algorithm",
"text": "In order to computationally search through the space of human-readable functions and determine the intersection with the equivalence set described in the previous section, it is convenient to build upon a suitable symbolic regression algorithm based on a backend that performs an efficient search in the space of functions represented by symbolic trees.\nIn principle, a simple way of searching through the space of possible functions could be optimizing a linear combination of user-defined elementary functions. A much better approach can be taken by utilizing evolutionary or genetic algorithms. These algorithms build trees of connected nodes that represent a mathematical function, see fig. 1 ###reference_###c. Each node represents input variables, numeric parameters, as well as unary and binary operators. A genetic algorithm modifies, evolves, and adds nodes to optimize an objective function on some underlying training data.\nMany such algorithms have been developed in recent years; these include: Eureqa [20 ###reference_b20###], Operon C++ [27 ###reference_b27###], PySINDy [28 ###reference_b28###], Feyn [29 ###reference_b29###], Gene-pool Optimal Mixing Evolutionary Algorithm [30 ###reference_b30###] and GPLearn [31 ###reference_b31###]. Other symbolic regression techniques based on neural networks are EQL [33 ###reference_b33###, 34 ###reference_b34###], AI Feynman [39 ###reference_b39###], Deep symbolic regression [36 ###reference_b36###] and Symbolic regression with transformers [37 ###reference_b37###].\nIn this paper, I build upon PySR [32 ###reference_b32###], a high-performance symbolic regression algorithm for Python. PySR\u2019s internal search algorithm is a multi-population genetic algorithm, which consists of an evolve-simplify-optimize loop, designed for the optimization of unknown scalar constants in newly-discovered empirical expressions. 
In order to optimize the computation speed, PySR\u2019s backend is coded in Julia as the library SymbolicRegression.jl."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Interpretation Algorithm",
"text": "###figure_2###"
},
{
"section_id": "3.3.1",
"parent_section_id": "3.3",
"section_name": "3.3.1 Training a Neural Network For Binary Classification",
"text": "The first part of the interpretation procedure involves training an artificial neural network for binary classification on a two-class data set, see algorithm 1 ###reference_###.1, to approximate the decision boundary that separates the two classes, see fig. 2 ###reference_###a. After successful training, I obtain the full model and the latent model by removing the final sigmoid activation function, which has a cleaner correlation to a closed-form expression of the decision boundary , see fig. 2 ###reference_###b:\nI implemented one neural network architecture for all experiments using the Python library tensorflow [44 ###reference_b44###]. This neural network consists of two hidden layers with 1000 neurons each and ELU activation functions. I compared different activation functions, and ELU activations yielded gradients least affected by computational artifacts. The final layer contains one single neuron together with a sigmoid activation function, commonly used for binary classification. The weights and biases in the hidden layers are regularized with an penalty of . Further, I employ dropout regularization after each hidden layer with a chance of . I train the neural networks using the Adam optimizer and use learning rate decay callbacks that reduce the learning rate by 50% when the loss stops decreasing. Further, early-stopping callbacks stop the training process once it has converged. For this reason, it is enough to set the number of epochs large enough such that the early stopping is always triggered. The batch size in all cases is 100. I train on of the data set and use another 20% for validation. Since I am not interested in calculating any test accuracy here, the validation and test set are the same."
},
{
"section_id": "3.3.2",
"parent_section_id": "3.3",
"section_name": "3.3.2 Obtaining Gradients of Latent Model",
"text": "The second step is described in algorithm 1 ###reference_###.2, which is used to collect the gradient information, see fig. 2 ###reference_###a, for the symbolic search step. It involves potentially adding additional unlabelled data points to the training data set , either from available unlabelled data, sampling from the data manifold, or by perturbing training data. This data set is denoted . I omitted increasing the training data set in my experiments, but in data-scarce domains, this could lead to a significant improvement in the symbolic search results.\nThe sigmoid activation function prevents the training on data points for which the neural network makes almost certain predictions of or . In these cases, the sigmoid activation function is almost flat suppressing any gradient information. For this reason, I delete select data points for which from . In my experiments, I choose .\nAfterward, I calculate the normalized gradients of the latent model , with respect to the input variables, on the remaining data points. Finally, together with the inputs they get stored in the form of a labeled data set that is used as training data of the symbolic search algorithm."
},
{
"section_id": "3.3.3",
"parent_section_id": "3.3",
"section_name": "3.3.3 Symbolic Search",
"text": "The third step of the interpretation algorithm builds upon symbolic regression, see fig. 1 ###reference_###c, to find human-readable equations describing the decision boundary . The algorithm is summarized in algorithm 1 ###reference_###.3. This step employs the backend of the Python symbolic regression module PySR [32 ###reference_b32###] coded in Julia, called SymbolicRegression.jl.\nThe interpretation framework proposed in this paper has requirements beyond what symbolic regression algorithms typically do. Symbolic regression models are represented by trees that are trained on a labeled data set to reproduce an output . However, the task in this manuscript is to perform symbolic regression on the normalized gradient of a specific neuron within the neural network. Thus, the gradient set takes on the role of the data set label . Further, the symbolic tree must be differentiable in order to calculate the normalized gradients of the tree itself. Finally, the desired solution of the symbolic search procedure is not a closed-form expression of the normalized gradient , but the function itself. An additional limitation of SymbolicRegression.jl is its restriction to one-dimensional regression targets in the standard form.\nThe extremely customizable library SymbolicRegression.jl can be programmed to 1) accept gradient information as a data set, 2) calculate gradients of a symbolic regression tree, 3) normalize these gradients, and 4) optimize a custom loss function that compares normalized gradients.\nI choose the mean square error (MSE) loss function between the latent model normalized gradient and the normalized gradient of the symbolic tree as the most straightforward choice of the objective function to ascertain that both and fall into the same equivalence class defined by eq. 4 ###reference_###. 
This loss function is equivalent to the cosine loss (CSL) based on the scalar product of two vectors.\nThe symbolic regression model is trained to find a closed-form function corresponding to the decision boundary using the following building blocks:\ninput variables\nfloating-point constants\nbinary operators: +, -, *, /\nunary operators: sin, exp\nThe hyperparameters are a batch size of 25, a maximum number of epochs of 200, an early stop callback, and a maximum tree size of 30 nodes.\nThe output of the symbolic search algorithm is not just a single closed-form function, but a set of symbolic functions that balance complexity and accuracy along the Pareto front. More precisely, the result contains functions with low complexity and low accuracy, and more complex functions are added if the complexity can be justified by an increase in accuracy."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Results",
"text": ""
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Data",
"text": "I apply the outlined interpretation procedure to several neural networks that are tasked with classifying data sets that are each grouped into two classes. The classes are separated through a domain boundary defined by a decision formula and each data point is obtained by uniformly sampling within ranges that contain the decision boundary. The interpretation framework attempts to recover these functions from the neural networks.\nI perform experiments on different data sets outlined in table 1 ###reference_###. For each data set, I sample 10000 data points and categorize them into classes according to the decision formulas. In all cases, the data sets are created with multiplicative Gaussian noise of .\n###figure_3###"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Experiments 1-6, Interpreting Decision Function",
"text": "Applying the proposed interpretation framework to the data sets yields sets of formulas along the Pareto front. All formulas are equally valid and trade complexity for accuracy. The Pareto fronts for the experiments are collected in fig. 3 ###reference_###.\nThe interpretation in experiment 1 is able to correctly recover the decision function . In experiment 2, the closest match is which seems incorrect at first glance. However, if we promote the transformation , we can map the current solution to a very good approximation of the true function . In experiment 3 the algorithm correctly recovers . Similarly, promoting the transformation in experiment 4 yields a very good approximation to the true function . In experiment 5, I recover the equation . However, in this experiment, the correct function is in a very flat region of the Pareto front. Lastly, experiment 6 fails to learn the division and instead replaces it with a subtraction . Most likely the algorithm learns a Taylor approximation of the true function and removes any remaining constants since they do not contribute when calculating gradients. It is of note that in most experiments, the correct function can be found when the Pareto front exhibits the steepest change.\nThese experiments can be compared to direct symbolic classification in cases where there is no ambiguity in what high-level feature is responsible for the class boundary as in experiments 1-6. Symbolic classification is based on symbolic regression together with a loss function that enables a categorical output. The application of symbolic classification and the corresponding results can be found in appendix A ###reference_###. It recovers three out of the six decision formulas together with almost exact values for the biases/thresholds."
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Experiment 7, Symbolic Classification is not Neural Network Interpretation",
"text": "###figure_4### Experiment 7 is different in the sense that the features are correlated and there are several possible concepts a model could learn to successfully classify the data. These correlations are common in machine learning data sets; e.g., when distinguishing between humans and dogs, one possible concept might be the presence of tails, while another would be the color of the fur/skin. I train a neural network and interpret it similarly to the previous experiments. Further, I train a symbolic classification model to classify the data. In fig. 4 ###reference_###a one can see that the symbolic classification model selects one of the high-level features to make its decision. The interpretation procedure reveals that the neural network bases its prediction on a combination of the two high-level features , see fig. 4 ###reference_###b. Fig. 4 ###reference_###c shows that the neural network has a very low correlation with either of the single features and confirms a very strong correlation between the concept learned by the neural network and the result of the interpretation framework. In this section, one can see that symbolic classification does not necessarily provide an interpretation of the neural network, whereas the framework presented in this paper does."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Discussion",
"text": "In this paper, I introduce a comprehensive framework for interpreting neural network classifiers that is equally capable of interpreting single neurons in hidden layers or latent spaces. The solution presents the learned concept as a closed-form function. The interpretation method is based on the idea of finding an intersection between the space of human-readable functions and all functions that contain the same information as a specific neuron in an artificial neural network. Through the interpretation procedure, it is possible to get rid of uninterpretable transformations that conceal the true function and focus on the simplest form of the underlying concept.\nI demonstrate the power of the interpretation method in section 4 ###reference_### using different data sets with two to six input variables each. In four cases the interpretation procedure recovers the exact functional form of the true decision function, while in two experiments the procedure finds a very good symbolic approximation. In one experiment the neural network has the freedom of choosing between two different high-level features to make its prediction. The interpretation method reveals that the neural network learns a combination of both features.\nComparing these results with other works is impossible since most related works only replicate neural network regressors. As demonstrated in experiment 7, if there are multiple high-level concepts that can be employed to solve a specific task, replication is not interpretation. The most similar research articles manage to explain neural network regressors involving one variable [23 ###reference_b23###] or two variables [24 ###reference_b24###]. It is possible to compare the neural network interpretation technique to symbolic classification, see appendix A ###reference_###. 
Direct symbolic classification only finds three out of six closed-form expressions of the decision boundaries, whereas gradient-based interpretation manages to find suitable solutions for all cases. There are multiple potential reasons for this performance difference. Neural networks are better learners than genetic search algorithms. Symbolic search in the proposed interpretation framework is tasked with finding a closed-form expression extending deep into the full training manifold and hence experiences training signals from all data points. Direct symbolic classification only accumulates error signals from violations of the decision boundary. Neural networks are well suited to deal with noisy data, such that the artificially created data of normalized gradients for the symbolic search steps could be essentially noise-free. Moreover, unlimited artificial data generation in algorithm 1 ###reference_###.2 would enable even better symbolic performance.\nThe interpretation method can also be used to simplify the interpretation of neural network regressors. Consider, for example,\nTo find a symbolic expression of a neural network learning the function in the context of regression, use the framework in this paper to find , then prepare the symbolic regression problem for and use your favorite symbolic regression algorithm to solve a simple one-dimensional symbolic regression problem.\nThere is, of course, the limitation of not being able to learn functions of a single variable since the interpretation method\u2019s goal is to reduce the concept learned by a neural network to a one-dimensional function. This also means one loses the numerical value of the threshold/bias of the output neuron. 
This is, however, not a problem, since solving a one-dimensional symbolic regression problem or fitting a threshold/bias is a minor challenge compared to the complexity of problems solved by the presented interpretation framework.\nWhen applying the interpretation algorithm to practical problems, it might be useful to sort out equations from the Pareto front that violate dimensional constraints. Similarly, dimensional analysis could also be used inside the symbolic search algorithm itself. Further, one might have some inductive bias in mind that is better represented through other function sets. There is no need to use symbolic trees to represent the solution; any suitable differentiable function space can do. Lastly, this interpretation method is more useful in bottleneck layers and when neurons are disentangled, since the framework cannot capture information distributed among multiple neurons.\nThis article is accompanied by the article \u201cClosed-Form Interpretation of Neural Network Latent Spaces with Symbolic Gradients\u201d [45 ###reference_b45###], where we show how to interpret latent spaces of neural networks in closed form.\nThe code used for this project can be found at https://github.com/sjwetzel/PublicSymbolicNNInterpretation ###reference_NNInterpretation###"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Acknowledgements",
"text": "I thank the National Research Council of Canada for their partnership with Perimeter on the PIQuIL. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. This work was supported by Mitacs through the Mitacs Accelerate program. I also thank the PySR team, especially Miles Cranmer, for their helpful comments on GitHub."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Symbolic Classification",
"text": "###figure_5### The most closely related technique to learn what neural network classifiers learn is symbolic classification. However, it is important to note that symbolic classification is not an interpretation technique; it can only provide equivalent results if there is only one possible high-level concept that a neural network could learn in order to make its predictions. Symbolic classification is based on symbolic regression with a hinge/margin loss function; more precisely, the loss function I use is\nI perform symbolic classification on experiments 1-6 with the same symbolic regression algorithm and hyperparameters as the experiments in the main body of this paper. The model is trained using the following building blocks: input variables , floating-point constants, binary operators: +, -, *, /, unary operators: sin, exp. The hyperparameters are a batch size of 25, a maximum number of epochs of 200, an early stop callback, and a maximum tree size of 30 nodes.\nThe Pareto fronts describing the results can be found in fig. 5 ###reference_###. In experiments 1, 3, and 4 the symbolic classification algorithm finds the exact decision boundary, while in experiments 2, 5, and 6 the algorithm fails to discover anything meaningful. Symbolic classification has the advantage of also recovering the threshold/bias of the last layer, in contrast to the interpretation framework presented in this manuscript. However, symbolic classification adds complexity to its solutions because it spends nodes on the exact determination of the bias/threshold. Further, the gradient-based interpretation framework has the freedom to choose from a larger function set and can thus provide more simplifications in its solutions."
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Data sets are created by randomly sampling data points from multidimensional uniform distributions that are divided into two classes to represent binary classification problems.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.8\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.8.9.1\">\n<td class=\"ltx_td\" id=\"S4.T1.8.9.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.8.9.1.2\">Variables</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S4.T1.8.9.1.3\">Decision Formula</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.2\">Experiment 1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.3\">2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.2\">Experiment 2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.3\">2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.3.3.2\">Experiment 3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3\">3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.3.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.4.2\">Experiment 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.3\">3</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.4.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.5.5.2\">Experiment 5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.3\">3</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S4.T1.5.5.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.6.6.2\">Experiment 6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.3\">6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.6.6.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.8.3\">Experiment 7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.8.4\">4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.8.2\">\n vs \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 155 |
+
"capture": "Table 1: Data sets are created by randomly sampling data points from multidimensional uniform distributions that are divided into two classes to represent binary classification problems.\n"
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
"image_paths": {
|
| 159 |
+
"1": {
|
| 160 |
+
"figure_path": "2401.04978v2_figure_1.png",
|
| 161 |
+
"caption": "Figure 1: a: An artificial neural network is a connected graph consisting of nodes representing neurons and weighted connections between them. A neural network predicts an approximate target F\u2062(\ud835\udc31)=y^\u2248y\ud835\udc39\ud835\udc31^\ud835\udc66\ud835\udc66F(\\mathbf{x})=\\hat{y}\\approx yitalic_F ( bold_x ) = over^ start_ARG italic_y end_ARG \u2248 italic_y here in the context of binary classification. Removing the final sigmoid activation function allows the extraction of the latent model f\ud835\udc53fitalic_f from the full neural network F\ud835\udc39Fitalic_F for easier interpretability.\nb: The interpretation framework is based on finding the intersection between human-readable functions and the equivalence class 4 of functions that contain the same information as an output neuron of the neural network. The space of human-readable functions can be defined through a symbolic search space with elementary functions and complexity that matches the user\u2019s knowledge. This space can be computationally explored by genetic algorithms whose structure is mathematically represented by c: a symbolic tree. A tree consists of connected nodes containing variables, numeric parameters, unary and binary operators. Symbolic search is performed by a genetic algorithm that modifies, evolves and adds nodes to optimize an objective function on some underlying training data.",
|
| 162 |
+
"url": "http://arxiv.org/html/2401.04978v2/extracted/5891364/combined2.png"
|
| 163 |
+
},
|
| 164 |
+
"2": {
|
| 165 |
+
"figure_path": "2401.04978v2_figure_2.png",
|
| 166 |
+
"caption": "Figure 2: a: Two class data of Experiment 1 separated by decision boundary g\u2062(\ud835\udc31)=x12+2\u2062x22=1\ud835\udc54\ud835\udc31superscriptsubscript\ud835\udc65122superscriptsubscript\ud835\udc65221g(\\mathbf{x})=x_{1}^{2}+2x_{2}^{2}=1italic_g ( bold_x ) = italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + 2 italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT = 1. A neural network F\ud835\udc39Fitalic_F is trained to classify the data. Afterward, a symbolic model T\ud835\udc47Titalic_T is trained to reproduce the normalized gradients of F\ud835\udc39Fitalic_F which coincide with the normalized gradients of function that defines the decision boundary g\ud835\udc54gitalic_g. b: Empirical correlation between true function g\ud835\udc54gitalic_g and the neural network F\ud835\udc39Fitalic_F. Removing the sigmoid activation function from F\ud835\udc39Fitalic_F defines the latent model f\ud835\udc53fitalic_f which has an almost linear correlation with g\ud835\udc54gitalic_g. However, this correlation is not linear and defines the function f=\u03d5\u2062(g)\ud835\udc53italic-\u03d5\ud835\udc54f=\\phi(g)italic_f = italic_\u03d5 ( italic_g ) with which I ascertain the equivalence relation f\u223cgsimilar-to\ud835\udc53\ud835\udc54f\\sim gitalic_f \u223c italic_g assuring that F\ud835\udc39Fitalic_F and f\ud835\udc53fitalic_f contain the same information as g\ud835\udc54gitalic_g and thus F,f\u2208H~g\ud835\udc39\ud835\udc53subscript~\ud835\udc3b\ud835\udc54F,f\\in\\tilde{H}_{g}italic_F , italic_f \u2208 over~ start_ARG italic_H end_ARG start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT according to eq.2.",
|
| 167 |
+
"url": "http://arxiv.org/html/2401.04978v2/extracted/5891364/data_nn_combined3.png"
|
| 168 |
+
},
|
| 169 |
+
"3": {
|
| 170 |
+
"figure_path": "2401.04978v2_figure_3.png",
|
| 171 |
+
"caption": "Figure 3: The results of fitting a symbolic model T\ud835\udc47Titalic_T to the normalized gradients of the neural network are presented along the Pareto front. The Pareto front collects several possible results with decreasing Mean Square Error (MSE) and increasing complexity. The closest match to the true underlying function is often found at the point of steepest change of the Pareto front.",
|
| 172 |
+
"url": "http://arxiv.org/html/2401.04978v2/extracted/5891364/ParetoFront.png"
|
| 173 |
+
},
|
| 174 |
+
"4": {
|
| 175 |
+
"figure_path": "2401.04978v2_figure_4.png",
|
| 176 |
+
"caption": "Figure 4: a: Results of fitting a symbolic classification model to the data of experiment 7. b: Interpretation of a neural network classifying the same data set. c: Empirical correlation between symbolic classification, the proposed interpretation method, and latent model f\ud835\udc53fitalic_f. Symbolic classification learns a different high-level feature than the neural network. The interpretation framework presented in this paper correctly interprets the neural network.",
|
| 177 |
+
"url": "http://arxiv.org/html/2401.04978v2/extracted/5891364/SymClassCombined2.png"
|
| 178 |
+
},
|
| 179 |
+
"5": {
|
| 180 |
+
"figure_path": "2401.04978v2_figure_5.png",
|
| 181 |
+
"caption": "Figure 5: The results of fitting a symbolic classification model T\ud835\udc47Titalic_T to six experiments. The Pareto front collects several possible results with decreasing Mean Square Error (MSE) and increasing complexity. The closest match to the true underlying function is often found at the point of steepest change of the Pareto front.",
|
| 182 |
+
"url": "http://arxiv.org/html/2401.04978v2/extracted/5891364/SymClassParetoFront.png"
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
"validation": true,
|
| 186 |
+
"references": [
|
| 187 |
+
{
|
| 188 |
+
"1": {
|
| 189 |
+
"title": "Explainable deep learning in healthcare: A methodological survey from an attribution view.",
|
| 190 |
+
"author": "Di Jin, Elena Sergeeva, Wei\u2010Hung Weng, Geeticka Chauhan, and Peter Szolovits.",
|
| 191 |
+
"venue": "WIREs Mechanisms of Disease, 14(3), January 2022.",
|
| 192 |
+
"url": null
|
| 193 |
+
}
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"2": {
|
| 197 |
+
"title": "To explain or not to explain?\u2014artificial intelligence explainability in clinical decision support systems.",
|
| 198 |
+
"author": "Julia Amann, Dennis Vetter, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Sara Gerke, Thomas K. Gilbert, Thilo Hagendorff, Sune Holm, Michelle Livne, Andy Spezzatti, Inga Str\u00fcmke, Roberto V. Zicari, and Vince Istvan Madai.",
|
| 199 |
+
"venue": "PLOS Digital Health, 1(2):e0000016, February 2022.",
|
| 200 |
+
"url": null
|
| 201 |
+
}
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"3": {
|
| 205 |
+
"title": "Explainable ai under contract and tort law: legal incentives and technical challenges.",
|
| 206 |
+
"author": "Philipp Hacker, Ralf Krestel, Stefan Grundmann, and Felix Naumann.",
|
| 207 |
+
"venue": "Artificial Intelligence and Law, 28(4):415\u2013439, January 2020.",
|
| 208 |
+
"url": null
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
{
|
| 212 |
+
"4": {
|
| 213 |
+
"title": "Legal requirements on explainability in machine learning.",
|
| 214 |
+
"author": "Adrien Bibal, Michael Lognoul, Alexandre de Streel, and Beno\u00eet Fr\u00e9nay.",
|
| 215 |
+
"venue": "Artificial Intelligence and Law, 29(2):149\u2013169, July 2020.",
|
| 216 |
+
"url": null
|
| 217 |
+
}
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"5": {
|
| 221 |
+
"title": "Interpretable hierarchical symbolic regression for safety-critical systems with an application to highway crash prediction.",
|
| 222 |
+
"author": "Thomas Veran, Pierre-Edouard Portier, and Fran\u00e7ois Fouquet.",
|
| 223 |
+
"venue": "Engineering Applications of Artificial Intelligence, 117:105534, January 2023.",
|
| 224 |
+
"url": null
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"6": {
|
| 229 |
+
"title": "Deep learning for universal linear embeddings of nonlinear dynamics.",
|
| 230 |
+
"author": "Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton.",
|
| 231 |
+
"venue": "Nature Communications, 9(1), November 2018.",
|
| 232 |
+
"url": null
|
| 233 |
+
}
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"7": {
|
| 237 |
+
"title": "Discovering symbolic models from deep learning with inductive biases.",
|
| 238 |
+
"author": "Miles Cranmer, Alvaro Sanchez Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho.",
|
| 239 |
+
"venue": "Advances in Neural Information Processing Systems, 33:17429\u201317442, 2020.",
|
| 240 |
+
"url": null
|
| 241 |
+
}
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"8": {
|
| 245 |
+
"title": "Rediscovering orbital mechanics with machine learning.",
|
| 246 |
+
"author": "Pablo Lemos, Niall Jeffrey, Miles Cranmer, Shirley Ho, and Peter Battaglia.",
|
| 247 |
+
"venue": "Machine Learning: Science and Technology, 4(4):045002, October 2023.",
|
| 248 |
+
"url": null
|
| 249 |
+
}
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"9": {
|
| 253 |
+
"title": "Machine learning of explicit order parameters: From the ising model to su(2) lattice gauge theory.",
|
| 254 |
+
"author": "Sebastian J. Wetzel and Manuel Scherzer.",
|
| 255 |
+
"venue": "Physical Review B, 96(18), November 2017.",
|
| 256 |
+
"url": null
|
| 257 |
+
}
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"10": {
|
| 261 |
+
"title": "Correlator convolutional neural networks as an interpretable architecture for image-like quantum matter data.",
|
| 262 |
+
"author": "Cole Miles, Annabelle Bohrdt, Ruihan Wu, Christie Chiu, Muqing Xu, Geoffrey Ji, Markus Greiner, Kilian Q. Weinberger, Eugene Demler, and Eun-Ah Kim.",
|
| 263 |
+
"venue": "Nature Communications, 12(1), June 2021.",
|
| 264 |
+
"url": null
|
| 265 |
+
}
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"11": {
|
| 269 |
+
"title": "Discovering symmetry invariants and conserved quantities by interpreting siamese neural networks.",
|
| 270 |
+
"author": "Sebastian J. Wetzel, Roger G. Melko, Joseph Scott, Maysum Panju, and Vijay Ganesh.",
|
| 271 |
+
"venue": "Physical Review Research, 2(3), September 2020.",
|
| 272 |
+
"url": null
|
| 273 |
+
}
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"12": {
|
| 277 |
+
"title": "Discovering invariants via machine learning.",
|
| 278 |
+
"author": "Seungwoong Ha and Hawoong Jeong.",
|
| 279 |
+
"venue": "Physical Review Research, 3(4), December 2021.",
|
| 280 |
+
"url": null
|
| 281 |
+
}
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"13": {
|
| 285 |
+
"title": "Machine learning conservation laws from trajectories.",
|
| 286 |
+
"author": "Ziming Liu and Max Tegmark.",
|
| 287 |
+
"venue": "Physical Review Letters, 126(18), May 2021.",
|
| 288 |
+
"url": null
|
| 289 |
+
}
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"14": {
|
| 293 |
+
"title": "Interpretable and explainable machine learning for materials science and chemistry.",
|
| 294 |
+
"author": "Felipe Oviedo, Juan Lavista Ferres, Tonio Buonassisi, and Keith T. Butler.",
|
| 295 |
+
"venue": "Accounts of Materials Research, 3(6):597\u2013607, June 2022.",
|
| 296 |
+
"url": null
|
| 297 |
+
}
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"15": {
|
| 301 |
+
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.",
|
| 302 |
+
"author": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Montavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wojciech Samek.",
|
| 303 |
+
"venue": "PLOS ONE, 10(7):e0130140, July 2015.",
|
| 304 |
+
"url": null
|
| 305 |
+
}
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"16": {
|
| 309 |
+
"title": "Axiomatic attribution for deep networks.",
|
| 310 |
+
"author": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan.",
|
| 311 |
+
"venue": "In International conference on machine learning, pages 3319\u20133328. PMLR, 2017.",
|
| 312 |
+
"url": null
|
| 313 |
+
}
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"17": {
|
| 317 |
+
"title": "Learning important features through propagating activation differences.",
|
| 318 |
+
"author": "Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje.",
|
| 319 |
+
"venue": "In International conference on machine learning, pages 3145\u20133153. PMLR, 2017.",
|
| 320 |
+
"url": null
|
| 321 |
+
}
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"18": {
|
| 325 |
+
"title": "Methods for interpreting and understanding deep neural networks.",
|
| 326 |
+
"author": "Gr\u00e9goire Montavon, Wojciech Samek, and Klaus-Robert M\u00fcller.",
|
| 327 |
+
"venue": "Digital Signal Processing, 73:1\u201315, February 2018.",
|
| 328 |
+
"url": null
|
| 329 |
+
}
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"19": {
|
| 333 |
+
"title": "Discovering governing equations from data by sparse identification of nonlinear dynamical systems.",
|
| 334 |
+
"author": "Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz.",
|
| 335 |
+
"venue": "Proceedings of the National Academy of Sciences, 113(15):3932\u20133937, March 2016.",
|
| 336 |
+
"url": null
|
| 337 |
+
}
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"20": {
|
| 341 |
+
"title": "Distilling free-form natural laws from experimental data.",
|
| 342 |
+
"author": "Michael Schmidt and Hod Lipson.",
|
| 343 |
+
"venue": "Science, 324(5923):81\u201385, April 2009.",
|
| 344 |
+
"url": null
|
| 345 |
+
}
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"21": {
|
| 349 |
+
"title": "Kernel methods for interpretable machine learning of order parameters.",
|
| 350 |
+
"author": "Pedro Ponte and Roger G. Melko.",
|
| 351 |
+
"venue": "Physical Review B, 96(20), November 2017.",
|
| 352 |
+
"url": null
|
| 353 |
+
}
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"22": {
|
| 357 |
+
"title": "Probing hidden spin order with interpretable machine learning.",
|
| 358 |
+
"author": "Jonas Greitemann, Ke Liu, and Lode Pollet.",
|
| 359 |
+
"venue": "Physical Review B, 99(6), February 2019.",
|
| 360 |
+
"url": null
|
| 361 |
+
}
|
| 362 |
+
},
|
| 363 |
+
{
|
| 364 |
+
"23": {
|
| 365 |
+
"title": "Demystifying black-box models with symbolic metamodels.",
|
| 366 |
+
"author": "Ahmed M Alaa and Mihaela van der Schaar.",
|
| 367 |
+
"venue": "Advances in Neural Information Processing Systems, 32, 2019.",
|
| 368 |
+
"url": null
|
| 369 |
+
}
|
| 370 |
+
},
|
| 371 |
+
{
|
| 372 |
+
"24": {
|
| 373 |
+
"title": "Symbolic metamodels for interpreting black-boxes using primitive functions, 2023.",
|
| 374 |
+
"author": "Mahed Abroshan, Saumitra Mishra, and Mohammad Mahdi Khalili.",
|
| 375 |
+
"venue": "URL: https://arxiv.org/abs/2302.04791, doi:10.48550/ARXIV.2302.04791.",
|
| 376 |
+
"url": null
|
| 377 |
+
}
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"25": {
|
| 381 |
+
"title": "Learning outside the black-box: The pursuit of interpretable models.",
|
| 382 |
+
"author": "Jonathan Crabbe, Yao Zhang, William Zame, and Mihaela van der Schaar.",
|
| 383 |
+
"venue": "Advances in neural information processing systems, 33:17838\u201317849, 2020.",
|
| 384 |
+
"url": null
|
| 385 |
+
}
|
| 386 |
+
},
|
| 387 |
+
{
|
| 388 |
+
"26": {
|
| 389 |
+
"title": "xrai: Explainable representations through ai, 2020.",
|
| 390 |
+
"author": "Christiann Bartelt, Sascha Marton, and Heiner Stuckenschmidt.",
|
| 391 |
+
"venue": "URL: https://arxiv.org/abs/2012.06006, doi:10.48550/ARXIV.2012.06006.",
|
| 392 |
+
"url": null
|
| 393 |
+
}
|
| 394 |
+
},
|
| 395 |
+
{
|
| 396 |
+
"27": {
|
| 397 |
+
"title": "Operon c++: an efficient genetic programming framework for symbolic regression.",
|
| 398 |
+
"author": "Bogdan Burlacu, Gabriel Kronberger, and Michael Kommenda.",
|
| 399 |
+
"venue": "In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO \u201920. ACM, July 2020.",
|
| 400 |
+
"url": null
|
| 401 |
+
}
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"28": {
|
| 405 |
+
"title": "Pysindy: A comprehensive python package for robust sparse system identification.",
|
| 406 |
+
"author": "Alan Kaptanoglu, Brian de Silva, Urban Fasel, Kadierdan Kaheman, Andy Goldschmidt, Jared Callaham, Charles Delahunt, Zachary Nicolaou, Kathleen Champion, Jean-Christophe Loiseau, J. Kutz, and Steven Brunton.",
|
| 407 |
+
"venue": "Journal of Open Source Software, 7(69):3994, January 2022.",
|
| 408 |
+
"url": null
|
| 409 |
+
}
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"29": {
|
| 413 |
+
"title": "An approach to symbolic regression using feyn, 2021.",
|
| 414 |
+
"author": "Kevin Ren\u00e9 Brol\u00f8s, Meera Vieira Machado, Chris Cave, Jaan Kasak, Valdemar Stentoft-Hansen, Victor Galindo Batanero, Tom Jelen, and Casper Wilstrup.",
|
| 415 |
+
"venue": "URL: https://arxiv.org/abs/2104.05417, doi:10.48550/ARXIV.2104.05417.",
|
| 416 |
+
"url": null
|
| 417 |
+
}
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"30": {
|
| 421 |
+
"title": "Improving model-based genetic programming for symbolic regression of small expressions.",
|
| 422 |
+
"author": "M. Virgolin, T. Alderliesten, C. Witteveen, and P. A. N. Bosman.",
|
| 423 |
+
"venue": "Evolutionary Computation, 29(2):211\u2013237, 2021.",
|
| 424 |
+
"url": null
|
| 425 |
+
}
|
| 426 |
+
},
|
| 427 |
+
{
|
| 428 |
+
"31": {
|
| 429 |
+
"title": "Gplearn version 0.4.2.",
|
| 430 |
+
"author": "Trevor Stephens.",
|
| 431 |
+
"venue": "https://github.com/trevorstephens/gplearn, 2022.",
|
| 432 |
+
"url": null
|
| 433 |
+
}
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"32": {
|
| 437 |
+
"title": "Interpretable machine learning for science with pysr and symbolicregression.jl, 2023.",
|
| 438 |
+
"author": "Miles Cranmer.",
|
| 439 |
+
"venue": "URL: https://arxiv.org/abs/2305.01582, doi:10.48550/ARXIV.2305.01582.",
|
| 440 |
+
"url": null
|
| 441 |
+
}
|
| 442 |
+
},
|
| 443 |
+
{
|
| 444 |
+
"33": {
|
| 445 |
+
"title": "Extrapolation and learning equations, 2016.",
|
| 446 |
+
"author": "Georg Martius and Christoph H. Lampert.",
|
| 447 |
+
"venue": "URL: https://arxiv.org/abs/1610.02995, doi:10.48550/ARXIV.1610.02995.",
|
| 448 |
+
"url": null
|
| 449 |
+
}
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"34": {
|
| 453 |
+
"title": "Learning equations for extrapolation and control.",
|
| 454 |
+
"author": "Subham Sahoo, Christoph Lampert, and Georg Martius.",
|
| 455 |
+
"venue": "In International Conference on Machine Learning, pages 4442\u20134450. PMLR, 2018.",
|
| 456 |
+
"url": null
|
| 457 |
+
}
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"35": {
|
| 461 |
+
"title": "Occamnet: A fast neural model for symbolic regression at scale, 2020.",
|
| 462 |
+
"author": "Owen Dugan, Rumen Dangovski, Allan Costa, Samuel Kim, Pawan Goyal, Joseph Jacobson, and Marin Solja\u010di\u0107.",
|
| 463 |
+
"venue": "URL: https://arxiv.org/abs/2007.10784, doi:10.48550/ARXIV.2007.10784.",
|
| 464 |
+
"url": null
|
| 465 |
+
}
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"36": {
|
| 469 |
+
"title": "Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients.",
|
| 470 |
+
"author": "Brenden K Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Prata Santiago, Soo Kyung Kim, and Joanne Taery Kim.",
|
| 471 |
+
"venue": "In International Conference on Learning Representations, 2020.",
|
| 472 |
+
"url": null
|
| 473 |
+
}
|
| 474 |
+
},
|
| 475 |
+
{
|
| 476 |
+
"37": {
|
| 477 |
+
"title": "End-to-end symbolic regression with transformers.",
|
| 478 |
+
"author": "Pierre-Alexandre Kamienny, St\u00e9phane d\u2019Ascoli, Guillaume Lample, and Fran\u00e7ois Charton.",
|
| 479 |
+
"venue": "Advances in Neural Information Processing Systems, 35:10269\u201310281, 2022.",
|
| 480 |
+
"url": null
|
| 481 |
+
}
|
| 482 |
+
},
|
| 483 |
+
{
|
| 484 |
+
"38": {
|
| 485 |
+
"title": "Neural symbolic regression that scales.",
|
| 486 |
+
"author": "Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo.",
|
| 487 |
+
"venue": "In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 936\u2013945. PMLR, 18\u201324 Jul 2021.",
|
| 488 |
+
"url": null
|
| 489 |
+
}
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"39": {
|
| 493 |
+
"title": "Ai feynman: A physics-inspired method for symbolic regression.",
|
| 494 |
+
"author": "Silviu-Marian Udrescu and Max Tegmark.",
|
| 495 |
+
"venue": "Science Advances, 6(16), April 2020.",
|
| 496 |
+
"url": null
|
| 497 |
+
}
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"40": {
|
| 501 |
+
"title": "Interpretable scientific discovery with symbolic regression: A review, 2022.",
|
| 502 |
+
"author": "Nour Makke and Sanjay Chawla.",
|
| 503 |
+
"venue": "URL: https://arxiv.org/abs/2211.10873, doi:10.48550/ARXIV.2211.10873.",
|
| 504 |
+
"url": null
|
| 505 |
+
}
|
| 506 |
+
},
|
| 507 |
+
{
|
| 508 |
+
"41": {
|
| 509 |
+
"title": "Let\u2019s have a coffee with the standard model of particle physics!",
|
| 510 |
+
"author": "Julia Woithe, Gerfried J Wiener, and Frederik F Van der Veken.",
|
| 511 |
+
"venue": "Physics Education, 52(3):034001, March 2017.",
|
| 512 |
+
"url": null
|
| 513 |
+
}
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"42": {
|
| 517 |
+
"title": "The pricing of commodity contracts.",
|
| 518 |
+
"author": "Fischer Black.",
|
| 519 |
+
"venue": "Journal of Financial Economics, 3(1\u20132):167\u2013179, January 1976.",
|
| 520 |
+
"url": null
|
| 521 |
+
}
|
| 522 |
+
},
|
| 523 |
+
{
|
| 524 |
+
"43": {
|
| 525 |
+
"title": "Xxxv. a tentative theory of light quanta.",
|
| 526 |
+
"author": "Louis de Broglie.",
|
| 527 |
+
"venue": "The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 47(278):446\u2013458, February 1924.",
|
| 528 |
+
"url": null
|
| 529 |
+
}
|
| 530 |
+
},
|
| 531 |
+
{
|
| 532 |
+
"44": {
|
| 533 |
+
"title": "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.",
|
| 534 |
+
"author": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.",
|
| 535 |
+
"venue": "Software available from tensorflow.org.",
|
| 536 |
+
"url": null
|
| 537 |
+
}
|
| 538 |
+
},
|
| 539 |
+
{
|
| 540 |
+
"45": {
|
| 541 |
+
"title": "Closed-form interpretation of neural network latent spaces with symbolic gradients, 2024.",
|
| 542 |
+
"author": "Zakaria Patel and Sebastian J. Wetzel.",
|
| 543 |
+
"venue": "URL: https://arxiv.org/abs/2409.05305, doi:10.48550/ARXIV.2409.05305.",
|
| 544 |
+
"url": null
|
| 545 |
+
}
|
| 546 |
+
}
|
| 547 |
+
],
|
| 548 |
+
"url": "http://arxiv.org/html/2401.04978v2"
|
| 549 |
+
}
|
20241001/2401.09108v2.json
ADDED
|
@@ -0,0 +1,215 @@
| 1 |
+
{
|
| 2 |
+
"title": "Reproducibility via neural fields of visual illusions induced by localized stimuli",
|
| 3 |
+
"abstract": "This paper focuses on the modeling of experiments conducted by Billock and Tsou [V. A. Billock and\nB. H. Tsou, Proc. Natl. Acad. Sci. USA, 104 (2007), pp. 8490\u20138495] using an Amari-type neural field that models the average membrane potential of neuronal activity in the primary visual cortex (V1). The study specifically focuses on a regular funnel pattern localized in the fovea or the peripheral visual field. It aims to comprehend and model the visual phenomena induced by this pattern, emphasizing their nonlinear nature. The research involves designing sensory inputs that mimic the visual stimuli from Billock and Tsou\u2019s experiments. The cortical outputs induced by these sensory inputs are then theoretically and numerically studied to assess their ability to model the experimentally observed visual effects at the V1 level. A crucial aspect of this study is the exploration of the effects induced by the nonlinear nature of neural responses. By highlighting the significance of excitatory and inhibitory neurons in the emergence of these visual phenomena, the research suggests that an interplay of both types of neuronal activities plays a crucial role in visual processes, challenging the assumption that the latter is primarily driven by excitatory activities alone.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "1. Introduction",
|
| 9 |
+
"text": "Exploring a mathematically sound approach to understanding visual illusions in human perception using neural dynamics can give us valuable insights into perceptual processes and visual organization [bertalmio2020visual, bertalmio2021cortical], and can reveal much about how precisely the brain works. Neural dynamics refers to the patterns of activity and interactions among neurons that give rise to our ability to see and understand the world. Our visual system processes information in different stages, with specialized neurons at each stage extracting specific details from what we see. The visual system shows dynamic and widespread activity patterns, from detecting basic features like edges and orientations to putting everything together and making sense of it.\nThe brain area which detects basic features such as spatial position, edges, local orientations, and direction in visual stimuli from the retina is the primary visual cortex (V1 for short), [hubel1959, hubel1977ferrier].\nSimple geometric visual hallucinations akin to that classified by Kl\u00fcver [kluver1966] have been theoretically recovered in the last decades via the neural dynamic equation used to model the cortical activity in V1 combined with the bijective nonlinear retino-cortical mapping [schwartz1977, tootell1982] between the visual field and V1, see for instance, [bressloff2001, bressloff2002, ermentrout1979, golubitsky2003, tass1995]. These geometric forms, known as form constants, are obtained near a Turing-like instability using linear stability analysis, (equivariant) bifurcation theory, and pattern selection when the cortical activity is due solely to the random firing of V1 neurons, that is, in the absence of sensory inputs from the retina. However, to function correctly, the primary visual cortex must be primarily driven by sensory information from the retina [hubel1959, hubel1977ferrier], not only by the internal noisy fluctuation of its cells. 
Several methods have explored how sensory inputs are processed in early visual areas. Experimental studies have been conducted [hebb2005organization], along with experimentally induced phenomena via psychophysical tests [billock2007, billock2012elementary, rogers2021hallucinations, pearson2016sensory, mackay1957, mackay1961]. Additionally, theoretical tools like the Lie transformation group model have been applied to analyze perceptual processes [dodwell1983lie, hoffman1966lie]. Despite these efforts, using theoretical neural dynamics, our understanding of the precise neuronal mechanisms underlying visual illusions remains elusive.\n###figure_1### It has been known since Helmholtz\u2019s work [helmholtz1867] that even simple geometrical patterns comprising black and white zones may induce strong after-images111 In the experiments of [billock2007], observers perceive an illusory image in their visual field after viewing a visual stimulus, and this image persists for a few seconds. This is what we refer to when using the term after-image.\n\naccompanying a visual perception after a few seconds. Then, via redundant and non-redundant stimulation by funnel (fan shape) and tunnel (concentric rings) patterns (see Figure 1 ###reference_###), MacKay [mackay1957, mackay1961] points out that there is some kind of orthogonal response in the visual cortex since funnel pattern induces a superimposed (to the physical stimulus) tunnel pattern as an after-image, and conversely.\nMore recently, Nicks et al. [nicks2021] have built on the foundation of neural field equations of Amari-type [amari1977, Eq. (3)] to model MacKay-type visual illusions induced by specific visual stimuli at the cortical level. 
Their model, which represents cortical activity in V1, incorporates a fully distributed state-dependent sensory input.\nThis input models the cortical representation via the retino-cortical map of funnel and tunnel patterns.\nTheoretically, they proved these experimental findings, demonstrating an orthogonal response of V1 to visual inputs. The present authors have further sustained this evidence in their previous works [tamekue2024mathematical, tamekue2022reproducing].\n\nIn particular, by using the neural field equation of Amari-type, we have shown that the underlying Euclidean symmetry of V1 (see, for instance, [bressloff2001]) restricts the geometrical shape of visual inputs that can induce a \u201cstrong\u201d after-effect in the primary visual cortex. If the visual input is symmetric with respect to a subgroup of the group of the motion of the plane (refer to [tamekue2024mathematical, Appendix A]), then the induced after-image obtained via the Amari-type equation and the inverse retino-cortical map have the same subgroup as a group of symmetry. The latter suggests that the after-images induced by fully distributed tunnel and funnel patterns (more generally spontaneous patterns obtained through Turing-like instability [bressloff2001, ermentrout1979, tass1995]) that fill all the visual field have the same shape. Moreover, we exhibited in [tamekue2023, tamekue2022reproducing] numerical simulations using the Amari-type equation, showing that if the funnel pattern is localized either in the fovea (centre of the visual field) or in the peripheral visual field, then the induced after-image consisting of the tunnel pattern appears in the white or black complementary region where the stimulus is not localized\u2013also demonstrating orthogonal and non-local response\u2013in V1. These numerical simulations, therefore, sustain the psychophysical experiments reported by Billock and Tsou [billock2007], see also [billock2012elementary]. 
Note that numerical simulations (including those for rotating after-images that are not considered in this paper) performed in [nicks2021] also support the latter psychophysical experiments.\n###figure_2### ###figure_3###"
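The funnel (fan-shaped) and tunnel (concentric-ring) stimuli discussed above can be sketched numerically. Below is a minimal, illustrative generator for binary versions of both patterns; the sector count `n_arms` and the log-spacing rate `omega` are placeholder values, not the parameters used in the cited experiments.

```python
import numpy as np

def funnel(x, y, n_arms=16):
    """Binary funnel (fan-shaped) pattern: alternating black/white angular sectors."""
    theta = np.arctan2(y, x)
    return np.sign(np.cos(n_arms * theta))

def tunnel(x, y, omega=8.0, r_min=1e-6):
    """Binary tunnel (concentric rings) pattern, log-spaced in radius so that it
    maps to evenly spaced stripes under the retino-cortical log map."""
    r = np.maximum(np.hypot(x, y), r_min)
    return np.sign(np.cos(omega * np.log(r)))

# Sample both patterns on a grid covering a square patch of the visual field.
xs = np.linspace(-1.0, 1.0, 256)
X, Yg = np.meshgrid(xs, xs)
F, T = funnel(X, Yg), tunnel(X, Yg)
```

The log spacing of the tunnel rings is the design choice that makes the two patterns exact duals under the retino-cortical map: both become straight, evenly spaced stripe families in cortical coordinates.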
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "1.1. Billock and Tsou\u2019s psychophysical experiments",
"text": "Significant visual effects associated with funnel and tunnel patterns have been recently observed in the psychophysical experiments conducted by Billock and Tsou [billock2007]. Like the MacKay effect [mackay1957, mackay1961], these authors discovered that introducing biased stimuli elicits orthogonal responses in the visual field. When a physical stimulus is localized at the fovea (the central region of the visual field), the resulting visual illusion appears in the flickering periphery. Conversely, the visual illusion emerges in the flickering centre if the physical stimulus is presented in the periphery.\nSpecifically, when a background flicker is combined with a funnel pattern centered on the fovea (or periphery), the observer experiences the illusory perception of a tunnel pattern in the periphery (or fovea, respectively). Similarly, when the periphery (or fovea) of a tunnel pattern localized at the fovea (or periphery) is subjected to flickering, an illusory rotating funnel pattern is perceived in the periphery (or fovea).\nIn all cases, the illusory contours in the afterimage appear within the nonflickering region, depending on whether the flicker does not extend through the physical stimulus or if the empty region is flickered out of phase. Refer to Fig. 2 ###reference_### for a visual illustration."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "1.2. Neural field model, strategy of study, and presentation of our results",
"text": "This paper aims to investigate the theoretical modeling of Billock and Tsou\u2019s experiments [billock2007] associated with a regular funnel pattern localized in the fovea or peripheral visual field, as recalled in the previous section.\n\nOur approach is mechanistic: we describe a possible model of how cortical dynamics induce the phenomena under consideration. The matter of why neurons behave this way is outside the scope of this article, albeit being a very active topic of investigation in theoretical neuroscience [CLIFFORD20073125, laparra2015, rentzeperis2023]\n.\nWe will follow the idea of controllability of the Amari-type neural field introduced in [tamekue2024mathematical, tamekue2022reproducing] that we will recall hereafter. In particular, we stress why these visual phenomena are nonlinear, as first pointed out in [tamekue2023].\nFrom a control theory point of view, the first aim is to design a suitable sensory input , V1 representation via the retino-cortical map of visual stimulus from the retina used in the experiment such that the cortical state solution to the following Amari-type control system\nexponentially stabilizes to the stationary state, corresponding to the V1 representation via the retino-cortical map of the induced after-image reported by Billock and Tsou.\n Following [bressloff2001, ermentrout1979], we assume that the perceived image is obtained by applying the inverse retino-cortical map to the cortical state.\nSecondly, we will perform a quantitative and qualitative study of this stationary output to show that it captures all the essential features of the visual illusion announced by Billock and Tsou at the V1 level.\nTo this aim, we follow a numerical analysis approach specifically designed to address the complex nonlinear dynamics characteristic of the considered neural fields model.\nEquation (NF ###reference_###) has been introduced in [amari1977] (see also [cook2022neural] for a recent overview on neural field models) to 
describe the dynamics of the average membrane potential of the neurons located at the point at time .\nThe time-evolution of the average membrane potential at time is given by the map .\nThe state of cortical activity in V1 at time is assumed to be given by the function .\n\n\nThe neural field equation (NF ###reference_###) can be seen as the combined action of the external stimulus , the natural decay rate or leakage of the neurons\u2019 average membrane potential towards their resting states,\nand an integral term representing the intra-neural connectivity, modulated by the parameter .\nThe latter consists of a convolution product between the synaptic connectivity kernel , modeling the spatial relationship between neurons, and the non-linear term given by the response function applied to , which transforms the activity level of a neural population at location and time into an output signal. (See Definition 1 ###reference_inition1###.) \nTherefore, once the signal reaches V1, it will interact with local neural dynamics captured by this equation. The equation then models how V1 responds to this input while accounting for local interactions (via the connectivity kernel ) and nonlinearities in neural activity (via the response function ).\nIn biological brain tissue, neurons can be excitatory or inhibitory [hubel1959, hubel1977ferrier], and an inhibitory neuron decreases the likelihood that a post-synaptic neuron will send out electrical signals or spike to communicate with other brain cells. A negative value for might capture this inhibitory influence. Notice also that a nonnegative function would imply that all neurons, regardless of their current activity level, provide some excitatory output. This overlooks the crucial role of inhibitory neurons in shaping neural activity. 
Moreover, as evident from the study we will present in this paper, a model lacking inhibitory activity is likely insufficient for capturing certain phenomena such as those reported by Billock and Tsou. In the latter case, we will also see that a complex interplay between excitatory and inhibitory activity [haider2006neocortical, shu2003turning] in the shape of is required and plays a crucial role since an odd nonlinearity does not model the phenomenon.\nTherefore, we will highlight the effect that the non-linearity plays on the reproducibility of Billock and Tsou\u2019s experiments using (NF ###reference_###). As we previously pointed out in [tamekue2023, Fig. 8], these phenomena are wholly nonlinear and strongly depend on the shape of the nonlinear function .\nDifferent models for cortical activity are available (e.g., the original Wilson-Cowan model for excitatory/inhibitory populations [wilson1973]). The choice of an Amari-type neural field is motivated by the fact that equation (NF ###reference_###) is sufficient for describing the spontaneous formation of funnel and tunnel patterns [bressloff2001, Eq. (16)] in V1, and we expect it also to be suitable for describing psychophysical experiments involving these patterns. Moreover, it is more amenable to mathematical analysis and yields the same qualitative behaviors as the original Wilson-Cowan model in the modeling of many neural processes [ermentrout1998neural].\n\nWe also mention that the Amari-type neural field has been successfully applied to reproduce visual illusions [baspinarCorticalInspired2021, bertalmio2021cortical], and has recently been connected with the Divisive Normalization from visual psychophysics [maloCortical2024].\nIn this work, we focus on fixed-contrast stimuli. 
We stress that, as a consequence of our results, one easily obtains that, for a fixed nonlinearity, the illusory phenomena are not reproduced for small contrast.\nNotice that while sensory inputs in Billock and Tsou\u2019s experiments are time-varying, our study finds that a temporal flicker of the complementary region where the stimulus is not localized is not necessary to reproduce these intriguing visual phenomena (an observation already made in [nicks2021]).\nOur interpretation is that Billock and Tsou\u2019s phenomena result wholly from the underlying non-local and nonlinear properties of neural activity in V1 rather than the temporal flickering of the complementary region where the stimulus is not localized. In particular, the flickering is instead likely at the origin of the illusory motions that subjects perceive in the after-images.\nThe remainder of the paper is organized as follows: Section 1.3 ###reference_### recalls some general notations used throughout the paper. We present assumptions on model parameters used in (NF ###reference_###) in Section 2.1 ###reference_###. Section 2.2 ###reference_### describes the mathematical modeling of visual stimuli associated with funnel patterns used in Billock and Tsou\u2019s experiments. In Section 3 ###reference_###, we recall some preliminary results related to the well-posedness of equation (NF ###reference_###) and those in the direction of modeling Billock and Tsou\u2019s experiments associated with a funnel pattern localized either in the fovea or in the peripheral visual field. The modeling of the phenomena using (NF ###reference_###) starts precisely in Section 4 ###reference_###. 
In Section 4.1 ###reference_###, we prove that the stationary output of (NF ###reference_###) associated with a pattern of horizontal stripes localized in the left area of V1 does not contain a pattern of vertical stripes in the white complementary region (the right area of V1) but rather a mixture of horizontal and vertical stripes if the response function is linear. In Section 4.2 ###reference_###, we prove that even with certain nonlinear response functions that exhibit strong inhibitory or excitatory influences and a weak slope, or a balance between excitatory and inhibitory influences, the stationary output of (NF ###reference_###) associated with a pattern of horizontal stripes localized in the left area of V1 is identical with that of the linear response function. Section 5 ###reference_### focuses precisely on proving that if, for instance, the response function in (NF ###reference_###) exhibits a good interplay between excitatory and inhibitory influence and a weak slope, then the stationary output associated with a pattern of horizontal stripes localized in the left area of V1 contains a pattern of vertical stripes in the white complementary region (the right area of V1) as Billock and Tsou reported. For this aim, we follow a numerical analysis-type of argument in Section 5.1 ###reference_###, together with an analysis of the corresponding numerical schemes. Section 5.2 ###reference_### presents some numerical simulations that bolster our theoretical study. Finally, in Section 6 ###reference_###, we discuss the main results of our paper and highlight areas for future work. We defer to Appendix A ###reference_###, the proof of some technical results used in the paper."
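The Amari-type dynamics described above (leaky integration of the membrane potential, a convolution of the response-filtered state with the connectivity kernel, and a sensory input) can be integrated with a simple forward-Euler scheme. The sketch below is a minimal illustration under assumed placeholder parameters (grid size, Mexican-hat widths, coupling `mu`, and `tanh` as a stand-in response function); it is not the paper's numerical scheme.

```python
import numpy as np

# Grid and Mexican-hat (difference-of-Gaussians) kernel; all values illustrative.
N, L = 128, 20.0
dx = L / N
xs = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(xs, xs)
omega = np.exp(-(X**2 + Y**2) / 0.5) - 0.5 * np.exp(-(X**2 + Y**2) / 2.0)

# Fourier multiplier of the kernel (dx**2 turns circular sums into Riemann sums).
omega_hat = np.fft.fft2(np.fft.ifftshift(omega)) * dx**2

sigma = np.tanh        # stand-in response function (Lipschitz, sigma(0) = 0)
mu = 0.1               # small coupling: well below any pattern-forming threshold

# Sensory input: horizontal stripes confined to the left half of the cortex.
I = np.where(X < 0, np.cos(2 * np.pi * Y / 5.0), 0.0)

# Forward-Euler integration of  da/dt = -a + mu * (omega convolved with sigma(a)) + I.
a, dt = np.zeros((N, N)), 0.1
for _ in range(400):
    conv = np.real(np.fft.ifft2(omega_hat * np.fft.fft2(sigma(a))))
    a = a + dt * (-a + mu * conv + I)
```

With this small coupling the trajectory settles onto the stationary state, consistent with the exponential stabilization described above; the interesting regimes studied in the paper lie closer to the bifurcation threshold.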
},
{
"section_id": "1.3",
"parent_section_id": "1",
"section_name": "1.3. General notations",
"text": "In the following, is the dimension of and denote the Euclidean norm of . For , is the Lebesgue space of class of real-valued measurable functions on such that is integrable over if , and is essentially bounded over when . We endow these spaces with their standard norms and .\nWe let \nbe the space of all real-valued functions on such that, is continuous on for and for every . We endow this space with the norm .\nWe let be the Schwartz space of rapidly-decreasing functions,\nand be its dual space, i.e., the space of tempered distributions. Then, and continuously.\nThe Fourier transform of is defined by\nSince , one can extend the above by duality to , and in particular to . The convolution of and , , is\nFinally, the following notation will be helpful: if is a real-valued function defined on , we use to denote the zero level-set of ."
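On a periodic discretization, the convolution defined above can be evaluated through the convolution theorem: the Fourier transform turns convolution into pointwise multiplication. A minimal numerical check of the discrete (circular) analogue:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution, summed directly from the definition ...
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])

# ... and via the convolution theorem: FFT(f * g) = FFT(f) . FFT(g).
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
```

The same identity, applied in two dimensions, is what makes the kernel convolutions in the neural field equation cheap to evaluate on a grid.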
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "2. Assumption on parameters and mathematical modeling of visual stimuli",
"text": "In this section, we present the assumptions on the parameters in (NF ###reference_###), specifically on the response function and on the connectivity kernel , as highlighted in Section 2.1 ###reference_###. Then, in Section 2.2 ###reference_###, we describe how we mathematically model the visual stimuli used in Billock and Tsou\u2019s experiments, associated with a regular funnel pattern localized in the fovea or the peripheral visual field, which we incorporate as sensory inputs in (NF ###reference_###)."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "2.1. Assumption on parameters in the Amari-type equation",
"text": "We make the following assumption on parameters involved in (NF ###reference_###).\nIn this article, we use a spatially homogeneous and isotropic interaction kernel in relation to coordinates . It depends solely on the Euclidean distance among neurons, showing rotational symmetry. The \u201cMexican-hat\u201d distribution is employed, a variant of the Difference of Gaussians (DoG) model with two components. The first Gaussian governs short-range excitatory interactions, and the second Gaussian models long-range inhibitory interactions in V1 neurons. Thus, the connectivity kernel is taken as:\nwhere , and and satisfy and . The latter condition is crucial for explicitly calculating the -norm of , as detailed in (2.4 ###reference_###).\nNote that is equivalent to , and belongs to the Schwartz space . The Fourier transform of is explicitly given by:\nand the maximum of occurs at every vector satisfying . Thus:\nThe -norm of is also explicitly represented by:\nLet us mention that might not satisfy the balanced condition , an equilibrium between excitation and inhibition. Nonetheless, this equilibrium is achieved when .\n###figure_4### ###figure_5### Finally, in the sequel, we use the letter to denote any positive constant depending only on the parameters involved in the definition of .\nThe choice of the response function is crucial, and it is motivated by authors\u2019 previous works [tamekue2023, tamekue2022reproducing]. Indeed, in [tamekue2022reproducing, Figs. 5 and 6] we illustrated the capability of Equation (NF ###reference_###) to reproduce Billock and Tsou experiments with the nonlinear response function , and that does not reproduce the phenomenon, suggesting that certain (non-odd) sigmoidal-type response functions are required to model the phenomenon. 
In [tamekue2023, Section 4], we briefly explained why the stationary output pattern of the Amari-type (NF ###reference_###) does not capture the essential features of the visual illusions reported by Billock and Tsou when the response function is linear. Moreover, still in [tamekue2023, Fig. 8], by considering the \u201csigmoidal-type\u201d response function with and , we identified ranges of the parameters and for which the stationary output pattern of the Amari-type (NF ###reference_###) captures the essential features of the visual illusions reported by Billock and Tsou. More precisely, [tamekue2023, Fig. 8] suggests that nonnegative , odd with , nonlinearities with strong inhibitory influence and weak slope as well as nonlinearities with strong excitatory influence and weak slope do not model Billock and Tsou\u2019s experiments associated with a regular funnel pattern localized either in the fovea or in the peripheral visual field. For other values of and , (NF ###reference_###) with the response function captures the essential features of the visual illusions reported by Billock and Tsou (either the \u201cstrong\u201d or the \u201cweak\u201d phenomenon, as recalled in Section 1.1 ###reference_###).\nObserve also that is a non-smooth \u201cmathematical approximation\u201d of the following sigmoid function, frequently used in neural field models like (NF ###reference_###),\nIn this paper, when referring to a response function, we will always assume the following.\nA response function is a non-decreasing Lipschitz continuous function such that , is differentiable at , and .\nOf particular interest in the rest of the paper is the family of response functions given by\nfor every and , or by\nfor every . 
Please refer to Figure 3 ###reference_### for a visual illustration.\nNotice that, whenever is finite, is bounded.\nFinally, it is worth emphasizing that the spatially forced pattern-forming mechanism that we are studying is qualitatively the same if instead of we use the smooth sigmoid since the neural field model (NF ###reference_###) is structurally stable.\nFollowing our previous works [tamekue2024mathematical, tamekue2023, tamekue2022reproducing], we assume that is smaller than the threshold parameter where certain geometric patterns spontaneously emerge in V1 in the absence of sensory inputs from the retina, see for instance, [bressloff2001, curtu2004, ermentrout1979, nicks2021]. This threshold parameter is referred to as the bifurcation point, and it is analytically given by\nwhere is defined by (2.3 ###reference_###). Moreover, we let\nbe the largest value of up to which we can ensure the existence and uniqueness of a stationary solution to (NF ###reference_###) in the space . We henceforth assume that\nThe response function is globally bounded for all finite and ensuring that, independently of , the solution of (NF ###reference_###) is uniformly bounded for , for any initial datum and sensory input . See for instance [tamekue2024mathematical, Theorem B.6.].\nAlthough the semilinear response function is unbounded, we prove in Section 3 ###reference_### that this is still true under the assumption ."
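The bifurcation threshold described above, governed by the maximum of the kernel's Fourier transform when the response function has unit slope at the origin, can be estimated numerically. In the sketch below the DoG weights and widths are illustrative placeholders, not the paper's values.

```python
import numpy as np

def dog(r2, k_exc=1.0, s_exc=0.5, k_inh=0.5, s_inh=2.0):
    """Difference-of-Gaussians ('Mexican-hat') kernel; weights/widths are placeholders."""
    return k_exc * np.exp(-r2 / s_exc) - k_inh * np.exp(-r2 / s_inh)

# Sample the kernel and approximate its Fourier transform on a large grid.
N, L = 256, 40.0
xs = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(xs, xs)
omega = dog(X**2 + Y**2)
omega_hat = np.real(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(omega)))) * (L / N) ** 2

# Zero frequency carries the excitation/inhibition balance (the integral of omega);
# the maximum of omega_hat sets the bifurcation threshold when sigma'(0) = 1.
balance = omega_hat[N // 2, N // 2]   # equals pi * (k_exc*s_exc - k_inh*s_inh) analytically
mu_c = 1.0 / omega_hat.max()
```

The maximum of `omega_hat` is attained on a ring of nonzero frequencies, which is exactly the mechanism selecting a preferred stripe wavelength at the bifurcation.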
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "2.2. Mathematical modeling of visual stimuli",
"text": "In this section, we mathematically model the cortical representation of visual stimuli associated with funnel patterns used in Billock and Tsou\u2019s experiments. Then, we incorporate them as sensory inputs in (NF ###reference_###). Note that we restrict ourselves to modeling the static version of these phenomena. Here, \u201cstatic\u201d refers to a physical visual stimulus that induces an afterimage on the retina, resulting in illusory contours that do not exhibit apparent motion. Consequently, we will not consider a time-dependent sensory input, which should incorporate the modeling of flickering employed in the experiment. However, as we already pointed out, this consideration will be enough for the corresponding stationary output pattern of (NF ###reference_###) to capture all the essential features (illusory contours) of the after-image reported by Billock and Tsou.\nRecall that the functional architecture of V1 exhibits a remarkable characteristic known as retinotopic organization [tootell1982]: the neurons in the V1 area are arranged in an orderly fashion, forming a topographic or retinotopic map (known as the retino-cortical map). This map represents a two-dimensional projection of the visual image formed on the retina. Notably, neighboring regions of the visual field are represented by neighboring regions of neurons in V1, establishing a bijective relationship.\nTo the authors\u2019 knowledge, the retino-cortical map was first represented analytically as a complex logarithmic map in [schwartz1977]. Let denote polar coordinates in the visual field (or in the retina) and Cartesian coordinates in V1. 
The retino-cortical map (see also [tamekue2022reproducing] and references within) is analytically given by\n###figure_6### ###figure_7### Due to the retino-cortical map analytical representation (2.11 ###reference_###) and consistent with spontaneous patterns description [bressloff2001, ermentrout1979], we consider that the function which generates the funnel pattern is given in Cartesian coordinates of V1 by\nLet us point out that one of the fundamental properties of the retinotopic projection of the visual field into V1 is that small objects centred on the fovea (centre of the visual field) have a much larger representation in V1 than do similar objects in the peripheral visual field.\n###figure_8### ###figure_9### As a result, the cortical representation of Billock and Tsou\u2019s visual stimulus associated, e.g., with the funnel pattern localized respectively in the fovea and in the peripheral visual field, should consist of taking the sensory input as\nHere, and are nonnegative real numbers, and is the Heaviside step function, modeling that the funnel pattern is localized in the fovea and the peripheral visual field, respectively. Note that and correspond to sensory inputs consisting of horizontal stripes in the left and right areas of the cortex V1. Indeed, since visual stimuli employed in these experiments are alternating sequences of white and black zones, we represent every cortical function, say , as defined in (2.13 ###reference_###) in terms of a binary image, corresponding to the zero-level set of , in the following way: in the regions where we put the black grayscale and where we put the white grayscale, refer for instance, to Figures 5 ###reference_### and 7 ###reference_###.\nFor ease in the presentation, in the following, we will restrict ourselves to the funnel pattern localized in the left area of V1 since the same analysis can be straightforwardly adapted to ."
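A minimal sketch of the log-polar retino-cortical map and of a foveally localized funnel input gated by a Heaviside step, as described above. The constants `lam`, `eps`, and `x1_cut` are illustrative placeholders for the elided symbols, and the map's scaling constants are set to 1.

```python
import numpy as np

def retino_cortical(r, theta):
    """Log-polar retino-cortical map (r, theta) -> (x1, x2) = (log r, theta);
    scaling constants are set to 1 for illustration."""
    return np.log(r), theta

def funnel_input_left(x1, x2, lam=4.0, eps=1.0, x1_cut=0.0):
    """Cortical input for a funnel pattern localized at the fovea: cos(lam * x2)
    stripes, gated to the left half-plane x1 < x1_cut by a Heaviside step.
    lam, eps, x1_cut stand in for the paper's (elided) symbols."""
    H = (np.asarray(x1) < x1_cut).astype(float)  # Heaviside: fovea maps to the left
    return eps * H * np.cos(np.asarray(x2) * lam)

# A foveal retinal point (small r) lands in the left cortical half-plane,
# where the input reproduces the funnel's angular modulation.
x1, x2 = retino_cortical(0.1, np.pi / 3)
```

Because the fovea (small r) maps to large negative x1, localizing the funnel at the fovea yields exactly the "horizontal stripes in the left area of V1" input used in the analysis.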
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Preliminary results on the Amari-type equation",
"text": "In this section, we begin by discussing the concept of a stationary state as it applies to (NF ###reference_###). Following this, we review essential preliminary findings related to the well-posedness of the same equation that is necessary to comprehend the rest of the paper.\nLet . For every , a stationary state to (NF ###reference_###) is a time-invariant solution, viz.\nThe following well-posedness result is [tamekue2024mathematical, Theorem 3.1], which only relies on the global Lipschitz property of the nonlinearity .\nLet . For any initial datum , there exists a unique , solution to (NF ###reference_###). If , there exists a unique stationary state to (NF ###reference_###). Moreover, the following holds.\nIn the following theorem, we prove the uniform boundedness of the solution under the assumptions of Section 2.1 ###reference_###.\nLet , and be the solution of (NF ###reference_###). Then,\nIf , it holds\nwhere is the stationary solution to Equation (NF ###reference_###) given by Theorem 3.1 ###reference_theorem1###.\nIf , we have\nWe recall from Theorem 3.1 ###reference_theorem1### that\nfor all , and every we have\nTherefore, we apply Minkowski\u2019s and Young convolution inequalities to (3.4 ###reference_###), and obtain for any ,\nusing that is -Lipschitz continuous. Applying Gr\u00f6nwall\u2019s Lemma A.1 ###reference_theorem1###\nwith , and to (3.5 ###reference_###) yields (3.3 ###reference_###) for , while for one gets\nInequality (3.2 ###reference_###) follows directly.\n\u220e\nOne also has the following.\nUnder the assumption , for any we let\nThen, for any \nthe stationary solution of (NF ###reference_###) with response function coincides with the unique stationary solution to the same equation with response function .\nBy Theorem 3.1 ###reference_theorem1###, the stationary solution to (NF ###reference_###) with response function is the unique solution of . 
In particular, by definition of , inequality (3.2 ###reference_###) implies that\nTherefore, one has (here denotes the characteristic function of the subset ), for a.e. ,\nsince for every . It follows that is a stationary solution for (NF ###reference_###) with nonlinearity . The statement follows by the uniqueness of the stationary solution provided by Theorem 3.1 ###reference_theorem1###.\n\u220e\nApplied, for instance, to the modeling of Billock and Tsou\u2019s experiments, Proposition 3.3 ###reference_theorem3### implies the following simple but important result.\nUnder the same assumptions as Proposition 3.3 ###reference_theorem3###, let be such that the response function reproduces Billock and Tsou\u2019s experiments. Then, the same is true for any response function such that .\nThe following result proves that the stationary state to (NF ###reference_###) is Lipschitz continuous whenever the sensory input is.\nAssume that . If the sensory input is -Lipschitz continuous on some open set , then the corresponding stationary solution to equation (NF ###reference_###) is also Lipschitz continuous on , with Lipschitz constant upper bounded by\nwhere denotes a constant depending only on .\nLet be the unique stationary solution whose existence is guaranteed by Theorem 3.1 ###reference_theorem1###.\nFor we have that\nSince and , one has that is infinitely differentiable on .\nSince by assumption is -Lipschitz continuous and satisfies , it is straightforward to show that\nIt follows by the Mean Value Theorem that is Lipschitz continuous on . Since is Lipschitz continuous on and using Theorem 3.2 ###reference_theorem2### to upper bound , the result then follows at once.\n\u220e\nThe following simple result will be used hereafter.\nAssume that the response function in (NF ###reference_###) is odd.\nIf , for any sensory input one has .\nThanks to Theorem 3.1 ###reference_theorem1###, we have that and are uniquely defined by and , respectively. Since is odd, one has . 
Therefore, since Young convolution inequality gives\nIn the following, we prove more general results that provide insight into the qualitative properties of the stationary state of (NF ###reference_###) when the sensory input has a cosine factor.\nLet the sensory input be given by , for and , where . If , then the following hold.\nis -periodic, even and globally Lipschitz continuous with respect to ;\nIf is odd, then is -antiperiodic with respect to . Namely,\nWe assume that for ease of notation. Using that the convolution operator commutes with translation and that the input and the kernel are even with respect to , one deduces that is even with respect to .\nLet us prove that is -periodic with respect to . For a.e. , one has\nIt follows that is the stationary solution associated with and hence it coincides with .\nLet us show that is Lipschitz continuous with respect to . Taking the derivative of (4.2 ###reference_###) with respect to , one finds that for a.e. it holds\nSince by assumption, it follows that\nfor a.e. ,\nshowing that is Lipschitz continuous for a.e. . This completes the proof of item (1).\nLet us now prove item (2). For a.e. , one has\nwhere in the last equality we used the fact that is odd. Hence, is the stationary solution associated with and hence it coincides with .\n\u220e\nOne has the following result related to Billock and Tsou\u2019s experiments, which is the main focus of this paper.\nThe proof is an adaptation of that of [tamekue2024mathematical, Theorem 5.2]. We will present it for the sake of completeness.\nAssume that the response function in (NF ###reference_###) is odd.\nLet the sensory input be given by (2.13 ###reference_###). If , denote by the corresponding stationary state to . Then, for a.e. , the set of zeros of coincides with that of .\nWe assume that for ease of notation. The zeroes of are for every . Let , let us first prove that . Since is -periodic by Proposition 3.7 ###reference_theorem7###, it is enough to prove that . 
Using that is -antiperiodic and even by Proposition 3.7 ###reference_theorem7###, one gets . Therefore, . Conversely, let be such that . We want to show that . Recall that for a.e. ,\nIf , then from (3.17 ###reference_###), it follows\nBy using (3.17 ###reference_###) once again, one obtains\nwhere and for every\nsatisfies\nSince , the contraction mapping principle ensures that is the unique solution to (3.19 ###reference_###). Moreover, it holds\nsince is odd. So the function is also a solution of (3.19 ###reference_###) with input . By uniqueness of the solution, one has and that is an odd function with respect to , since is radial and is an odd function. It follows from Fubini\u2019s theorem that the right-hand side of (3.18 ###reference_###) is equal to .\n\u220e\nNote that the assumption in Proposition 3.8 ###reference_theorem8### is technical due to our strategy in the proof. Numerical simulations suggest that the conclusion of the proposition remains valid for all . See, for instance, Figure 8 ###reference_###.\n###figure_10### ###figure_11###"
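The stationary states discussed in this section solve a fixed-point equation, and for small coupling the corresponding map is a contraction, so Picard iteration converges to the unique stationary state. A minimal numerical sketch (placeholder Mexican-hat kernel, `tanh` as a stand-in response function, illustrative coupling):

```python
import numpy as np

def stationary_state(I, omega_hat, sigma, mu, tol=1e-12, max_iter=2000):
    """Picard iteration for the stationary equation
        a = mu * (omega convolved with sigma(a)) + I,
    a contraction (hence convergent to the unique stationary state) when
    mu * ||omega||_1 * Lip(sigma) < 1; all parameter values are illustrative."""
    a = I.copy()
    for _ in range(max_iter):
        conv = np.real(np.fft.ifft2(omega_hat * np.fft.fft2(sigma(a))))
        a_next = mu * conv + I
        if np.max(np.abs(a_next - a)) < tol:
            break
        a = a_next
    return a_next

# Placeholder setup: Mexican-hat kernel, tanh response, stripes in the left half.
N, L = 128, 20.0
xs = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(xs, xs)
omega = np.exp(-(X**2 + Y**2) / 0.5) - 0.5 * np.exp(-(X**2 + Y**2) / 2.0)
omega_hat = np.fft.fft2(np.fft.ifftshift(omega)) * (L / N) ** 2
I = np.where(X < 0, np.cos(2 * np.pi * Y / 5.0), 0.0)
a_star = stationary_state(I, omega_hat, np.tanh, mu=0.1)
```

Since the input is even in the periodic stripe direction and the kernel is radial, the computed stationary state inherits that symmetry, in the spirit of Proposition 3.7.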
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. On Billock and Tsou\u2019s experiments modelling",
"text": "In this section, we investigate the modeling of Billock and Tsou\u2019s phenomena using (NF ###reference_###). In the current study, we aim to elucidate the efficacy of (NF ###reference_###) in mimicking these visual illusions, as briefly reviewed in Section 1.1 ###reference_###. We focus on determining whether the model\u2019s output exhibits qualitative concordance with the human experience of these illusions. It is imperative to note that our analysis is mechanistic and strictly qualitative and serves as an illustrative proof of concept for applying Amari-type dynamics (NF ###reference_###) in simulating the perceptual outcomes elicited by visual illusions, as previously done in [tamekue2024mathematical] for modeling the visual MacKay effect. This study does not endeavor to align its findings with quantitative empirical data, as such data are contingent upon numerous experimental variables that do not have a straightforward relationship with the parameters within our model.\nWe begin by proving that these phenomena are wholly nonlinear in contrast, for instance, to the visual MacKay effect [mackay1957], which we proved in [tamekue2024mathematical] to be a linear phenomenon. Therefore, we will see that (NF ###reference_###) with a linear response function cannot reproduce the psychophysical experiments by Billock and Tsou [billock2007] associated with the funnel pattern stimulus when the corresponding sensory inputs are modeled as in (2.13 ###reference_###)."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "4.1. Unreproducibility of Billock and Tsou experiments: linear response function",
"text": "This section assumes that the response function is linear. To simplify our analysis, we specifically focus on the funnel pattern centred on the fovea within the visual field. As a result, the corresponding sensory input consists of a localized pattern of horizontal stripes in the left area of the V1 cortex by the retino-cortical map, see Figure 5 ###reference_###.\nPreviously, in [tamekue2023, Proposition 5.], we proved that (NF ###reference_###) with a linear response function is incapable of reproducing Billock and Tsou\u2019s experiments, as verified through direct Fourier transform computations. While this finding sufficed to establish our desired outcome, it failed to offer deeper insights into the qualitative properties of the stationary state associated with the sensory input utilized in these experiments. Specifically, it did not precisely characterize the zero-level set of this stationary state. To address this gap, we draw upon the qualitative properties of the sensory input and utilize tools from complex and harmonic analysis. Consequently, we present the following key results in this section.\nAssume that the response function in (NF ###reference_###) is linear with slope and that the sensory input . If , denote by the corresponding stationary state to . Then, the zero-level set of satisfies\nwhere and are discrete and countable sets, respectively in and .\nSince and , with and , we assume without loss of generality that , and to keep the presentation clear for reader convenience. Therefore, the stationary state satisfies\nwhere the kernel is defined in (2.1 ###reference_###).\nWe pedagogically split the proof of Theorem 4.1 ###reference_theorem1### into several steps. The first result is the following.\nUnder hypotheses of Theorem 4.1 ###reference_theorem1###, the stationary state decomposes as\nHere is given by\nwhere is defined for all by\nfor every .\nWe fix . 
Since is -periodic and even on , we expand in terms of a Fourier series as\nThanks to item (1) of Proposition 3.7 ###reference_theorem7###, one has that the derivative of with respect to is continuous and bounded on . Thus belongs to , the space of real-valued measurable and square-integrable functions over . Since is absolutely continuous (indeed Lipschitz continuous, still by item (1) of Proposition 3.7 ###reference_theorem7###) on , it follows from [kolmogorov1974, Th\u00e9or\u00e8me 2.] that its Fourier series converges uniformly to on .\nObserve also that (4.7 ###reference_###) defines functions for all , so that one gets for all and for all , the existence of such that\nTherefore, we can substitute (4.6 ###reference_###) into (4.2 ###reference_###) and find the following family of one-dimensional linear integral equations indexed by .\nwhere is the usual Kronecker symbol and the kernel is given for , by\nFor , equations (4.9 ###reference_###) yield\nwhere is the Dirac distribution at . Taking the Fourier transform of (4.11 ###reference_###) in the space , one obtains for all ,\nIt is not difficult to see that . Since by assumption, one deduces for all , and . It follows that\nIn the case , one has\nFinally, taking respectively the Fourier transform of (4.14 ###reference_###) and the inverse Fourier transform in the space , we find that is given by (4.4 ###reference_###) with defined as in (4.5 ###reference_###).\n\u220e\nDue to Lemma 4.2 ###reference_theorem2###, inverting the kernel defined in (4.5 ###reference_###) and providing an asymptotic behavior of its zeroes on will help to provide thorough information on the zeroes of the function as given by (4.4 ###reference_###). 
To achieve this, we use tools from complex analysis.\nLet us consider the extension of in the set of complex numbers,\nThen is a meromorphic function on , and its poles are zeroes of the entire function\nThe holomorphic function is an exponential polynomial [berenstein2012, Chapter 3] in with frequencies , and satisfying due to the assumptions on and . It is normalized since the coefficient of -frequency equals . A necessary condition for being factorizable [berenstein2012, Remark 3.1.5, p. 201] is that the parameters and are taken so that it is simple. By definition [berenstein2012, Definition 3.1.4, p. 201], is simple if and are commensurable, i.e., is rational, which is equivalent to being rational.\nThanks to Remark 4.3 ###reference_theorem3###, without loss of generality, we can assume that the parameters in the kernel in (2.1 ###reference_###) are such that , and . We also let .\nUsing Theorem A.2 ###reference_theorem2###, and arguing as in the proof of [tamekue2024mathematical, Proposition 5.12.], we can prove that has a discrete and countable set of zeroes in , under the considerations in Remark 4.4 ###reference_theorem4###.\nTo complete the proof of Theorem 4.1 ###reference_theorem1###, it suffices to consider Lemma 4.2 ###reference_theorem2###, Theorem A.2 ###reference_theorem2### and observe that given by (4.3 ###reference_###) satisfies (4.1 ###reference_###).\n\u220e\nA consequence of Theorem 4.1 ###reference_theorem1### is the following.\nAssume and that the response function is linear. 
Then,\nEquation (NF ###reference_###) does not reproduce Billock and Tsou\u2019s experiments associated with a sensory input consisting of a pattern of horizontal stripes localized in the left area in the cortex V1.\nGiven that the sensory input in equation (NF ###reference_###) is a pattern consisting of horizontal stripes localized in the left area in the cortex V1, Theorem 4.1 ###reference_theorem1### shows that the corresponding stationary state consists of a mixture of patterns of horizontal and vertical stripes in the right area in V1 instead of vertical stripes only, as Billock and Tsou reported.\n\u220e"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "4.2. Unreproducibility of Billock and Tsou\u2019s experiments with certain nonlinear response functions",
"text": "As we recalled in Section 2.1 ###reference_###, the numerical results provided in [tamekue2023, Fig. 8] suggest that a complex interplay of excitatory and inhibitory activity is required to model complex phenomena like Billock and Tsou\u2019s experiments using the Amari-type neural fields (NF ###reference_###). In particular, they suggest adopting a nonlinear function that allows for positive and negative values but is not odd, breaking the symmetry between excitatory and inhibitory influences. More precisely, [tamekue2023, Fig. 8] suggests that the stationary output of (NF ###reference_###) computed with the following response functions does not capture the essential features of visual illusions reported by Billock and Tsou. For , they are given by:\nNonnegative (wholly excitatory influence) nonlinearities:\nOdd (balanced inhibitory and excitatory influence) nonlinearities:\nNonlinearities with a strong excitatory influence and a weak slope:\nNonlinearities with a strong inhibitory influence and a weak slope:\nThis section aims to provide analytical insight into why the Amari-type neural fields (NF ###reference_###) do not model Billock and Tsou\u2019s experiments when the response function is taken to be one of (NL2) ###reference_ix2###-(NL4) ###reference_ix4###. The main focus will be on the study of the qualitative properties in terms of the zero-level set of the stationary solution to (NF ###reference_###) obtained with each of these nonlinearities when the sensory input is taken as defined in (2.13 ###reference_###). We are currently unable to treat the case (NL1) ###reference_ix1###.\nThe first theorem of this section is the following.\nIf and , then the stationary solution to (NF ###reference_###) with the response function is the solution to (NF ###reference_###) with the linear response function with slope . 
In particular, if , the nonlinear response functions (NL3) ###reference_ix3### and (NL4) ###reference_ix4### do not model Billock and Tsou\u2019s experiments.\nIf then is the unique solution to thanks to Theorem 3.1 ###reference_theorem1###. Recall from Theorem 3.2 ###reference_theorem2### that . If , then for a.e. , one has\nTherefore, for a.e. , and solves the stationary equation with a linear response function with slope . Finally, to prove the last part of the theorem, it suffices to observe that and , which implies that , and then or . The result then follows at once thanks to the first part of the theorem and Theorem 4.1 ###reference_theorem1###.\n\u220e\n###figure_12### Observe that Theorem 4.6 ###reference_theorem6### also accounts for the case of and . This means that the odd nonlinearity of (NL2) ###reference_ix2### with does not model Billock and Tsou\u2019s experiments. It, therefore, remains to prove that the odd nonlinearity with does not model Billock and Tsou\u2019s experiments.\nFortunately, for all , the odd nonlinearity of (NL2) ###reference_ix2### satisfies all the hypotheses of Proposition 3.8 ###reference_theorem8###, taken as the response function in (NF ###reference_###). One, therefore, has the following result. 
See, for instance, Figure 8 ###reference_### for a numerical visualization.\nUnder the assumption , Equation (NF ###reference_###) with response function (NL2) ###reference_ix2### does not reproduce Billock and Tsou\u2019s experiments associated with a sensory input consisting of a pattern of horizontal stripes localized in the left area in the cortex V1.\nGiven that the sensory input in Equation (NF ###reference_###) is a pattern consisting of horizontal stripes localized in the left area in the cortex V1, Proposition 3.8 ###reference_theorem8### shows that the corresponding stationary state consists of a mixture of patterns of horizontal and vertical stripes in the right area in V1 instead of vertical stripes only, as Billock and Tsou reported.\n\u220e\nSumming up, the results in this section provide a complete theoretical investigation of the modeling of Billock and Tsou\u2019s experiments by Equation (NF ###reference_###) with response function , except for the range and . Although outside the scope of this work, we observe that, thanks to Corollary 3.4 ###reference_theorem4###, the study of this range can be reduced to considering the semilinear response function as defined in (2.7 ###reference_###).\nFor fixed parameters , and in the kernel , and chosen such that , we summarize in Figure 9 ###reference_### the ranges of parameters and for which equation (NF ###reference_###) with the response function reproduces the phenomenon or not, for a funnel-like stimulus having the cortical representation defined in (2.13 ###reference_###)."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "5. Numerical analysis and experiments",
"text": "In this section, we present a numerical scheme for the approximation of stationary solutions of (NF ###reference_###) and analyze its convergence. Finally, we present some numerical experiments obtained using this scheme."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "5.1. Analysis of the numerical scheme",
"text": "In this section, for the sake of generality, we assume that the response function satisfies the assumptions in Definition 1 ###reference_inition1###.\nGiven a sensory input , we compute numerical solutions depending on three parameters , , and . These are obtained via the following iterative procedure, where :\nWe start by stating the following error estimate, whose proof is quite technical and is postponed to Appendix A.2 ###reference_###.\nAssume that and let the sensory input be given by\nwhere , and are globally Lipschitz continuous. Then, for any there exists such that for every it holds\nwhere the \u2019s only depend on , , , , and the Lipschitz constants of and .\nThe only part of the proof where the Lipschitz continuity assumption on the sensory input in Theorem 5.1 ###reference_theorem1### is needed is to control the error introduced by the discretization of the integral term of (NF ###reference_###).\nIt is, however, easy to see that the argument of the proof can be adapted to more general sensory inputs , under appropriate assumptions on the region where is not Lipschitz continuous.\nIt is immediate from Theorem 5.1 ###reference_theorem1### that to have numerical convergence to the exact stationary solution , one has to choose , , and such that .\nTo compare the zero level-set of the exact solution with its numerical approximations, we introduce the following approximated zero level-set for :\nIn order to define a numerical approximation of the above,\nfor a sensory input as in Theorem 5.1 ###reference_theorem1###, we let . 
Then, for , we define\nWe have the following result, which guarantees the convergence of the numerical approximations of the zero level-set to the exact set .\nUnder the same assumptions as in Theorem 5.1 ###reference_theorem1###, for any it holds\nfor any such that, for some constant depending only on , , , and the Lipschitz constants of and , it holds\nBy Theorem 5.1 ###reference_theorem1###, there exists such that for any , , and , we have that\nThe estimate (5.8 ###reference_###) immediately follows by choosing, e.g., .\nMoreover, by Lipschitz continuity of on , which is guaranteed by Proposition 3.5 ###reference_theorem5###, up to reducing (i.e., reducing ), for all with and such that , we have\nCombining (5.9 ###reference_###) and (5.10 ###reference_###), one easily obtains (5.7 ###reference_###), completing the proof of the statement.\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "5.2. Simulations for Billock and Tsou\u2019s experiments",
"text": "The numerical implementation is obtained using the Julia toolbox from [tamekue:tel-04230895], which implements the numerical scheme presented above.\nThese experiments have been obtained with the parameters:\nThe sensory input is taken to be localized either on the left or on the right part of the cortical space. In the first case we let , while in the second case for and . The choice of the input is specified in the captions, while we collect in Table 1 ###reference_### the values of the parameters.\nWe exhibit in Figures 10 ###reference_### and 11 ###reference_### a numerical reproduction of Billock and Tsou\u2019s experiments, in the sense that the stripes\u2019 frequency is similar to that used in the experiment, for a funnel-like stimulus localized in the fovea and the peripheral visual field. In V1, we have a pattern of black/white horizontal stripes in the left (respectively right) area and white in the right (respectively left) area.\nWe also exhibit in Figures 12 ###reference_### and 13 ###reference_### a numerical experiment where the stripes\u2019 frequency is not the one of Billock and Tsou\u2019s experiments.\nObserve that each output pattern exhibited in Figures 10 ###reference_###\u201313 ###reference_### captures all the essential features of the after-image reported by Billock and Tsou at the level of V1. It suffices to apply the inverse retino-cortical map to each output pattern to obtain such images at the retina level. See, for instance, [tamekue2023].\n###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20###"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "6. Concluding remarks and discussion",
"text": "In this paper, we provided a mechanistic modeling of the psychophysical experiments reported by Billock and Tsou [billock2007] using neural dynamics of Amari-type that model the average membrane potential of neuronal activity of spiking neurons\nin the primary visual cortex (V1). We focused on the case where intra-neural connectivity is smaller than the threshold at which specific geometrical patterns spontaneously arise in the absence of sensory input from the retina. We considered, in particular, visual stimuli consisting of regular funnel patterns localized in the fovea or peripheral visual field.\nFirstly, the retino-cortical map between the visual field and V1 allowed us to model these visual stimuli as patterns of horizontal stripes localized in the left or right area of V1 that we incorporated as sensory inputs in the neural fields equation.\nThen, through complex and harmonic analysis tools, we have shown that when the neuronal response function of V1 is linear, the output pattern of the equation does not capture the V1 representation of the after-images reported by Billock and Tsou, suggesting that the phenomenon is wholly nonlinear.\nNext, we delved into the study of nonlinear response functions for which the corresponding output patterns of the equation qualitatively capture, at the level of V1, the essential features of the after-images reported by Billock and Tsou.\nThrough this study, we have analytically shown that nonlinear response functions with either balanced inhibitory and excitatory influence, or a strong excitatory influence and weak slope, or a strong inhibitory influence and weak slope do not reproduce the phenomenon. 
This suggests that a complex interplay between excitatory and inhibitory influences [haider2006neocortical, shu2003turning] is required for the neural fields equation to model the psychophysical observations reported by Billock and Tsou [billock2007] for a funnel pattern visual stimulus localized either in the fovea or peripheral visual field. Finally, we presented numerical experiments showing that nonlinear response functions other than those enumerated previously can reproduce the phenomenon.\nWhile much remains to be understood about the mechanisms underlying Billock and Tsou\u2019s psychophysical observations, our study provides valuable insights into how the primary visual cortex processes sensory information arising from localized regular funnel patterns in the visual field. In particular, this study supports the experimental finding (see, e.g., [billock2007, Experiment 3-p. 8492 and Discussion-p. 8493]) suggesting that there is an orthogonal response in the unexcited region of V1 as a response to simple geometrical patterns from the retina that do not fill all the visual field or are not regular in shape.\nWe stress that the structure of the visual stimuli related to funnel patterns used by Billock and Tsou at the V1 level was crucial to obtaining the results presented in this paper. The same modeling regarding the tunnel pattern localized in the fovea or the peripheral visual field (see [billock2007, Fig. 3b and 3d]) will not yield the after-images reported by Billock and Tsou.\nIndeed, due to the rotational invariance of these tunnel patterns, the stationary solutions induced by the corresponding sensory inputs will be invariant with respect to translations in the second variable of V1 (see, e.g., [tamekue2024mathematical, Proposition A.1]). 
In particular, this excludes the possibility of a funnel-like after-image in the unexcited region.\nIn this work, we have focused on time-independent visual stimuli, which turned out to be enough to model (static) nonlocal perceptual phenomena associated with the funnel patterns under consideration. Studying pattern formation from spatiotemporal visual stimuli would be an interesting direction for future work. As an open question directly related to the current study, it would be interesting to analytically show that a nonnegative response function (as, e.g., the response function (NL1) ###reference_ix1### of Section 4.2 ###reference_###), which models wholly excitatory or inhibitory influence, does not reproduce the phenomenon, as suggested by the numerical simulations exhibited in [tamekue2023, Fig. 8]. Moreover, finding a systematic analytical method for explicitly studying the qualitative properties of the output pattern (e.g., the structure of the zero level-set) would be valuable. The starting point could be to investigate the case of the semilinear response function since numerical analysis arguments and simulations suggest that this nonlinearity reproduces the phenomenon."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Complementary results",
"text": "This section contains miscellaneous results used in the previous sections. We begin with the following Gronwall lemma; see, for instance, [emmrich1999discrete, Proposition 2.1] for a proof.\nAssume that , satisfies the integral inequality\nfor some and . Then satisfies the pointwise estimate\nwhere .\nThe following result is used to prove that (NF ###reference_###) with a linear response function does not model Billock and Tsou\u2019s observations for a funnel pattern localized either in the fovea or peripheral visual field.\nUnder the considerations of Remark 4.4 ###reference_theorem4###,\nthe kernel defined in (4.5 ###reference_###) satisfies, for any ,\nHere, for any , we have that , and, letting and , we have\nWe recall that for , one has\nWe are looking for poles of the following meromorphic function\nBy careful computations, one finds that the poles of in are given by , , and , where for , one has\nwhere and are given by (A.4 ###reference_###), and and are given by (A.5 ###reference_###). Then the residues of are given for by\nWe now fix , and we let\nWe consider the path going straight along the real axis from to and then counterclockwise along a semicircle centred at in the upper half of the complex plane, , where . Then, by the residue theorem, one has for all ,\nwhere and are such that\nArguing as in the proof of [tamekue2024mathematical, Theorem B.1.], we can prove that\nFinally, passing to the limit as in (A.9 ###reference_###) completes the proof.\n\u220e\nWe start by noticing that\nHence, one can take small enough, such that\nConsider the fixed point equation\nThanks to (A.11 ###reference_###), the contraction mapping principle ensures the existence and uniqueness of the solution to the above. In particular, it holds\nConsider now the fixed point equation of the type (A.12 ###reference_###) with . 
Thanks to (A.11 ###reference_###), this equation admits a unique solution such that\nWe now claim that there exists a constant depending only on the parameters of the coupling kernel such that\nFirst of all, observe that by (A.14 ###reference_###) we have\nNext, for any , by (A.12 ###reference_###) and (A.14 ###reference_###), we have\nwhere\nUsing (A.16 ###reference_###) and the fact that is globally -Lipschitz continuous, one has\nIt is then immediately seen that\nHere, denotes possibly different constants only depending on . As for , we deduce from (A.11 ###reference_###) and (A.21 ###reference_###) that\nCollecting (A.17 ###reference_###), (A.22 ###reference_###), and the above completes the proof of the claim.\nWe are now left to upper-bound for all and small enough. To proceed, we define the squares for . By definition of and , one gets that for every\nwhere\nBy Theorem 3.2 ###reference_theorem2### and the -Lipschitz continuity of , it is immediate to see that\nObserve that there exists such that , for every . Hence, it follows that there exists such that\nOn the other hand, by (A.11 ###reference_###), we have\nTo estimate , we start by noticing that, by construction, there exists such that if and only if . In particular, is Lipschitz continuous on if by Proposition 3.5 ###reference_theorem5###, with Lipschitz constant upper-bounded by defined in (3.10 ###reference_###) where the corresponding is equal to the maximum of the Lipschitz constants of and . Hence, for every we have\nIt follows that\nOn the other hand, for every , , we have\nHence, there exists a constant\nBy collecting the estimates (A.29 ###reference_###), (A.30 ###reference_###), (A.32 ###reference_###), and (A.34 ###reference_###), we obtain that\nFinally, collecting (A.13 ###reference_###), (A.15 ###reference_###) and (A.35 ###reference_###) yields the statement.\n\u220e"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.48\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.3.3.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"2\" id=\"S5.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"2\" id=\"S5.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"3\" id=\"S5.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.5\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.6\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_l ltx_border_rr\" id=\"S5.T1.12.12.10\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.4.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr\" id=\"S5.T1.5.5.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.6.6.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr\" id=\"S5.T1.7.7.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.8.8.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.9.9.6\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr\" id=\"S5.T1.10.10.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.11.11.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T1.12.12.9\"></th>\n</tr>\n</thead>\n<tbody 
class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.21.21\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S5.T1.21.21.10\">Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09108v2#S5.F10\" title=\"Figure 10 \u2023 5.2. Simulations for Billock and Tsou experiments \u2023 5. Numerical analysis and experiments \u2023 Reproducibility via neural fields of visual illusions induced by localized stimuli\"><span class=\"ltx_text ltx_ref_tag\">10</span></a>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S5.T1.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S5.T1.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.18.18.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S5.T1.19.19.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.20.20.8\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.21.21.9\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.30.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_rr\" id=\"S5.T1.30.30.10\">Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09108v2#S5.F11\" title=\"Figure 11 \u2023 5.2. Simulations for Billock and Tsou experiments \u2023 5. 
Numerical analysis and experiments \u2023 Reproducibility via neural fields of visual illusions induced by localized stimuli\"><span class=\"ltx_text ltx_ref_tag\">11</span></a>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.24.24.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.25.25.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.26.26.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.27.27.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.28.28.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.29.29.8\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.30.30.9\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.39.39\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_rr\" id=\"S5.T1.39.39.10\">Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09108v2#S5.F12\" title=\"Figure 12 \u2023 5.2. Simulations for Billock and Tsou experiments \u2023 5. 
Numerical analysis and experiments \u2023 Reproducibility via neural fields of visual illusions induced by localized stimuli\"><span class=\"ltx_text ltx_ref_tag\">12</span></a>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.31.31.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.32.32.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.33.33.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.34.34.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.35.35.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.36.36.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.37.37.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.38.38.8\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.39.39.9\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.48.48\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_rr\" id=\"S5.T1.48.48.10\">Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09108v2#S5.F13\" title=\"Figure 13 \u2023 5.2. Simulations for Billock and Tsou experiments \u2023 5. 
Numerical analysis and experiments \u2023 Reproducibility via neural fields of visual illusions induced by localized stimuli\"><span class=\"ltx_text ltx_ref_tag\">13</span></a>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.40.40.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S5.T1.41.41.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.42.42.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S5.T1.43.43.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.44.44.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.45.45.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S5.T1.46.46.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.47.47.8\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.48.48.9\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Parameters used in the presented numerical simulation.</figcaption>\n</figure>",
"capture": "Table 1. Parameters used in the presented numerical simulation."
}
},
"image_paths": {
"1": {
"figure_path": "2401.09108v2_figure_1.png",
"caption": "Figure 1. Visual illustration of the retino-cortical map, redrawn from [billock2007]. The top-left corresponds to the funnel pattern in the retina, and on the top-right, the corresponding pattern of horizontal stripes is in V1. While the bottom-left corresponds to the tunnel pattern in the retina, and on the bottom-right, the corresponding pattern of vertical stripes is in V1. In particular, these images are regular in shape and symmetrical with respect to a specific subgroup of the plane\u2019s motion group [bressloff2001].",
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/Billock_retinotipical.png"
},
"2(a)": {
"figure_path": "2401.09108v2_figure_2(a).png",
"caption": "Figure 2. Billock and Tsou\u2019s experiments: the presentation of a funnel pattern stimulus in the centre (image on the top-left) induces an illusory perception of tunnel pattern in surround (image on the top-right) after a flickering of the empty region (the white region surrounding the stimulus pattern on the top-left). We have a reverse effect on the bottom. Adapted from [billock2007, Fig. 3].",
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimuli.png"
},
"2(b)": {
"figure_path": "2401.09108v2_figure_2(b).png",
"caption": "Figure 2. Billock and Tsou\u2019s experiments: the presentation of a funnel pattern stimulus in the centre (image on the top-left) induces an illusory perception of tunnel pattern in surround (image on the top-right) after a flickering of the empty region (the white region surrounding the stimulus pattern on the top-left). We have a reverse effect on the bottom. Adapted from [billock2007, Fig. 3].",
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-afterimages.png"
},
"3(a)": {
"figure_path": "2401.09108v2_figure_3(a).png",
"caption": "Figure 3. On the left, nonlinear response functions fm,\u03b1\u2062(s)=max\u2061(\u2212m,min\u2061(1,\u03b1\u2062s))subscript\ud835\udc53\ud835\udc5a\ud835\udefc\ud835\udc60\ud835\udc5a1\ud835\udefc\ud835\udc60f_{m,\\alpha}(s)=\\max(-m,\\min(1,\\alpha s))italic_f start_POSTSUBSCRIPT italic_m , italic_\u03b1 end_POSTSUBSCRIPT ( italic_s ) = roman_max ( - italic_m , roman_min ( 1 , italic_\u03b1 italic_s ) ) for different values of m\ud835\udc5amitalic_m and \u03b1\ud835\udefc\\alphaitalic_\u03b1. On the right a 1111D DoG kernel \u03c9\ud835\udf14\\omegaitalic_\u03c9.",
|
| 129 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/family-response-functions.png"
|
| 130 |
+
},
|
| 131 |
+
"3(b)": {
|
| 132 |
+
"figure_path": "2401.09108v2_figure_3(b).png",
|
| 133 |
+
"caption": "Figure 3. On the left, nonlinear response functions fm,\u03b1\u2062(s)=max\u2061(\u2212m,min\u2061(1,\u03b1\u2062s))subscript\ud835\udc53\ud835\udc5a\ud835\udefc\ud835\udc60\ud835\udc5a1\ud835\udefc\ud835\udc60f_{m,\\alpha}(s)=\\max(-m,\\min(1,\\alpha s))italic_f start_POSTSUBSCRIPT italic_m , italic_\u03b1 end_POSTSUBSCRIPT ( italic_s ) = roman_max ( - italic_m , roman_min ( 1 , italic_\u03b1 italic_s ) ) for different values of m\ud835\udc5amitalic_m and \u03b1\ud835\udefc\\alphaitalic_\u03b1. On the right a 1111D DoG kernel \u03c9\ud835\udf14\\omegaitalic_\u03c9.",
|
| 134 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/kernel-omega.png"
|
| 135 |
+
},
|
| 136 |
+
"4": {
|
| 137 |
+
"figure_path": "2401.09108v2_figure_4.png",
|
| 138 |
+
"caption": "Figure 4. Funnel pattern in the centre of the visual field.\n",
|
| 139 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_fovea.png"
|
| 140 |
+
},
|
| 141 |
+
"5": {
|
| 142 |
+
"figure_path": "2401.09108v2_figure_5.png",
|
| 143 |
+
"caption": "Figure 5. Horizontal stripes in the left area of V1.\n",
|
| 144 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_fovea-V1.png"
|
| 145 |
+
},
|
| 146 |
+
"6": {
|
| 147 |
+
"figure_path": "2401.09108v2_figure_6.png",
|
| 148 |
+
"caption": "Figure 6. Funnel pattern in the peripheral visual field.\n",
|
| 149 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_surround-retina.png"
|
| 150 |
+
},
|
| 151 |
+
"7": {
|
| 152 |
+
"figure_path": "2401.09108v2_figure_7.png",
|
| 153 |
+
"caption": "Figure 7. Horizontal stripes in the right area of V1.\n",
|
| 154 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_surround-V1.png"
|
| 155 |
+
},
|
| 156 |
+
"8(a)": {
|
| 157 |
+
"figure_path": "2401.09108v2_figure_8(a).png",
|
| 158 |
+
"caption": "Figure 8. On the left, we have the sensory input IL\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8L\u2212x1)subscript\ud835\udc3c\ud835\udc3fsubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udf03\ud835\udc3fsubscript\ud835\udc651I_{L}(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta_{L}-x_{1})italic_I start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) with \u03bb=0.4\ud835\udf060.4\\lambda=0.4italic_\u03bb = 0.4 and \u03b8L=5subscript\ud835\udf03\ud835\udc3f5\\theta_{L}=5italic_\u03b8 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT = 5. On the right, we have the corresponding stationary output when the response function is the odd nonlinearity f1,1\u2062(s)=max\u2061(\u22121,min\u2061(1,s))subscript\ud835\udc5311\ud835\udc6011\ud835\udc60f_{1,1}(s)=\\max(-1,\\min(1,s))italic_f start_POSTSUBSCRIPT 1 , 1 end_POSTSUBSCRIPT ( italic_s ) = roman_max ( - 1 , roman_min ( 1 , italic_s ) ). The cortical data is defined on the square (x1,x2)\u2208[\u221210,10]2subscript\ud835\udc651subscript\ud835\udc652superscript10102(x_{1},x_{2})\\in[-10,10]^{2}( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) \u2208 [ - 10 , 10 ] start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT with step \u0394\u2062x1=\u0394\u2062x2=0.01\u0394subscript\ud835\udc651\u0394subscript\ud835\udc6520.01\\Delta x_{1}=\\Delta x_{2}=0.01roman_\u0394 italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = roman_\u0394 italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0.01. 
The parameters in the kernel \u03c9\ud835\udf14\\omegaitalic_\u03c9 are \u03c31=1/\u03c0subscript\ud835\udf0e11\ud835\udf0b\\sigma_{1}=1/\\piitalic_\u03c3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1 / italic_\u03c0, \u03c32=2/\u03c0subscript\ud835\udf0e22\ud835\udf0b\\sigma_{2}=\\sqrt{2}/\\piitalic_\u03c3 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = square-root start_ARG 2 end_ARG / italic_\u03c0 and \u03ba=1.2\ud835\udf051.2\\kappa=1.2italic_\u03ba = 1.2. Here \u03bc:=0.99\u2062\u03bc0assign\ud835\udf070.99subscript\ud835\udf070\\mu:=0.99\\mu_{0}italic_\u03bc := 0.99 italic_\u03bc start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, where \u03bc0subscript\ud835\udf070\\mu_{0}italic_\u03bc start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is defined in (2.9)-(2.4). These numerical results are obtained using the Julia toolbox from [tamekue:tel-04230895].",
|
| 159 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_fovea-V1.png"
|
| 160 |
+
},
|
| 161 |
+
"8(b)": {
|
| 162 |
+
"figure_path": "2401.09108v2_figure_8(b).png",
|
| 163 |
+
"caption": "Figure 8. On the left, we have the sensory input IL\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8L\u2212x1)subscript\ud835\udc3c\ud835\udc3fsubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udf03\ud835\udc3fsubscript\ud835\udc651I_{L}(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta_{L}-x_{1})italic_I start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) with \u03bb=0.4\ud835\udf060.4\\lambda=0.4italic_\u03bb = 0.4 and \u03b8L=5subscript\ud835\udf03\ud835\udc3f5\\theta_{L}=5italic_\u03b8 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT = 5. On the right, we have the corresponding stationary output when the response function is the odd nonlinearity f1,1\u2062(s)=max\u2061(\u22121,min\u2061(1,s))subscript\ud835\udc5311\ud835\udc6011\ud835\udc60f_{1,1}(s)=\\max(-1,\\min(1,s))italic_f start_POSTSUBSCRIPT 1 , 1 end_POSTSUBSCRIPT ( italic_s ) = roman_max ( - 1 , roman_min ( 1 , italic_s ) ). The cortical data is defined on the square (x1,x2)\u2208[\u221210,10]2subscript\ud835\udc651subscript\ud835\udc652superscript10102(x_{1},x_{2})\\in[-10,10]^{2}( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) \u2208 [ - 10 , 10 ] start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT with step \u0394\u2062x1=\u0394\u2062x2=0.01\u0394subscript\ud835\udc651\u0394subscript\ud835\udc6520.01\\Delta x_{1}=\\Delta x_{2}=0.01roman_\u0394 italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = roman_\u0394 italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0.01. 
The parameters in the kernel \u03c9\ud835\udf14\\omegaitalic_\u03c9 are \u03c31=1/\u03c0subscript\ud835\udf0e11\ud835\udf0b\\sigma_{1}=1/\\piitalic_\u03c3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1 / italic_\u03c0, \u03c32=2/\u03c0subscript\ud835\udf0e22\ud835\udf0b\\sigma_{2}=\\sqrt{2}/\\piitalic_\u03c3 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = square-root start_ARG 2 end_ARG / italic_\u03c0 and \u03ba=1.2\ud835\udf051.2\\kappa=1.2italic_\u03ba = 1.2. Here \u03bc:=0.99\u2062\u03bc0assign\ud835\udf070.99subscript\ud835\udf070\\mu:=0.99\\mu_{0}italic_\u03bc := 0.99 italic_\u03bc start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, where \u03bc0subscript\ud835\udf070\\mu_{0}italic_\u03bc start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is defined in (2.9)-(2.4). These numerical results are obtained using the Julia toolbox from [tamekue:tel-04230895].",
|
| 164 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/solution-BT-stimulus_fovea-V1.png"
|
| 165 |
+
},
|
| 166 |
+
"9": {
|
| 167 |
+
"figure_path": "2401.09108v2_figure_9.png",
|
| 168 |
+
"caption": "Figure 9. Summary on the ranges of parameters m\u22650\ud835\udc5a0m\\geq 0italic_m \u2265 0 and \u03b1>0\ud835\udefc0\\alpha>0italic_\u03b1 > 0 where the nonlinearity fm,\u03b1subscript\ud835\udc53\ud835\udc5a\ud835\udefcf_{m,\\alpha}italic_f start_POSTSUBSCRIPT italic_m , italic_\u03b1 end_POSTSUBSCRIPT reproduces the phenomenon or not, for a funnel-like stimulus having the cortical representation defined in (2.13). The parameters reproduce the phenomenon in blue, and in grey, they don\u2019t reproduce it.",
|
| 169 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/reproducibility.png"
|
| 170 |
+
},
|
| 171 |
+
"10(a)": {
|
| 172 |
+
"figure_path": "2401.09108v2_figure_10(a).png",
|
| 173 |
+
"caption": "Figure 10. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8\u2212x1)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3b\ud835\udf03subscript\ud835\udc651I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta-x_{1})italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ). Right: corresponding stationary output.",
|
| 174 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_fovea-V1.png"
|
| 175 |
+
},
|
| 176 |
+
"10(b)": {
|
| 177 |
+
"figure_path": "2401.09108v2_figure_10(b).png",
|
| 178 |
+
"caption": "Figure 10. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8\u2212x1)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3b\ud835\udf03subscript\ud835\udc651I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta-x_{1})italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ). Right: corresponding stationary output.",
|
| 179 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/solution-BT-stimulus_fovea-V1-sigmoid.png"
|
| 180 |
+
},
|
| 181 |
+
"11(a)": {
|
| 182 |
+
"figure_path": "2401.09108v2_figure_11(a).png",
|
| 183 |
+
"caption": "Figure 11. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(x1\u2212\u03b8)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udc651\ud835\udf03I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(x_{1}-\\theta)italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b8 ). Right: corresponding stationary output.",
|
| 184 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/BT-stimulus_surround-V1-large.png"
|
| 185 |
+
},
|
| 186 |
+
"11(b)": {
|
| 187 |
+
"figure_path": "2401.09108v2_figure_11(b).png",
|
| 188 |
+
"caption": "Figure 11. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(x1\u2212\u03b8)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udc651\ud835\udf03I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(x_{1}-\\theta)italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b8 ). Right: corresponding stationary output.",
|
| 189 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/solution-BT-stimulus_surround-V1-sigmoid.png"
|
| 190 |
+
},
|
| 191 |
+
"12(a)": {
|
| 192 |
+
"figure_path": "2401.09108v2_figure_12(a).png",
|
| 193 |
+
"caption": "Figure 12. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8\u2212x1)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3b\ud835\udf03subscript\ud835\udc651I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta-x_{1})italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ). Right: corresponding stationary output.",
|
| 194 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/hf-BT-stimulus_fovea-V1-large.png"
|
| 195 |
+
},
|
| 196 |
+
"12(b)": {
|
| 197 |
+
"figure_path": "2401.09108v2_figure_12(b).png",
|
| 198 |
+
"caption": "Figure 12. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(\u03b8\u2212x1)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3b\ud835\udf03subscript\ud835\udc651I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(\\theta-x_{1})italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_\u03b8 - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ). Right: corresponding stationary output.",
|
| 199 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/hf-solution-BT-stimulus_fovea-V1-sigmoid.png"
|
| 200 |
+
},
|
| 201 |
+
"13(a)": {
|
| 202 |
+
"figure_path": "2401.09108v2_figure_13(a).png",
|
| 203 |
+
"caption": "Figure 13. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(x1\u2212\u03b8)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udc651\ud835\udf03I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(x_{1}-\\theta)italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b8 ). Right: corresponding stationary output.",
|
| 204 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/hf-BT-stimulus_surround-V1-large.png"
|
| 205 |
+
},
|
| 206 |
+
"13(b)": {
|
| 207 |
+
"figure_path": "2401.09108v2_figure_13(b).png",
|
| 208 |
+
"caption": "Figure 13. Left: sensory input I\u2062(x1,x2)=cos\u2061(2\u2062\u03c0\u2062\u03bb\u2062x2)\u2062H\u2062(x1\u2212\u03b8)\ud835\udc3csubscript\ud835\udc651subscript\ud835\udc6522\ud835\udf0b\ud835\udf06subscript\ud835\udc652\ud835\udc3bsubscript\ud835\udc651\ud835\udf03I(x_{1},x_{2})=\\cos(2\\pi\\lambda x_{2})H(x_{1}-\\theta)italic_I ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = roman_cos ( 2 italic_\u03c0 italic_\u03bb italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_H ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b8 ). Right: corresponding stationary output.",
|
| 209 |
+
"url": "http://arxiv.org/html/2401.09108v2/extracted/5893898/hf-solution-BT-stimulus_surround-V1-sigmoid.png"
|
| 210 |
+
}
|
| 211 |
+
},
|
| 212 |
+
"validation": true,
|
| 213 |
+
"references": [],
|
| 214 |
+
"url": "http://arxiv.org/html/2401.09108v2"
|
| 215 |
+
}
|
20241001/2401.10226v2.json
ADDED
The diff for this file is too large to render. See raw diff.

20241001/2401.10229v2.json
ADDED
The diff for this file is too large to render. See raw diff.
20241001/2401.12261v4.json
ADDED
@@ -0,0 +1,186 @@
{
"title": "Cloud-based XAI Services for Assessing Open Repository Models Under Adversarial Attacks",
"abstract": "The opacity of AI models necessitates both validation and evaluation before their integration into services. To investigate these models, explainable AI (XAI) employs methods that elucidate the relationship between input features and output predictions. The operations of XAI extend beyond the execution of a single algorithm, involving a series of activities that include preprocessing data, adjusting XAI to align with model parameters, invoking the model to generate predictions, and summarizing the XAI results. Adversarial attacks are well-known threats that aim to mislead AI models. The assessment complexity, especially for XAI, increases when open-source AI models are subject to adversarial attacks, due to the many possible combinations. To automate the numerous entities and tasks involved in XAI-based assessments, we propose a cloud-based service framework that encapsulates computing components as microservices and organizes assessment tasks into pipelines. Since current XAI tools are not inherently service-oriented, the framework also integrates open XAI tool libraries as part of the pipeline composition. We demonstrate the application of XAI services for assessing five quality attributes of AI models: (1) computational cost, (2) performance, (3) robustness, (4) explanation deviation, and (5) explanation resilience, across computer vision and tabular cases. The service framework generates aggregated analyses that showcase the quality attributes for more than a hundred combination scenarios.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Artificial intelligence models are increasingly accessible through the open community [1] and facilitate advancements in software applications across various domains. These advancements, represented by innovations in deep-learning models such as ViT, ConvNeXt, CVT, Swin Transformers, SegFormer, and ResNet [2, 3, 4, 5, 6, 7], underscore the rapid evolution of AI capabilities. However, the integration of these models into software applications necessitates a strict evaluation [8] of their quality attributes. A notable gap in current practices is the absence of a comprehensive service framework that facilitates an understanding of model explainability [9, 10]. The importance of explainable AI (XAI) has been increasingly recognized [11], driven by the need to build trust and ensure fairness within AI systems. XAI techniques, which aim to make the decision-making processes of AI models transparent [12], are essential in AI-enabled applications [13]. Additionally, adversarial attacks target vulnerabilities of AI models. They are modifications to input data that are barely visible to humans but can make AI models give incorrect inferences or predictions [14, 15]. The rising threats [16] targeting security-sensitive models introduce significant challenges to the deployment of AI in software services. Therefore, effective software quality assurance requires a comparative analysis to guide the development and refinement of AI models, ensuring their robustness and explainability across diverse applications. The integration of XAI techniques into AI models introduces additional computational layers [17]. The operational complexity of XAI evaluation [18] arises from multiple factors: managing diverse data types, integrating explanation methods with various AI models, evaluating explanations, and summarizing the results. These factors lead to evaluation scenarios that require dozens to hundreds of experiments. In addition, the complexity of assessment is further scaled by the product of multiple kinds of adversarial attacks. Comparative analysis between XAI and adversarial attacks increases the evaluation workload by necessitating the exploration of interactions between models and explanations under various adversarial conditions. Our goal is to address these complexities by automating the evaluation pipeline. We propose a cloud-based service framework that encapsulates computing components as microservices and organizes assessment tasks into pipelines. This framework also integrates open XAI tool libraries, which are not inherently service-oriented, as part of the pipeline composition. In scenario studies, we assess six vision models against three types of adversarial attacks employing five XAI methods, resulting in a total of ninety distinct combinations. We evaluate three transformer-based tabular models using two XAI methods across three datasets, resulting in eighteen combinations. We set up pipelines to assess quality attributes for every combination scenario, including (1) computational cost, (2) performance, (3) robustness, (4) explanation deviation, and (5) explanation resilience. We assessed these attributes across a range of model and XAI method combinations. The evaluation results indicate that higher explanation deviation requires more computational cost. We demonstrate the impact of adversarial attacks on model performance and their explanations."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Background and Related Work",
"text": "In this section, we introduce the XAI methods, review the current XAI tools and frameworks, and then present the taxonomy of adversarial attacks."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "II-A Introduction of XAI Methods",
"text": "Mainstream post-hoc XAI methodologies can be categorized into two distinct types [18]: model-specific and model-agnostic methods. Model-specific methods [19, 20, 21, 22, 23] derive feature importance values from the internal parameters of the model itself, offering insights directly linked to the model's internal mechanisms. In contrast, model-agnostic methods [24] establish feature importance by analyzing the relationship between the model's inputs and outputs, without requiring access to the model's internal parameters and layers. XAI in computer vision often involves generating visual explanations for the decisions made by AI models. Model-specific methods are computationally more efficient than model-agnostic methods [25]. For demonstration, Figure 1 presents selected examples derived from the Vision Transformer model [3] through our XAI service framework. Significant variations in results are observed for the same input data. These results indicate the need for systematic XAI evaluations within the XAI service framework, which is notably absent in previous tools and frameworks. Grad-CAM [19] utilizes the gradients of the target label flowing into the final convolutional layer to produce a coarse localization map highlighting the regions important for prediction. As an optimization, Grad-CAM++ [20] extends Grad-CAM by considering the weight of each pixel in the feature maps, allowing better handling of images with multiple occurrences of the same object. HiResCAM [21] generates high-resolution class activation maps, allowing for finer-detailed visual explanations at a higher computational cost. XGrad-CAM [22] focuses on increasing the linearity of the saliency maps, providing improvements. LayerCAM [23] introduces a layer-wise relevance propagation mechanism, which captures feature importance across different network layers for more detail. We also apply XAI methods to structured data within tabular models. Mean Centroid Prediff [26] and SHAP (SHapley Additive exPlanations) [24] stand out for their widespread acceptance as baselines. These methods quantify feature importance by assessing the impact of masking each feature on the model output. The methods provide a score for each feature, indicating its contribution to model predictions. Figure 2 illustrates an example of the top ten out of eighty-three global feature importance explanations from the RT-IoT2022 cybersecurity classification dataset [27]. We carry out XAI to determine the impact of the network log features on the classification of threats. We provide data-driven explanation deviation metrics to evaluate XAI methods in the Section V scenario studies."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "II-B XAI Tools Libraries and Frameworks",
"text": "The role of XAI in providing feature contribution explanations establishes trust in AI models [26]. Before introducing XAI as a service, we review the published tools and libraries. The Explainability 360 toolkit by IBM [28] integrates their explanation techniques within the toolkit package. Microsoft's InterpretML [29] offers support for eight tabular-model XAI approaches. The recent framework OmniXAI [30] offers a broad range of techniques for XAI. However, based on our tests and usage, the existing XAI frameworks have the following limitations: Expertise-Dependent Usage: the use of these tools often demands specific XAI expertise, which limits accessibility [31] for software engineers. Restricted Method Support: each library supports a limited number of different XAI methods [28, 29, 30]. The disparity in the number of methodologies supported by these tools complicates the selection process and necessitates additional preprocessing. Evaluation Procedure Deficiency: the comprehensive study [32] that performed quantitative evaluations on saliency methods is relevant; however, the current XAI tools lack standardized procedures for evaluating explanation results, which limits the selection and enhancement of these XAI tools [33, 10]. Cloud Service Support Limitations: explicit support for cloud AI services is rare among these tools, affecting their applicability across cloud service environments. These limitations underline the need for a service architecture that addresses these gaps, making XAI accessible and effective for diverse AI-enabled applications."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "II-C Adversarial Attacks Types",
"text": "Adversarial attacks are designed to mislead models and pose challenges to AI system security [34]. Adversarial attacks are broadly categorized into white-box and black-box attacks, each with distinct methodologies and implications. The taxonomy of adversarial attacks with example works is listed in Figure 3. In white-box attacks, the attacker needs certain knowledge of the model, including its architecture and parameters. Targeted attacks [35, 36, 37, 34] aim to manipulate the model to produce a specific, incorrect output. Non-targeted attacks [38, 39] aim to generate any incorrect response from the model. In black-box attacks, where the attacker has no information about the model's internals, the focus shifts to transferability attacks, query-based methods, and perturbation benchmarks. Perturbation benchmarks refer to algorithmically induced modifications applied to the data with the intent to mislead the model. The benchmark modifications can affect models and can further be employed to measure a model's robustness [44]. The study [44] presents corruption and perturbation as two methodologies for modifying images. Corruption typically refers to modifications that simulate natural or environmental degradation, while perturbation denotes generated changes in sequence. We merge these two groups of algorithmic modification methods from the study [44] under the broader category of adversarial perturbations. In the assessment scenarios, we adopt the ImageNet-C benchmark [44]. The benchmark [44] contains fifteen types of algorithmically generated corruptions from categories such as noise, blur, weather, and digital. The weather category contains corruptions that demonstrate excessive diversity. In the assessment scenarios, we select a representative algorithm from each of the three main categories and apply nine distinct corruption perturbations across three levels of severity. The Fooling LIME and SHAP [45] method is a recent study using data perturbation to attack LIME [46] and SHAP [24]. We also develop and launch multiple levels of perturbation attacks on the numerical features in selected tabular datasets for the scenarios."
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "III Overview of AI System\u2019s Quality Attributes",
|
| 39 |
+
"text": "An AI-based system refers to a software system that adopts AI models as components [47 ###reference_b47###]. Software quality is the ability of a software product to meet requirements during its operation under designated conditions [48 ###reference_b48###]. Software quality assurance involves an evaluation process of how well a software product satisfies its stated needs [49 ###reference_b49###].\nHowever, AI models are distinct from conventional software components. Their core characteristic is being data-centric [50 ###reference_b50###]. Additionally, these components are dynamic and continually evolving as they are exposed to new data over time [50 ###reference_b50###]. Inadequate, inaccurate, or unsuitable training data can result in unreliable models and biased decisions [51 ###reference_b51###]. To address quality assurance, the study [52 ###reference_b52###] presents the concept of adopting XAI early in model development.\nWe define the quality attributes of AI models and the explanations produced by XAI analysis in terms of model performance and explanation deviation.\nUnder adversarial attacks, these quality attributes are further specified as model robustness and explanation resilience.\nAdditionally, computational cost is considered in relation to deployment.\nThe metrics established for these quality attributes encapsulate the combined states of AI models, XAI methods, adversarial attacks, and datasets.\nThe results can be visualized using a radar chart, where each quality attribute is transformed into a normalized value.\nComputational Cost.\nComputational cost measures the resources required to execute an algorithm, including CPU, GPU, and memory utilization.\nWe record a computational cost attribute that encompasses the runtime and energy consumption of AI models and XAI techniques.\nThese metrics are measured in seconds (s) for runtime and watt-hours (Wh) for energy consumption.\nThe CodeCarbon library [53 ###reference_b53###] enables the program to track resource utilization by monitoring hardware-specific parameters.\nIn addition, CodeCarbon estimates the environmental carbon footprint based on energy consumption.\nModel Performance.\nThe evaluation of model performance draws on a range of metrics. The typical metrics are Top-N accuracy [54 ###reference_b54###], precision, recall, and F1 score [55 ###reference_b55###]. In the multi-class context, we aggregate True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) across all classes. In addition, the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) is a calibrating metric to assess a model\u2019s ability to distinguish between classes [56 ###reference_b56###].\nModel Robustness.\nRobustness evaluation measures a model\u2019s performance degradation under specific adversarial conditions.\nFor computer vision models, an image adversarial perturbation benchmark [44 ###reference_b44###] assesses the model under adversarial conditions. Mean Corruption Error (mCE) represents model robustness in a previous study [44 ###reference_b44###].\nThis metric aggregates the normalized error rates of a model across corruption types and their respective severity levels. The formulation for mCE [44 ###reference_b44###] is detailed below:\n$\\mathrm{CE}_{c}^{f} = \\frac{\\sum_{s} E_{s,c}^{f}}{\\sum_{s} E_{s,c}^{\\mathrm{ref}}}, \\quad \\mathrm{mCE}^{f} = \\frac{1}{|C|} \\sum_{c \\in C} \\mathrm{CE}_{c}^{f}$\nHere, $f$ denotes the model under evaluation. For corruption type $c$, $s$ indicates the severity level. The variable $E_{s,c}^{f}$ is the error rate of the model $f$, while $E_{s,c}^{\\mathrm{ref}}$ specifies the error rate of a reference model, such as AlexNet according to the study [44 ###reference_b44###].\nInstead of comparing with a reference model, we set the Kolmogorov-Smirnov (K-S) statistic [57 ###reference_b57###] as a quantifiable metric. It represents the comparison of the distributions of the model outputs on two datasets:\n$D_{KS} = \\sup_{x} \\left| F_{o}(x) - F_{a}(x) \\right|$\nHere, $F_{o}$ and $F_{a}$ symbolize the model probability distributions of the original and adversarial datasets, respectively. The usage of $\\sup$ targets the maximum divergence between these two model inference distributions.\nBy incorporating the K-S statistic, we redefine the robustness against adversarial attacks. Equation 3 ###reference_### represents the assessment. A smaller value indicates better model robustness:\n$\\mathrm{Rob}(f) = \\frac{1}{|C||S|} \\sum_{c \\in C} \\sum_{s \\in S} D_{KS}\\left(F_{o}, F_{a}^{c,s}\\right)$\nFor the tabular models, we generate feature perturbations on the datasets. Specifically, in the scenario, we apply a random perturbation to the features with a numerical severity factor. Similar to our approach, the adversarial attack [45 ###reference_b45###] introduces a designed biased perturbation, instead of our random perturbation, to attack the SHAP [24 ###reference_b24###] and LIME [46 ###reference_b46###] methods. The robustness Equation 3 ###reference_### can also be used in tabular cases.\n###figure_4### Explanation Deviation.\nExplanation deviation assesses the impact of the feature importance explanation on the model\u2019s outputs.\nAt its core, explanation deviation measures the discrepancy between a model\u2019s predictions when all features are considered versus when only the important features are emphasized.\nTherefore, measuring explanation deviation is a means to validate the actual influence of presumed features on the model\u2019s outputs.\nIn vision scenarios, the saliency map is applied to assess how important each pixel is to a model\u2019s output, creating visual importance maps.\nGrad-CAM [19 ###reference_b19###] is commonly accepted for explaining CNN-based vision models [2 ###reference_b2###]. However, whether these XAI methods perform as effectively for transformer-based models remains under evaluation [58 ###reference_b58###].\nExplanation deviation is determined by the prediction change score, which indicates the impact of the saliency map area on the model\u2019s predictions.\nThe masked image $x_{m}$ is calculated as the element-wise multiplication of the original image $x$ and the normalized, three-dimensional saliency mask $M$.\nHere, $p_{y}(\\cdot)$ is defined as the model\u2019s prediction probability for class $y$:\n$\\mathrm{dev} = 1 - \\left| p_{y}(x) - p_{y}(x_{m}) \\right|$\nThe smaller the change in probability value, the closer the deviation is to one.\nExtending to the whole dataset, the overall explanation deviation is the median value across samples.\nIn tabular scenarios, the XAI method computes feature importance values for each data sample.\nThe explanation deviation is represented by evaluating the consistency of the feature importance order, according to the study [18 ###reference_b18###].\nExplanation Resilience.\nResilience to adversarial attacks is measured by the difference in explanation deviation between non-adversarial and adversarial conditions.\nTherefore, the explanation resilience is quantified by Equation 5 ###reference_###:\n$\\mathrm{res} = \\mathrm{dev}_{\\mathrm{orig}} - \\mathrm{dev}_{\\mathrm{adv}}$\nThe resilience attribute indicates how much the explanation deviation metric decreases for adversarial reasons.\nFor tabular data, resilience can likewise be calculated by subtracting the explanation deviation on adversarial data from that on the original data.\nComparing the original and perturbed situations allows for resilience assessment: the smaller the resilience value, the better the explanation resists the impact of adversarial attacks."
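The two-sample K-S statistic at the heart of the robustness metric can be computed directly from two samples of model output probabilities. Below is a plain-NumPy sketch of the empirical version (the maximum gap between the two empirical CDFs); variable names are illustrative.

```python
import numpy as np

def ks_statistic(p_orig, p_adv) -> float:
    """Two-sample Kolmogorov-Smirnov statistic between model output
    probabilities on the original and adversarial datasets.

    Smaller values mean a smaller distribution shift, i.e. better
    robustness under the definition above.
    """
    p_orig = np.sort(np.asarray(p_orig, dtype=float))
    p_adv = np.sort(np.asarray(p_adv, dtype=float))
    # Evaluate both empirical CDFs at every observed sample point.
    grid = np.concatenate([p_orig, p_adv])
    cdf_orig = np.searchsorted(p_orig, grid, side="right") / p_orig.size
    cdf_adv = np.searchsorted(p_adv, grid, side="right") / p_adv.size
    return float(np.max(np.abs(cdf_orig - cdf_adv)))

# Identical distributions give 0; fully separated distributions give 1.
a = np.linspace(0.0, 1.0, 100)
print(ks_statistic(a, a))         # 0.0
print(ks_statistic(a, a + 10.0))  # 1.0
```

`scipy.stats.ks_2samp` computes the same statistic; the explicit version above just makes the CDF comparison visible.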
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Pipelines of XAI Centric Assessment of Quality Attributes",
"text": "In this section, we define the pipelines for obtaining quality attributes. We then illustrate the cloud-based service architecture. Finally, the operational overhead of the XAI service is compared with that of an existing framework."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "IV-A Define Pipelines for Each Quality Attribute",
"text": "This study integrates five quality attribute evaluations.\nFigure 4 ###reference_### shows the multiple parallel processes for assessing quality attributes.\nThis multifaceted assessment evaluates the quality attributes under both original and adversarial conditions.\nComputational Cost Pipeline: This pipeline records the computational resources used during model inference and XAI method execution. The average resource consumption is logged by the Coordination Center when processing a significant volume of data inputs. The record covers AI models, XAI methods, and evaluations.\nModel Performance Pipeline: To assess model performance, the pipeline begins with the datasets provided by the Data Processing Microservice.\nThese datasets are sequentially fed into the Models Microservice, which encapsulates the pre-trained model.\nThe outcomes are extracted and persisted, and the Evaluation Microservice derives the performance metrics.\nExplanation Deviation Pipeline: This pipeline calculates explanation deviation. The XAI Methods Microservice is applied to generate explanations. For computer vision tasks, the explanation results are fed back into the model to measure the drop in prediction confidence, thereby calculating explanation deviation. The prediction change values within the consistency metric [18 ###reference_b18###] can also be derived directly from tabular models.\nRobustness Pipeline: To assess model performance decline under adversarial attacks, the pipeline begins with the Data Processing Microservice preparing perturbed data.\nThe Models Microservice then processes both the original and perturbed datasets, yielding two sets of results.\nThe Evaluation Microservice measures shifts in the model\u2019s performance between the original and perturbed datasets.\nExplanation Resilience Pipeline: Similar to the robustness pipeline but focused on XAI methods, this pipeline starts with the Data Processing Microservice preparing perturbed datasets.\nThese datasets are then subjected to the selected models and XAI methods.\nThe Evaluation Microservice calculates the changes in explanation deviation.\nEach pipeline is configurable for AI models, XAI methods, and adversarial datasets. The pipelines can be composed into singular or combined quality attribute assessment scenarios. Their deployment and execution require a runtime service-oriented architecture, which is defined in the next subsection."
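The pipelines above share a common shape: an ordered chain of microservice calls threading a shared context, with provenance recorded along the way. The sketch below illustrates that pattern with plain functions standing in for the microservices; all names and the toy robustness steps are hypothetical.

```python
from typing import Callable, Dict, List

def run_pipeline(steps: List[Callable[[dict], dict]], context: dict) -> dict:
    """Execute pipeline steps in order, threading a shared context
    (datasets, model outputs, metrics) and recording provenance."""
    provenance: List[str] = []
    for step in steps:
        context = step(context)
        provenance.append(step.__name__)
    context["provenance"] = provenance
    return context

# Toy robustness pipeline: perturb data -> run model -> evaluate the shift.
def prepare_perturbed(ctx: dict) -> dict:
    ctx["perturbed"] = [x + 0.1 for x in ctx["data"]]
    return ctx

def run_model(ctx: dict) -> dict:
    model = lambda x: 2.0 * x  # stand-in for the Models Microservice
    ctx["orig_out"] = [model(x) for x in ctx["data"]]
    ctx["adv_out"] = [model(x) for x in ctx["perturbed"]]
    return ctx

def evaluate(ctx: dict) -> dict:
    ctx["shift"] = max(abs(a - b) for a, b in zip(ctx["orig_out"], ctx["adv_out"]))
    return ctx

result = run_pipeline([prepare_perturbed, run_model, evaluate], {"data": [0.1, 0.5]})
print(result["provenance"], round(result["shift"], 3))
```

The recorded provenance list is what makes a run replayable: feeding the same step sequence and inputs back through `run_pipeline` reproduces the result.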
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "IV-B Define Services Architecture for the Pipeline Configuration",
"text": "We introduce a cloud service architecture designed to develop XAI assessment pipelines. This architecture enables analysis and comparison of various combinations of AI models, XAI methods, datasets, and adversarial attack approaches, as shown in Figure 5 ###reference_###.\n###figure_5### Coordination Center: This microservice executes unit operations based on the configuration, communicates with other microservices as per the task setup, and records provenance data for transparency and reproducibility.\nData Processing Microservice: This microservice ensures data is correctly formatted and meets XAI algorithm requirements. It also applies adversarial attack conditions.\nAI Model Microservice: This service encapsulates and deploys pre-trained AI models, including open-source models from the HuggingFace community, within its framework.\nXAI Method Microservice: This service offers XAI methods that generate explanations. XAI algorithms [26 ###reference_b26###] and tool libraries [30 ###reference_b30###] can be encapsulated into the service.\nEvaluation Microservice: This service aggregates results and systematically evaluates the defined quality attributes.\nCollectively, these components create a cloud-based architecture supporting the complete evaluation process.\nThe architecture\u2019s flexibility allows services to be switched in order to test various AI models or XAI methods, facilitating extensive investigation into numerous combinations.\nAdditionally, the deployed services are reusable across multiple pipelines.\nFurthermore, the JSON-based configuration template, as presented in Figure 6 ###reference_###, specifies how the service executes the pipeline according to the user\u2019s inputs. The template allows users to define the interactions between different services and customize the evaluation process to suit specific requirements.\nPipelines are executed through coordination centers, each configured with its own JSON-based configuration file. Upon receiving a configuration file, a coordination center systematically accesses the specified microservices to execute each pipeline step."
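To make the configuration idea concrete, the snippet below parses a hypothetical JSON task file and walks its steps in the order a coordination center might dispatch them. The keys and service names are invented for illustration and are not the framework's exact schema.

```python
import json

# Hypothetical pipeline configuration; keys and service names are
# illustrative, not the framework's actual template.
config_text = """
{
  "pipeline": "explanation_deviation",
  "dataset":    {"service": "data-processing", "name": "imagenet-subset"},
  "model":      {"service": "ai-model",        "name": "swin-transformer"},
  "xai":        {"service": "xai-method",      "name": "grad-cam"},
  "evaluation": {"service": "evaluation",      "metric": "prediction_change"}
}
"""

config = json.loads(config_text)
# A coordination center would call each referenced microservice in order.
order = ["dataset", "model", "xai", "evaluation"]
for key in order:
    print(f"dispatch {key} -> {config[key]['service']}")
```

Because the file fully names every dataset, model, and method, re-running it reproduces the same pipeline, which is the basis of the provenance-driven reproducibility discussed next.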
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "IV-C Comparative Evaluation and Service Integration of Existing XAI Frameworks",
"text": "The architecture enables the encapsulation and customization of external XAI algorithms and frameworks.\nXAI microservices can encapsulate not only XAI methods but also published libraries and frameworks. For instance, OmniXAI [30 ###reference_b30###] aggregates enriched XAI methods for various data types.\nWe import the OmniXAI package and employ the related functionalities to compute explanations in the XAI method microservices.\nThe remaining required units can continue to use our service architecture.\nTo verify the integration of the external tool framework, we compare our XAI service with the recently published OmniXAI framework [30 ###reference_b30###]. We focus on differences in task reproducibility, metrics, and ease of use.\nReproducibility: Reproducibility ensures that results can be reliably validated.\nOther frameworks offer explanation methods but lack detailed provenance data recording.\nValidating and reproducing an XAI process requires obtaining the original dataset and editing the source code.\nThis demands manual effort to reproduce the complex XAI process.\nOur service framework makes the evaluation pipeline reproducible using provenance data.\nUsers can execute the pipeline with a single command, using the provenance data as the configuration file.\nEvaluation Metrics: Other XAI frameworks do not provide evaluations of their methods.\nWithout rigorous evaluation, the effectiveness of the explanations provided by different XAI methods remains undetermined.\nOur framework introduces the calculation of XAI consistency metrics [18 ###reference_b18###].\nWe also define the XAI-centric assessment quality attributes.\nEase of Use: Our framework provides a streamlined SDK interface that simplifies pipeline execution.\nTable I ###reference_### provides a detailed comparison of the operational steps.\nThe JSON-based configuration template allows users to select datasets, models, and XAI methods in a structured manner.\nBy automating several steps such as data preparation and environment setup, our service minimizes the operational overhead of systematic XAI processes.\nResult Values Comparison: We compared the explanation deviation results of our service framework with those of existing tools using the ImageNet dataset [59 ###reference_b59###, 44 ###reference_b44###] test set, containing ten thousand images.\n###figure_6### Our XAI service includes multiple CAM-based methods [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###]. The recent tool framework OmniXAI [30 ###reference_b30###] employs only Grad-CAM [19 ###reference_b19###] as the XAI method for vision models.\nFigure 7 ###reference_### presents a comparative analysis of the results from implementing Grad-CAM in our service framework versus OmniXAI.\nThe results show the same prediction confidence distribution; however, our XAI service framework significantly streamlines operations.\nAdditionally, our framework is containerized and suitable for cloud platform environments. The RESTful API design enables easy integration with external cloud model services."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Assessment Scenarios and Quality Attribute Analysis",
"text": "We introduce experimental scenarios to demonstrate the service\u2019s functionality.\nBy running the pipelines defined by the service framework, we aim to investigate the following research questions:\nRQ1: Is the explanation deviation generated by XAI methods variable across models with different structures?\nRQ2: What is the relationship between computational cost and explanation deviation in model-XAI combinations?\nRQ3: Considering the known impacts of adversarial perturbations on model performance metrics, how do these perturbations influence the explanation deviation?\nThe related source code and experimental results can be found in the GitHub repository 111https://github.com/ZeruiW/XAIport.\nExperiments are conducted in a controlled environment to ensure consistency.\nThe experimental evaluation used a local setup with an Nvidia RTX 4090 GPU based on the AD102 graphics processor.\nThis GPU features a core clock speed of 2.52 GHz and 24 GB of GDDR6X VRAM. All experiments were conducted on a Linux 6.2.0 system (Ubuntu 22.04 LTS) with Python 3.8.18, CUDA 12.1, and PyTorch 2.1.0 to leverage the GPU\u2019s capabilities."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "The Assessment Scenario of Vision Models",
"text": "Process Configuration:\nOur dataset comprised 10,000 images from ImageNet [59 ###reference_b59###], covering one thousand classes.\nWe introduced three types of adversarial perturbations from ImageNet-C [44 ###reference_b44###]: Gaussian Noise, Defocus Blur, and Pixelate, each at three severity levels.\nAs listed in Table II ###reference_###, the models selected for evaluation are Vision Transformer (ViT) [3 ###reference_b3###], Convolutional Neural Networks Next (ConvNeXt) [6 ###reference_b6###], Convolutions Vision Transformer (CVT) [7 ###reference_b7###], Swin Transformer (Swin) [4 ###reference_b4###], Semantic Segmentation with Transformers (SegFormer) [60 ###reference_b60###], and Residual Networks (ResNet) [2 ###reference_b2###]. These are top-performing models sourced from the Huggingface open repository, pre-trained by their publishers.\nWe select ResNet [2 ###reference_b2###] as the baseline.\nAdditionally, our experiments employ various CAM-based methods [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] to generate saliency maps for the vision models.\nComputational Cost Analysis:\nEvaluating computational costs is essential to understanding the practical implications of deploying XAI techniques in applications.\nThe assessment covers processing time and energy consumption for different AI models using XAI techniques.\nTable III ###reference_### illustrates each model\u2019s combined inference time, XAI time, and total energy in the pipelines.\nThe results indicate significant variation in total processing time and energy consumption across models.\nThe Swin Transformer shows the highest total processing time and energy consumption, indicating a significant computational demand.\nThe CVT model consumes the second most energy and runtime.\nConversely, ConvNeXt has the lowest energy consumption and a relatively short total processing time.\nThis is reasonable because the employed ConvNeXt model [6 ###reference_b6###] is a relatively small model.\nThis suggests that CNN-based models, such as ConvNeXt and ResNet, remain potentially more viable options for applications where energy efficiency and fast processing are crucial.\nIn summary, these findings indicate significant variations in both time and energy consumption across different models.\nXAI incurs significantly higher computational costs than model inference.\nModel Performance Analysis:\nThe objective is to evaluate and compare the performance of leading open-source vision models from the Huggingface repository under a range of adversarial perturbations.\n###figure_7### Figure 8 ###reference_### displays the F1 scores for six computer vision models under both original and adversarial perturbation conditions.\nInitially, all models exhibit high performance on the original dataset.\nHowever, adversarial perturbations affect the performance of each model to varying degrees.\nAll models show performance degradation under adversarial perturbations.\nSpecifically, CVT and SegFormer show obvious vulnerability to adversarial perturbations.\nModel Robustness Analysis:\nThis analysis evaluates model robustness against different levels of adversarial perturbations.\nThe aim is to quantitatively evaluate and compare robustness by assessing the models\u2019 ability to maintain prediction accuracy.\nTo analyze changes in prediction distributions under adversarial perturbations and further assess model robustness, we employ the introduced Kolmogorov-Smirnov (K-S) statistic.\nInitially, we extract the probability values from the model inference on the original dataset.\nNext, we extract prediction probabilities for each perturbed dataset and compute the K-S statistic.\nA higher K-S value indicates a greater shift in the model\u2019s outputs, implying a significant reduction in robustness.\n###figure_8###
Figure 9 ###reference_### shows that the K-S values offer perspectives on model robustness.\nSegFormer consistently exhibits higher K-S values across all perturbations, indicating significant prediction shifts under adversarial attacks.\nThis suggests vulnerability in maintaining prediction consistency against such perturbations.\nConversely, models such as ViT and Swin exhibit lower K-S values, implying more robust performance under adversarial conditions.\nThe bar plots in Figure 9 ###reference_### also display the K-S statistic values for each perturbation type (Gaussian Noise, Defocus Blur, Pixelate) across three levels (1, 2, 3).\nEach bar\u2019s height indicates how far the model\u2019s prediction distribution deviates from its original, caused by a specific perturbation.\nDifferent colors for each perturbation type enhance visual distinction.\nAs a result, this quantitatively evaluates the robustness of various models under diverse adversarial perturbations. The Vision Transformer and Swin Transformer models exhibit relatively higher robustness.\nExplanation Deviation Analysis:\nThe objective is to evaluate the impact of adversarial perturbations on the explanation deviation.\nUsing saliency maps to annotate images assists developers in identifying the reasons for model inaccuracies, thereby enhancing model performance.\n###figure_9### Figure 10 ###reference_### shows prediction change percentages on a heatmap.\nThe explanation deviation attribute can be calculated as one minus the value shown in the figure.\nThe figure is three-dimensional: the left rows label six vision models, while the right rows specify the original dataset and three types of adversarial perturbation. The labels at the bottom represent the various XAI methods.\nThe value in each cell reflects the summarized percentile of prediction value changes observed between the original inputs and the explanation inputs.\nLower values mean the model\u2019s inference is less affected by masking out irrelevant features, indicating that the XAI method more closely approximates the relevant features.\nConsequently, a lower summarized prediction change percentage in Figure 10 ###reference_### correlates with better XAI explanation deviation.\nSignificant variations are observed across different model-XAI combinations.\nThe results reveal that the Swin Transformer model maintains consistent explanation deviation across varied XAI methods and adversarial perturbation types, as confirmed through extensive cross-validation.\nExplanation Resilience Analysis:\nThe impact of adversarial perturbations in XAI scenarios remains an under-explored question.\nThis analysis seeks to determine whether adversarial perturbations contribute to misleading explanations.\nFigure 10 ###reference_### shows the prediction change percentages under adversarial perturbations, which are used to calculate the explanation resilience attribute.\nAdditionally, Table IV ###reference_### offers a concise summary of the explanation resilience results.\nThe smaller the resilience value, the better the explanation resists the impact of adversarial attacks.\nThe analysis shows that the explanation resilience attribute does not align with the performance and robustness attributes.\nThe findings reveal that some models, such as the Swin Transformer, retain consistent explanation deviation, whereas others show a significant decrease when encountering adversarial perturbations."
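The prediction-change computation behind the heatmap in Figure 10 can be sketched as follows: mask the image with the saliency map, re-run the model, and compare class probabilities. The toy model below stands in for a real classifier; the masking mirrors the element-wise multiplication described in Section III.

```python
import numpy as np

def explanation_deviation(model, image, saliency) -> float:
    """Explanation deviation as one minus the prediction change.

    `saliency` is an HxW map in [0, 1]; masking keeps only the regions
    the XAI method marks as important. A value near one means the
    masked-out pixels barely influenced the prediction.
    """
    masked = image * saliency[..., None]  # element-wise, broadcast to HxWxC
    p_orig = model(image)                 # probability of the predicted class
    p_masked = model(masked)
    return 1.0 - abs(p_orig - p_masked)

# Toy 'model': its class probability is the mean of the top-left quadrant.
toy_model = lambda img: float(img[:16, :16].mean())
image = np.random.rand(32, 32, 3)
saliency = np.zeros((32, 32))
saliency[:16, :16] = 1.0  # the saliency map exactly covers that quadrant

print(explanation_deviation(toy_model, image, saliency))  # 1.0: masking changed nothing
```

In the real pipeline the same comparison runs per image over the whole dataset, and the median deviation is reported per model-XAI combination.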
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "The Assessment Scenario of Tabular Models",
"text": "For structured data, we present the experimental results for all quality attributes.\nProcess Configuration:\nOur study evaluates tabular models using datasets from various domains.\nThe COMPAS [61 ###reference_b61###] Recidivism Risk Score Data and Analysis dataset is instrumental in assessing the predictive accuracy of recidivism, offering a rich source of sensitive real-world data.\nRT-IoT2022 [27 ###reference_b27###], a 2022 Real-Time Internet of Things (IoT) dataset for cybersecurity threat cases, is a collection of network traffic data.\nThe PriceRunner [62 ###reference_b62###] Product Classification and Clustering dataset provides a scenario in the e-commerce domain.\nTabTransformer [63 ###reference_b63###], TabNet [64 ###reference_b64###], and FT Transformer [65 ###reference_b65###] are employed as the models for tabular data.\nTabTransformer [63 ###reference_b63###], a model inspired by the Transformer architecture, is adapted for tabular data by encoding categorical features into embeddings.\nTabNet [64 ###reference_b64###] utilizes sequential attention to choose features at each decision step.\nThe FT Transformer [65 ###reference_b65###] optimizes the transformer model to handle numerical features.\nIn terms of XAI methods, Mean Centroid Prediff [26 ###reference_b26###] and SHAP [24 ###reference_b24###] are applied.\nComputational Cost Analysis:\nFor the computational cost analysis, we measure the time and energy consumption of inference with the three models and of the Mean Centroid Prediff [26 ###reference_b26###] and SHAP [24 ###reference_b24###] methods. Table V ###reference_### presents the recorded time and energy consumption per thousand samples for each algorithm.\nThe costs for the three transformer-based tabular models are comparable.\nHowever, XAI incurs significantly higher computational costs than model inference. It is observed that Mean Centroid Prediff [26 ###reference_b26###] has higher time and energy consumption than SHAP [24 ###reference_b24###].\nModel Performance and Robustness Analysis:\nWe evaluated the performance of various transformer-based tabular models [63 ###reference_b63###, 64 ###reference_b64###, 65 ###reference_b65###] against adversarial perturbations.\nThe performance metrics before and after the adversarial perturbations are shown in Table VI ###reference_###.\nThe adversarial perturbation causes the model performance to decrease. Among the tests, the performance decreases remarkably, especially for the RT-IoT dataset, which primarily consists of numerical data derived from sensors.\nExplanation Deviation and Resilience Analysis:\nWe employ the Mean Centroid Prediff [26 ###reference_b26###] and SHAP [24 ###reference_b24###] methodologies to calculate feature importance values. The explanation deviations are also shown in Table VI ###reference_###.\nThe average adversarial decrement in explanation deviation for Mean Centroid Prediff is 0.047, in contrast to a 0.099 reduction for SHAP.\nThis reveals that Mean Centroid Prediff exhibits superior resilience compared to SHAP; however, this comes at the cost of increased computational demands."
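The random numerical-feature perturbation used in the tabular scenarios can be sketched as below. The scheme shown (Gaussian noise scaled by each column's standard deviation and a severity factor) is an assumed illustration of the idea, not the exact attack implementation.

```python
import numpy as np

def perturb_numeric(X: np.ndarray, severity: float = 0.1, seed: int = 0) -> np.ndarray:
    """Randomly perturb numerical features: per-column Gaussian noise
    scaled by the column's standard deviation and a severity factor."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, X.shape) * X.std(axis=0) * severity
    return X + noise

# Three severity levels over a toy feature matrix (rows = samples).
X = np.array([[1.0, 100.0], [2.0, 110.0], [3.0, 120.0]])
variants = {s: perturb_numeric(X, severity=s) for s in (0.1, 0.3, 0.5)}
```

Scaling the noise by each column's standard deviation keeps the perturbation comparable across features with very different ranges, which matters for sensor-heavy datasets such as RT-IoT2022.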
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Conclusions and Insights on Research Questions",
"text": "The five quality attributes are designed to comprehensively evaluate the open-source models.\nFigure 11 ###reference_### presents a radar chart visualizing the comparative analysis of the selected vision and tabular models across the quality attributes.\nNote that the chart\u2019s values have been normalized, where a value closer to one indicates a better attribute.\nThis evaluation aims to support engineers in selecting and developing explainable models in AI-enabled software.\n###figure_10### Response to RQ1:\nThe evaluation results show variability in the explainability metrics of XAI methods across various models.\nThe scenario studies encompassed various models, ranging from CNN-based to Transformer-based and from vision to tabular data.\nWhen combined with different models, XAI methods offer diverse explanation utilities, as detailed in Figure 10 ###reference_###.\nThis variability highlights the importance of assessing compatibility between models and XAI methods.\nThis study introduces a cloud-based XAI service that automates the evaluation pipeline, thereby streamlining the assessment of explainability metrics across diverse models and methods.\nBy providing ease of customization, this service enables microservices to be reusable and pipelines to be reproducible.\nResponse to RQ2:\nThe investigation into the computational costs of XAI shows a notable burden, especially when compared to the costs of model inference.\nThe experimental XAI pipelines, as summarized in Table VII ###reference_###, show a correlation between a model\u2019s explanation deviation and its computational demands.\nThe Swin Transformer model, offering the highest model performance and explanation deviation, also incurs the highest computational costs, as indicated by processing time and energy consumption metrics.\nThis correlation showcases a trade-off between achieving desirable levels of explanation deviation and the associated computational costs, suggesting the need to evaluate the model-XAI combination and optimize this balance.\nResponse to RQ3:\nThe experiments demonstrate that adversarial perturbations significantly affect both model performance and explanations, as detailed in Table VIII ###reference_###.\nIn addition to p-values, we employ Cliff\u2019s Delta analysis [66 ###reference_b66###] to compare the impact of adversarial attacks.\nA Cliff\u2019s Delta of 0.129 for model performance suggests models are more accurate without adversarial attacks.\nA Cliff\u2019s Delta of 0.428 for explanation deviation indicates greater accuracy of XAI results without adversarial attacks.\nOur cloud-based XAI service offers an automated framework for assessing model-XAI combinations and acquiring quality attributes by executing the designed pipelines."
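The Cliff's Delta effect size used in the RQ3 analysis has a simple pairwise definition: the probability that a value from one group exceeds a value from the other, minus the reverse. A direct sketch (the sample scores are made up):

```python
def cliffs_delta(xs, ys) -> float:
    """Cliff's Delta effect size: P(x > y) - P(x < y) over all pairs.

    Values range from -1 to 1; 0 means the two groups overlap completely.
    """
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Made-up example: metric scores without vs. with adversarial attacks.
original = [0.90, 0.80, 0.85]
adversarial = [0.70, 0.75, 0.80]
print(cliffs_delta(original, adversarial))  # 8/9: originals are mostly higher
```

Being rank-based, Cliff's Delta complements the p-value by quantifying how large the attack-induced shift is, not just whether it is statistically detectable.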
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Conclusion",
"text": "This study proposes an XAI service framework designed to streamline operational complexities in XAI evaluation.\nThe framework provides API and SDK interfaces for operation and enables deployment on cloud platforms.\nIt facilitates data preprocessing, encompassing the processing of datasets, data transformations, and the application of adversarial perturbations.\nThe service encapsulates AI models and XAI methods, enabling flexible combinations arranged by the task configuration.\nWe develop and implement evaluation pipelines to assess five key quality attributes: computational cost, performance, robustness, XAI deviation, and XAI resilience.\nThe findings lead us to conclusions on the three research questions. First, we observed variability in the metric results across models and XAI methods. This shows the necessity of evaluation before selecting an XAI method for an AI model, ensuring that effective explanations are provided to stakeholders. Second, our analysis summarizes the relationship between computational cost and the explanation metric: high explanation deviation often comes at the expense of increased computational resources. Finally, we demonstrate that adversarial perturbations affect both the model and the XAI method, thereby emphasizing the importance of incorporating robust models and XAI methods.\nThese multidimensional quality attributes guide researchers and practitioners in making informed decisions in AI-based software development and deployment."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Operational Overhead Comparison Between XAI Service and Tool Framework</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.1.1.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1.1.1\" style=\"font-size:80%;\">Steps</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.1.1.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1.1.1\" style=\"font-size:80%;\">Our Service</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.1.1.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.3.1.1.1\" style=\"font-size:80%;\">OmniXAI<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.12261v4#bib.bib30\" title=\"\">30</a>]</cite> Framework</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1.1\">\n<span 
class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.2.1.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.1.1.1.1\" style=\"font-size:80%;\">Data Preparation</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.2.1.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.2.1.1.1\" style=\"font-size:80%;\">Automate data upload with formatted templates.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.2.1.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.3.1.1.1\" style=\"font-size:80%;\">Script data transformation and insertion.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.3.2.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.1.1.1.1\" style=\"font-size:80%;\">Environment Setup</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.3.2.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.2.1.1.1\" style=\"font-size:80%;\">Automate environment setup through Docker container.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" 
id=\"S4.T1.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.3.2.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.3.1.1.1\" style=\"font-size:80%;\">Manage dependency installation, library configuration, and compatibility.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.4.3.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.1.1.1.1\" style=\"font-size:80%;\">Configuration</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.4.3.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.2.1.1.1\" style=\"font-size:80%;\">Utilize JSON for configuration with defined rules.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.4.3.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.4.3.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.3.1.1.1\" style=\"font-size:80%;\">Configure interactions and dataset linkages manually.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.5.4.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.1.1.1.1\" style=\"font-size:80%;\">Pipeline 
Execution</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.5.4.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.2.1.1.1\" style=\"font-size:80%;\">Execute pipeline with a single SDK command.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.5.4.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.3.1.1.1\" style=\"font-size:80%;\">Load datasets, inferences, and compute XAI manually.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.6.5.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.1.1.1.1\" style=\"font-size:80%;\">Results Analysis</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.6.5.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.2.1.1.1\" style=\"font-size:80%;\">Provide built-in metrics and visualization.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.6.5.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.3.1.1.1\" 
style=\"font-size:80%;\">Summarize results with custom coding.</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.7.6.1.1.1\" style=\"width:65.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.7.6.1.1.1.1\" style=\"font-size:80%;\">Adjustments</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.7.6.2.1.1\" style=\"width:143.1pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.7.6.2.1.1.1\" style=\"font-size:80%;\">Support adjustments via JSON editing.</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T1.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S4.T1.1.7.6.3.1.1\" style=\"width:160.4pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.7.6.3.1.1.1\" style=\"font-size:80%;\">Modify code and restart processes manually.</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE I: Operational Overhead Comparison Between XAI Service and Tool Framework"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Selected State-of-the-Art Vision Models from the Huggingface Repository</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1.1\">Vision Model</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.2.1\">Publisher</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.3.1\">Huggingface Repository</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.1.1\">ViT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.1.2\">Google</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.1.3\">google/vit-large-patch32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.3.2.1\">ConvNeXt</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.3.2.2\">Meta</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.3.2.3\">facebook/convnext-tiny</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.4.3.1\">CVT</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.4.3.2\">Microsoft</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.4.3.3\">microsoft/cvt-13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.4.1\">Swin</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"S5.T2.1.5.4.2\">Microsoft</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.5.4.3\">microsoft/swin-large-patch4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.5.1\">SegFormer</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.5.2\">Nvidia</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.6.5.3\">nvidia/mit-b0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.7.6.1\">ResNet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.7.6.2\">Microsoft</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.1.7.6.3\">microsoft/resnet50</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE II: Selected State-of-the-Art Vision Models from the Huggingface Repository"
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Analysis of Computational Costs for Vision Models Based on per One Thousand Images</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.1.1.1\" style=\"width:42.7pt;\">Model</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.2.1.1\" style=\"width:42.7pt;\">Inference Time (s)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.3.1.1\" style=\"width:42.7pt;\">Mean XAI Time (s)</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.4.1.1\" style=\"width:42.7pt;\">Pipeline Energy (Wh)</span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.1.1.1\" style=\"width:42.7pt;\">ViT</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_align_top ltx_border_t\" id=\"S5.T3.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.2.1.1\" style=\"width:42.7pt;\">13.35</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.3.1.1\" style=\"width:42.7pt;\">62.50</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.1.2.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.2.1.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.2.1.4.1.1\" style=\"width:42.7pt;\">5.58</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.1.1.1\" style=\"width:42.7pt;\">ConvNext</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.2.1.1\" style=\"width:42.7pt;\">5.73</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.3.1.1\" style=\"width:42.7pt;\">41.19</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.3.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.3.2.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.3.2.4.1.1\" style=\"width:42.7pt;\">2.67</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.1.1\">\n<span 
class=\"ltx_p\" id=\"S5.T3.1.4.3.1.1.1\" style=\"width:42.7pt;\">CVT</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.4.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.2.1.1\" style=\"width:42.7pt;\">16.91</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.3.1.1\" style=\"width:42.7pt;\">69.76</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.4.3.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.4.3.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.4.3.4.1.1\" style=\"width:42.7pt;\">5.86</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.1.1.1\" style=\"width:42.7pt;\">ResNet</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.2.1.1\" style=\"width:42.7pt;\">10.37</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.3.1.1\" style=\"width:42.7pt;\">45.34</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.5.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.5.4.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.5.4.4.1.1\" style=\"width:42.7pt;\">3.32</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" 
id=\"S5.T3.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.1.1.1\" style=\"width:42.7pt;\">Swin</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.2.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.6.5.2.1.1.1\">32.38</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.3.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.6.5.3.1.1.1\">137.24</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.1.6.5.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.6.5.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.6.5.4.1.1\" style=\"width:42.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.6.5.4.1.1.1\">10.07</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.1.1.1\" style=\"width:42.7pt;\">SegFormer</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.2.1.1\" style=\"width:42.7pt;\">13.62</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.3.1.1\" 
style=\"width:42.7pt;\">65.08</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.1.7.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.1.7.6.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.7.6.4.1.1\" style=\"width:42.7pt;\">5.73</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE III: Analysis of Computational Costs for Vision Models Based on per One Thousand Images"
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Summary of Original Deviation, Adversarial Deviation and Explanation Resilience for Vision Models</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.1.1.1\">Model</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.2.1.1\">Original</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.3.1.1\">Adversarial</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.4.1.1\">Resilience</span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.T4.1.2.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.2.1.1.1.1\">ViT</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.T4.1.2.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.2.1.2.1.1\">0.267</span>\n</span>\n</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_border_t\" id=\"S5.T4.1.2.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.2.1.3.1.1\">0.160</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.T4.1.2.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.2.1.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.2.1.4.1.1\">0.107</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.3.2\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.3.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.3.2.1.1.1\">ConvNext</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.3.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.3.2.2.1.1\">0.265</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.3.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.3.2.3.1.1\">0.155</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.3.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.3.2.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.3.2.4.1.1\">0.110</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.4.3\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.4.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.4.3.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.4.3.1.1.1\">CVT</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.4.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.4.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.4.3.2.1.1\">0.310</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.4.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.4.3.3.1\">\n<span class=\"ltx_p\" 
id=\"S5.T4.1.4.3.3.1.1\">0.133</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.4.3.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.4.3.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.4.3.4.1.1\">0.177</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.5.4\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.5.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.5.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.5.4.1.1.1\">ResNet</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.5.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.5.4.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.5.4.2.1.1\">0.448</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.5.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.5.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.5.4.3.1.1\">0.351</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.5.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.5.4.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.5.4.4.1.1\">0.097</span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.6.5\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.6.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.6.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.6.5.1.1.1\">Swin</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.6.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.6.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.6.5.2.1.1\">0.763</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.6.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.6.5.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.6.5.3.1.1\">0.327</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T4.1.6.5.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.6.5.4.1\">\n<span class=\"ltx_p\" 
id=\"S5.T4.1.6.5.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.6.5.4.1.1.1\">0.436</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.7.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S5.T4.1.7.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.7.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.7.6.1.1.1\">SegFormer</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S5.T4.1.7.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.7.6.2.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.7.6.2.1.1\">0.636</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S5.T4.1.7.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.7.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.7.6.3.1.1\">0.614</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S5.T4.1.7.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T4.1.7.6.4.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.7.6.4.1.1\">0.022</span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE IV: Summary of Original Deviation, Adversarial Deviation and Explanation Resilience for Vision Models"
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>Analysis of Computational Costs for Tabular Models Based on per Thousand Rows</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T5.1.1.1.1\">Model and XAI</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T5.1.1.1.2\">Time (s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T5.1.1.1.3\">Energy (Wh)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.1.2.1.1\">TabTransformer</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.1.2.1.2\">16.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.1.2.1.3\">0.49</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.3.2.1\">TabNet</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.3.2.2\">19.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.3.2.3\">0.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.4.3.1\">FT Transformers</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.4.3.2\">15.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.4.3.3\">0.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.5.4.1\">Mean Centroid Prediff</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.5.4.2\">1112.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.5.4.3\">53.87</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.T5.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T5.1.6.5.1\">SHAP</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.1.6.5.2\">768.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.1.6.5.3\">26.74</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE V: Analysis of Computational Costs for Tabular Models Based on per Thousand Rows"
},
"6": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span>Analysis of Model Performance and XAI Deviation in Tabular Scenarios</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T6.1\" style=\"width:433.6pt;height:217.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(19.4pt,-9.7pt) scale(1.09838823138349,1.09838823138349) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T6.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T6.1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T6.1.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.1.1.2.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T6.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.1.1.3.1\">Model Performance</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T6.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.1.1.4.1\">XAI Deviation</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T6.1.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.2.2.1.1\">Original</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T6.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.2.2.2.1\">Adv. 
Changes</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T6.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.2.2.3.1\">Original</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T6.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.1.2.2.4.1\">Adv. Changes</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T6.1.1.3.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T6.1.1.3.1.1.1\">COMPAS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.3.1.2\">TabTransformer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.3.1.3\">0.683</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.3.1.4\">-0.118</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.3.1.5\">0.965</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.3.1.6\">-0.105</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.4.2.1\">TabNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.4.2.2\">0.674</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.4.2.3\">-0.114</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.4.2.4\">0.989</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.4.2.5\">-0.087</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.5.3.1\">FT Transformers</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.5.3.2\">0.690</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.5.3.3\">-0.103</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.5.3.4\">0.965</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.5.3.5\">-0.071</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.6.4\">\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T6.1.1.6.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T6.1.1.6.4.1.1\">RT-IoT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.6.4.2\">TabTransformer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.6.4.3\">0.921</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.6.4.4\">-0.435</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.6.4.5\">0.987</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.6.4.6\">-0.060</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.7.5.1\">TabNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.7.5.2\">0.883</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.7.5.3\">-0.364</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.7.5.4\">0.985</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.7.5.5\">-0.078</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.8.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.8.6.1\">FT Transformers</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.8.6.2\">0.950</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.8.6.3\">-0.603</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.8.6.4\">0.984</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.8.6.5\">-0.088</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T6.1.1.9.7.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T6.1.1.9.7.1.1\">PriceRunner</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.9.7.2\">TabTransformer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.9.7.3\">0.994</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.9.7.4\">-0.175</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.9.7.5\">0.977</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.1.1.9.7.6\">-0.063</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.10.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.10.8.1\">TabNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.10.8.2\">0.991</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.10.8.3\">-0.179</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.10.8.4\">0.973</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T6.1.1.10.8.5\">-0.062</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1.11.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.1.1.11.9.1\">FT Transformers</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.1.1.11.9.2\">0.997</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.1.1.11.9.3\">-0.174</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.1.1.11.9.4\">0.973</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.1.1.11.9.5\">-0.049</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "TABLE VI: Analysis of Model Performance and XAI Deviation in Tabular Scenarios"
},
"7": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE VII: </span>Explanation Deviation and Energy Consumption of the Selected Models</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T7.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T7.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T7.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.2.2.3.1\">Models(Vision/Tabular)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T7.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.1.1.1.1\">Explanation Deviation </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T7.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.2.2.2.1\">Energy (Wh) </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T7.2.3.1.1\">Swin</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.2.3.1.2\">0.770</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.2.3.1.3\">10.07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.4.2.1\">CVT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.4.2.2\">0.692</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.4.2.3\">5.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.5.3.1\">SegFormer</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.5.3.2\">0.678</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.5.3.3\">5.73</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.6.4.1\">ViT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.6.4.2\">0.598</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.6.4.3\">5.58</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.7.5.1\">ResNet</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.7.5.2\">0.474</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.7.5.3\">3.32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.8.6.1\">ConvNeXt</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.8.6.2\">0.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.8.6.3\">2.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T7.2.9.7.1\">TabNet</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.2.9.7.2\">0.983</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.2.9.7.3\">0.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T7.2.10.8.1\">TabTransformer</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.10.8.2\">0.977</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.2.10.8.3\">0.49</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.2.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T7.2.11.9.1\">FT Transformers</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T7.2.11.9.2\">0.974</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T7.2.11.9.3\">0.47</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE VII: Explanation Deviation and Energy Consumption of the Selected Models"
},
"8": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T8\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE VIII: </span>Model Performance and Explanation Deviation Changes with Adversarial Attacks, Results from 108 XAI Pipelines</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T8.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T8.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T8.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.2.3.1\">Attribute</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T8.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.1.1.1.1\">Significant ()</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T8.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.2.2.1\">Non significant ()</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T8.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T8.2.3.1.1\">Performance</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T8.2.3.1.2\">88.89%</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T8.2.3.1.3\">11.11%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T8.2.4.2.1\">Deviation</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T8.2.4.2.2\">69.44%</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T8.2.4.2.3\">30.56%</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE VIII: Model Performance and Explanation Deviation Changes with Adversarial Attacks, Results from 108 XAI Pipelines"
}
},
"image_paths": {
"1": {
"figure_path": "2401.12261v4_figure_1.png",
"caption": "Figure 1: Five CAM-based Visual Explanations from Vision Transformer Model with One Image Example.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/example_cam.png"
},
"2": {
"figure_path": "2401.12261v4_figure_2.png",
"caption": "Figure 2: Top 10 out of 83 SHAP Feature Importance Explanations from FT Transformer on RT-IoT Cybersecurity Threats Dataset.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/iot.png"
},
"3": {
"figure_path": "2401.12261v4_figure_3.png",
"caption": "Figure 3: Taxonomy of Adversarial Attacks. References: FGSM [35], C&W [36], JSMA [37], AdvGAN [34], DeepFool [38], UAP [39], DaST [40], Houdini [41], ZOO [42], One-Pixel [43], ImageNet-C [44], ImageNet-P [44], Fooling LIME and SHAP [45].",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/taxonomy.png"
},
"4": {
"figure_path": "2401.12261v4_figure_4.png",
"caption": "Figure 4: Assessment Pipelines for Open-source AI Model Quality Attributes.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/evaluation_process_Copy.png"
},
"5": {
"figure_path": "2401.12261v4_figure_5.png",
"caption": "Figure 5: Cloud-based XAI Service Architecture.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/CloudArchetecture.png"
},
"7": {
"figure_path": "2401.12261v4_figure_7.png",
"caption": "Figure 7: Explanation Deviation Analysis for XAI Service (a) Versus OmniXAI (b) Using GradCAM on ResNet",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/omniresults.png"
},
"8": {
"figure_path": "2401.12261v4_figure_8.png",
"caption": "Figure 8: Vision Models F1 Scores: Original Dataset and Adversarial Dataset Averages.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/f1.png"
},
"9": {
"figure_path": "2401.12261v4_figure_9.png",
"caption": "Figure 9: Comparison of K-S Statistics to Assess Model Robustness under Three Levels and Types of Perturbations (0: Identical, 1: Highly Divergent).",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/resilience.png"
},
"10": {
"figure_path": "2401.12261v4_figure_10.png",
"caption": "Figure 10: Heatmaps Illustrating Median Prediction Change Percentage for Original and Adversarial Perturbed Images. Lower Values Indicate Better Explanation Deviation.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/heatmap.png"
},
"11": {
"figure_path": "2401.12261v4_figure_11.png",
"caption": "Figure 11: Comprehensive Overview of Multiple Quality Attributes Assessment for the Selected Models.",
"url": "http://arxiv.org/html/2401.12261v4/extracted/5891706/Charts/radar.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2401.12261v4"
}
20241001/2401.15497v5.json
ADDED
The diff for this file is too large to render.
20241001/2401.17985v2.json
ADDED
The diff for this file is too large to render.
20241001/2402.01107v3.json
ADDED
The diff for this file is too large to render.