Dataset columns: context (string, 250–4.37k chars), A (string, 250–8.2k chars), B (string, 250–4.23k chars), C (string, 250–4.99k chars), D (string, 250–3.54k chars), label (string, 4 classes).
$$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}\,.$$
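As a quick numerical check of the eigenvalue form of this formula, the sketch below (our own illustration, not from the paper) computes $\tau(G)$ from the ordinary Laplacian spectrum with plain numpy; the 4-cycle $C_4$, which has exactly 4 spanning trees, serves as the test case.

```python
# Sketch (not from the paper): count spanning trees of a small graph via the
# Laplacian-eigenvalue form of the formula above, tau(G) = (1/|V|) * prod_{i>=2} lambda_i.
import numpy as np

def spanning_trees(adj):
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj          # combinatorial Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))        # lambda_1 = 0 <= lambda_2 <= ...
    return np.prod(eig[1:]) / adj.shape[0]

# Example: the 4-cycle C4 has exactly 4 spanning trees.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(round(spanning_trees(c4)))   # -> 4
```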
Being motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $\operatorname{Kf}(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$, where
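A minimal sketch of how the Kirchhoff index can be evaluated numerically, assuming the standard identity $r_{ij}=L^{+}_{ii}+L^{+}_{jj}-2L^{+}_{ij}$ for the Moore–Penrose pseudoinverse $L^{+}$ of the Laplacian (this helper is illustrative and not taken from the paper):

```python
# Sketch: Kirchhoff index Kf(G) = (1/2) * sum_{i,j} r_ij, using the standard
# identity r_ij = L+_ii + L+_jj - 2 L+_ij with L+ the Laplacian pseudoinverse.
import numpy as np

def kirchhoff_index(adj):
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    lplus = np.linalg.pinv(lap)
    d = np.diag(lplus)
    r = d[:, None] + d[None, :] - 2 * lplus       # matrix of resistance distances
    return 0.5 * r.sum()

# For the 4-cycle C4, Kf = 5 (adjacent pairs have resistance 3/4, opposite pairs 1).
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(kirchhoff_index(c4))   # -> 5.0
```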
As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff index and the degree-Kirchhoff index for any general class of graphs is not an easy task. But, if a suitable automorphism is found for a particular class of graphs, then using the decomposition theorem, one can derive the explicit formula for them.
We obtain the formulae for the total number of spanning trees of $P_n$ and $P'_n$ using Theorem 4 as follows.
The formulae for the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder and Möbius octagonal chains [15], and linear pentagonal derivation chains [14, 23] are already available in the literature. The formula for the Kirchhoff index of pentagonal chains was obtained by Wang and Zhang [21] in 2010, and the degree-Kirchhoff index for the same chain graphs was obtained by He et al. [6] in 2018. Although the formulae for the Kirchhoff and degree-Kirchhoff indices of several cylinder/Möbius chains have been computed by many researchers in the last few years, those for cylinder/Möbius pentagonal chains have not been attempted. Very recently, explicit formulae for the Kirchhoff index of the cylinder pentagonal chains and the Möbius pentagonal chains have been obtained by Sahir and Nayeem [19]. We now present explicit formulae for the degree-Kirchhoff index, Kemeny's constant, Gutman index, and Schultz index of the pentagonal cylinder chain $P_n$ (see Figure 1) and the pentagonal Möbius chain $P'_n$ (see Figure 2) on $|V|=5n$ ($n\geq 2$) vertices and $|E|=7n$ edges, as a continuation of previous works in that direction. For the above-said graphs, we also derive a relationship between the Schultz and Gutman indices and present a comparison between the Gutman indices and the degree-Kirchhoff indices for different values of $n$.
B
In their case this restriction is substantive rather than being without loss of generality, reflecting the differing structure of the incentive constraints that arise in screening and moral hazard problems. Di Tillio et al., (2017) do not have counterparts of our finding that the number of payment functions in an optimal ambiguous contract is precisely two in the presence of the MLRP condition, though they maintain throughout the screening counterpart of this assumption, in the form of a single-crossing condition.
An implication of our results is that in the context of moral hazard problems, ambiguity and max-min utility drive optimal designs towards simplicity. We thus join a literature, with Holmström and Milgrom, (1987) as a key early entry, endeavoring to explain why actual contracts in moral hazard settings tend to be simple, in contrast to their theoretical counterparts. Carroll, (2015), Carroll and Walton, (2022), and Dai and Toikka, (2022) show that a principal who is uncertain of the actions available to an agent and who has max-min preferences will optimally choose a linear contract. Dütting et al., (2019) show that the same holds for a principal who is uncertain about the technology by which actions turn into outcomes and who has max-min preferences.
A second branch of the literature examines settings in which the agent has ambiguous beliefs that the principal can potentially exploit. Beauchêne et al., (2019) and Cheng, (2020) examine Bayesian persuasion problems in which the sender exploits the ambiguity aversion of the receiver. Bodoh-Creed, (2012) and Di Tillio et al., (2017) examine screening problems with agents who have max-min preferences. Bose and Renou, (2014) examine mechanism design problems in which agents have max-min preferences. Lopomo et al., (2011) examine moral hazard problems with agents who have Bewley preferences. Bose et al., (2006) consider auctions in which the seller and bidders may both be ambiguity averse.
Dai and Toikka, (2022) examine a principal who writes contracts to shape the actions of a team of agents, with the principal holding ambiguous beliefs about the actions available to the agents. Dütting et al., (2019) examine moral hazard problems in which the principal has ambiguous beliefs about the distribution of outcomes induced by the agent’s actions.
A flourishing literature examines design problems in the face of non-Bayesian uncertainty. One branch of this literature examines models in which the principal entertains non-Bayesian uncertainty about the agents. Bergemann and Schlag, (2011) examine monopoly pricing on the part of a principal with ambiguous beliefs about buyers’ valuations. Carrasco et al., (2018) examine screening problems in which the principal is only partially informed of the distribution of agent preferences. Carroll, (2015), Carroll and Walton, (2022) and Kambhampati, (2023) examine moral hazard problems in which the principal has ambiguous beliefs about the set of actions the agent can choose from.
A
Most interestingly, our mechanism and that of [FS17] are fairly similar – both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries' values on each group – but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, Theorem 4 is a simple corollary of our subsampling framework. Indeed, it's not difficult to show that our mechanism does not satisfy standard $(\varepsilon,\delta)$-differential privacy with strong enough parameters to give a sample size bound close to that of Theorem 4.
Subsampling has been thoroughly explored in the context of privacy amplification (see e.g. [BBG18, ZW19] or the book chapter [Ste22]): if $\mathcal{A}$ is a differentially private algorithm, running $\mathcal{A}$ on a random subset of the data gives an algorithm with even better privacy parameters. Given the previous applications of differential privacy to adaptive data analysis, this seems like a natural starting point for our work. However, such an approach is not sufficient to analyze subsampling queries. Indeed, subsampling queries do not necessarily satisfy $(\varepsilon,\delta)$-differential privacy with sufficiently good parameters to give useful bounds on the bias.
We begin by sketching a proof of our main result in the simplest setting. Specifically, we'll show that if an analyst asks $\tilde{O}(n^{2})$ subsampling queries, each mapping $X^{1}$ to $\{0,1\}$, the last query asked will have low bias. This sketch follows the same three steps described in Section 2, except we replace "differential privacy" with average leave-one-out KL stability.
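For concreteness, a minimal sketch of the kind of mechanism being analyzed (the function names and the subsample-size parameter are our own, not the paper's): each query is answered by evaluating it on a fresh random subsample of the data; taking the subsample size to be 1 matches the $X^{1}\to\{0,1\}$ setting above.

```python
# Sketch: a subsampling query is answered by evaluating the query on a fresh
# random subsample of the data; with subsample size w = 1 this matches the
# setting above, where each query maps X^1 to {0, 1}. (Names are ours.)
import numpy as np

rng = np.random.default_rng(1)

def subsampling_answer(data, phi, w=1):
    """Evaluate phi on a uniformly random subsample of size w, drawn without replacement."""
    idx = rng.choice(len(data), size=w, replace=False)
    return phi(data[idx])

# Toy example: scalar data, phi(x) = 1[x > 0]; repeated answers are noisy but
# unbiased estimates of the fraction of positive points in the sample.
data = rng.normal(0.1, 1.0, size=1000)
answers = [subsampling_answer(data, lambda s: float(s[0] > 0)) for _ in range(1000)]
print(np.mean(answers), np.mean(data > 0))   # the two should be close
```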
Most interestingly, our mechanism and that of [FS17] are fairly similar – both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries' values on each group – but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, Theorem 4 is a simple corollary of our subsampling framework. Indeed, it's not difficult to show that our mechanism does not satisfy standard $(\varepsilon,\delta)$-differential privacy with strong enough parameters to give a sample size bound close to that of Theorem 4.
Fish, Reyzin, and Rubinstein explored the use of subsampling to speed up classical mechanisms for adaptive data analysis [FRR20]. For example, their mechanism for answering a statistical query $\varphi$ computes $\varphi$ on a random subsample of the data and adds Laplacian noise to that result. This allows them to retain the accuracy guarantees of prior mechanisms that added Laplacian noise [BNS+16] while also running in sublinear time. In contrast, our work shows that subsampling alone is sufficient, and achieves sample size bounds that improve upon prior work.
A
Let $F$ be an outerplanar graph with vertex $u\in V(F)$. If $u$ is not isolated then there exists a neighbour $v\in N_F(u)$ such that the graph obtained from $F$ by subdividing the edge $uv$ is outerplanar.
If otherwise $v$ is among the vertices at which $\boldsymbol{M}'_1$ and $\boldsymbol{M}'_2$ are glued together, then define $\boldsymbol{M}$ as the graph obtained from $\boldsymbol{M}'$ by deleting all edges incident with $v$ but keeping the vertex.
A graph $F$ is outerplanar if it does not have $K_4$ or $K_{2,3}$ as a minor. Equivalently, it is outerplanar if it has a planar drawing such that all its vertices lie on the same face [syslo_characterisations_1979].
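As a hedged illustration of this characterization (not part of the paper), outerplanarity can be tested in code via the standard fact that $F$ is outerplanar if and only if the graph obtained by adding one apex vertex adjacent to all of $V(F)$ is planar; the sketch assumes networkx is available.

```python
# Sketch: test outerplanarity by adding an apex vertex joined to every vertex
# and checking planarity of the result (F is outerplanar iff F + apex is planar).
import networkx as nx

def is_outerplanar(F):
    G = F.copy()
    apex = ("apex",)                      # a label unlikely to clash with V(F)
    G.add_edges_from((apex, v) for v in F.nodes)
    return nx.check_planarity(G)[0]

print(is_outerplanar(nx.cycle_graph(5)))                  # True: C5 is outerplanar
print(is_outerplanar(nx.complete_graph(4)))               # False: K4 is not
print(is_outerplanar(nx.complete_bipartite_graph(2, 3)))  # False: K_{2,3} is not
```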
Let $F$ be an outerplanar graph with vertex $u\in V(F)$. If $u$ is not isolated then there exists a neighbour $v\in N_F(u)$ such that the graph obtained from $F$ by subdividing the edge $uv$ is outerplanar.
Take an outerplanar embedding of $F$ which has some face incident to all the vertices, consider some edge incident to $u$ that is incident to this face, and subdivide that edge. Since the vertex created by subdivision is incident to the outer face,
D
We recruited 32 participants (17 males, 15 females) via mailing lists and word of mouth, mainly from STEM (Science, Technology, Engineering, and Mathematics) fields and business schools, aged between 19 and 37 years ($M=26$, $SD=3.57$) and with differing levels of experience with robots. After the experiment, the participants were given a questionnaire to collect their demographics and experience with robots.
The participant started from position C and moved towards the table in position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment the participant started moving from C to D. Participants were told that they were free to choose their walking speed and paths. The robot's walking was fully autonomous, and the participants were informed of this before the experiment. Additionally, the robot's obstacle avoidance was disabled so that Spot would not go off the track. The position and orientation information from the OptiTrack system was recorded as soon as the Spot started to move. After 8 repeats, the whole trajectory data of one participant could be reconstructed in 2D as shown in Figure 5. Because of the unpredictable participant height and the fluctuation of height during walking, the Z-axis coordinate is not considered when calculating personal distance. Because the Spot is a legged canine robot with no wheels, it is unlikely to walk repeatedly along exactly the same straight route, so there are small vertical offsets in the Spot's trajectories.
In our research, the motion capture system only tracks the position and orientation of objects instead of their motions. As a result, marker rigid bodies take the place of marker skeletons, which are more common in motion capture. As shown in Figure 2, rigid bodies are formed by 4 or more markers on the same plane, with a clear pre-set pivot to label their orientation. The position and orientation information is captured at a sampling rate of 100 Hz for later trajectory reconstruction and distance extraction.
The lab is equipped with an OptiTrack motion capture system, which functions on the outside-in tracking principle [39]. Six motion cameras are mounted around the experiment zone to take aligned 2D pictures of passive markers on objects, and the real-world 3D marker positions are calculated from the positions of the retroreflective markers in the 2D frames. The Motive software converts certain shapes formed by markers into a rigid body; the markers were installed asymmetrically so that the orientation can be identified, as in Figure 2. The rigid body coordinate system is left-handed, the same as the world coordinate system. The rigid body parameters are stored in OptiTrack configurations to make them recognizable in every experiment setup. With a sufficient frame rate, the system can capture the in-situ position of the marker rigid bodies in sight. The rigid bodies' position and orientation information is sampled at a rate of 100 Hz. The position information is then multicast on a local network with the Robot Operating System (ROS) / Virtual-Reality Peripheral Network (VRPN) communication toolkit using the UDP protocol to guarantee communication speed.
At the setup stage, six OptiTrack motion cameras were mounted around the experiment zone to capture the in-situ position of the marker rigid bodies in sight. The position and orientation information of the rigid bodies was multicast on a local network with ROS built-in UDP communication. The origin of the OptiTrack 3D space coordinate system was fixed at the floor center of the lab, and then the whole equipment was set up and well calibrated.
D
$$\theta_3=\frac{\pi}{2}-\theta_1+\theta_2-\theta_o$$
Cyclic Coordinate Descent (CCD) is an iterative algorithm used to solve inverse kinematics problems. Yotchon et al. [13] proposed a hybrid approach combining the CCD method with a differential evolution algorithm, a metaheuristic optimization technique, to tackle inverse kinematics challenges. This combined method reliably converges to the target position regardless of the initial conditions. The CCD algorithm works by minimizing joint errors through iterative adjustments of one component of the angular vector at a time. While it can be used in real-time applications, it typically requires multiple iterations to achieve the desired outcome [14]. One of the key benefits of CCD is its simplicity in implementation. The algorithm involves measuring the discrepancy between the target position and the end-effector position, then applying a rotation matrix to reduce this discrepancy to zero, all while considering joint constraints. This process is repeated sequentially for each joint, starting from the end-effector and moving towards the root joint of the kinematic chain.
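To make the CCD loop concrete, here is a minimal planar sketch (illustrative only: made-up link lengths, no joint constraints): each sweep goes from the joint nearest the end-effector back to the root, rotating each joint so that the joint-to-end-effector direction aligns with the joint-to-target direction.

```python
# Sketch of Cyclic Coordinate Descent for a planar serial chain (illustrative,
# no joint limits): each sweep rotates joints, end-effector first, so that the
# joint->end-effector direction aligns with the joint->target direction.
import numpy as np

def fk(lengths, angles):
    """Positions of every joint (and the end effector) of a planar chain."""
    pts, p, a = [np.zeros(2)], np.zeros(2), 0.0
    for L, th in zip(lengths, angles):
        a += th
        p = p + L * np.array([np.cos(a), np.sin(a)])
        pts.append(p)
    return pts

def ccd(lengths, angles, target, iters=100, tol=1e-4):
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):      # from end-effector towards root
            pts = fk(lengths, angles)
            to_end = pts[-1] - pts[i]
            to_tgt = np.asarray(target) - pts[i]
            angles[i] += np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
        if np.linalg.norm(fk(lengths, angles)[-1] - target) < tol:
            break
    return angles

sol = ccd([1.0, 1.0, 0.5], [0.1, 0.1, 0.1], target=np.array([1.2, 1.0]))
print(fk([1.0, 1.0, 0.5], sol)[-1])   # close to (1.2, 1.0)
```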
This subsection explores the use of neural networks to find inverse kinematics (IK) solutions for the human leg. Shah et al. [22] applied deep artificial neural networks to solve the IK problem for a 5-axis serial link robotic manipulator. They achieved an accuracy where the deviation between the end effector and the target, due to actuator limitations, was approximately 0.2 mm; this deviation could potentially be reduced with additional training. Additionally, Demby et al. [23] assessed the performance of artificial neural networks for solving the IK problems of robots with 4, 5, 6, and 7 degrees of freedom (DOF). They used mean square error to evaluate the solutions for the same desired outputs and found that the error decreased as the size of the training set increased. However, they noted that creating an effective training set required a considerable amount of time. The neural network is trained using data generated by the forward kinematics to learn the angular joint values in the configuration space. Specifically, the neural network maps the specified end effector position $(x_d,y_d)^T$ to the corresponding joint configuration $q=[\theta_1,\theta_2,\theta_3]^T$. A multilayer perceptron (MLP) is utilized to solve the inverse kinematics for the lower limb. The structure of the MLP is depicted in Figure 3. It consists of a two-layer feed-forward network: a hidden layer with 10 interconnected sigmoid neurons and an output layer with 3 linear neurons.
The results of the inverse kinematics simulation for the lower limbs using the Cyclic Coordinate Descent (CCD) method are shown in Figure 5. To determine the position error illustrated in Figure 6, we used these results along with the forward kinematics described in Equation (5) to generate the end effector trajectory. However, the Matlab simulation indicates that this method causes the links to rotate in a sequence that does not match natural movement, resulting in an unnatural appearance of the lower limb’s motion. Therefore, the CCD method fails to account for the physiological constraints of the human leg.
A method to address the pseudo-inverse problem is the Levenberg-Marquardt Damped Least Squares (LMDLS) method. Wampler et al. [16] proposed an approach to determine the optimal damping factor, which balances the angular joint velocities with the tracking error. This approach involves finding the joint angular error vector $\Delta q$ that minimizes the tracking error and the joint velocities. This is achieved by minimizing the following objective function:
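As a hedged illustration, the sketch below assumes the standard damped least-squares objective $\min_{\Delta q}\|J\Delta q-e\|^{2}+\lambda^{2}\|\Delta q\|^{2}$ (the usual LMDLS formulation, stated here as an assumption rather than the paper's own display), whose closed-form update is $\Delta q=J^{T}(JJ^{T}+\lambda^{2}I)^{-1}e$.

```python
# Sketch of a single damped least-squares (Levenberg-Marquardt style) IK step,
# assuming the standard objective  min_dq ||J dq - e||^2 + lambda^2 ||dq||^2,
# whose closed-form solution is dq = J^T (J J^T + lambda^2 I)^{-1} e.
import numpy as np

def dls_step(J, e, damping=0.1):
    """One damped least-squares update for the joint increment dq."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), e)

# Tiny usage example with an arbitrary 2x3 Jacobian and task-space error.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])
e = np.array([0.05, -0.02])
print(dls_step(J, e))
```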
A
The comparability graph of the poset in Proposition 2.4 shows that for any fixed $h$, this bound has the right order of magnitude in $\varepsilon$. As in the case of posets, we can also use the test for $K_{\chi(\mathcal{F})}$-free subgraphs to test a monotone class of comparability graphs $\mathcal{F}$: the probability that we reject an $\mathcal{F}$-free comparability graph is negligible.
The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are strongly testable. This implies the removal lemma for posets and that monotone classes of posets are strongly testable in the following way. We consider a linear extension $<$ of the ordering $\prec$ of the poset $P$. To every poset with a linear ordering, we can associate the graph on its base set, where distinct elements $x<y$ are adjacent if $x\prec y$ in the poset. A graph with a linear ordering is associated with a poset if and only if it has no induced subgraph with two edges on three vertices, where the smallest and largest vertices are not adjacent. An alternative to the application of this general result is to follow the proof of Alon and Shapira [4] using the poset version of Szemerédi's regularity lemma proved by Hladký, Máthé, Patel and Pikhurko [18].
Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results have been obtained for hereditary classes of graphs and other structures, e.g., tournaments and matrices, see Gishboliner for the most recent summary [13]. We focus on monotone properties and omit the overview of other research directions.
Panna Tímea Fekete's Project No. 1016492 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the KDP-2020 funding scheme, and supported by the ERC Synergy Grant No. 810115 – DYNASNET.
The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area. A classical result of this kind is the triangle removal lemma by Ruzsa and Szemerédi [26], usually stated in the form that if a graph $G$ admits at most $\delta|V(G)|^{3}$ triangles then it can be made triangle-free by the removal of at most $\varepsilon|V(G)|^{2}$ edges, where $\delta$ depends only on $\varepsilon$. This can be applied to obtain a combinatorial proof of Roth's theorem [24] on $3$-term arithmetic progressions, while the hypergraph removal lemma has been used to prove Szemerédi's theorem. Removal lemmas were proved for abelian groups by Green [17], for linear systems of equations by Král, Serra and Vena [21], for local affine-invariant properties by Bhattacharyya, Fischer, Hatami, Hatami and Lovett [10], and for permutations by Klimošová and Král [19], and by Fox and Wei [12], as well.
C
This is also because the required tasks depend on the type of source input. Knowledge extraction is commonly applied on unstructured data inputs like text and may not be needed for structured data, e.g. from databases or other knowledge graphs. Furthermore, the entity linking part of knowledge extraction can make an additional entity resolution step unnecessary. As a result, there may be different KG construction pipelines for different use cases and data sources.
Quality assurance is important not only for the resulting KG as an outcome of the KG construction process but also within the different construction tasks, such as selecting good-quality sources (Section 3.1.2), data cleaning for acquired data, knowledge extraction, ontology evolution or entity fusion. The data cleaning approaches mentioned in Section 3.1.4 can also be applied to the KG, e.g., to identify outliers or contradicting information.
The steps of Quality Assurance and KG completion to improve the current version of the KG are not needed for every KG update but may be executed asynchronously, e.g., within separate pipelines (although QA actions such as data cleaning also apply to individual tasks). Furthermore, data and metadata management play a special role compared to the other tasks, since they are necessary throughout the entire pipeline, therefore representing a cross-cutting task, as indicated by the central position of metadata management in Figure 2.
Quality Assurance. Quality assurance is a cross-cutting topic playing an important role throughout the whole KG construction process. Quality problems in the KG can be multi-faceted relating to the ontological consistency, the data quality of entities and relations (comprehensiveness), or domain coverage. The coverage aspect may focus on the inclusion of relevant data and the exclusion of unnecessary data. In some scenarios, the timeliness of data can play a critical role in real-time-oriented use cases.
A benchmark could be based on settings similar to those for the creation of specific KGs discussed in Section 4, aiming at the initial construction and incremental update of either a domain-specific or cross-domain KG from a defined set of data sources of different kinds. The KG ontology and the KG data model (RDF or property graph) could be predefined to facilitate the evaluation of the resulting KG. The size of the resulting KG should be relatively high and the construction should be challenging, requiring knowledge extraction, entity linking/resolution, and entity fusion. Determining the quality of the constructed KG is difficult as it would ideally be based on a near-perfect result (gold standard) for the initial KG and for its updated version(s). For all entity and relation types in the given KG ontology, it then has to be determined to what degree they could correctly be populated compared to the gold standard, which requires an extension of known metrics such as precision and recall. Further evaluation criteria include the runtimes for the initial KG construction and for the incremental updates, and perhaps the manual effort to set up the construction pipelines. Ideally, an evaluation platform could be established, similar to other areas [243], for a comparative evaluation of different pipelines with different implementations of the individual construction steps.
B
Initially, the set of waypoints required to generate a trajectory in task space is assumed to be available from the task planner. The trajectory is then planned using the minimum jerk criterion to ensure smooth acceleration of the joints, thereby reducing vibrations and avoiding resonance frequencies. For this simulation example, the initial and final position, velocity, and acceleration values are provided in Table 2. The simulation result of the trajectory planning is depicted in Figure 3.
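A minimal sketch of one minimum-jerk (quintic) trajectory segment with prescribed boundary position, velocity, and acceleration; the numerical boundary values below are placeholders, not the ones from Table 2.

```python
# Sketch: a minimum-jerk (quintic) trajectory segment between two waypoints with
# prescribed position/velocity/acceleration at both ends. The boundary values
# below are placeholders, not the ones from Table 2.
import numpy as np

def quintic_coeffs(T, p0, v0, a0, pT, vT, aT):
    """Coefficients c0..c5 of p(t) = sum_k c_k t^k matching the six boundary conditions."""
    A = np.array([
        [1, 0,    0,      0,       0,        0],
        [0, 1,    0,      0,       0,        0],
        [0, 0,    2,      0,       0,        0],
        [1, T,    T**2,   T**3,    T**4,     T**5],
        [0, 1,    2*T,    3*T**2,  4*T**3,   5*T**4],
        [0, 0,    2,      6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, pT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

c = quintic_coeffs(T=2.0, p0=0.0, v0=0.0, a0=0.0, pT=0.5, vT=0.0, aT=0.0)
t = np.linspace(0.0, 2.0, 5)
print(np.polyval(c[::-1], t))   # smooth profile from 0.0 to 0.5
```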
In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelvis, thigh, leg, and foot, connected by three joint groups. These include the hip, which rotates about three perpendicular axes; the knee, which moves solely about the z-axis; and the ankle, permitting movement in three planes. Therefore, the degrees of freedom (DOF) of the lower limbs total 7 [16]. Consequently, the position of the end-effector relative to the reference frame $\mathscr{R}_3$, denoted as $P_{E/3}$, can be expressed as:
Afterward, the inverse kinematics (IK) of the lower limb is computed using a multi-layer perceptron trained with the Levenberg-Marquardt backpropagation algorithm, utilizing a dataset of 400,000 samples. The network architecture is illustrated in Figure 4, featuring a two-layer feed-forward structure comprising a hidden layer with 20 interconnected sigmoid neurons and an output layer with 9 linear neurons. The simulation results are presented in Figure 5, achieving a computational time of 0.009 s. Additionally, Figure 6 illustrates the position errors along the x-axis, y-axis, and z-axis, with root mean square errors of 0.0741, 0.0970, and 0.0776, respectively. The negligible position error between the desired and calculated trajectories shown in Figure 7 confirms the method’s accuracy.
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations of the lower limb configuration. These calculations adhere to the Newton-Euler propagation law. Subsequently, the wrenches, representing forces and moments acting on each link in 3D space, are derived starting from the wrenches applied to the end effector.
In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the jerk energy criteria. The rest of this paper is organized as follows: Section 2 provides a brief mathematical background on dual quaternions algebra. Section 3 elaborates on the forward kinematics of the human lower limb in 3D space using dual quaternions. Section 4 focuses on the application of the artificial neural network method to solve the inverse kinematics of the lower limb. In Section 5, the dynamical model of the lower limb using dual quaternions based on a recursive Newton-Euler method is developed. Finally, in Section 6, the simulation results are discussed.
B
We note that while we proposed to approximate the "cheap" part as well in Section 3, one other theoretically viable approach is to keep it intact and approximately solve a "proximal type" problem involving $h$; this will lead to replacing $L$ by $\delta$, but the subproblem is even more difficult to solve. However, our theory suggests that we don't need to solve this subproblem exactly; we only need $m\geq\frac{L}{\delta}$; we do not treat this case here.
We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as "full VR"), its lazy version where we do not update the snapshot Hessian ("Lazy VR"), the stochastic Cubic Newton method ("SCN"), the Cubic Newton algorithm ("CN"), Gradient Descent with line search ("GD"), and Stochastic Gradient Descent ("SGD"). We report the results in terms of time and gradient arithmetic computations needed to arrive at a given level of convergence.
In this work, we proposed a general theory for using stochastic and auxiliary information in the context of the Cubically regularized Newton method. Our theory encapsulates the classical stochastic methods, as well as Variance Reduction and the methods with the Lazy Hessian updates.
Figure 4 shows that compared to other second-order methods, "Lazy VR" has considerable time and computation savings. It also performs closely to gradient descent with line search, which performs very well in this case. Figure 5 shows the same experiment for larger dimensions; most importantly, we see that the gap between our "Lazy VR" and the "full VR" method grows with $d$, which is in accord with our theory predicting an increased advantage of "Lazy VR" as the dimension grows.
To address these challenges, we can take into account second-order information (the Hessian matrix) and apply Newton's method (see, e.g., (Nesterov, 2018)). Among the many versions of this algorithm, the Cubic Newton method (Nesterov & Polyak, 2006) is one of the most theoretically established. With the Cubic Newton method, we can guarantee global convergence to an approximate second-order stationary point (in contrast, the pure Newton method without regularization can even diverge when it starts far from a neighborhood of the solution). For a comprehensive historical overview of the different variants of Newton's method, see Polyak (2007). Additionally, the convergence rate of the Cubic Newton method is provably better than that of first-order methods.
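As a hedged illustration of a single cubically regularized Newton step (generic, not the specific variants studied in this work): the model $\langle g,h\rangle+\tfrac12\langle Hh,h\rangle+\tfrac{M}{6}\|h\|^{3}$ is minimized via the stationarity condition $h=-(H+\tfrac{Mr}{2}I)^{-1}g$ with $r=\|h\|$, which the sketch solves by bisection on $r$ (assuming $H+\tfrac{Mr}{2}I$ stays positive definite, e.g. for a convex problem).

```python
# Sketch of one cubic-regularized Newton step: minimize over h the model
#   g.h + 0.5 h^T H h + (M/6) ||h||^3,
# using the stationarity condition h = -(H + (M r / 2) I)^{-1} g with r = ||h||,
# solved here by bisection on r (assumes H + (M r / 2) I stays positive definite).
import numpy as np

def cubic_newton_step(g, H, M, r_max=1e6, iters=100):
    I = np.eye(len(g))
    def h_of(r):
        return -np.linalg.solve(H + 0.5 * M * r * I, g)
    lo, hi = 0.0, r_max
    for _ in range(iters):                       # bisection on the scalar r = ||h||
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(h_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return h_of(hi)

# Usage on a toy convex quadratic f(x) = 0.5 x^T A x - b.x, starting from zero.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)
for _ in range(10):
    g, H = A @ x - b, A                          # gradient and Hessian of f
    x = x + cubic_newton_step(g, H, M=1.0)
print(x, np.linalg.solve(A, b))                  # x approaches the minimizer A^{-1} b
```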
B
In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, it is not clear whether the IRS will boost or degrade the performance of the other operators in the system. In particular, since the IRS elements are passive, they will reflect the RF signals impinging on them in all frequency bands. So, it is important to understand how an IRS which is controlled by only one operator affects the performance of other operators (called the out-of-band operators in this paper). Although a few works consider the scenario of the presence of an IRS in multi-band systems [4, 5], these works proceed along the lines of jointly optimizing the IRS phase configurations among all the operators. This approach requires inter-operator coordination, which is not practical. Moreover, the solutions and analysis provided in these works are not scalable with the number of operators (or frequency bands) in the system. More fundamentally, none of these works address the question of the out-of-band (OOB) performance even in the scenario of two operators operating in non-overlapping bands with the IRS optimized for only one operator. In this paper, we address this question, and to the best of our knowledge, this is the first work which considers the effect on OOB performance due to the presence of an IRS under practical cellular network deployment scenarios. We consider a system with two network operators providing service in non-overlapping frequency bands. We analyze the OOB throughput performance in the presence of an IRS that is optimized to serve the users subscribed to an operator offering wireless services in a different frequency band. Specifically,
We derive the ergodic sum spectral efficiencies (SE) of the two operators as a function of the number of IRS elements, under round-robin scheduling of UEs. We show that the ergodic sum-SE scales quadratically and linearly with the number of IRS elements for the in-band and OOB networks, respectively, even when the OOB operator has no control over the IRS in the environment.
which represents the difference in the SNR/channel gain at a UE $q$ (OOB-UE) served by BS-Y with and without the IRS in the environment. In Fig. 4, we plot the CCDF of $Z^{(Y)}_{N}$, given by (12). Firstly, we notice that the analytical expression as given in Theorem 2 matches well with the simulations, which validates the accuracy of Theorem 2 even for smaller values of $N$. Next, we observe that $Z^{(Y)}_{N}$ is a non-negative random variable for any $N>0$, which again confirms that almost surely, every possible outcome of the channel gain at an OOB UE with an IRS is at least as good as every possible outcome of the channel gain at the same UE without an IRS. Finally, we observe that the CCDF shifts to the right as the number of IRS elements is increased. On the same plot, we also show the CCDF of the received SNR in the absence of the IRS, which is the left-most curve in the figure. This shows that the probability that an operator benefits from the presence of a randomly configured IRS in the vicinity increases with $N$, even for operators who do not control the IRS. These observations confirm our inference from Proposition 1. Further, the instantaneous SNR witnessed at an arbitrary UE of an OOB operator stochastically dominates the SNR seen by the same UE in the absence of the IRS. Thus, the IRS only enhances the performance of any operator regardless of the frequency band of operation.
In order to study the impact on the OOB performance, we consider the scheduling of UEs in a round-robin (RR) fashion at both BS-X and BS-Y. We note that the performance under opportunistic scheduling at either or both BSs can also be derived along similar lines, e.g., following the approach in [7]. Since the BSs are equipped with a single antenna, only one UE from each network is scheduled for data transmission in every time-slot. We characterize the OOB performance of the network by deriving the ergodic sum-SE of both networks, and then infer the degree of degradation/enhancement of the OOB performance caused by the IRS to operator Y. The ergodic SE at UE-$k$ is
We provide an exact characterization of the complementary cumulative distribution function (CCDF) of the difference in the channel gain at an OOB UE with and without the IRS. We determine the probability with which the difference is non-negative as a function of the number of IRS elements, and show that the channel gain with an IRS stochastically dominates the gain without the IRS. Further, the difference in the channel gains with and without the IRS is an increasing function of the number of IRS elements. This confirms that even an OOB UE witnesses benefits that monotonically increase with the number of IRS elements.
A
Some studies explained a DNN by distilling the DNN into another interpretable model (Frosst & Hinton, 2017; Che et al., 2016; Wu et al., 2018; Zhang et al., 2018; Vaughan et al., 2018; Tan et al., 2018). However, most explanation methods did not try to disentangle concepts encoded by a DNN.
$\bullet$ Unifying empirical findings in the framework of game-theoretic interactions. To unify different attribution methods, Deng et al. (2022b) used interactions as a unified reformulation of different attribution methods. They proved that attributions estimated by each of 14 attribution methods could all be represented as a certain allocation of interaction effects to different input variables.
Based on game theory, we introduced multi-variate interactions (Zhang et al., 2021a, c) and multi-order interactions (Zhang et al., 2021b) to analyze interactions encoded by the DNN. Recently, Ren et al. (2021a) proposed the mathematical formulation for concepts encoded by a DNN, and Ren et al. (2023a) further used such concepts to define the optimal baseline values for Shapley values.
Game-theoretical interactions facilitate the explanation of the representation capacity of a DNN from different perspectives, including the adversarial robustness (Wang et al., 2021a; Ren et al., 2021b), adversarial transferability (Wang et al., 2021b), and generalization power (Zhang et al., 2021b; Zhou et al., 2023). Besides, the game-theoretical interactions can also be utilized to explain the signal processing behavior of DNNs.
Our research group developed a theoretical framework based on game-theoretic interactions, which aims to tackle the following two challenges in XAI, i.e., (1) extracting and quantifying concepts from implicit knowledge representations of DNNs and (2) utilizing these explicit concepts to explain the representational capacity of DNNs. Furthermore, we discovered that game-theoretic interactions provide a new perspective for analyzing the common underlying mechanism shared by previous XAI applications.
D
In comparison, when a DNN was trained to fit a high-order concept, the learning dynamics is detouring. Specifically, the DNN usually first learned low-order concepts. Then, the DNN shifted its attention to concepts of higher orders, and later gradually removed mistakenly learned low-order concepts.
In this section, we analyze the learning dynamics of concepts with a simple experimental setting, i.e., using a DNN to fit a boolean polynomial. We find that a high-order concept is not directly learned, but is likely to be mistakenly encoded as a mixture of low-order concepts in early epochs. In spite of the simplicity of experiments, this finding may still provide conceptual insights into the reason why high-order concepts are more likely to be over-fitted.
All the above experimental findings on the generalization power of concepts are related to the phenomenon of the inconsistency of high-order concepts, i.e., high-order concepts are more sensitive to small noises in the input sample than low-order concepts. Therefore, we aim to prove that the variance of a concept's interaction effect increases exponentially with the concept's order under a simple setting.
In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find that DNNs with poorer generalization power usually encode more high-order concepts, and DNNs with stronger generalization power usually encode low-order concepts more quickly. Moreover, we find that low-order concepts are usually learned directly, but high-order concepts are more likely to be mistakenly encoded as a mixture of various incorrect low-order concepts. These all explain the low generalization power of high-order interactive concepts. Section 8 in supplemental materials will introduce future practical values of this study.
Although there is a common heuristic that complex concepts are usually more likely to be over-fitted, people still do not know the exact definition of concepts with an analytic connection to their generalization power. Because we also find the low generalization power of complex (high-order) interactive concepts, in this study, we make the first attempt to clarify the high inconsistency of complex (high-order) concepts, i.e., complex concepts are more sensitive to small noises in the data than simple concepts, which is responsible for the low generalization power of complex (high-order) concepts. Various experiments have verified our findings. This may shed new light on how to evaluate generalization power in terms of concepts.
C
We begin our experiments with the Lotka-Volterra model, a commonly used simple model for NeuralODEs. Coefficients and initial conditions in the Lotka-Volterra equations were all identical to the setting in [4]. Following [3], we generated training data by numerically integrating the system over the time span $t\in[0,6.1]$ and then adding Gaussian noise with zero mean and standard deviation 5% of the mean of each channel. We set 4,000 epochs for each experiment, the learning rate was 0.02, random seeds 10, 20, and 30 were used for each activation function, and standard errors were reported. Fig. 3 and Fig. 4 show that our activation function not only rapidly approaches the minimum of the loss function, but also shows a very stable performance.
We conducted an experiment on CIFAR10, which is a more challenging dataset than MNIST in the classification field. ResNet18, optimized using SGD with a batch size of 32, a learning rate of 0.001, and a momentum of 0.9, is used for the experiment with a random seed of 10. Our activation function converges rapidly with respect to the Top-1 accuracy and the Top-5 accuracy in Table 3 and Table 4.
A robust property of MoLU is that it rapidly approaches the minimum of a loss function without losing stability. This is a truly useful characteristic when training on long time-series data using NeuralODEs (Neural Ordinary Differential Equations). To demonstrate the performance of MoLU, we conducted experiments on NeuralODEs, MNIST, and CIFAR10. In NeuralODEs, differentiable activation functions are mainly used, so we compared MoLU with GeLU, Mish, SiLU, ELU, and Tanh, and in the case of classification, we compared it with ReLU, Leaky ReLU, and Tanh. We used $\alpha=2,\ \beta=2$.
We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset in torchvision and a 2-layer network optimized using SGD with a batch size of 64, a learning rate of 0.001, and a momentum of 0.5, with a random seed of 10. We confirmed that our activation function shows high performance. Compared to other activation functions, our activation function clearly shows the characteristic of converging rapidly at the beginning of learning.
We begin our experiments with the Lotka-Volterra model, a commonly used simple model for NeuralODEs. Coefficients and initial conditions in the Lotka-Volterra equations were all identical to the setting in [4]. Following [3], we generated training data by numerically integrating the system over the time span $t\in[0,6.1]$ and then adding Gaussian noise with zero mean and standard deviation 5% of the mean of each channel. We set 4,000 epochs for each experiment, the learning rate was 0.02, random seeds 10, 20, and 30 were used for each activation function, and standard errors were reported. Fig. 3 and Fig. 4 show that our activation function not only rapidly approaches the minimum of the loss function, but also shows a very stable performance.
C
In a nutshell, this technique amounts to traversing in a depth-first search manner an implicit solution tree where nodes are solutions, and where edges are defined by some parent-child relation between solutions. During the traversal, children are obtained by merging trees having adjacent roots along the limit cycle.
A notable feature of this algorithm is that it can moreover be adapted in order to produce the successor (or predecessor) of any given solution in $O(n^{2})$ time as well, and it only needs linear space. This procedure is then used as a subroutine in order to generate all, not necessarily connected, functional digraphs with the same delay and space.
Thus, in general, reverse search only needs memory space that is linear in the height of the solution tree times the space needed to generate children. As for the delay, it only depends on the time needed to compute the parent and the delay needed to generate children when using folklore tricks such as the alternating output technique [26].
We can now exploit the algorithms of Theorem 3.1, and more specifically our ability to generate the successor of a given component, as a subroutine for the efficient generation of arbitrary (not necessarily connected) functional digraphs. In order to avoid generating multiple isomorphic digraphs, we first define an appropriate isomorphism code.
There is an $O(n^{2})$-delay and linear space algorithm generating all connected $n$-vertex functional digraphs. Moreover, given any such functional digraph, we can generate its successor (resp., predecessor) in the enumeration in $O(n^{2})$ time and using linear space. ∎
A
where the constant $C>0$ depends on $d$, $c$, $c_{F}$, $\|F+G\|_{L^{2}_{\eta}}$, $\|F\|_{W^{1,2(d-1)}_{\eta}}$, and $\|G\|_{W^{1,2(d-1)}_{\eta}}$.
Next, we turn to bound the term ${\rm II}$. Under Assumption (C3), the difference of the log-determinants is a Lipschitz function with constant $1/c$. In addition, using Lemma 7.3 with Assumption (C1) on the function spaces for the maps, the term ${\rm II}$ is then bounded by
We now see the role of Assumption (B5): since $\nabla r=\nabla\det J_{G}$, the polynomial asymptotic growth of $G$ and its first and second derivatives means that $|\nabla r(z)|$ has polynomial growth as well. Hence, the composition of two polynomially-bounded functions is also polynomially bounded, and it is in $L^{2}(\mathbb{R}^{d};\eta)$.
We comment on the assumptions in Theorem 3.9 and compare them to those in the pushforward analog, Theorem 3.8. For the pushforward, Assumption (B5) on the asymptotic polynomial growth of the map and its first derivatives implies that the $\eta$-weighted Sobolev norms are finite. Thus, Assumption (B5) is a sufficient condition for the map to lie in the function spaces prescribed in Assumption (C1). On the other hand, Assumption (C4) implies that the target distribution is sub-Gaussian [95]; see e.g., [4, Remark 3]. This condition is easier to interpret and verify, compared to the asymptotic polynomial growth of the maps in Assumption (B5), by using various equivalent conditions for sub-Gaussian distributions.
To understand the intuition behind integral ${\rm I}$, first note that if $G^{-1}\circ F={\rm Id}$, then term ${\rm I}=0$. Hence, this term measures "how far $G^{-1}$ is from $F^{-1}$." Here, in order to compare the inverses, we use Lagrange mean value theorem-type arguments in both settings (Lemma 7.1), but there is a major difficulty in the unbounded case: since invertibility on $\mathbb{R}^{d}$ does not guarantee that the determinants and singular values of $J_{G},J_{F}$ do not go to $0$ as $|x|\to\infty$, we need to require that explicitly in the form of Assumptions (B3) and (B4). Moreover, we find that we need to also bound the $L^{2}$ norm of the second derivative of $G$, hence again the tails condition (B5). We comment that the second derivatives also appear in the case of maps from $\mathbb{R}^{d}$ to $\mathbb{R}$ as a way to control the geometric behavior of level sets, see [32], and that in principle they can be replaced by control over the Lipschitz (or even Hölder) constants of the first derivatives.
C
To the best of our knowledge, MMA-MRNNet is the first architecture to leverage valence-arousal, AUs, and basic expressions as intermediate representations for the task of Facial Expression Intensity Estimation. This approach not only enhances the model’s ability to capture the nuanced dynamics of emotional expressions but also provides a robust framework for handling real-world data with varying input conditions.
[56] proposed a dual-branch FEIE model; one branch (composed of a Temporal CNN and a Transformer encoder) handles the visual modality and the other handles the audio one; modality dropout is added for A/V feature fusion. [51] achieved 3rd place in the ERI challenge of the 5th ABAW; it proposed a methodology that involved extracting features from visual, audio, and text modalities using Vision Transformers, HuBERT, and DeBERTa. Temporal augmentation and SE blocks were applied to enhance temporal generalization [3, 4, 48, 2, 17] and contextual understanding. Features from each modality were then processed through contextual layers and fused using a late fusion strategy.
At first, we compare the performance of MMA-MRNNet to that of various baseline [6] and state-of-the-art methods: the ViPER and Netease Fuxi Virtual Human methods (which are multi-modal methods exploiting audio, visual, and text information); the best performing HFUT-CVers method (presented in the related work section; it is an ensemble multi-modal method exploiting both audio and visual information); the USTC-IAT-United method (which was presented in the related work section and is a multi-modal method exploiting both audio and visual information); and the USTC-AC and NISL-2023 methods (both presented in the related work section; they are ensemble multi-modal methods exploiting both audio and visual information).
[16] presented Supervised Scoring Ensemble (SSE) for emotion recognition. A new fusion structure is presented in which class-wise scoring activations at diverse complementary feature layers are concatenated and used as inputs for second-level supervision, acting as a deep feature ensemble within a single CNN architecture. [60] proposed a deep Visual-Audio Attention Network (VAANet) for video emotion recognition; VAANet integrates spatial, channel-wise, and temporal attentions into a visual 3D CNN and temporal attentions into an audio 2D CNN. A polarity-consistent cross-entropy loss is proposed for guiding the attention generation, which is based on the polarity-emotion hierarchy constraint.
feature encoding module (based on DenseNet121 and DeepSpectrum), a visual feature encoding module (based on PosterV2-Vit), and an audio-visual modality interaction module. [53] proposed ViPER, a modality agnostic late fusion network that leverages a transformer-based model that combines video frames, audio recordings, and textual annotations for FEIE.
C
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\,\cdot\,)$ is $\mathscr{T}$-measurable for every $x\in\mathbb{R}^{d}$ and every $y\in\mathscr{Y}$. The sequence of conformity scores $\{S_i\}_{i\geq 1}$ associated with a conformity function $\rho$ is defined by $S_i(\omega)=\rho(X_i(\omega),Y_i(\omega),\omega)$. We say that a conformity function $\rho$ is regular with respect to a specific data sequence if there are no ties among the corresponding conformity scores $\{S_i\}_{i\geq 1}$ almost surely.
Note that the regularity of a specific conformity function ρ𝜌\rhoitalic_ρ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-breaking sequence.
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Example 1. The intuition is that the associated conformity scores measure the ability of the model to make accurate predictions on the calibration sample, whose information is not used in the model's training process, and the assumed data sequence exchangeability transfers this assessment of the model's predictive capacity from the calibration sample to the sequence of future observables. The following result is proved in the Appendix.
Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$. This would still be the case even if we had started with the stronger assumption of an independent and identically distributed data sequence. The interesting fact is that Definition 4 inherits through Lemma 1 the distributional symmetry implied by the data exchangeability assumption, giving us the following result, proved in the Appendix.
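A minimal numerical sketch of this split-conformal construction, assuming absolute-residual conformity scores $S_i=|Y_i-\hat{\mu}(X_i)|$ and a generic pre-trained point predictor (the toy model below is an assumption for illustration); the order statistic $S_{(\lceil(1-\alpha)(n+1)\rceil)}$ computed on the calibration sample defines the prediction interval for every future observation.

```python
import numpy as np

def split_conformal_interval(mu_hat, X_cal, y_cal, X_new, alpha=0.1):
    """Split-conformal prediction intervals from absolute-residual scores.

    mu_hat: any fitted point predictor with signature mu_hat(X); the choice
    of model does not affect the coverage guarantee.
    """
    scores = np.abs(y_cal - mu_hat(X_cal))        # calibration conformity scores
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))       # index of the order statistic
    q = np.sort(scores)[min(k, n) - 1]            # S_(ceil((1-alpha)(n+1)))
    preds = mu_hat(X_new)
    return preds - q, preds + q                   # interval limits for each new point

# Toy usage with a trivial "model" (illustrative only).
rng = np.random.default_rng(0)
X_cal = rng.normal(size=500); y_cal = 2 * X_cal + rng.normal(size=500)
lo, hi = split_conformal_interval(lambda x: 2 * x, X_cal, y_cal, rng.normal(size=5))
```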
A
The model complexity of different action recognition methods on the DailyAction dataset is summarized in Table V, which includes the number of trainable parameters, the number of MACs, and the inference throughput measured at a batch size of 1. VMV-GCN has the lowest computational complexity and the highest throughput among all methods because it processes short-duration clips from event samples, but its recognition performance is much lower than ours. Compared to frame-based methods, our model compares favorably in model complexity because the lightweight EVSTr takes sparse voxel-wise representations as input and thus benefits from less redundant computation.
This work proposes a powerful yet lightweight model named Event Voxel Set Transformer (EVSTr) to solve the above problems. EVSTr can flexibly process both short- and long-duration event streams in a voxel-wise way for efficient recognition tasks, including object classification and action recognition. We adopt the event voxel set representation [11, 12] as input, which is robust to noise while maintaining the sparse structure. The core of EVSTr is the event voxel transformer encoder that hierarchically extracts spatiotemporal features from local to global through two novel designs, including Multi-Scale Neighbor Embedding Layer (MNEL) and Voxel Self-Attention Layer (VSAL). To tackle the first issue, MNEL jointly encodes positional and semantic relations between neighboring voxels into attention scores for multi-scale feature aggregation, thereby learning robust local representations. To solve the second problem, VSAL introduces absolute-relative positional encoding to assist vanilla self-attention operators in feature interaction between input elements with spatiotemporal structure, enabling better global modeling. We combine the encoder with a classification head to process a single voxel set converted from the event stream for object classification.
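As a rough, hedged sketch of the kind of attentive neighbor aggregation described for MNEL (the real layer is multi-scale and its exact score function is not reproduced here), attention weights can be derived jointly from the relative positions and feature differences of neighboring voxels:

```python
import torch
import torch.nn as nn

class AttentiveNeighborAggregation(nn.Module):
    """Single-scale sketch: weight each neighbor by positional + semantic relations."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(in_dim + 3, out_dim), nn.ReLU(),
                                   nn.Linear(out_dim, out_dim))
        self.value = nn.Linear(in_dim, out_dim)

    def forward(self, center_xyz, center_feat, nbr_xyz, nbr_feat):
        # nbr_xyz: (N, k, 3), nbr_feat: (N, k, C); centers broadcast over k neighbors.
        rel_pos = nbr_xyz - center_xyz.unsqueeze(1)
        rel_feat = nbr_feat - center_feat.unsqueeze(1)
        scores = self.score(torch.cat([rel_pos, rel_feat], dim=-1))  # (N, k, out_dim)
        weights = scores.softmax(dim=1)                              # normalize over neighbors
        return (weights * self.value(nbr_feat)).sum(dim=1)           # (N, out_dim)

agg = AttentiveNeighborAggregation(in_dim=16, out_dim=32)
out = agg(torch.randn(128, 3), torch.randn(128, 16),
          torch.randn(128, 24, 3), torch.randn(128, 24, 16))
```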
We also analyze the performance of different event representations on recognition tasks. Variant D has a significant drop in accuracy when using point-based representations, indicating that voxel-wise representations preserve local semantics better. Besides, variant E adds bilinear interpolation integration [26] to voxel-wise representations. The results show that our adopted direct integration [11] and bilinear interpolation integration have similar performance on recognition tasks. For computational efficiency, we use direct integration in representation.
Compared to point-based counterparts, our model outperforms state-of-the-art methods and gains a notable improvement (a 1.9% increase) over the second-best method on the challenging N-Caltech101 dataset, demonstrating the effectiveness of EVSTr. As shown in Fig. 5, we further provide the visualization of feature representations learned by VMV-GCN [12] and our model on the testing samples of N-Caltech101 using t-SNE [45]. The visualization intuitively shows that our model learns more discriminative spatiotemporal representations: several confusing samples are not distinguished well by the previous state-of-the-art method VMV-GCN, such as the region highlighted in a bounding box. Both the experimental results and the feature visualization demonstrate the superior representation capability of EVSTr, and we attribute the improvement to two strategies in our model. (i) MNEL attentively embeds multi-scale neighbor information into a local representation for each event voxel. The multi-scale attentive aggregation fully explores the positional and semantic relations between neighboring voxels and thus can extract discriminative features. (ii) The VSAL layers exploit the long-range dependencies between voxels via feature interaction, allowing us to learn a better global representation than other methods.
Ablations of the multi-scale attentive aggregation in MNEL on object classification (N-Caltech101) and action recognition (DailyAction) are reported in Table VI. Variants A-C represent different feature aggregation strategies using the event voxel set as input. Variants D and E take different event representations as input, such as the point-wise representation [16] and event voxel with bilinear interpolation integration [26].
D
The scenario models a setup where new objects are presented to a system that minimizes the risk of an unstable grasp, and therefore “explores” the shape of the object by touching and poking before attempting a grasp and subsequent manipulation. For example, imagine a conveyor belt for sorting objects of different sizes into respective bins, e.g., in a scrapyard, where the robot must be able to pick up any object.
An important part of haptic exploration is deciding where to touch. The object can be touched at random positions, as done by Smith et al. [22], or always at a position opposite the camera (from “behind”), as in Watkins-Vall et al. [20]. However, these strategies are not as effective as an uncertainty-driven approach. Uncertainty can come from a Gaussian distribution [18, 21, 16, 17, 19]; from Monte Carlo dropout [24]; from Neural Radiance Fields (NeRF) [25, 26]; or from the Signed Distance Function (SDF) [1, 27]. Alternatively, where to touch can be learned, as in Smith et al. [23].
The first group of improvements concerns the process of shape completion performed by Implicit Geometric Regularization for Learning Shapes (IGR), which we modified as follows. We use a new, theoretically grounded, method to determine the points with highest uncertainty. In addition, we changed the sampling of points inside the network to respect the input point cloud more closely. Finally, the yield of every haptic exploration is increased by adding not only the contact points to the point cloud but also incorporating the empty space established through the robot movement to the object. The two last mentioned improvements together make the pipeline more robust.
We proposed a new method for shape completion using a combination of visual and haptic feedback. VISHAC outperformed the baseline Act-VH [1] in terms of speed, reconstruction quality and robustness. We experimentally validated VISHAC in both simulated and real-world environments, using 8 objects and an additional one for grasping.
Contributions. We present a pipeline for visuo-haptic shape completion called VISHAC. We extend the baseline by Rustler et al. [1], which also deals with shape completion. We describe our modifications and improvements with respect to [1] below.
D
(2) For every $C\subseteq A^{B}$, we have that $C\subseteq\mathrm{Cl}_{Y}(C)\subseteq\mathrm{Cl}_{X}(C)$. If $C$ is $X$-closed, then $C$ is $Y$-closed.
This approach based on ideals allows us to recover well-known topologies on $A^{B}$, such as the topology of pointwise convergence (referred to as the local topology in this work) and the uniform topology (refer to [17], Section 19).
In Section 4 we develop a method for equipping the set $A^{B}$ with a topology using Boolean ideals on $B$. This framework is particularly applicable when $B=A^{\omega}$, hence $A^{B}=O_{A}^{(\omega)}$. However, not all ideals on $A^{\omega}$ are useful in the context of clone theory. It is essential for the topologies to behave well with respect to composition. To identify such suitable ideals, we introduce the concepts of substitutive and infinitely substitutive ideals (cf. Definition 5.2 and Proposition 5.3).
In this paper, we adopt the topological approach as we primarily focus on $\omega$-operations and relations of arity $\omega$, referred to as $\omega$-relations. We present a method for defining topologies on sets of functions. The key idea is to choose a Boolean ideal $X$ of subsets of $B$ to endow $A^{B}$ with a topology. A subset $F$ of $A^{B}$ is $X$-closed if it satisfies the following condition:
provide some examples of such topologies, which will be studied in the rest of this work. The basic idea is that, to endow $A^{B}$ with a topology, it is enough to choose a suitable family $X$ of subsets of $B$. The only condition needed for this to give rise to a topology on $A^{B}$ is that $X$ must be closed under finite unions. The topologies introduced in this section are a particular case of those defined in [7, Section 1, p. 353], when $A$ and $B$ are discrete topological spaces.
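As a hedged illustration of how a family $X$ closed under finite unions can induce such a topology (the precise definition used in this work is only partially visible above, so the following standard construction is an assumption made for exposition, with $A$ discrete): sets of functions agreeing on a member of $X$ form a base.

```latex
% Sketch (assumed formulation): basic open sets determined by agreement on members of X.
\[
  V(f,S) \;=\; \{\, g \in A^{B} : g|_{S} = f|_{S} \,\}, \qquad f \in A^{B},\ S \in X .
\]
% Closure of X under finite unions gives V(f,S_1) \cap V(f,S_2) = V(f, S_1 \cup S_2),
% so the family {V(f,S)} is a base for a topology on A^B. Taking X to be the ideal of
% finite subsets of B recovers the topology of pointwise convergence (the local topology).
```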
A
In Section 4, we obtain better time bounds for the special case of finding collections of $s$-$t$ mincuts that are pairwise disjoint. Similar to SUM-$k$-DMC and COV-$k$-DMC, our approach exploits the partial order structure of $s$-$t$ mincuts. We use this to efficiently solve the following optimization problem, which we call $k$-Disjoint Minimum $s$-$t$ Cuts: given a graph $G=(V,E)$, vertices $s,t\in V$, and an integer $k\leq k_{\mathrm{max}}$, find $k$ pairwise disjoint $s$-$t$ mincuts in $G$. Here, $k_{\mathrm{max}}$ denotes the maximum number of disjoint $s$-$t$ mincuts in $G$. Our algorithm is significantly simpler than the previous best algorithm by Wagner [Wag90], which uses a poly-logarithmic number of calls to any min-cost flow algorithm. Our algorithm takes $O(F(m,n)+m\lambda)$ time, where $F(m,n)$ is the time required by a unit-capacity max-flow computation, and $\lambda$ is the size of an $s$-$t$ mincut in the graph. By plugging in the running time of the current fastest deterministic max-flow algorithms of [LS20, Kat20], we obtain the following time bounds. When $\lambda\leq m^{1/3+o(1)}$, our algorithm improves upon the previous best runtime for this problem.
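A brief sketch of the basic primitive such algorithms build on, assuming unit capacities and using networkx (the library choice and the toy graph are illustrative): a single max-flow computation, i.e., the $F(m,n)$ term, yields $\lambda$ and one $s$-$t$ mincut via residual reachability. This is not the full disjoint-mincuts algorithm, only its first step.

```python
import networkx as nx

# Small illustrative digraph with unit capacities (an assumption for the sketch).
G = nx.DiGraph()
G.add_edges_from([("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")],
                 capacity=1)

# One max-flow computation yields the mincut value (lambda) and a source-side set;
# the edges leaving that set form one minimum s-t cut.
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
mincut_edges = [(u, v) for u, v in G.edges if u in source_side and v in sink_side]
print(cut_value, mincut_edges)
```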
We now present the NP-hardness proof for the decision version of Min-$k$-DMC. For simplicity, we consider Min-$k$-DMC reformulated as a minimization problem by means of the relationship $\max_{S\in U_{k}}d_{\mathrm{min}}(S)=\min_{S\in U_{k}}\hat{d}_{\mathrm{min}}(S)$, where
Contrary to the hardness of finding diverse global mincuts in a graph [HKK+22], in Section 3 we show that both Sum-$k$-DMC and Cov-$k$-DMC can be solved in polynomial time. We show this via a reduction to the submodular function minimization problem (SFM) on a lattice, which is known to be solvable in strongly polynomial time when the lattice is distributive [GLS12, IFF01, Sch00].
In Section 5, we prove that the decision version of Min-$k$-DMC is already NP-hard when $k=3$. The proof is split into three parts. First, we show that a variant of the constrained minimum vertex cover problem on bipartite graphs (Min-CVCB) of Chen and Kanj [CK03] is NP-hard. Then, we give a reduction from this problem to 2-Fixed 3-DMC, a constrained version of Min-3-DMC. Finally, we provide a polynomial-time reduction from 2-Fixed 3-DMC to Min-3-DMC, which implies the hardness of the general problem.
In contrast to the polynomial-time algorithms of the previous sections, here we show that $k$-DMC is NP-hard when considering $d_{\mathrm{min}}$ as the diversity measure. We called this variant Min-$k$-DMC in Section 1. The hardness proof is split into three parts. In Section 5.1, we first show that a variant of the constrained minimum vertex cover problem on bipartite graphs (Min-CVCB) of Chen and Kanj [CK03] is NP-hard. Next, in Section 5.2 we give a reduction from this problem to 2-Fixed 3-DMC, a constrained version of 3-DMC. Finally, in Section 5.3, we give a polynomial-time reduction from 2-Fixed 3-DMC to Min-3-DMC, which completes the proof that Min-$k$-DMC is NP-hard.
C
We let $X$ be the real line and the random transition $\Gamma(\cdot\,|\,u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$, representing all Gaussians with unit variance and means between 0 and 1. Then for $\mathfrak{M}\sim\mathcal{U}[0,1]$ being the uniform measure between $0$ and $1$, we have $\mathbf{E}_{a,b\sim\mathcal{U}[0,1]}\left[2^{\mathcal{I}(\mathcal{N}(a,1):\mathcal{N}(b,1))}\right]=O(1)$.
Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$'s application produces coherent information. This is independent of the number of measurement outcomes of $E$.
In quantum mechanics, given a quantum state $|\psi\rangle$, a measurement, or POVM, $E$ produces a probability measure $E|\psi\rangle$ over strings. This probability represents the classical information produced from the measurement.
We prove conservation of probabilities over successively more general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities over finite and infinite sequences follows directly from conservation inequalities over random processing of individual sequences [Lev84, Lev74, Ver21, G2́1]. However, there is benefit in revisiting these results in the context of manipulations of probabilities. Probabilities are ubiquitous in mathematics, arising, for example, as the results of quantum measurements, as detailed in this paper. This is particularly true when the results are generalized to arbitrary topologies. Information between probability measures is obtained through a mapping from the general topology to infinite sequences and then applying the information function between individual sequences. We use the set of reals as an example and then show conservation of information over computable convolutions. One example is the smoothing of a signal by a Gaussian function, which results in degradation of algorithmic self information.
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self information $\mathbf{I}_{\mathcal{Q}}(|\psi\rangle:|\psi\rangle)$. For this definition we use the information term introduced in [Eps19]. From this reference, we get the following result.
A
We propose Context Normalization (CN), a novel approach that utilizes defined contexts to capture underlying distribution variations. In CN, each sample in a mini-batch is normalized using the mean and standard deviation specific to its context. By treating contexts as components of a Gaussian mixture, we learn their parameters during model training, eliminating the need for the EM algorithm. This leads to improved efficiency and simplified implementation of CN.
It is important to mention that the baseline models (ViT with standard preprocessing and ViT with batch normalization) collapsed on this blended dataset, as the two datasets have different structures and simple normalization does not provide a suitable representation of the data. Context normalization, on the other hand, gives an adaptive representation per dataset (according to the contexts), which makes training possible. As shown in Table VIII, models with the context normalization technique achieve good results on the blended dataset. It is also interesting to notice that this performance in terms of accuracy is biased by the MNIST digits dataset, which is less difficult to learn than CIFAR-100. More precisely, the model with CN-Patches achieves 55.04% accuracy and 81.83% top-5 accuracy, which exceeds the results of all baseline models (ref. Figure VII) trained on CIFAR-100. The model with CN-Channels gives 50.99% accuracy and 78.55% top-5 accuracy.
We have proposed a novel approach called “context normalization” (CN) that enhances deep neural network training in terms of training stability, fast convergence, higher learning rates, and viable activation functions. Similar to the conventional mixture normalization (MN) method, our approach is driven by the hypothesis that any continuous function can be approximated, in some sense, by a weighted sum of Gaussian distributions with finite mean vectors and covariance matrices. In other words, our methodology assumes that the data distribution is a mixture of Gaussian models. However, unlike the mixture normalization technique, which invokes the expectation-maximization (EM) algorithm to estimate the Gaussian component parameters, our proposed methodology relies on the notion of context, which represents a cluster of related data. In fact, a supervised deep neural network is built and trained in order to learn the Gaussian component parameters. Once these optimal values are determined after convergence, they are utilized during the CN procedure performed on a deep neural network activation layer. CN alleviates the slow estimation of Gaussian component parameters inherent to EM in the scenario of large datasets. Furthermore, unlike MN, CN provides nonlinear decision boundaries between contexts, which better reflects reality. Our experimental results demonstrate the superiority of context normalization over batch normalization and mixture normalization, showcasing enhanced convergence and generalization performance. The proposed method, when applied specifically to images, introduces CN-Channels and CN-Patches for training, and CN and CN+ for inference. With its flexibility to adapt to various representations and tasks, context normalization proves to be a valuable tool in applications such as image classification.
Based on the mixture normalization (MN) hypothesis proposed by [6] (see Figure 1), our Context Normalization (CN) approach operates under a similar assumption that data can be effectively represented by a mixture of multiple components, as opposed to batch normalization (BN) [4]. In the CN approach, a fundamental concept is introduced, namely the notion of context, which represents a cluster of samples sharing common characteristics that can be efficiently grouped together. Unlike the expectation-maximization (EM) algorithm [10] typically employed for parameter estimation in each component, CN utilizes a deep neural network to learn these parameters through context-based normalization. In our approach, we assign a unique identifier to each context and utilize it for normalization during training. Samples within the same context share the same identifier, allowing for alignment in a shared space that aligns with the target task. This approach not only facilitates the normalization of samples within the same context but also enables the estimation of optimal parameters for all contexts, promoting the convergence of the model. By leveraging these context identifiers, our approach enhances the alignment and adaptability of the model to different contexts, leading to improved performance.
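A minimal sketch of the idea, assuming per-context mean and scale parameters learned end-to-end; the parameterization via an embedding table and the use of a log-scale are illustrative assumptions, not the exact formulation of CN.

```python
import torch
import torch.nn as nn

class ContextNorm(nn.Module):
    """Normalize each sample with learnable statistics of its context identifier."""
    def __init__(self, num_contexts, num_features, eps=1e-5):
        super().__init__()
        self.mu = nn.Embedding(num_contexts, num_features)         # per-context mean
        self.log_sigma = nn.Embedding(num_contexts, num_features)  # per-context log-std
        nn.init.zeros_(self.mu.weight)
        nn.init.zeros_(self.log_sigma.weight)                      # sigma starts at 1
        self.eps = eps

    def forward(self, x, context_id):
        # x: (B, C) activations, context_id: (B,) integers shared within a context.
        mu = self.mu(context_id)
        sigma = self.log_sigma(context_id).exp()
        return (x - mu) / (sigma + self.eps)

cn = ContextNorm(num_contexts=2, num_features=64)
x = torch.randn(8, 64)
ctx = torch.randint(0, 2, (8,))
y = cn(x, ctx)   # samples from the same context share normalization parameters
```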
Through a comprehensive set of experiments, we demonstrate that CN not only accelerates model convergence, but also achieves superior final test accuracy. These results highlight the effectiveness of our proposed method in improving the overall performance of models.
D
Another challenge for OOD detection arises on datasets with a large number of ID classes and high-resolution images, e.g., ImageNet-1k (Deng et al., 2009). Fig. 7 presents the detection performance of DFB using ImageNet-1k as the in-distribution dataset and on four OOD datasets, including two new high-resolution datasets, ImageNet-O (Hendrycks et al., 2021) and SUN (Xiao et al., 2010). To examine the impact of the number of classes, we show the results using $C\in\{200,300,500,1000\}$ randomly selected ID classes from ImageNet-1k ($C=1000$ is the full ImageNet-1k data).
We further propose a novel OOD detection framework DFB that utilizes dense prediction networks to segment the foreground and background from in-distribution training data, and jointly learn foreground and background features. It then leverages these background features to define background OOD scores and seamlessly combines them with existing foreground-based OOD methods to detect OOD samples from both foreground and background aspects. Comprehensive results on popular OOD benchmarks with diverse background features show that DFB can significantly improve the detection performance of four different existing methods. Through this work, we promote the design of OOD detection algorithms to achieve more holistic OOD detection in real-world applications. In our future work, we plan to exploit the background features for zero/few-shot OOD detection through outlier label-based hard prompts (Ding and Pang, 2024) or learnable soft prompts (Li et al., 2024a) to large pre-trained vision-language models.
The Reasons behind the Effectiveness of DFB. We aim to understand the effectiveness of DFB from two perspectives, namely the foreground and background OOD scoring and the latent features learned in DFB, with the results on the Textures dataset reported in Figs. 4 and 5, respectively. We can see in Fig. 4 that the background OOD scores in DFB enable a significantly better ID and OOD separation than the foreground OOD scores, indicating that ID and OOD samples can be more easily separated by looking at the background features than at the semantic features, since there can be more background differences than foreground ones in each ID/OOD image. From the feature representation perspective, compared to the features learned by the vanilla $K$-class classifier in Fig. 5 (left), the features learned by the $(K+1)$-class classifier in DFB (Fig. 5 (right)) are more discriminative in distinguishing OOD samples from ID samples, which demonstrates that the classifier can learn a better ID representation after disentangling foreground and background features.
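The complementary use of the two scores can be sketched as follows; the simple weighted-sum combination and the score sign convention are assumptions made for illustration, not necessarily the exact rule used by DFB.

```python
import numpy as np

def combined_ood_score(fg_score, bg_score, weight=1.0):
    """Higher score = more likely in-distribution (assumed convention).

    fg_score: semantic/foreground score from an existing detector (e.g., energy);
    bg_score: score derived from the learned background features.
    """
    return np.asarray(fg_score) + weight * np.asarray(bg_score)

# A sample is flagged as OOD when the combined score falls below a threshold
# chosen on ID validation data (e.g., at 95% TPR).
scores = combined_ood_score([3.1, -0.5], [1.2, -2.0])
is_ood = scores < 0.0
```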
The results show that DFB can consistently and significantly outperform its base model Energy with an increasing number of ID classes on four diverse OOD datasets, indicating the effectiveness of DFB in a large-scale semantic space. On the other hand, as expected, both Energy and DFB are challenged by the large semantic space, and thus their performance decreases with more ID classes. Extending to large-scale semantic space is a general challenge for existing OOD detectors. We leave it for future work.
Comparison to SotA Methods. DFB is also compared with five very recent SotA methods, including MaxLogit (Hendrycks et al., 2022), KL-Matching (Hendrycks et al., 2022), ReAct (Sun et al., 2021), MaSF (Haroush et al., 2022) and DML+ (Zhang and Xiang, 2023), with their results reported at the top of Tabs. 1 and 2. Among all four DFB methods and the SotA methods, ViM-DFB is consistently the best performer except on the CIFAR10 data in Tab. 2, where Energy-DFB is the best detector. This is mainly because ViM is generally the best semantic-feature-based OOD scoring method, and DFB performs better when the plug-in base model is stronger. Further, it is impressive that although the base models MSP, ODIN and Energy largely underperform the SotA competing methods, DFB can significantly boost their performance and outperform these SotA competing methods in nearly all cases in Tabs. 1 and 2.
C
The task of low-light image enhancement (LLIE) aims to improve the visibility of images captured under low-light conditions. Under-exposed images are often degraded in a variety of ways in addition to their lack of visibility. Notably, low-light regions of an image typically contain degraded color information, a lack of detail, and intensive noise. LLIE techniques aim to brighten low-light regions of an image while maintaining color accuracy and minimizing noise. The demand for brightening and enhancing low-light images often arises because many downstream algorithms are only performant on images with high visibility [1]. Some of these downstream tasks include object detection [2], facial recognition [3], surveillance [4] and semantic segmentation [5].
In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement technique can introduce a different form of degradation into its result. We propose using a conditional diffusion model to model the distribution between under-exposed and normally-exposed images. Further, we introduce a method of applying the diffusion model as a post-processing technique. Our approach uses the diffusion model to estimate the amount of noise present in an enhanced image in one pass through the model, which can simply be subtracted from the enhanced image to further enhance it. Moreover, we demonstrate that our approach outperforms competing post-processing denoisers, and we demonstrate its versatility on a variety of low-light datasets with different state-of-the-art low-light image enhancement backbones. In contrast to existing denoisers, we find that our approach is able to improve perceptual quality while removing noise and other distortions. In future work, our approach could potentially be applied to other image restoration domains.
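A hedged sketch of the described one-pass post-processing step; `eps_model`, the timestep `t`, and the scaling by the noise level `sigma_t` are illustrative stand-ins, with the exact conditioning and scaling following the trained diffusion model's noise schedule.

```python
import torch

@torch.no_grad()
def one_pass_denoise(eps_model, enhanced, t, sigma_t):
    """Estimate the noise in an already-enhanced low-light image and subtract it.

    eps_model: a conditional diffusion model predicting the noise component
               (a hypothetical callable, not a specific library API).
    enhanced:  (B, 3, H, W) output of a low-light enhancement backbone in [0, 1].
    """
    predicted_noise = eps_model(enhanced, t)          # single forward pass
    restored = enhanced - sigma_t * predicted_noise   # remove the estimated noise
    return restored.clamp(0.0, 1.0)
```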
Existing denoising techniques can be applied to denoise low-light images either before or after contrast enhancement [9, 10]. These denoising techniques range from low-pass filters and algorithms such as block matching and 3D filtering (BM3D) [11], to state-of-the-art DL denoisers [9, 12]. Despite denoisers significantly reducing noise, they often introduce blurriness into the denoised output. As a result, removing the amplified noise in a brightened low-light image often comes at the cost of removing detail, especially in high-frequency regions of the image.
LLIE techniques have existed for many decades and can be divided into non-learning-based methods and learning-based methods. Popular examples of traditional techniques which do not require learning from data include variants of histogram equalization (HE) [6, 20] and gamma correction (GC) [21]. HE adjusts the global contrast of an image via a single transformation function. However, low-light images often require contrast enhancements that vary dynamically depending on local regions of the image. Thus, techniques such as GC adjust an image via a non-linear per-pixel transform to brighten dark regions while leaving bright regions relatively unaffected. Despite achieving reasonable results, the abovementioned traditional methods often require post-processing techniques in order to deal with amplified noise after enhancement, and struggle to perform well across diverse scenes.
Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques still introduce a variety of artifacts in different scenarios [8].
D
Another motivation for our analysis stems from the connection between navigation in the physical space and knowledge space. Previous research has demonstrated that the same neural regions that are responsible for navigation in physical space are also involved in navigating the knowledge space: the hippocampus and entorhinal cortex, which contain cells that encode spatial information and enable spatial navigation, also play essential roles in other neural processes such as social cognition and memory [7, 8]. Various individual differences have been observed in spatial navigation: spatial abilities decline linearly with age [9, 10]; males generally perform better than females at spatial navigation tasks [11, 10]; and people growing up outside cities are generally better at spatial navigation [12]. Given the connections and differences between knowledge space and physical space, it is important to study if the individual differences in navigation in physical space are also present in knowledge space.
To gain insights into online navigation behaviors, researchers have conducted a series of studies using Wikipedia as an observational setting [19, 20, 21, 22, 23] and utilized its well-documented network of articles as the framework for navigation studies [24, 25]. The wide range of topics represented in Wikipedia (https://en.wikipedia.org/) and the platform’s popularity make it a prime candidate for investigating empirical navigation behavior. In a popular online navigation game on Wikipedia, implemented in several versions such as Wikispeedia (https://dlab.epfl.ch/wikispeedia/play/) and the Wikigame (https://www.thewikigame.com/), players try to go from one Wikipedia article (source) to another (target) through the hyperlinks of other articles within the Wikipedia website. Several navigation patterns on the Wikipedia knowledge network have been discovered: players typically first navigate to more general and popular articles and then narrow down to articles that are semantically closer to the target [26]; players’ search is not Markovian, meaning that a navigation step depends on the previous steps taken by the players [27]. When it comes to individual differences in navigation on Wikipedia, however, there is still a lack of understanding, as the navigation patterns discovered so far have not taken into account personal information such as age and sex; thus, research has not revealed the behaviors and preferences of different demographic groups. As such, further investigations are needed to better understand how these factors may influence navigation patterns.
In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from various domains such as artists, directors, scientists, and politicians, spanning different historical periods and encompassing both genders. The players start from the source page and navigate to the target page by clicking on the hyperlinks to other Wikipedia articles on the page. To win each game, they should reach the target page in at most 7 steps (Least-click game) or within 150 seconds (Speed-race game). Each participant plays nine rounds of games grouped into three sessions with a one-minute break between the sessions. After the game sessions, participants first finished a 50-question Big Five personality test (https://openpsychometrics.org/tests/IPIP-BFFM/) measuring their five personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. To control other factors that may affect navigation performance, we then asked six groups of questions about their i) employment status, ii) education background, iii) spatial navigation habit, and their prior experience with iv) the Wikipedia navigation game, v) the Wikipedia website and vi) computer games. Lastly, we asked participants demographic questions about their age, gender, ethnicity, political position, and language skills. See the Supplementary Material for a complete list of the questions in the survey. One of the games with the source page "Alexander the Great" and target page "Tim Burton" turned out to be much more difficult than the other games ($>3\sigma$), and is therefore counted as an outlier and excluded from our analysis. After the exclusion, the eight rounds of navigation tasks reached a Cronbach’s alpha score of 0.76, indicating fair internal reliability of the navigation task.
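For reference, an internal-reliability figure of this kind can be computed with the standard Cronbach's alpha formula, sketched here for a participants-by-games score matrix; the variable names and the random example data are illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_participants, n_items) matrix, e.g., per-game performance."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-game variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of participants' totals
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Example with random data; the study's 8 retained games gave alpha = 0.76.
rng = np.random.default_rng(1)
print(cronbach_alpha(rng.normal(size=(445, 8))))
```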
Our study highlights the role of individual characteristics in participants’ navigation performance within the knowledge space, with this influence being modulated by constraints such as time and distance. We discovered that prior experience with Wikipedia, the navigation game, and familiarity with the target page are significant predictors of better navigation, likely due to the nature of the game. Controlling these factors, being young and multilingual consistently predict good navigation performance irrespective of the type of constraints, indicating the fundamental role of age and multilingualism in knowledge space navigation.
To gain a better understanding of how navigation on the knowledge network is affected by individual characteristics, we conducted an online experiment in which we hired 445 participants from the US to play nine rounds of Wikipedia navigation games (illustrated in Fig. 1), to fill in a survey afterwards about personal information such as age and gender, and to answer questions that enabled us to characterize their Big Five personality traits [28] (details in Methods). In each game, players can opt for a Speed-race or a Least-clicks challenge. To win, they must reach the target page within 150 seconds for the Speed-race games or in 7 steps for the Least-clicks games. We sought to answer the question of whether individuals with certain characteristics possess an advantage over others in our navigation tasks, and if so, which characteristics those are. Moreover, using a uniqueness measure proposed in this work, we investigated whether certain players are more creative than others, meaning that they not only tend to win the navigation games but also take unusual routes to the target.
A
During the LHC Run 2, the simulation of physics events at LHCb has taken more than 80% of the distributed computing resources available to the experiment, namely the pledged CPU time. The experiment has just resumed data taking after a major upgrade and will operate with higher luminosity and trigger rates, collecting data samples at least one order of magnitude larger than in the previous LHC runs.
Meeting the foreseen needs in Run 3 conditions using only the traditional strategy for simulation, namely detailed simulation, will far exceed the pledged resources. Hence, the LHCb Collaboration is making great efforts to modernize the simulation software stack [8, 9] and develop novel and faster simulation options [10, 11, 12, 13, 14].
Developing new simulation techniques is an unavoidable requirement for LHCb to tackle the demand for simulated samples expected for Run 3 and those that will follow. The ultra-fast simulation approach is a viable solution to reduce the pressure on pledged CPU resources and succeeds in describing the uncertainties introduced in the detection and reconstruction steps through the use of deep generative models.
As mentioned in the previous Section, the validation of the ultra-fast philosophy of Lamarr is based on the comparison between the distributions obtained from models trained on detailed simulation and the ones resulting from standard simulation strategies.
Several strategies have been developed to reduce the computational cost of the simulation phase based on resampling techniques [15] or parameterizations of energy deposits [10, 12, 13]. These options offer cheaper alternative solutions to reproduce the low-level response of the LHCb detector and are typically named fast simulation strategies.
A
\[
\mathcal{L}_{\text{pretrain}}=\lambda_{1}\cdot\mathcal{L}_{\text{align}}+\lambda_{2}\cdot\mathcal{L}_{\text{contact}}.
\]
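A minimal sketch of this weighted objective, assuming an InfoNCE-style contrastive alignment loss between structure and language embeddings and a binary cross-entropy contact-map loss; both specific loss forms and the weights are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def pretrain_loss(struct_emb, lang_emb, contact_logits, contact_map,
                  lambda1=1.0, lambda2=0.5, temperature=0.07):
    """L_pretrain = lambda1 * L_align + lambda2 * L_contact (weights are illustrative)."""
    # Cross-modal contrastive alignment: matched protein pairs lie on the diagonal.
    logits = struct_emb @ lang_emb.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    l_align = (F.cross_entropy(logits, targets) +
               F.cross_entropy(logits.T, targets)) / 2

    # Self-supervised contact-map prediction on the structure branch
    # (contact_map is a float tensor of 0/1 contacts).
    l_contact = F.binary_cross_entropy_with_logits(contact_logits, contact_map)
    return lambda1 * l_align + lambda2 * l_contact
```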
We propose leveraging a pretrained protein language model to train protein structure models using cross-modal contrastive learning. Our approach demonstrates superior performance on various evaluation tasks. However, challenges remain, including the scope of language model transfer, data efficiency, generalization, computational resources, and evaluation metrics. Addressing these limitations will be crucial for advancing the utility of pretrained protein language models in protein structure prediction and related applications.
As depicted in Figure 1(b), the architecture of the inference phase utilizes only the pretrained language-enhanced structure model, rendering other flows unnecessary during this stage. Evaluating a pretrained protein structure model within a novel training framework poses significant challenges. To address these challenges, we have developed a comprehensive evaluation system that includes multiple validation tasks, showcasing the model’s representation learning, alignment capability, and generalization ability. Based on the necessity for fine-tuning, we categorize the validation tasks into internal and external/downstream tasks.
Figure 1: (a) The proposed cross-modal contrastive learning framework utilizes a pretrained protein language model to guide the training of the protein structure model through contrastive alignment loss. To reinforce information constraints on the structure, we introduce a self-supervised contact map prediction. (b) The internal and external evaluation tasks for our trained structure model during inference phase.
Furthermore, regarding different levels (Design$_p$ vs. Design$_r$), while there is no significant disparity between the residue-level and protein-level pretrained modules in downstream tasks, Design$_r$ slightly outperforms Design$_p$ overall. This observation suggests that any small gaps between the two levels of pretrained models are further diminished during the fine-tuning process. Additionally, within Group 2 of Table 2, we compared the pretrained model with a non-trained model serving as the backbone. Notably, the pretrained model significantly enhances performance in terms of perplexity and recovery, underscoring the stronger generalization ability of the pretrained structure model for downstream tasks.
B
We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers only slightly increases model performance. While LN-DistMult and LN-ComplEx are more sensitive to different layer numbers, their performance collapses with 4 TC layers partly due to training difficulties and overfitting.
In LiftNet, we adopt TC layers to progressively lift the dimensions. To demonstrate the effectiveness of such a design, we implement LiftNet variants with fully connected (FC) layers for comparison. The experiment is conducted on the largest dataset, FB15K237, with accuracy measured by MRR. Specifically, we include LiftNet variants with 2 to 4 FC layers, and the results are shown in Table IX. We see that in most cases, LiftNet with 2 TC layers achieves more accurate link prediction results than LiftNet with 2 or more FC layers. We do not claim that TC layers are better than FC layers, but in KGE, where the model is sensitive to insufficient expressiveness or over-parameterization, it is relatively easier for LiftNet with TC layers to achieve accurate link prediction results.
We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers only slightly increases model performance. While LN-DistMult and LN-ComplEx are more sensitive to different layer numbers, their performance collapses with 4 TC layers partly due to training difficulties and overfitting.
The main task is to design an effective $f(\ast)$ for KGE. An intuitive choice of $f(\ast)$ is multiple fully connected (FC) layers; however, FC layers require large numbers of parameters and are prone to overfitting for KGE [22]. Inspired by image processing, we refer to feature upsampling techniques for dimension lifting.
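A small sketch of dimension lifting with transposed convolution (TC) layers, treating a low-dimensional entity embedding as a one-channel signal; the kernel size and stride below are illustrative choices that lift 32 dimensions to 512 in two layers, not necessarily LiftNet's exact configuration.

```python
import torch
import torch.nn as nn

class DimensionLift(nn.Module):
    """Lift a low-dimensional embedding to 512 dimensions with two TC layers."""
    def __init__(self):
        super().__init__()
        # Output length of ConvTranspose1d: (L_in - 1) * stride + kernel_size,
        # so 32 -> 128 -> 512 with kernel_size=4, stride=4.
        self.lift = nn.Sequential(
            nn.ConvTranspose1d(1, 1, kernel_size=4, stride=4), nn.ReLU(),
            nn.ConvTranspose1d(1, 1, kernel_size=4, stride=4),
        )

    def forward(self, emb):               # emb: (batch, 32)
        x = emb.unsqueeze(1)              # (batch, 1, 32)
        return self.lift(x).squeeze(1)    # (batch, 512)

lifted = DimensionLift()(torch.randn(16, 32))
print(lifted.shape)                       # torch.Size([16, 512])
```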
The results of LiftNet-based methods for knowledge graph link prediction (accuracy measured by H@10 and MRR) are shown in Fig. 3. Generally, on the WN18RR dataset, we observe that the link prediction accuracy increases with higher input dimensions, and the increase is significant from 4 to 16 dimensions. However, after that, the accuracy only slightly increases or even drops with higher input dimensions, e.g., for LN-ComplEx and LN-DistMult in Fig. 3 (a), partly due to over-parameterization. This shows the effectiveness of LiftNet with low-dimensional entity representations.
A
Unfortunately, to the best of our knowledge, our problem cannot be formulated as a linear programming one. This represents the biggest drawback of using linear programming for entanglement routing: the amount of detail one can add becomes restricted by the need to formulate the problem as a linear optimization. Nevertheless, the proposed approach allows for the addition of as much detail as needed, provided that monotonic and isotonic routing metrics can still be defined. An interesting way to merge these two directions would be to reformulate this work as a non-linear programming problem. In its present form, the proposed technique cannot be used for entanglement distribution flow models considering non-deterministic swapping protocols because in that case the distribution rate will depend on the swapping order [7], making the problem considerably harder. Nevertheless, the proposed approach remains an extremely versatile tool that can be used in combination with relatively complex quantum network models.
With the surge of research on entanglement distribution and the wide range of models being considered, it is valuable to identify models that can be addressed using the proposed approach, which can be interpreted as a generalization of previous methods for single and multi-objective optimization. It evolves from discrete variables that seek the best path between a source and a set of nodes, to one based on continuous variables [10, 6, 4, 9]. However, it is important to note the limitations of the proposed algorithm. It cannot be used to find the combination of paths that maximizes the entanglement distribution rate from a source to a target [5, 9, 4] using multi-paths. Multi-path routing is often addressed using a linear programming formulation, which provides several advantages [4, 5, 9]. It allows multi-path routing, and can also be applied to scenarios involving multiple sources and multiple targets.
Unfortunately, to the best of our knowledge, our problem cannot be formulated as a linear programming one. This represents the biggest drawback of using linear programming for entanglement routing: the amount of detail one can add becomes restricted by the need to formulate the problem as a linear optimization. Nevertheless, the proposed approach allows for the addition of as much detail as needed, provided that monotonic and isotonic routing metrics can still be defined. An interesting way to merge these two directions would be to reformulate this work as a non-linear programming problem. In its present form, the proposed technique cannot be used for entanglement distribution flow models considering non-deterministic swapping protocols because in that case the distribution rate will depend on the swapping order [7], making the problem considerably harder. Nevertheless, the proposed approach remains an extremely versatile tool that can be used in combination with relatively complex quantum network models.
This paper advances the research on quantum networks by introducing a highly versatile routing approach based on fidelity curves, which can be utilized in conjunction with purification protocols, including capacity-achieving ones [11]. The fidelity of an ebit can be quantified as the distance between the quantum state of the ebit in question and the state of a maximally entangled ebit [2]. Typically, this fidelity can be manipulated by altering the entanglement distribution rate. Two examples illustrate this trade-off. First, consider the setup proposed by [12], where entanglement between qubits is generated using laser pulses. Increasing the duration of these pulses raises the probability of generating an ebit but reduces its quality, as shown in Fig. 1(a,b). Consequently, if a link’s objective is to generate a large number of qubits with less emphasis on quality, longer laser pulses would be used. Conversely, if the link requires fewer ebits but with high fidelity, shorter pulses would be preferred.
This paper focused on multipartite entanglement distribution for a quantum network connected through links that exhibit a trade-off between entanglement generation rate and fidelity. This is the case with hash-based quantum purification protocols [11] and with photonic models [12]. Two entanglement distribution models were considered: one where only one ebit is sent at each time epoch, and a second so-called flow model, where a large number of ebits are distributed simultaneously. The paper proposed using fidelity curves as a routing metric in both scenarios in combination with a multi-objective optimization algorithm, which finds the best path (or best star) connecting two (or three) nodes in close to linear time. The proposed method can be readily adapted to address routing challenges in various quantum network models, including those incorporating purification protocols between adjacent nodes. Nevertheless, how to deal with multi-path routing with non-deterministic swapping is still an open problem. In conclusion, this work paves the way for entanglement distribution in networks with complex link models, incorporating highly efficient purification protocols, and enabling optimization of quantum routing in more realistic and sophisticated network scenarios.
D
TABLE III: The MWDs of polar codes with different construction methods, where the designed $E_b/N_0$ of the ECBS algorithm are 1.5 dB and 2.57 dB for the (512,256) and (512,384) polar codes, respectively.
Fig. 8 provides the BLER performance of the (512,256) and (512,384) polar codes with different construction methods. The MWDs of the polar codes in Fig. 8 are shown in Table III.
Hence, the entropy constraint is a metric to evaluate whether the performance of polar codes with a limited list size under SCL decoding can approach the ML performance, and the proposed ECBS algorithm can improve the MWD of polar codes to achieve better BLER performance as $L$ increases. Finally, compared with the construction method in [27], the polar codes constructed by the ECBS algorithm have a smaller $A_{d_{\min}}$ and show better performance in the high SNR region. Specifically, the (512,256) and (512,384) polar codes constructed by the ECBS algorithm with $L=64$ have about 0.23 dB and 0.18 dB performance gains at BLER $10^{-4}$, respectively.
The designed $E_b/N_0$ of the ECBS algorithm are 1.5 dB and 2.57 dB for the (512,256) and (512,384) polar codes, respectively, and the information sets are provided in the Appendix. In Fig. 8, the BLER performance of polar codes constructed by the ECBS algorithm approaches its MWUB in the high SNR region, and the corresponding $A_{d_{\min}}$ is reduced as $L$ increases, which leads to better performance with larger $L$.
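For clarity, a bound of the MWUB type can be evaluated from an MWD with the standard truncated union bound for BPSK over AWGN; this is a generic textbook sketch with made-up numbers, not the exact bound expression or the MWD of Table III.

```python
import math

def mwub(mwd, rate, ebn0_db):
    """Truncated union bound on BLER from the minimum-weight distribution.

    mwd:  dict mapping codeword weight d to multiplicity A_d (low-weight terms only).
    rate: code rate K/N, e.g., 256/512.
    """
    ebn0 = 10 ** (ebn0_db / 10)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian Q-function
    return sum(a_d * q(math.sqrt(2 * d * rate * ebn0)) for d, a_d in mwd.items())

# Illustrative numbers only.
print(mwub({16: 100, 20: 5000}, rate=0.5, ebn0_db=2.5))
```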
TABLE III: The MWDs of polar codes with different construction methods, where the designed $E_b/N_0$ of the ECBS algorithm are 1.5 dB and 2.57 dB for the (512,256) and (512,384) polar codes, respectively.
A
The solution for the Searcher will have the following structure. At every branch node $j$ there is a favored branch $Q_1$ and a positive probability $\beta$ (the favoring bias) for it to be chosen before looking at the signal. With the remaining probability $1-\beta$ the search follows the signal.
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Future work might consider a variation of the game we consider here in which the time to traverse an arc depends on the direction of travel, as in the variable speed networks studied in Alpern and Lidbetter (2014).
The solution for the Searcher will have the following structure. At every branch node $j$ there is a favored branch $Q_1$ and a positive probability $\beta$ (the favoring bias) for it to be chosen before looking at the signal. With the remaining probability $1-\beta$ the search follows the signal.
So in particular the Searcher will never choose the unfavored arc (branch) when the signal points to the favored one. Biased depth-first Searcher strategies (random choices at every branch node) were introduced in another context in Alpern (2010) and Alpern and Lidbetter (2014), but those distributions are not the same as in the present context.
We can now state and prove our main theorem, which includes an expression for the value of the game. We describe the optimal strategy for the Searcher by giving the favoring bias $\beta$ of searching the favored branch first (without needing to observe the signal) when at a branch node.
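The branching rule just described can be written as a short sketch; the favored branch, the bias, and the signal are passed in as abstract inputs, and how $\beta$ and the favored branch are computed is given by the recursion in the paper, not reproduced here.

```python
import random

def choose_branch(favored, unfavored, signal, beta):
    """At a branch node: with probability beta take the favored branch without
    looking at the signal; otherwise follow the signal."""
    if random.random() < beta:
        return favored                                # favoring bias, signal ignored
    return favored if signal == favored else unfavored

# Consequence noted in the text: when the signal points to the favored branch,
# the unfavored branch is never chosen first.
```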
C
For these reasons, especially in the medical domain, it is essential to have computationally feasible methods that can fine-tune existing models towards a smaller set of a specific modality or disease. In this paper, we pick one such method, Textual Inversion, and rigorously explore its capacities for adapting Stable Diffusion to medical imaging,
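As an illustration of how cheaply such an adaptation can be applied at inference time, a learned textual-inversion embedding can be loaded into a Stable Diffusion pipeline; the checkpoint name, embedding path, and placeholder token below are assumptions, and the snippet assumes a recent version of the diffusers library.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical paths/tokens for illustration only.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./chest_xray_embedding.bin", token="<chest-xray>")

# The new token now steers generation toward the learned medical concept.
image = pipe("a <chest-xray> with signs of cardiomegaly").images[0]
image.save("sample.png")
```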
Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023). Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation.
Several papers have applied diffusion to medical imaging, with a wide range of applications including anomaly detection, segmentation, registration, and modality transfer with image-to-image translation [Kazerouni et al. (2022)]. Specifically for medical image generation, several recent works have trained diffusion models for image generation.
Pre-trained models are often trained on 2D RGB datasets, but many medical imaging modalities are 3D. Recently, studies such as Khader et al. (2023) and Pinaya et al. (2022) have trained diffusion models from scratch on 3D data or even on 4D data Kim and Ye (2022), and Han et al. (2023) use diffusion models conditioned on anatomical masks to generate labeled images for segmentation.
For these reasons, especially in the medical domain, it is essential to have computationally feasible methods that can fine-tune existing models towards a smaller set of a specific modality or disease. In this paper, we pick one such method, Textual Inversion, and rigorously explore its capacities for adapting Stable Diffusion to medical imaging,
B
Semantic-based methods enhance object representations by incorporating additional semantic information, such as segmentation labels [66], instance masks [61, 57], or features extracted using pre-trained vision-language models [32]. On the other hand, 3D layout-based approaches, exemplified by NSG [37] and its successors [48, 9], focus on spatial coordinates, using explicit 3D object placement data to guide object and scene composition. Diverging from conventional techniques, our method innovates by utilizing decomposed, object-specific 3D layouts. This approach enables precise control over scene dynamics, encompassing both object-specific text prompt modifications [39, 23] and spatial manipulation.
Figure 3: Framework Overview. The CompoNeRF model unfolds in three stages: 1) Editing the 3D scene, which initiates the process by structuring the scene with 3D boxes and textual prompts; 2) Scene rendering, which encapsulates the composition/recomposition process, facilitating the transformation of NeRFs to a global frame and ensuring cohesive scene construction. Here, we specify design choices between density-based or color-based (without refining density) composition; 3) Joint Optimization, which leverages textual directives to amplify the rendering quality of both global and local views, while also integrating revised text prompts and NeRFs for refined scene depiction.
Our framework interpreted a multi-object text prompt as a collection of localized NeRFs, each associated with a spatial box and an object-specific text prompt, which were then composited to render the entire scene view. We have further enhanced the framework with a specialized composition module for global consistency, effectively mitigating the issue of guidance collapse in multi-object generation. Utilizing the Stable Diffusion model, we have demonstrated that our method, the first to apply a compositional NeRF design to the text-to-3D task, can produce high-quality 3D models that feature multiple objects and perform well compared with contemporaneous methods. Looking ahead, we have explored a promising application of CompoNeRF in the realm of scene editing, which allows for the reuse of trained models in scene recomposition. This capability opens up new possibilities and identifies a rich vein of future work to be pursued in this domain.
CompoNeRF’s distinctive feature lies in its capability to recompose scenes by interfacing with decomposed NeRFs, thereby accelerating the creation of new scenes. In contrast to the mesh-based method in Fantasia3D, which requires considerable human effort in mesh modification and graphics engine support for editing, CompoNeRF offers a more streamlined process. Our composition module seamlessly integrates components, requiring minimal adjustments in layout or text prompts, followed by fine-tuning existing offline models to align with the global context during training.
Much like Latent-NeRF and SJC, our CompoNeRF framework encounters the multi-face challenge, where guidance from the Stable Diffusion model may result in conflicting facial features for certain objects, as illustrated in Figure 16. The reason lies in the fact that the diffusion model does not always provide reliable guidance that aligns with the desired orientation corresponding to the camera's viewpoint during sampling. To mitigate the multi-face problem, stronger constraints can be introduced to promote geometric consistency within the 3D representation. CompoNeRF incorporates mesh constraints, akin to those utilized in Latent-NeRF, offering a more detailed 3D layout compared to traditional bounding boxes. As demonstrated in Figure 16, the implementation of exact mesh constraints markedly mitigates the multi-face issue, though it may come at the expense of detail and adaptability.
C
MDCGen (Iglesias et al., 2019) is a feature-rich generator that supports many desiderata in cluster analysis, such as overlap control, different probability distributions, subspace clusters, and the ability to add noise points. In particular, it is nice to be able to place noise points away from the clusters, which is made possible by the grid-based strategy for placing cluster centers. MDCGen does not target the overall geometric characteristics of synthetic data sets, instead giving users low-level control enabling extensive configurability. For example, managing the overlap between clusters involves setting compactness coefficients, grid granularity, and overall scale, compared to only tweaking max_overlap in repliclust (max_overlap may also have to be tweaked but it can usually stay at max_overlap/10 or a similar value). In the words of the authors, “to enable covering a broad range of dataset possibilities, the parameters are multiple and some training for tuning the tool is required.”
Figure 9 confirms that clustering difficulty rises with increasing overlap. Figure 10 shows the same in the case of non-convex clusters, suggesting that applying distort maintains the desired relationship between overlap and clustering difficulty. Additionally, both figures show how our cluster overlap relates to the silhouette score, a popular metric for quantifying clustering difficulty (Rousseeuw, 1987; Shand et al., 2019). At a fixed value of max_overlap, the silhouette score decreases markedly with a rise in dimensionality. This is not an artifact of our overlap measure, since plotting clustering performance vs silhouette score shows a similar dependence on dimensionality (not shown). This makes sense since the silhouette score is based on the difference of Euclidean distances, and distances between points tend to become more similar in high dimensions (Aggarwal et al., 2001).
In Section 3.2, we defined the overlap between two clusters in terms of the error rate of the best minimax linear classifier. We verify that this notion of overlap conveys clustering difficulty by measuring clustering performance on data sets with different degrees of overlap. For this simulation, we consider data sets with two clusters drawn from an archetype we described as “two clusters with very different shapes in $p$D” (this verbal description yields an archetype with parameters
Finally, the HAWKS generator (Shand et al., 2019) controls cluster overlaps using an evolutionary algorithm that evolves the means and covariance matrices of multivariate normal distributions. The paper applies this framework to create data sets with a user-specified silhouette score representing clustering difficulty (Rousseeuw, 1987). In principle, the evolutionary framework can be extended to attain desired high-level geometric characteristics. Of these, the authors consider two examples, cluster overlaps and elongations (the latter relating to our notion of cluster aspect ratio, as listed in Table 1). An interesting aspect of HAWKS is the ability to generate data sets that maximize the performance difference between two clustering algorithms. This feature is especially useful in two dimensions, since we can then visually examine the data sets to better understand when each algorithm succeeds or fails.
Most of the high-level geometric parameters describing an archetype are based on what we call “max-min sampling.” In this approach, the user controls a geometric attribute by specifying a reference value and a max-min ratio. In addition, a constraint ensures that the reference value is indeed typical for every data set. For example, the aspect ratio of a cluster measures how elongated it is. The reference value aspect_ref sets the typical aspect ratio among all clusters in a data set, while aspect_maxmin sets the ratio of the highest to the lowest aspect ratio. To make sure that aspect_ref is indeed the typical aspect ratio, max-min sampling enforces the location constraint $(\prod_{j=1}^{k}\alpha_{j})^{\frac{1}{k}}=\texttt{aspect\_ref}$. Appendix A gives more details on how we manage different geometric attributes using max-min sampling.
C
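A minimal sketch of the max-min sampling idea described above, for cluster aspect ratios: draw $k$ values whose geometric mean equals aspect_ref and whose max-to-min ratio equals aspect_maxmin. The sampling scheme below (log-uniform draws with pinned extremes, then a multiplicative rescaling) is an illustrative choice, not necessarily repliclust's actual implementation.

```python
import numpy as np

def sample_aspect_ratios(k: int, aspect_ref: float, aspect_maxmin: float,
                         rng: np.random.Generator) -> np.ndarray:
    """Sample k aspect ratios with geometric mean aspect_ref and
    max/min ratio aspect_maxmin (illustrative max-min sampling)."""
    # Draw positions in [0, 1]; pin the extremes so the max/min ratio is exact.
    u = rng.uniform(size=k)
    u[0], u[-1] = 0.0, 1.0
    f = np.exp(u * np.log(aspect_maxmin))        # values spanning [1, aspect_maxmin]
    # Rescale multiplicatively so the geometric mean equals aspect_ref.
    f *= aspect_ref / np.exp(np.mean(np.log(f)))
    return f

rng = np.random.default_rng(0)
alphas = sample_aspect_ratios(k=5, aspect_ref=3.0, aspect_maxmin=4.0, rng=rng)
# Check the location constraint (geometric mean) and the max/min ratio.
print(alphas, np.exp(np.mean(np.log(alphas))), alphas.max() / alphas.min())
```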
BART-Gen Li et al. (2021) is designed for document-level event extraction that can deal with the long-distance dependence issue and co-reference problem. Constrained generation is applied for argument extraction that requires event-specific templates.
The generator is fine-tuned on both trigger prediction and argument prediction simultaneously by training on pairs of instances with different prefixes ‘TriggerEvent: ’ and ‘Argument: ’ (see §4.4). In order to take the context as input and generate structured event frames, the generator $\mathcal{G}$ of COFFEE is implemented with an encoder-decoder transformer model, such as BART, T5, or mT5 (Lewis et al., 2020; Raffel et al., 2020; Xue et al., 2021). We resort to T5 (Raffel et al., 2020) as the base model and encode only ‘[and]’ and ‘[none]’ as additional special tokens based on experimental results.
We preprocess the data by separating original samples into event samples and inserting placeholders for target entities. The instances are processed with distinct prefixes for subtasks: ‘TriggerEvent: ’ and ‘Arguments: ’. Figure 3 shows a data preprocessing example. Details pertaining to our pipeline training and inference process, including specifics about the two-stage fine-tuning, such as the learning rate and batch size, as well as the beam search strategy employed during inference, are elaborated in Appendix A.1.
While impressive results are reported, we identify two major limitations of the current generation-based event extraction methods. Firstly, most of these methods rely on heuristic templates and extensive human knowledge engineering. According to the experiments conducted by Hsu et al. (2022), a slight change in the template might lead to significant performance changes, thus raising the issue of using sub-optimal templates. Secondly, most of these generation-based approaches still require certain oracle information, such as event type and event schema, which necessitate extensive manual annotations. For example, the DEGREE model’s inference process, as demonstrated by Hsu et al. (2022), requires manually designed event-specific templates for each example and iterates over all event types. On the other hand, Text2Event (Lu et al., 2021) also constrains the generation with manually designed templates, which require event schema to be given. However, obtaining this oracle information, such as event type and schema, is unrealistic for a real-world inference system to achieve automatically. Hence, this paper aims to address the Oracle-Free Event Extraction (OFEE) task where only the input context is given.
The DEGREE (Hsu et al., 2022) model is designed to generate ‘invalid’ instances during both the training and inference phases, wherein event-specific knowledge is combined with context even if no such event is mentioned in the context. We eliminated these event-specific templates, leaving only the context sentence as input.
B
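As an illustration of the prefix-based setup described above, the sketch below runs a T5-style encoder-decoder on a context with the ‘TriggerEvent: ’ and ‘Arguments: ’ prefixes using the plain Hugging Face transformers API. The generic 't5-base' checkpoint and the example sentence are placeholders; COFFEE's fine-tuned weights, its ‘[and]’/‘[none]’ special-token handling, and its exact beam-search settings are not reproduced here.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"  # placeholder checkpoint, not the fine-tuned COFFEE generator
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

context = "The company announced it will acquire the startup next month."
for prefix in ("TriggerEvent: ", "Arguments: "):
    # The prefix tells the generator which subtask to perform on the same context.
    inputs = tokenizer(prefix + context, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
    print(prefix, tokenizer.decode(outputs[0], skip_special_tokens=True))
```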
$[\mathcal{H}]^{s_2}\hookrightarrow[\mathcal{H}]^{s_1}\hookrightarrow[\mathcal{H}]^{0}$ exist and are compact (Fischer and Steinwart, 2020). For the functions in $[\mathcal{H}]^{s}$ with larger $s$, we say they have higher regularity (smoothness) with respect to the RKHS.
Before introducing the $L^{q}$-embedding property of the interpolation space $[\mathcal{H}]^{s}$, we first prove the following lemma, which characterizes the real interpolation between two $L^{p}$ spaces with the Lorentz space $L^{p,q}(\mathcal{X},\mu)$. We refer to Appendix A for details of real interpolation and Lorentz spaces.
where $m:=\min\{k\in\mathbb{N}:k>r\}$. (We refer to Appendix A for the definition of real interpolation and to Sawano 2018, Chapter 4.2.2 for more details.) It is well known that when $r>\frac{d}{2}$, $H^{r}(\mathcal{X})$ is a separable RKHS with respect to a bounded kernel and the corresponding EDR is (see, e.g., Edmunds and Triebel 1996)
It is worth pointing out the relation between the definition (5) and the interpolation space defined through the real method (real interpolation). For details of interpolation of Banach spaces through the real method, we refer to Sawano (2018, Chapter 4.2.2). Specifically, Steinwart and Scovel (2012, Theorem 4.6) reveals that for $0<s<1$,
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts including prior knowledge of RKHS, integral operators and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three examples of common spectral algorithms. In Section 3, we present our main results on the convergence rates and discuss the minimax optimality. Theorem 1 and Theorem 2 show the upper bound and the minimax lower bound, respectively. In Section 4, we further show four kinds of commonly used RKHSs with embedding index $\alpha_{0}=\frac{1}{\beta}$. This is the ideal case where the minimax optimality can be proved for all $0<s\leq 2\tau$. We verify our results through experiments in Section 5. In Section 6, we make a comparison with previous literature and discuss other applications of our techniques. All the proofs can be found in Section 7. In the appendix, we provide supplementary materials including extended proofs, details of the experiments and a table of important notations frequently used throughout the main text.
C
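For orientation on the interpolation spaces $[\mathcal{H}]^{s}$ discussed above, the following is the commonly used "power space" form, stated via the Mercer eigensystem; the normalization may differ from the paper's own definition (5), so this should be read as a standard reference formula rather than a quotation.

```latex
% (\lambda_i, e_i) denotes the eigensystem of the integral operator on L^2(\mu);
% this is the usual power-space definition found in this literature.
\[
  [\mathcal{H}]^{s} \;=\; \Bigl\{\, f = \sum_{i} a_i \,\lambda_i^{s/2}\, e_i
  \;:\; \sum_{i} a_i^{2} < \infty \Bigr\},
  \qquad
  \|f\|_{[\mathcal{H}]^{s}} = \Bigl(\sum_{i} a_i^{2}\Bigr)^{1/2}.
\]
% Taking s = 1 recovers (an isometric copy of) the RKHS \mathcal{H}, while
% s = 0 recovers the closure of \mathcal{H} in L^2(\mu).
```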
Section 2 briefly reviews a number of existing point cloud simplification techniques which are relevant to our work. Section 3 provides background details regarding the computation of surface variation, GPs with kernels defined on non-Euclidean domains and a greedy subset-of-data scheme for GP inference. Section 4 outlines the proposed GP-based point cloud simplification algorithm. Section 5, in combination with the supplementary material, includes an empirical evaluation of our method on various benchmark and self-acquired point clouds, with comparisons to competing simplification techniques, along with applications to some downstream tasks and ablation studies. Finally, Section 6 summarises our contributions and provides a brief discussion of the scope for future work.
In this section we will introduce a number of existing point cloud simplification techniques, with a particular focus on works which have a feature-preserving element to their approach. Some of the earliest curvature-sensitive simplification techniques were proposed by Pauly et al. [26] and Moenning et al. [25]. The former method, termed Hierarchical Clustering (HC), recursively divides the original point cloud into two sets, until each child set attains a size smaller than a threshold size parameter. Moreover, a variation parameter plays an important role in sparsifying regions of low curvature by selective splitting. The perceptual quality and the size of the simplified cloud depend entirely on these two parameters, which must be carefully and manually tuned, making HC unsuitable for automated applications. Additionally, the surface reconstructions obtained from HC-simplified point clouds are often poor for clouds with complex surfaces, as will be seen in Section 5. This is because it is challenging to tune the parameters of HC in such a way that preservation of sharp features is achieved whilst still ensuring dense coverage of the original cloud.
Approximate Intrinsic Voxel Structure for Point Cloud Simplification (AIVS), introduced by Lv et al. [24], combines global voxel structure and local farthest point sampling to generate simplification demand-specific clouds which can be either isotropic, curvature-sensitive or have sharp edge preservation. As with HC however, AIVS requires manual tuning of user-specified parameters in order to obtain optimal results. Additionally, even in parallel computation mode, AIVS is quite costly in terms of computational runtime. Potamias et al. and Lv et al. do not provide open-source implementations of their curvature-sensitive simplification techniques, which poses a challenge for reproducibility and benchmarking. However, we thank the authors of Potamias et al. for directly providing some simplified point clouds; their results are included later in this paper. Qi et al. [29] introduced PC-Simp, a method which aims to produce uniformly-dense and feature-sensitive simplified clouds, leveraging ideas from graph signal processing. This uniformity depends on a weight parameter which as with HC and AIVS, is user-specified. Alongside simplification, they also apply their technique to point cloud registration. However, in practice PC-Simp is unreliable for complex-surfaced point clouds as it fails to provide a high degree of feature-preservation, regardless of the weight parameter chosen. Additionally, as discussed later in Section 5, the runtime of this technique is considerably longer than any other method tested.
HC and Potamias et al. are the only baselines with shorter runtimes than our method, and obtain maximum Hausdorff distances comparable to those obtained by our approach. However, as discussed in Section 2, tuning the user-specified HC parameters makes striking a balance between feature preservation and retaining a sufficient density of points across the cloud relatively challenging. Moreover, there is no control over the size of the simplified cloud, as discussed by the authors [26] and in subsequent work [24]. We tuned this baseline to attempt to balance this trade-off, and whilst the HC-simplified clouds shown in Figures 2 and 3 here, and Figure 3 of the supplementary material, do have clearly preserved features (an observation supported by the high mean surface variation across all clouds), the density of points away from these areas is very low. This leads to inferior mesh reconstructions compared to our approach, as evidenced by the fact that we obtain superior mean Hausdorff distance compared to HC across all three clouds.
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defined by the original cloud with a sufficient density of points. This is important for generating reconstructions which resemble the original meshes, as is evident from visual inspection of the reconstruction results in Figure 2 here and Figure 3 of the supplementary material. In terms of surface reconstruction, our method clearly outperforms all of the other techniques for the Dragon (compare the tail, teeth, horns and the face detailing for all methods and additionally the curved body for HC) and the Armadillo (compare the ears, hands and feet across all the methods) and gives competitive results for Lucy, shown in Figure 2 of the supplementary material. We highlight once again the poor surface reconstructions resulting from the Potamias et al. simplified clouds, compared to those obtained using all of the other baselines. Again, visual inspection of the simplification results for the noisy Armadillo in Figure 3 demonstrates the balanced feature-sensitivity of our method in comparison to others. We experiment with more noise levels in the supplementary material (Section 2). Finally, from Figure 4 we can see how the edge-sampling-based APES simplified clouds have several missing portions including object edges, whereas our method enhances the salient features and captures the overall object structure simultaneously. We do not provide corresponding surface reconstructions and hence quantitative results for this baseline because their low simplified point cloud sizes ($N=1{,}024$ and $512$) and aforementioned missing areas will always result in open meshes.
A
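Since several of the simplification methods above rank points by surface variation, here is a minimal numpy sketch of the standard computation (in the spirit of Pauly et al.): for each point, take the covariance of its k nearest neighbours and report the smallest eigenvalue divided by the eigenvalue sum. The brute-force k-NN search is purely for brevity and is an illustrative choice, not the paper's implementation.

```python
import numpy as np

def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Surface variation per point: smallest covariance eigenvalue divided
    by the sum of eigenvalues of its k-nearest-neighbour covariance."""
    n = points.shape[0]
    variation = np.empty(n)
    for i in range(n):
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]            # brute-force k-NN
        cov = np.cov(nbrs.T)                         # 3x3 local covariance
        eigvals = np.linalg.eigvalsh(cov)            # ascending order
        variation[i] = eigvals[0] / eigvals.sum()    # high value = sharp feature
    return variation

pts = np.random.default_rng(0).normal(size=(200, 3))
print(surface_variation(pts, k=16)[:5])
```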
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive specialized GPU hardware is available.
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive specialized GPU hardware is available.
$256^{3}$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. The optimizer we used was AdamW [Loshchilov and Hutter (2017)] with the default parameters.
For three spatial dimensions (i.e. 3D) this means that reducing the input size from $256^{3}$ to $128^{3}$ results in a reduction
of $1\times 1\times 1\,\text{mm}^{3}$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$. The background voxels were set to zero and the range between the first and 99th percentile was normalized
C
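A minimal sketch of the preprocessing described above: zero-pad a 240×240×155 scan to 256³ and normalise intensities using the range between the 1st and 99th percentile. The symmetric padding placement, the restriction of the percentiles to non-zero voxels, and the clipping to [0, 1] are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def preprocess(scan: np.ndarray, target: int = 256) -> np.ndarray:
    # Zero-pad symmetrically to target^3 (background voxels are zero).
    pads = [((target - s) // 2, target - s - (target - s) // 2) for s in scan.shape]
    padded = np.pad(scan, pads, mode="constant", constant_values=0)
    # Normalize the 1st-99th percentile range (of non-zero voxels) to [0, 1].
    lo, hi = np.percentile(padded[padded != 0], [1, 99])
    return np.clip((padded - lo) / (hi - lo), 0.0, 1.0)

scan = np.random.rand(240, 240, 155).astype(np.float32)
print(preprocess(scan).shape)  # (256, 256, 256)
```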
To address the aforementioned challenges, substantial research endeavors have been dedicated. The seminal work in [3] introduces FedAvg, a foundational approach for federated optimization that highly depends on iterative model averaging and partial participation, thereby significantly decreasing the communication overhead. FedAvg demonstrates commendable efficiency for IID datasets, but its performance degrades and is unstable under the Non-IID settings. To mitigate the negative effects of client heterogeneity, researchers have proposed the asynchronous FL approach [8, 9, 10] and adopted the client selection mechanism [11, 12, 13]. The asynchronous FL approach facilitates global aggregation upon the receipt of a single local update from any client. However, this approach risks the global model performance, which may be disproportionately influenced by a single client’s local model, leading to suboptimal convergence. Furthermore, partial updates (i.e., client selection mechanism design) select a subset of clients to contribute to the global aggregation, which may introduce significant bias, as the global model predominantly reflects the characteristics of the selected clients, thus compromising model accuracy and raising fairness concerns.
According to [38], other adaptive algorithms such as FedAdagrad and FedYogi have been proposed to improve the model convergence rate in the presence of heterogeneous data. FedAdam employs adaptive learning rates and momentum, leveraging local updates from client devices to efficiently update the global model. FedAdagrad adjusts the learning rate based on the historical gradients of each model parameter, allowing the model to converge faster and achieve better performance. FedYogi, inspired by the Yogi optimizer, incorporates elements of adaptive learning rates and momentum to handle non-convex optimization problems in FL scenarios, improving global model convergence and accuracy. We conduct numerical experiments on CIFAR-10 with 20% of participating clients. The experimental results are illustrated in Table V and Fig. 10. Compared with other adaptive FL algorithms, our proposed FedAgg still performs better, with higher accuracy and a faster convergence rate.
We systematically conduct numerical experiments designed to elucidate the influence exerted by the aggregation weight $\alpha$ in the objective function presented in Eq. (13) on the model efficacy and to facilitate the practical application and promotion of FedAgg. As depicted in Fig. 12, decreasing the hyperparameter $\alpha$ shows that the FL framework accentuates the optimization of the discrepancy between the local model of client $i$ and the average local model, which in turn bolsters the precision of the global model and expedites the convergence rate. Our findings underscore the significance of meticulous hyperparameter tuning within FL systems.
Figure 2: Numerical analysis of model accuracy and training loss curves on the CIFAR-100 dataset featuring IID data distribution. The results underscore the substantial impact of employing the adaptive learning rate scheme based on adaptive optimizer Adam, which enhances model performance and convergence rate.
Nevertheless, in FL systems, the potential of adaptive learning rate-based algorithms in FL remains largely underexplored. Current literature often undervalues the pivotal role of the learning rate, a hyperparameter that requires meticulous tuning to accelerate the convergence speed and FL model performance. In Fig. 2, we observe that employing adaptive learning rates on the first-order optimization algorithms is instrumental in enhancing the efficiency of FL models, especially for the large-scale FL optimization tasks, as highlighted in [14]. Fig. 2 shows a quantitative analysis of model accuracy and training loss curves on the CIFAR-100 dataset with IID data distribution and 20% client participation ratio. The results underscore the profound influence of adaptive learning rate mechanisms, which not only significantly improve model performance but also demonstrate its efficacy in accelerating the convergence rate and enhancing training stability.
D
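To make the "adaptive optimizer at the server" idea discussed above concrete, below is a minimal sketch of a FedAdam-style round (in the spirit of the adaptive federated optimizers cited there): the averaged client delta is treated as a pseudo-gradient and passed through Adam-style moment estimates. This is a generic illustration with placeholder hyperparameters, not FedAgg's actual update rule.

```python
import numpy as np

def fedadam_round(global_w, client_ws, m, v, lr=0.01, b1=0.9, b2=0.99, tau=1e-3):
    """One server round: average client model deltas, then apply an
    Adam-style update using the averaged delta as a pseudo-gradient."""
    delta = np.mean([w - global_w for w in client_ws], axis=0)
    m = b1 * m + (1 - b1) * delta              # first-moment estimate
    v = b2 * v + (1 - b2) * delta ** 2         # second-moment estimate
    new_w = global_w + lr * m / (np.sqrt(v) + tau)
    return new_w, m, v

w = np.zeros(4)
m, v = np.zeros(4), np.zeros(4)
clients = [w + np.random.default_rng(i).normal(scale=0.1, size=4) for i in range(5)]
w, m, v = fedadam_round(w, clients, m, v)
print(w)
```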
In this paper, we introduce LLaMA-Adapter, an efficient fine-tuning method that adapts LLaMA into a well-performed instruction-following model. Trained by Alpaca’s instruction-output data, our approach freezes the entire LLaMA model, and proposes a zero-initialized attention mechanism with superior resource efficiency.
Figure 2: Details of Zero-initialized Attention. We insert learnable adaption prompts into the last $L$ out of $N$ transformer layers of LLaMA. To progressively learn the instructional knowledge, we adopt a zero gating factor within the attention for stable training in the early training stages.
Specifically, in LLaMA’s higher transformer layers, we append a set of learnable adaption prompts as prefixes to the word tokens. Then, to avoid the noise from randomly initialized prompts at the early training stage, we equip the frozen self-attention layers with a learnable gating factor.
If the adaption prompts are randomly initialized, they might bring disturbance to the word tokens at the beginning of training, which harms the fine-tuning stability and effectiveness. Considering this, we modify the vanilla self-attention at the last $L$ layers to be zero-initialized variants, as shown in Figure 2.
Given a pre-trained LLaMA with an $N$-layer transformer, we first insert a set of learnable adaption prompts into its topmost $L$ layers ($L\leq N$). We denote the prompts as $\{P_{l}\}_{l=1}^{L}$, where $P_{l}\in\mathbb{R}^{K\times C}$, with $K$ denoting the prompt length for each layer and $C$ equaling the feature dimension of LLaMA's transformer. The prompting at the last $L$ layers can better tune the language representations with higher-level semantics.
B
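A minimal PyTorch-style sketch of the zero-initialized attention idea described above: learnable prompts act as extra keys/values in the top layers, and their contribution to the attention output is scaled by a gating factor initialised to zero, so training starts from the frozen model's behaviour. The single-head formulation, the shared prompt for keys and values, and the tanh gating are simplifications and assumptions, not the actual LLaMA-Adapter code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitPromptAttention(nn.Module):
    """Single-head sketch: K prompt tokens are attended to alongside the word
    tokens, and their attention contribution is gated by a zero-initialised scalar."""

    def __init__(self, dim: int, prompt_len: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))   # zero-initialised gating factor
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (T, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        pk, pv = self.prompt, self.prompt                  # prompts as extra keys/values
        scale = x.shape[-1] ** -0.5
        attn_x = F.softmax(q @ k.t() * scale, dim=-1)      # (T, T) word-token attention
        attn_p = torch.tanh(self.gate) * F.softmax(q @ pk.t() * scale, dim=-1)  # starts at 0
        return attn_x @ v + attn_p @ pv

layer = ZeroInitPromptAttention(dim=32, prompt_len=4)
print(layer(torch.randn(10, 32)).shape)   # torch.Size([10, 32])
```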
Compared with previous methods, which leverage particular architectures for VQA or include a complicated fusion encoder, S-ViLM is the most efficient and flexible for various vision-language tasks. S-ViLM achieves better performance than competing methods, with accuracies of 43.5% (+1.4%) and 46.4% (+0.5%) on MSRVTT-QA and MSVD-QA, respectively.
Two evaluation settings are considered: (1) linear probing where the backbone encoder is frozen and only the last linear classifier is trained and (2) end-to-end fine-tuning where both the backbone and the classifier are trained. Top-1 accuracy on UCF101 and HMDB51 is reported in Table 4.
In terms of fine-tuning, different tasks are trained independently with their own set of hyperparameters on the target dataset and more details can be found in Appendix A. For temporal action localization, we fix weights of the pre-trained video encoder and its grouping blocks to extract video features, which are then evaluated by G-TAD (Xu et al., 2020), a commonly used method for TAL.
VQA results on two open-ended datasets are shown in Table 2. To enable S-ViLM to deal with the VQA task, we add a fusion head adapted from BUTD (Anderson et al., 2018) by integrating video and text features with simple linear layers. Then a classifier is inserted after the fusion module to perform question answering as a classification problem.
Video action recognition. We select HMDB51 (Kuehne et al., 2011) containing 6,766 videos with 51 categories and UCF101 (Soomro et al., 2012) containing 13,320 videos with 101 categories. Both linear probing and fine-tuning the whole model are explored.
A
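For clarity on the two evaluation settings mentioned above, here is a minimal sketch: linear probing freezes the backbone and trains only the final linear classifier, while end-to-end fine-tuning updates both. The toy backbone, feature dimension, and class count are generic placeholders.

```python
import torch
import torch.nn as nn

def build_evaluator(backbone: nn.Module, feat_dim: int, n_classes: int,
                    linear_probe: bool):
    """Return the classification model and the parameters to optimise."""
    if linear_probe:
        for p in backbone.parameters():
            p.requires_grad = False          # freeze the pre-trained encoder
    head = nn.Linear(feat_dim, n_classes)    # only this is trained when probing
    model = nn.Sequential(backbone, head)
    params = [p for p in model.parameters() if p.requires_grad]
    return model, params

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model, params = build_evaluator(backbone, feat_dim=256, n_classes=101, linear_probe=True)
print(sum(p.numel() for p in params))        # only the classifier's parameters
```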
In all experiments, the encoder $e_{\theta}$ is a two-layer multi-layered perceptron with hidden layer dimensions of 512 and 256, and output dimension of 128. We trained the encoder for 50 epochs for the layer prediction and 30 epochs for the multilingual and image–caption benchmarks. We used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001 and a batch size of 1024 representations. We used $\tau=0.07$ for ContraSim training.
As a sentence representation, we experiment with [CLS] token representations and with mean pooling of token representations, since Del and Fishel (2021) noted a difference in similarity in these two cases. We report results with [CLS] representations in the main body and with mean pooling in Appendix A.1; the trends are similar.
Table 1: Layer prediction benchmark accuracy results for language and vision cases. For encoder-based methods we report mean and std over 5 random initializations. For ContraSim, we experiment with training with different datasets (rows) and evaluating on same or different datasets (columns).
Our method, ContraSim, achieves excellent results. When trained on one dataset’s training set and evaluated on the same dataset’s test set, ContraSim achieves perfect accuracy under this benchmark, with a large margin over CKA results. This holds for both language and vision cases. Even when trained on one dataset and evaluated over another dataset, ContraSim surpasses other similarity measures, showing the transferability of the learned encoder projection between datasets. This is true both when transferring across domains (in text, between news texts from the Penn Treebank and Wikipedia texts), and when transferring across classification tasks (in images, between the 10-label CIFAR-10 and the 100-label CIFAR-100).
We trained a different encoder for each model, as opposed to the single encoder we trained in all other experiments. This enables ContraSim to be used with representations with different dimensions. Results are summarized in Table 3. We report results with FAISS sampling. Across all pairs, ContraSim achieves superior results.
B
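A minimal sketch consistent with the training details above: an MLP encoder with hidden sizes 512 and 256 and output dimension 128, trained with a temperature-scaled contrastive (InfoNCE-style) objective at τ = 0.07. The way positives and negatives are paired below is a generic simplification; ContraSim's actual positive/negative construction (including FAISS sampling) is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-norm projections

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # z1[i] and z2[i] are projections of representations of the same input.
    logits = z1 @ z2.t() / tau                    # cosine similarities / temperature
    targets = torch.arange(z1.size(0))            # matching rows are the positives
    return F.cross_entropy(logits, targets)

enc = Encoder(in_dim=768)
x1, x2 = torch.randn(1024, 768), torch.randn(1024, 768)   # a batch of 1024 pairs
loss = info_nce(enc(x1), enc(x2))
loss.backward()
print(loss.item())
```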
Label ambiguity also affects conventional methods in a Manhattan world. For example, it is often the case that three orthogonal directions can be estimated using the Gaussian sphere representation of VPs [74]; however, the representation does not regard the difference between front and back directions. For a fair comparison in the evaluation, we selected the errors with the smallest angles from among the candidate ambiguous angles in both the conventional methods and our method. Therefore, the estimated pan-angle ranges from $-90^{\circ}$ to $90^{\circ}$.
Figure 9: Projected 3D VP/ADPs and orthogonal points of VP/ADPs in the Manhattan world to estimate camera rotation. These orthogonal points are obtained as VP/ADPs without camera rotation; that is, pan, tilt, and roll angles are $0^{\circ}$. Four VP/ADPs of the labels in the front, right, top, and back-right-bottom (BRB) are shown in a unit sphere as an example of VP/ADPs.
Considering generalized cases of label ambiguity, we annotated the image coordinates of VP/ADPs as follows. We $180^{\circ}$-rotationally align all labels based on two conditions: 1) the images have back labels without front labels, and 2) the images have right labels without front and left labels. Details of the number of labels can be found in the supplementary materials.
After removing label ambiguity, we ignored back labels because the training and test sets had only 0.1% and 0.3% back labels, respectively. Therefore, the VP estimator detected 13 points, that is, the five VPs (front, left, right, top, and bottom) and eight ADPs in Table 2. If all VP/ADPs are successfully detected, our method can estimate a unique rotation for over 98% of the images with two or more unique axes from the VP/ADPs in Table 5.
As shown in Table 2, we annotated the VP/ADPs of the image coordinates and labels on the basis of panoramic-image width and height. We found that some generated fisheye images had label ambiguity; that is, we cannot annotate unique VP/ADP labels for these images. For example, we cannot distinguish one image with a $0^{\circ}$ pan angle from another with a $180^{\circ}$ pan angle because we cannot determine the direction of travel of the cars from one image. In other words, we cannot distinguish front labels from back labels in Table 2. Similarly, we cannot distinguish left labels from right labels.
C
A matrix $S\in\mathbb{C}^{n\times n}$ is Schur if and only if, for each positive definite $Q\in\mathbb{C}^{n\times n}$, $Q^{*}=Q$, there exists a positive definite $H\in\mathbb{C}^{n\times n}$, with $H^{*}=H$, such that
Moreover, referring to item (vi), in this case the vector $p$ is not uniquely determined (up to rescaling) since there exist as many linearly independent eigenvectors as the geometric multiplicity of the zero eigenvalue. In fact, any such selection of $p$ is a valid one for item (vi) because, with disconnected networks, all the equivalent items of the theorem are true if and only if the solution of (13) converges to zero (a trivial synchronized motion).
The implication (i) $\Longleftarrow$ (ii) directly follows from the fact that (15) implies $\sigma(A_{k})\subseteq\sigma(A_{e,k})$. To show (i) $\Longrightarrow$ (ii), note that $\sigma(A_{k}^{*})=(\sigma(A_{k}))^{*}$, which means that the spectral abscissa (resp. the spectral radius) of $A_{k}$ and $A_{k}^{*}$ are the same.
To show the equivalence of the six statements in Theorem 1, the proof is structured as follows. We first prove the equivalence among statements (i), (ii) and (iii). Then, we prove the following chain of implications: (iii) $\Longrightarrow$ (iv), followed by (iv) $\Longrightarrow$ (v), (v) $\Longrightarrow$ (vi), and finally (vi) $\Longrightarrow$ (ii). This ensures that all six statements are equivalent.
Hence, as a consequence of Perron-Frobenius theory, the dominant eigenvalue $\mu_{0}$ of $M$ is real and associated with left and right eigenvectors having non-negative elements (Luenberger, 1979, Chapter 6.5, Theorem 1). In view of Gershgorin's Circle Theorem (Horn and Johnson, 2012, Chapter 6.1), the eigenvalues of $M$ lie in the union of the $N$ disks centered at $-D_{ii}$ and with radius $\sum_{j=1}^{N}\mathcal{W}_{ij}=D_{ii}$, $i=1,\dots,N$, which are all included in the left half plane and tangent to the imaginary axis at zero. Therefore, all the eigenvalues have non-positive real part and the only possible eigenvalue whose real part is not strictly negative is zero. $M$ is singular, because $M\mathbf{1}_{N}=\mathbf{0}_{N}$.
C
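The Schur characterization stated above is the discrete-time Lyapunov test; the truncated condition is presumably the Lyapunov equation $S^{*}HS - H = -Q$, which is the standard form of this lemma. Below is a minimal numerical sketch using scipy that solves this equation and checks that $H$ is positive definite; the matrices are placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def is_schur(S: np.ndarray, Q: np.ndarray) -> bool:
    """Check Schur stability via the discrete Lyapunov equation
    S^* H S - H = -Q, by verifying that the solution H is positive definite."""
    # scipy solves A X A^H - X + Q = 0, so pass A = S^* to obtain S^* H S - H = -Q.
    H = solve_discrete_lyapunov(S.conj().T, Q)
    return bool(np.all(np.linalg.eigvalsh((H + H.conj().T) / 2) > 0))

S = np.array([[0.5, 0.2], [0.0, 0.3]])   # spectral radius < 1, hence Schur
print(is_schur(S, np.eye(2)), np.abs(np.linalg.eigvals(S)).max() < 1)
```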
Here we describe the method and results for testing $H_{RQ_{3}}$: that changes in ONNX operator sets are correlated with increased defects.
This causal asymmetry may be attributable to differences in the requirements of DL model converters and DL compilers. The purpose of DL model converters is interoperability (section 2.1), making compatibility failures a focus and reducing the need for optimizations.
For the reasons noted above, we studied the DL model converters from PyTorch and TensorFlow into the ONNX IR (torch.onnx and tf2onnx, respectively). We note that among ONNX model converters, those for PyTorch and TensorFlow have the most failure data available on GitHub (Table 6).
(1) DL model converters lag behind ONNX releases (this might cause a failure to be mis-attributed to another release, i.e., offset in time); (2) Failures might be in any ONNX available release, not just the most recent (possibly inflating the failure rate of a given release);
We also measure the relationship, assessing the correlation in the number of changes in an ONNX release and the number of failures between its release and the next. We use the Spearman correlation, which is a commonly-used and robust metric for measuring a monotonic relationship between two variables (Fenton and Bieman, 2014).
C
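A minimal sketch of the correlation measurement described above: pair each ONNX release's number of operator-set changes with the number of converter failures attributed to the period up to the next release, and compute the Spearman rank correlation with scipy. The numbers below are placeholders, not the study's data.

```python
from scipy.stats import spearmanr

# Placeholder data: operator-set changes per ONNX release, and converter
# failures reported between that release and the next.
changes_per_release = [12, 30, 7, 45, 18, 25]
failures_per_release = [3, 9, 2, 14, 6, 7]

rho, p_value = spearmanr(changes_per_release, failures_per_release)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```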
FCUs can be classified into two blocks: generic or embedded, each presenting distinct advantages and limitations. Generic FCUs offer versatility, accommodating various frames and components for customizable configurations, which is beneficial for developers. However, their integration and calibration require technical expertise and time. Embedded FCUs, integrated into complete aerial platforms, provide a user-friendly experience with pre-calibrated hardware for reliable performance. This simplicity suits users seeking ready-to-fly solutions, but may limit adaptability to new technologies or specific needs.
The Alphanumeric Viewer is a component that monitors the state of specific variables of the system, e.g. sensor measurements, values corresponding to state estimation, references for controllers, etc. The information is distributed in different panes to facilitate the search for a specific variable of the system. On the other hand, tools like the Keyboard teleoperation are useful to manipulate the drone swarm in a simple way, sending position and speed commands, which allows one to check the system behavior or take control when the autonomous logic fails. This tool allows aerial systems to be tested and debugged quickly and securely.
Developing autonomous aerial systems from scratch is a challenging task that requires extensive expertise in many different areas, such as aerodynamics, control systems, sensor integration, or AI algorithms. This is a common problem in the robotics field, so in recent years the robotics community has witnessed the development of several software stacks focused on the control and guidance of ground robots and articulated robots. Navigation2 [1] and MoveIt [2] are two examples that have gained widespread adoption. However, the same level of collaboration and standardization has not been observed in the field of aerial robotics. Even when frameworks for aerial systems have been developed, they often have a narrow focus, like low-level control or working on a concrete platform, which limits their usefulness in broader applications. This fragmentation of effort can make it difficult to take advantage of the strengths of each framework in a particular application, where relying on well-tested and robust algorithms is critical to enabling developers to focus on innovation and customization rather than software engineering.
In 2018, Ebeid et al. presented a survey of open-source hardware and software comparing their main features [7]. Table I lists some relevant flight controller projects. These projects may cover both hardware and software development of these controllers. They range from Open Source Hardware (OSH) and Open Source Software (OSS) to proprietary commercial controllers. Furthermore, most of them allow their behavior to be simulated either fully in software, i.e. Software-in-the-Loop (SITL), or in a Hardware-in-the-Loop (HITL) fashion, where the autopilot code actually runs on the specific hardware of the controller. This makes it possible to improve the validation of the systems created, leading to more robust and reliable solutions.
This paper presents a novel open-source framework designed for the development of aerial robotic systems, with a strong focus on multi-robot orientation, platform independence, versatility, and modularity. These features have been validated through a series of experiments in both simulated and real-world scenarios, demonstrating the effectiveness of the framework. Although our initial experiments involved only two drones simultaneously, the results confirm the framework’s fundamental capabilities.
C
In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching transformational creativity. Then, we have examined perspectives beyond the creativity of their products. A creative process would require motivation, thinking, and perception, properties that current LLMs do not possess. The social dimension of creativity (usually referred to as the press) would demand to be placed in and influenced by a society of creative agents, requiring LLMs adaptive abilities that are only at a very initial stage. We have also framed the problem of creativity in LLMs, and, more in general, machine creativity, in terms of easy problems, i.e., the technical advancements that will be needed to support the algorithmic generation of outputs and the intrinsic hard problem of introducing forms of self-awareness in the creation process itself.
In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching transformational creativity. Then, we have examined perspectives beyond the creativity of their products. A creative process would require motivation, thinking, and perception, properties that current LLMs do not possess. The social dimension of creativity (usually referred to as the press) would demand to be placed in and influenced by a society of creative agents, requiring LLMs adaptive abilities that are only at a very initial stage. We have also framed the problem of creativity in LLMs, and, more in general, machine creativity, in terms of easy problems, i.e., the technical advancements that will be needed to support the algorithmic generation of outputs and the intrinsic hard problem of introducing forms of self-awareness in the creation process itself.
LLMs might be able to generate creative products in the future. However, the fact that they will be able to generate these outputs will not make them intrinsically creative. Indeed, as Floridi and Chiriatti (2020) puts it, it is not what is achieved but how it is achieved that matters. An interesting definition that considers both the what and how dimensions is the one from Gaut (2003): creativity is the capacity to produce original and valuable items by flair. Exhibiting flair means exhibiting a relevant purpose, understanding, judgment, and evaluative abilities. Such properties are highly correlated with those linked with process, i.e., motivation, perception, learning, thinking, and communication (Rhodes, 1961). Motivation is a crucial part of creativity, as it is the first stage of the process. Usually, it comes from an intrinsic interest in the task, i.e., the activity is interesting and enjoyable for its own sake (Deci and Ryan, 1985). However, LLMs lack the intention to write. They can only deal with “presented” problems, which are less conducive to creativity (Amabile, 1996). The process continues with the preparation step (reactivating store of relevant information and response algorithms), the response generation, and its validation and communication (Amabile, 1983). The last two steps allow one to produce different response possibilities and to internally test them in order to select the most appropriate. Again, LLMs do not contain such a self-feedback loop. At the same time, they are not trained to directly maximize value, novelty, or surprise. They only output content that is likely to follow given a stimulus in input (Shanahan, 2024b). In other words, they stop at the first stage of creative learning, i.e., imitation, not implementing the remaining ones, i.e., exploration and intentional deviation from conventions (Riedl, 2018).
In addition, we have also investigated the practical implications of LLMs and their creative role, considering both legal and societal impacts. In fact, the current legal framework does not appear to be completely suited to the fast-moving field of generative AI. Moreover, the impact of these technologies on creative professions and the arts is difficult to forecast at this stage, but will definitely be considerable.
Nonetheless, the outputs from such models are often considered creative by the person interacting with them or exposed to their best productions. Though this is apparently in contrast with what was discussed above, we can explain this phenomenon by considering the fact that our perception does not usually align with theoretical definitions of creativity. Indeed, we do not typically judge the creativity of a product by considering its potential novelty and surprise in relation to its producer, but rather in relation to ourselves. Something can be new for the beholder, leading to a new kind of novelty which we call B-novelty, as it is the one “in the eye of the beholder”, but not new for the producer nor the entire human history. The same applies to surprise: a product can violate the observer’s expectations in many ways without being unexpected considering the entire domain. In other words, the product of an LLM can appear to be creative - or be B-creative - even if it is not truly creative according to the theory of creativity.
C
(a) Existing SFUDA object detection works utilize feature alignment or sample generation to help with the pseudo labeling. These approaches mainly focus on exploiting the source model. (b) Our proposed SUP-ICI utilizes instance-level contrastive learning (CL) to make use of the foreground-background semantic information of the unlabeled target images. Our weighted entropy (WE) loss is also incorporated for label denoising.
Unsupervised domain adaptation (UDA) is a practical setting where the labeled source data are provided for adapting to the unlabeled target data. Most existing methods adopt feature alignment for UDA object detection. In [3], the authors build image-level and instance-level domain classifiers to implement feature alignment in an adversarial manner. Following this, a strong-weak domain alignment model [4] is proposed to focus on the local and global features separately. Authors of [5] and [6] employ multi-level domain feature alignment. Xu et al. [7] propose a categorical regularization framework exploiting the categorical consistency between image-level and instance-level predictions. A center-aware feature alignment method [8] is developed to allow the discriminator to pay more attention to the foreground features. Some other strategies to deal with foreground and background features are explored [9, 10]. In addition to the approaches focusing on the domain shifts, solving the problem of inaccurate label in target domain is another stream [11, 12, 13, 14, 15, 16, 17]. To adapt to the domain shift, the object detector is trained using the source labeled samples and the refined generated annotations in the target domain [11]. Cai et al. [12] simulate unsupervised domain adaptation as semi-supervised learning, and integrate the object relations into the measure of consistency cost between teacher and student modules. Cross-domain distillation [16] is utilized to alleviate the model bias in Mean-Teacher. He et al. [17] design a dual-branch self-distillation framework with a cross-domain perceiver for teacher-student mutual learning. Despite their efficacy, all these methods assume access to the source domain, which may cause privacy issues in the medical field. Unlike them, we perform UDA pulmonary nodule detection by only leveraging a pre-trained source model.
Source-free unsupervised domain adaptation (SFUDA) denotes the setting of adapting to the target domain given only a well-trained source model and unlabeled target data. One stream of the SFUDA methods is implicitly aligning the feature distribution of the source and target domain using the generative adversarial networks (GAN) [42, 43]. Another stream is directly exploiting the knowledge of the source model, especially because the source model can generate noisy labels on the unlabeled target domain [44, 45, 46]. Qiu et al. [47] propose a two-stage method named CPGA, which first utilizes the classifier of the source model to generate source prototypes via contrastive learning, and then align each pseudo-labeled target data to the corresponding source prototypes. Differently, in [48, 49], the authors define the local affinity of the target data, and encourage samples with high local affinity to have consistent predictions. Zhang et al. [50] present a new paradigm called DaC, combining the global class-wise pseudo labeling and the local neighborhood consistency. Besides, some works are tailored for the natural and medical image segmentation [51, 52, 53, 54, 55]. Despite the impressive progress these approaches have made for the image classification and segmentation tasks, they are not applicable to the detection tasks. They only transfer the semantic information, and are not able to achieve precise instance localization. In addition, there exists an inherent class imbalance in the detection network.
Deep learning has achieved remarkable success in various object detection tasks. In the medical field, deep networks are able to reach clinical expert-level performance, e.g. pulmonary nodule detection [1, 2], etc. Nonetheless, these networks are usually domain-specific. In other words, they work well when the training/source and target data are drawn from similar distributions. However, in real-world applications, data from different medical centers or scenarios often have different distributions. The networks obtained from the source domain usually have significant performance degradation on the target datasets, which impedes the applications of the deep learning algorithms in real-world medical image analysis. Furthermore, the medical data are time-consuming and expensive to annotate, leading to the lack of labeled training samples for the target domain. Therefore, object detection under the setting of unsupervised domain adaptation (UDA) has gotten a lot of attention in recent years, and considerable effort has been devoted to this problem setting. Since there exists domain shifts between the source and target domain in terms of the illumination, background, style, object appearance, etc, many works apply feature alignment to reduce the domain gap [3, 4, 5, 6, 7, 8, 9, 10]. Some other works regard the UDA task as training with noisy labels on the target domain [11, 12, 13, 14, 15, 16, 17].
However, medical data often involve private information, which makes them not shareable. Consequently, traditional UDA methods, which often rely on access to labeled source data, are not directly applicable in this context. Thus in this paper, we aim at the more realistic but challenging source-free unsupervised domain adaptation (SFUDA) setting for pulmonary nodule detection. In this setting, we are confined to utilizing only a pre-trained source model along with unlabeled samples from the target domain, thereby circumventing the need for direct access to the sensitive source data.
D
Next, we compare our method with the widely used K-means scenario reduction method. Specifically, we reduce a collection of scenarios (load realizations) into two reduced sets: one with 5 scenarios and the other with 2 scenarios. To generate load realizations, we use a Gaussian distribution centered around the input forecast, with a standard deviation of 50% of this forecast.
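For concreteness, a minimal sketch of this scenario-generation and K-means reduction step is given below. The forecast vector, the size of the original scenario pool, and the use of scikit-learn's KMeans are illustrative assumptions rather than details taken from the experiments described here.

```python
# Sketch: generate Gaussian load realizations around a forecast and reduce them
# with K-means. All concrete numbers below are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

forecast = np.array([1.2, 0.8, 1.5])   # hypothetical nodal load forecast
n_scenarios = 1000                     # hypothetical size of the original scenario pool

# Load realizations: Gaussian centered at the forecast, std = 50% of the forecast.
scenarios = rng.normal(loc=forecast,
                       scale=0.5 * np.abs(forecast),
                       size=(n_scenarios, forecast.size))

def reduce_scenarios(samples: np.ndarray, k: int):
    """Reduce `samples` to k representative scenarios via K-means.

    Returns the cluster centers (reduced scenarios) and their probabilities,
    taken proportional to the cluster sizes.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(samples)
    probs = np.bincount(km.labels_, minlength=k) / len(samples)
    return km.cluster_centers_, probs

reduced_5, probs_5 = reduce_scenarios(scenarios, k=5)
reduced_2, probs_2 = reduce_scenarios(scenarios, k=2)
```

Taking the cluster centers as the reduced scenarios, weighted by the fraction of realizations assigned to each cluster, is one common way of feeding a K-means reduction into a scenario-based dispatch problem.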
Comparison of the performance of our method and the K-means scenario reduction method for the 118-bus system. The resulting average total costs and solving times across 100 test instances are compared against the benchmark solutions obtained from CVXPY.
Comparison of the performance of our method and the affine policy method for solving risk-limiting dispatch on the 2000-bus system. The resulting total costs and solving times are averaged over 1000 test instances and compared against the benchmark solutions obtained from CVXPY.
We assess the performance of our method and the K-means scenario reduction approach across different experimental configurations by comparing their average total costs and solving times over 100 test instances against the benchmark solutions obtained using CVXPY. The results are detailed in Table V. In particular, all cost values are normalized relative to CVXPY's solutions, and we report the average increase in total cost in the table.
The results of using different methods to solve the risk-limiting dispatch problem in (3) on the 118-bus system and the 2000-bus system are provided in Table II and Table III, respectively. The average total costs of different methods are compared against the cost values produced by CVXPY.
C
Conventionally, BNN researchers have focused on improving predictive performance using human-crafted priors over network parameters or predictive functions (e.g., Louizos et al., 2017; Tran et al., 2020; Matsubara et al., 2021; Fortuin et al., 2021a). However, several concerns have been raised with BNN priors (Wenzel et al., 2020; Noci et al., 2021). It also stands to reason that the vast store of semantic information contained in unlabelled data should be incorporated into BNN priors, and that the potential benefit of doing so likely exceeds the benefit of designing better, but ultimately human-specified, priors over parameters or functions.
Figure 3: BNN Prior Predictives. We investigate prior predictives by computing the probability $\rho$ that particular image pairs have the same label under the prior, and examining the distribution of $\rho$ across different sets of image pairs. We consider three sets of differing semantic similarity: (i) augmented images; (ii) images of the same class; and (iii) images of different classes. Left: Conventional BNN prior. Right: Self-supervised BNN learnt prior predictive. The self-supervised learnt prior reflects the semantic similarity of the different image pairs better than the BNN prior, which is reflected in the spread between the different distributions.
Graphical Evaluation.  First, we visualise the BNN and self-supervised BNN prior predictives (Figs. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-supervised prior reflects a belief that image pairs with higher semantic similarity are more likely to have the same label. In particular, the self-supervised prior is able to distinguish between image pairs of the same class and of different classes, even without access to any ground-truth labels.
We then further demonstrate that self-supervised BNN prior predictives reflect input-pair semantic similarity better than standard BNN priors (§4). To do so, we develop a methodology to better understand the prior predictive distributions of BNNs. Our approach is to measure the probability that pairs of data points have the same label under the prior. Intuitively, pairs of points that are more semantically similar should be more likely to have the same label under the prior predictive. Applying this methodology, we see that the functional priors learned by self-supervised BNNs distinguish between same-class and different-class input pairs much better than those of conventional BNNs (Fig. 1b).
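A minimal sketch of this pair-based evaluation is given below, assuming a small placeholder network and an isotropic Gaussian prior over its weights (neither is the architecture or prior used in the experiments): the probability that two inputs receive the same label under the prior predictive is estimated by Monte Carlo over prior weight draws.

```python
# Sketch: estimate rho = P(two inputs get the same label) under a BNN prior
# predictive by sampling weights from the prior. Architecture and prior scale
# are placeholder assumptions.
import torch
import torch.nn as nn

n_classes, n_prior_samples = 10, 200

def sample_prior_net():
    # Small illustrative classifier; weights drawn from an isotropic Gaussian prior.
    net = nn.Sequential(nn.Flatten(),
                        nn.Linear(32 * 32 * 3, 128), nn.ReLU(),
                        nn.Linear(128, n_classes))
    for p in net.parameters():
        nn.init.normal_(p, mean=0.0, std=0.1)   # assumed prior scale
    return net

@torch.no_grad()
def same_label_prob(x1: torch.Tensor, x2: torch.Tensor) -> float:
    """Monte Carlo estimate of P(y1 == y2) under the prior predictive."""
    rho = 0.0
    for _ in range(n_prior_samples):
        net = sample_prior_net()            # one draw from the weight prior
        p1 = net(x1).softmax(dim=-1)        # predictive class probabilities
        p2 = net(x2).softmax(dim=-1)
        rho += (p1 * p2).sum().item()       # P(same label | this weight draw)
    return rho / n_prior_samples

# Placeholder image pair; in practice these would be augmented, same-class, or
# different-class pairs from the dataset of interest.
x1, x2 = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32)
print(same_label_prob(x1, x2))
```

Comparing the distribution of this estimate across augmented, same-class, and different-class pairs is what Figures 1 and 3 report.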
Figure 1: Self-Supervised Bayesian Neural Networks. (a) Pre-training in self-supervised BNNs corresponds to unsupervised prior learning. We learn a model with a prior distribution such that augmented images likely have the same label and distinct images likely have different labels under the prior predictive. (b) Self-supervised BNN priors assign higher probabilities to semantically consistent image pairs having the same label compared to semantically inconsistent image pairs. Here, semantically consistent image pairs have the same ground-truth label, and semantically inconsistent image pairs have different ground-truth labels. The plot shows a kernel density estimate of the log-probability that same-class and different-class image pairs are assigned the same label under the prior.
D
The assumption in subsection 3.2 on $\mu_X$ is satisfied by any Borel measure on a variety of underlying metric spaces that arise in geometric data analysis, including doubling metric spaces, many Banach spaces, and any Hilbert space with the norm metric. Most importantly, it holds for most of the spaces of values taken by shape descriptors, including the barcode space of topological data analysis [38].
Assumptions imposed in the geometric data analysis literature on the probability measure of interest (in the above example this corresponds to $\breve{\eta}_{t,T}$) essentially provide a lower bound on the volume of a ball that has nonempty intersection with the object as a polynomial or affine function of the radius. Notably, this will be the case if the measure is assumed uniform, has a positive density, or more generally satisfies the $(a,b)$-standard assumption [66, 48, 28].
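For reference, one common formulation of the $(a,b)$-standard assumption in this literature (stated here as an illustration; the exact variant assumed above may differ) requires every ball centered on the support to carry mass at least polynomial in its radius:
\[
\mu\bigl(B(x,r)\bigr)\;\geq\;\min\bigl(1,\;a\,r^{b}\bigr)\qquad\text{for all }x\in\operatorname{supp}(\mu)\text{ and }r>0,
\]
where $B(x,r)$ is the closed ball of radius $r$ centered at $x$ and $a,b>0$ are fixed constants.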
By subsection 3.2 and subsection 3.2, a natural way to formulate a test for time-invariance of the geometric features is via the corresponding ball volume processes of the shape descriptors. In addition, results in Section 2 show that we can express approximation errors in terms of the Gromov-Hausdorff distance $d_{GH}$, which allows us to translate dependence assumptions imposed on the latent process to those of the $(\mathscr{B},d_{\mathscr{B}})$-valued process (see subsection 4.1 and subsection 4.1). It follows from
Although it is possible to apply this characterization directly for inference, we can often use a more direct approach. Specifically, our second result establishes general conditions under which the geometric features of a stochastic process can in fact be fully characterized by the process of ball volumes (subsection 3.2). The idea is that, although the process generating the point clouds might not satisfy these conditions, the induced process on the space of values for a shape descriptor (such as those mentioned above) will.
A consequence of the preceding theorems is that one can often characterize the geometric features as captured by shape descriptors of a Polish-valued process $X$ via the ball volume processes of the fidis. This is convenient for hypothesis testing. For example, note that the ball volume corresponding to the fidi on the time set $J$ is given by
D