Datasets:
_id
stringlengths 36
36
 text
stringlengths 5
665k
 marker
stringlengths 3
6
 marker_offsets
sequence  label
stringlengths 28
32


944a766396784f5f8353d7a54d000d23  Our designed network is conceptually simple and modifies twostep approaches [1]} to jointly optimized networks through a message passing module. The first myocardium segmentation network takes LGECMR as an input and generates a probability map. A message passing module then combines the information of probability maps with the original images to reproduce a masked input data for the scar segmentation network. This effectively suppresses noises or artifacts from nonROI regions in scar detection. It is worthy to mention that our framework is highly flexible to various types of segmentation networks for both tasks. In this paper, we employ pretrained TransUNet [2]}, which shows superior performance on medical image segmentation tasks over other commonly used networks such as UNet [3]} An overview of our proposed approach is presented in Fig. REF .
<FIGURE>  [1]  [
[
77,
80
]
]  https://openalex.org/W3000054715 
cc0da086a9524af3bb61430224bd4a2d  In this study, we develop a more accurate approximation method that allows us to describe the dynamics, as well as the equilibrium, of contagion on complex networks in an almost exact manner. Our method, called the messagepassing approach, is a more elaborated version of the conventional meanfield approximation in that the messagepassing approach takes into account the directionality of the spreading process [1]}Messagepassing approaches are approximation methods that have been developed in statistical physics.
In the study of ferromagnetism, Ising model, in which each of atomic spins is in one of two states \(\lbrace 1,+1\rbrace \) , was initially analyzed based on the meanfield theory developed by [2]}. A more accurate solution of Ising model was obtained using Bethe approximation [3]}, and then a variant of the messagepassing approximation, called the cavity method, was developed as an extended version of Bethe approximation, which can be applied to wider classes of models in statistical physics [4]}..
In the meanfield method, the probability of a neighboring player being active is given as a function of the probability that the neighbor's neighbors are active, and a fixed point of the selfconsistent equation corresponds to the steady state of the spreading process, that is, a Nash equilibrium [5]}, [6]}, [7]}. However, the recursive expression of the selfconsistent equation necessarily incorporates a repercussion of peer effects among neighboring players because social influence, or “messages,” may be transmitted multiple times between the neighbors. The messagepassing method overcomes this problem by imposing a directionality condition that one of the neighbors to which a message will be passed at time step \(t+1\) is not yet active at time step \(t\) . In general, meanfield methods are accurate enough in the limit of large degrees, where network density is sufficiently high, but they do not necessarily provide good approximations for sparse networks [8]}, [9]}. This is because the meanfield assumption would not be harmful for sufficiently dense networks in which nodes are uniformly well connected. On the other hand, for sparse networks in which there is a large heterogeneity in the connectivity of each node, messagepassing approaches are generally more accurate [8]}, [9]}.
Nevertheless, in the previous studies on network games, little quantitative validation has been performed to examine if the meanfield approximation correctly captures the “true” Nash equilibrium.
Indeed, our quantitative validation based on synthetic networks confirms that the meanfield method tends to overestimate the equilibrium fraction of active players, while the messagepassing method can make an almost exact prediction for both types of contagion we examined.
 [6]  [
[
1333,
1336
]
]  https://openalex.org/W1533368239 
835a1f2e8c814e5c88a65cda3ac543b7  These studies in the field of network science usually take the threshold rules as given, but our work provides a microfoundation from a gametheoretic perspective; in coordination games and utilitybased games, the threshold value is respectively obtained as a function of the payoff parameters and the preference parameters. Many studies analyzing fractionalthreshold models employ variants of the messagepassing equation proposed by [1]} and [2]} to calculate the steadystate equilibrium.
While the messagepassing approach is not new in network science, to the best of our knowledge, we are the first to provide formal proofs for the existence of and convergence to a fixed point of the messagepassing equation. Furthermore, we derive generalized cascade conditions, in both monoplex and multiplex models, that include the conventional cascade conditions obtained in previous studies as special cases [3]}, [1]}, [5]}, [6]}, [7]}.
 [3]  [
[
908,
911
]
]  https://openalex.org/W2114696370 
88660a3842aa4cd6a24010c279e736cd  These studies in the field of network science usually take the threshold rules as given, but our work provides a microfoundation from a gametheoretic perspective; in coordination games and utilitybased games, the threshold value is respectively obtained as a function of the payoff parameters and the preference parameters. Many studies analyzing fractionalthreshold models employ variants of the messagepassing equation proposed by [1]} and [2]} to calculate the steadystate equilibrium.
While the messagepassing approach is not new in network science, to the best of our knowledge, we are the first to provide formal proofs for the existence of and convergence to a fixed point of the messagepassing equation. Furthermore, we derive generalized cascade conditions, in both monoplex and multiplex models, that include the conventional cascade conditions obtained in previous studies as special cases [3]}, [1]}, [5]}, [6]}, [7]}.
 [6]  [
[
926,
929
]
]  https://openalex.org/W2052177518 
5421b098ae0b4e4ba49ef18577d7d6c3  There is a strand of literature on continuousaction games on networks in which each player takes an action represented by a real value \(x\ge 0\) [1]}, [2]}. Typically, player \(i\) maximizes the following quadratic utility function
\(u_i(x_i;{\bf {x}}_{i}) = \alpha x_i  \frac{1}{2}x_i^2 +\gamma \sum _{j\ne i} \mathcal {A}_{ij}x_ix_j,\)
 [2]  [
[
154,
157
]
]  https://openalex.org/W3123506665 
c49dd44efbb148459484526db8a88ec6  We call the condition \(\lim _{q\downarrow \rho _0} G^\prime (q)>1\) the generalized firstorder cascade condition since it is essentially a generalized version of the standard cascade condition proposed by [1]}, [2]}, and [3]}:
 [2]  [
[
214,
217
]
]  https://openalex.org/W2028985170 
d7f5e6e0e5324a52b332192db819e193  In reality, people are connected to each other in a wide variety of social contexts. These include online spaces such as Twitter and Facebook, as well as physical spaces such as schools and work places.
In network science, such situations are modeled as multiplex networks, in which each layer represents a single network formed in a particular social context [1]}If a common set of nodes forms multiple networks, the set of networks (or layers) is called a multiplex network. If the sets of nodes in different layers are not common, the networks are collectively called multilayer networks..
 [1]  [
[
360,
363
]
]  https://openalex.org/W2965889227 
0716a8ceaa0c46498ba9d3b517d4a938  Let \(q_t^{\ell }\) denote the probability of a randomly chosen neighbor in layer \(\ell \in \lbrace A,B\rbrace \) being active. The recursion equations for \(q_t^A\) and \(q_t^B\) are given by [1]}, [2]}:
\(q_t^{A} &= \rho _0 + (1\rho _0)\sum _{k_B=0}^\infty p_{k_B}\sum _{m_B=0}^{k_{B}}\mathcal {B}_{m_B}^{k_B}\left(q_{t1}^{B}\right)\sum _{k_A=1}^\infty \frac{k_A p_{k_A}}{z_A}\sum _{m_A=0}^{k_A1}\mathcal {B}_{m_A}^{k_A1}\left(q_{t1}^{A}\right)\mathcal {F}(m_A,m_B,\mbox{$k$}), \\&\equiv g^{(A)}(q_{t1}^A,q_{t1}^B), \\q_t^{B} &= \rho _0 + (1\rho _0)\sum _{k_A=0}^\infty p_{k_A}\sum _{m_A=0}^{k_{A}}\mathcal {B}_{m_A}^{k_A}\left(q_{t1}^{A}\right)\sum _{k_B=1}^\infty \frac{k_B p_{k_B}}{z_B}\sum _{m_B=0}^{k_B1}\mathcal {B}_{m_B}^{k_B1}\left(q_{t1}^{B}\right)\mathcal {F}(m_A,m_B,\mbox{$k$}), \\&\equiv g^{(B)}(q_{t1}^A,q_{t1}^B).\)
 [1]  [
[
198,
201
]
]  https://openalex.org/W2027367074 
1cd09ec47cc54965ba80fcba6786e5a3  For a given \(V_\mu (n)\) the Wilson Dirac operator with optional clover term (REF ) is defined as [1]}
\(D_\mathrm {W}(n,m)=\sum _\mu \gamma _\mu \nabla _\mu ^\mathrm {std}(n,m)\frac{ar}{2}\triangle ^\mathrm {std}(n,m)+m_0\delta _{n,m}+aC(n,m)\)
 [1]  [
[
100,
103
]
]  https://openalex.org/W2172949211 
af12af854f004ce5bceba74dd6dbf3e2  with \((\rho _1,\rho _2,\rho _3,\rho _4)\equiv (64,16,4,1)/432\) and \((\lambda _0,\lambda _1,\lambda _2,\lambda _3,\lambda _4)\equiv (240,8,4,2,1)/64\) .
The sum in (REF ) extends over the positive Euclidean directions, i.e. \(\mu \in \lbrace 1,\ldots ,4\rbrace \) , and the bare quark mass \(m_0\) undergoes both additive and multiplicative renormalization.
In Eq. (REF ) the last sum extends over (positive and negative) indices \((\nu ,\rho ,\sigma )\) whose absolute values are pairwise unequal and different from \(\mu \) (which is \(>0\) ).
In Eq. (REF ) the last sum extends over indices \((\mu ,\nu ,\rho ,\sigma )\) whose absolute values are pairwise unequal.
Here \(W_\mathrm {dir}(n)\) denotes a link in direction “dir” which may be onaxis (dir=\(\mu \) ) or offaxis with Euclidean length \(\sqrt{2}\) (dir=\(\mu \nu \) ), \(\sqrt{3}\) (dir=\(\mu \nu \rho \) ), \(\sqrt{4}\) (dir=\(\mu \nu \rho \sigma \) ).
This \(W_\mathrm {dir}(n)\) is defined as the average of all chains of \(V\) links that connect \(n\) and \(n+\mathrm {dir}\) with the minimum number of hops.
How the \(V\) links (contained in \(W\) and \(C\) ) relate to the original \(U\) links has been explained in Sec. .
As a result, \(W_\mathrm {dir}(n)\) is a legitimate parallel transporter from \(n+\mathrm {dir}\) to \(n\) , see Tab. REF for details.
More details on the physics motivation and the freefield behavior of this operator are given in Refs. [1]}, [2]}.
 [1]  [
[
1455,
1458
]
]  https://openalex.org/W2016049803 
b166acc035474541b8f2da6fc020f9eb  For a given \(V_\mu (n)\) the Susskind (“staggered”) Dirac operator is defined as [1]}, [2]}
\(D_\mathrm {S}(n,m)=\sum _{\mu } \eta _\mu (n)\,\frac{1}{2}\,[V_{\mu }(n)\delta _{n+\hat{\mu },m}V_{\mu }^\dagger (n\hat{\mu })\delta _{n\hat{\mu },m}] + m_0\delta _{n,m}\)
 [1]  [
[
83,
86
]
]  https://openalex.org/W2160049695 
845918f7886a458abf859bbb2edf4e0c  For a given \(V_\mu (n)\) the Susskind (“staggered”) Dirac operator is defined as [1]}, [2]}
\(D_\mathrm {S}(n,m)=\sum _{\mu } \eta _\mu (n)\,\frac{1}{2}\,[V_{\mu }(n)\delta _{n+\hat{\mu },m}V_{\mu }^\dagger (n\hat{\mu })\delta _{n\hat{\mu },m}] + m_0\delta _{n,m}\)
 [2]  [
[
89,
92
]
]  https://openalex.org/W4255397497 
4fbcbe80e2c34b00abd7245b47762a35  In Figs. REF –REF no sign of numerical imprecision is seen; the three symbols at a given iteration count (for either sp or dp) are just horizontally displaced.
A second issue is worth mentioning.
On the Skylake architecture the Brillouin operator converges in about twice the time of the Wilson operator.
The additive mass shift of the two Dirac operators is roughly in the samePreliminary spectroscopy on a handful of configurations suggests \(M_\pi ^\mathrm {wils}\simeq 760\,\mathrm {MeV}\) and \(M_\pi ^\mathrm {bril}\simeq 670\,\mathrm {MeV}\) . ballpark.
Thus the timings of Sec. and Sec. (where \(D_\mathrm {B}\) seemed about an order of magnitude more expensive than \(D_\mathrm {W}\) ) do not represent the last word on the relative cost of these two Dirac operators.
The reason is the more compact eigenvalue spectrum of \(D_\mathrm {B}\) [reaching up to \(\mathrm {Re}(z)=2+am_0\) in the free field case] in comparison to \(D_\mathrm {W}\) [which extends to \(\mathrm {Re}(z)=8+am_0\) ].
Hence at fixed pion mass, the matrixvector cost explosion (in trading \(D_\mathrm {W}\) for \(D_\mathrm {B}\) ) is mitigated by a reduced condition number (see also the discussion in Refs. [1]}, [2]}, [3]}).
 [3]  [
[
1210,
1213
]
]  https://openalex.org/W2577574760 
1de591b7a8a94b30ae22797b9c93800a  Similar to sparse convolutions, an efficient implementation of SkipConv requires blockwise structured sparsity in the feature maps [1]}, [2]}, for two main reasons.
First, block structures can be leveraged to reduce the memory overhead involved in gathering and scattering of input and output tensors [1]}. Additionally, many hardware platforms perform the convolutions distributed over small patches (\(8\times 8\) ), so do not leverage any finegrained spatial sparsity smaller than these block sizes.
 [1]  [
[
133,
136
],
[
303,
306
]
]  https://openalex.org/W2963896595 
60c00cbe27114845b8eb079ebb42a258  Similar to sparse convolutions, an efficient implementation of SkipConv requires blockwise structured sparsity in the feature maps [1]}, [2]}, for two main reasons.
First, block structures can be leveraged to reduce the memory overhead involved in gathering and scattering of input and output tensors [1]}. Additionally, many hardware platforms perform the convolutions distributed over small patches (\(8\times 8\) ), so do not leverage any finegrained spatial sparsity smaller than these block sizes.
 [2]  [
[
139,
142
]
]  https://openalex.org/W3035678286 
077beae6362847fcb3efecdf19cfff9d  We use EfficientDet [1]}, the state of the art architecture for object detection, and apply SkipConv on top of it. We conduct our experiments on D0 to D3 as the most efficient configurations [1]}, though more expensive configurations, D4 to D7, can similarly benefit from SkipConv.
Each model is initialized with pretrained weights from MS COCO dataset [3]} and trained using SGD optimizer with momentum \(0.9\) , weight decay \(4e5\) and an initial learning rate of \(0.01\) for 4 epochs. We decay the learning rate of a factor of 10 at epoch 3. All models are trained with minibatches of size 4 using four GPUs, where synchronized batchnorm is used to handle small effective batch sizes.
We use SkipConv with learned gates, which is trained for each EfficientDet configuration using the sparsity loss coefficient set to \(\beta =0.01\) .
During training we apply random flipping as data augmentation. The clip length is set to 4 frames both for training and inference.
<FIGURE>  [1]  [
[
20,
23
],
[
192,
195
]
]  https://openalex.org/W3034971973 
0bab4bee7d7844a5b62a73cc1f78c1b6  Moreover, we observe that SkipConv outperforms DFF [1]} both in terms of accuracy and computational cost.
We hypothesize that DFF performances, solely relying on opticalflow to warp features across frames, are sensitive to the accuracy of the predicted motion vectors.
However, there are lots of small objects (distant vehicles) in this dataset for which optical flow predictions are noisy and inaccurate.
Finally, our experiments demonstrate that SkipConv achieves the state of the art accuracy on UADETRAC dataset, reported by SpotNet [2]}, with orders of magnitude less computes (\(6.36\) vs \(972.0\) GMAC).
<TABLE>  [1]  [
[
52,
55
]
]  https://openalex.org/W2552900565 
dd2b75bcb67b4dbc926f449ae9843270  We conduct our experiments on the JHMDB dataset [1]}, a collection of 11,200 frames from 316 video clips, labeled with 15 body joints.
Video sequences are organized according to three standard train/test partitions and we report average results over the three splits.
We evaluate the performance using the standard PCK metric [2]}. Given a bounding box of the person with height \(h\) and width \(w\) , PCK considers a candidate keypoint to be a valid match if its distance with the groundtruth keypoint is lower than \(\alpha \cdot \max (h, w)\) . We set \(\alpha =0.2\) . Our experimental setup is consistent with prior works [3]}, [4]}, [5]}.
 [5]  [
[
642,
645
]
]  https://openalex.org/W2991833656 
346185ae232a48b0ab2e6f9b79132005  We investigate how the theoretical speed ups, measured by MAC count reductions, translate to actual wall clock runtimes. Following [1]} we use im2col based implementation of sparse convolutions. This algorithm reformulates the convolution as a matrix multiplication between input tensor and convolution kernels flattened as two matrices. The multiplication is computed only on nonsparse columns while filling the other columns by zero. We report the overall wall clock time spent on conv layers vs SkipConv layers for a HRNetw32 architecture. The runtimes are reported on CPUIntel Xeon e51620 @ 3.50GHz..
As reported in Table REF , the MAC count reductions obtained by SkipConv translate to wall clock runtimes. The improvements on runtimes are roughly half of the theoretical speed ups as MAC count does not count for memory overheads involved in sparse convolutions. The gap between theoretical and real runtime improvements can be further reduced through highly optimized CUDA kernels as demonstrated in [2]}, [3]}.
 [1]  [
[
131,
134
]
]  https://openalex.org/W2604998962 
685b2706019c42919f9282ca4e095647  In this section we analyze the amount of sparsity induced by SkipConv in different levels of a backbone network.
To this end, we refer to the pose estimation experiments described in Sec. 4.2 of the main paper, and we rely on the same setting by considering the JHMDB dataset [1]} with a HRNetw32 backbone network [2]}.
We train the SkipConv model with Gumbel gates under different sparsity objectives, by varying \(\beta \) in \([1e5,5e5,10e5,15e5]\) .
For completeness, we also report the performance of these models, that score \([0.95, 0.94, 0.93, 0.91]\) in PCK respectively.
We then measure how the firing probability of gates in SkipConv changes at different depths in the network.
 [1]  [
[
277,
280
]
]  https://openalex.org/W2034014085 
14c3cb7345e142d886e3963b6450d2c1  The organizer baseline F1 scores for the validation and test data are 0.58 and 0.654 respectively.
The details of the baseline are given in [1]}.
The obtained results with our submitted runs are given in Table REF .
For SkipGRun, we achieved 0.6913 of F1 score with 0.6952 and 0.6893 of precision and recall respectively.
The SkipGRun outperformed the CbowRun by around 0.40 in terms of F1 score.
CbowRun outperformed the organizers' baseline by a slight margin, however, SkipGRun outperformed the baseline by around 0.4 in terms of averaged F score.
 [1]  [
[
140,
143
]
]  https://openalex.org/W3115081393 
0fba4fb1a8f142f482d2969b5e2944c8  Our first main cluster combinatorics conjecture (Conjecture REF ) asserts that every cluster monomial in \({A}({\rm SL}_k,\mathbb {S})\) is the invariant of
a planar tagged diagram and also of a tagged diagram with no cycles on interior vertices. This conjecture extends those from [1]}, [2]} to higher rank, and more novelly to surfaces with punctures.
 [2]  [
[
289,
292
]
]  https://openalex.org/W2963568783 
68cbfeaac4c24847b2ac220df29cf847  Let \(\mathbb {S} = (\mathbf {S},\mathbb {M})\) be an oriented marked surface [1]}. The set of marked points \(\mathbb {M}\) decomposes into the set of punctures
\(\mathbb {M}_{\circ } := \mathbb {M} \cap \text{int }\mathbf {S}\) and the set of boundary points \(\mathbb {M}_{\partial } := \mathbb {M} \cap \partial \mathbf {S}\) . Denote by \(S_{g,h}\) the oriented closed genus \(g\) surface with \(h\) punctures and by \(D_{n,h}\) the \(n\) gon with \(h\) punctures.
 [1]  [
[
79,
82
]
]  https://openalex.org/W2153279415 
fa7b9751e369495d92b9fc856e78d96d  Remark 3.8 We do not prove here that our initial clusters provide rational coordinate systems on \(\mathcal {A}^{\prime }({\rm SL}_k,\mathbb {S})\) although we believe this should be true. One might be able to prove these statements by mimicking the proofs of these statements given in [1]}, or by establishing the cluster fibration property from Remark REF .
 [1]  [
[
287,
290
]
]  https://openalex.org/W1977026965 
b8527e52546647c7bf9d8f1f0b8a7544  cf. [1]} or [2]}.
 [1]  [
[
4,
7
]
]  https://openalex.org/W2963663079 
9e122ff6e5f04b47871a3f243b2e0aeb  As an example, whenever a tensor diagram \(T\) has a crossing, one can apply the following crossing removal relation [1]}:
\(\begin{tikzpicture}[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (1,1)(.55,.2);[thick] (.55,.2)(.2,1);\node at (1.2,.5) {a};[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (.2,1)(.25,.2);[thick] (.25,.2)(1.0,1);\node at (1.2,.5) {a};\node at (.4,.5) {b};\node at (1.5,0) {=};\node at (4.0,.8) {\small a};\node at (4.,.8) {\small b};\node at (3.7,.05) {\small ac};\node at (6.4,.8) {\small b};\node at (6.4,.8) {\small a};\node at (6.7,.05) {\small b+c};\node at (5.25,.5) {\small c};\node at (5.2,.7) {\tiny b+ca};\end{tikzpicture}\node at (2.5,0) {\displaystyle \sum _{c }};[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (4.4,1)(4.4,.4);[fill= black] (4.4,.3) circle [radius = .08];[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (4.4,.25)(4.4,.4);[fill= white] (4.4,.5) circle [radius = .08];[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (4.4,.58)(4.4,1);[thick] (4.4,1)(4.4,1.2);[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (6.1,1)(6.1,.4);[fill= white] (6.1,.3) circle [radius = .08];[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (6.1,.25)(6.1,.4);[fill= black] (6.1,.5) circle [radius = .08];[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (6.1,.58)(6.1,1);[thick] (6.1,1)(6.1,1.2);[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (4.4,.3)(6.0,.3);[thick, decoration={markings,mark=at position 1 with {[scale=1.7]{>}}},postaction={decorate},shorten >=0.4pt] (6.1,.5)(4.5,.5);\)
 [1]  [
[
118,
121
]
]  https://openalex.org/W3100433565 
97d1822903b148c9b652d84d7deccce3  The two compositions \(\rho \circ \sigma \) and \(\sigma \circ \rho \) correspond to Dehn twists about simple closed curves with geometric intersection number two. Any two such mapping classes generate the pure mapping class group of \(S_{0,4}\) , see [1]}. There are clearly finitely many tagged triangulations of \(S_{0,4}\) modulo the pure mapping class group.
 [1]  [
[
256,
259
]
]  https://openalex.org/W4240028494 
ff88f3885c3f498a801d5d2467a8fc34  Let \(\kappa _{\mu }:=(\mu /\ell )^{1/2}\) with \(\ell :=\mathcal {G}\) being the total length of the graph \(\mathcal {G}\) . Clearly, the constant function \(\kappa _{\mu }\) is always a solution of (REF ) in \(H_{\mu }^1(\mathcal {G})\) for some \(\lambda \in \mathbb {R}\) , and hence a constrained critical point of \(E(\cdot \,,\mathcal {G})\) on \(H^1_\mu (\mathcal {G})\) . Furthermore, following [1]}, we can give a variational characterization of \(\kappa _{\mu }\) . In the next proposition and in the rest of the paper, we use the notation \(d^2_{H^1_\mu (\mathcal {G})} E(u, \mathcal {G})\) for the constrained Hessian of \(E(\cdot \,\mathcal {G})\) on \(H^1_\mu (\mathcal {G})\) . This is different from the unconstrained Hessian \(E^{\prime \prime }(u,\mathcal {G})\) in general, and we refer to the proof of Lemma REF below for more details (see also [2]} for a general exposition on “second order derivatives" on Riemannian manifolds).
 [1]  [
[
412,
415
]
]  https://openalex.org/W2889679869 
0c03645e5d2842299d7bcac47e37e02e  We are now ready to state a rather general minmax principle which combines the monotonicity trick [1]} and the minmax theorem with second order information by Fang and Ghoussoub [2]}, see also [3]}. A similar result, in the unconstrained setting, was recently proved in [4]}.
 [2]  [
[
180,
183
]
]  https://openalex.org/W2130696197 
b7fdd2c5b7694673ae1110ae3f757343  However, the radiative mechanism powering flares is still disputed. The most common proposed mechanisms are: synchrotron with a cooling break; synchrotron selfcompton (SSC); inverse compton (IC); and Synchrotron [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}.
Simultaneous determination during an Xray flare of the photon index (\(\Gamma \) ) in the nearinfrared (NIR; \(\Gamma _{\rm {IR}}\) ) and Xray (\(\Gamma _{\rm {X}}\) ) bands allows us to discriminate synchrotron and synchrotron with a cooling break from the other radiative mechanisms.
It is expected that \(\Gamma _{\rm {X}}=\Gamma _{\rm {IR}}\) or \(\Gamma _{\rm {X}}=\Gamma _{\rm {IR}}+0.5\) for the synchrotron and synchrotron with a cooling break model, respectively [18]}, [19]}, [14]}, [21]}. Any other value would favour either SSC or IC scenarios.
 [8]  [
[
255,
258
]
]  https://openalex.org/W1633681515 
0f4fa238d8804a429be8f4fefef2b8d0  During OBSID 22230, we observed a peak count rate of 0.09 ph s\(^{1}\) in the 2–8 keV band.
Given the instrumental set up, pileup effects are negligible even at the peak (e.g. [1]}).
By using the [1]} conversion factors, we estimate a total observed (absorbed) energy of \({\sim }3.2\times 10^9\) erg released during the flare in the 2–8 keV band. Following the classification of [1]}, this flare belongs to the group of moderate flares in the Xray band.
 [1]  [
[
179,
182
],
[
199,
202
],
[
384,
387
]
]  https://openalex.org/W152943885 
fb1136e3d3fe43618ccdf35f12b5ecbe  The IR flare reported in this paper is among the brightest ever observed. It is the third brightest flare observed with GRAVITY, although it is significantly shorter than the flares observed in 2019. The left panel of fig:fluence shows the flux distribution of Sgr A\(^\star \) [1]} and compares the peak fluxes of three flares possessing an Xray counterpart. The flare under investigation here is almost an order of magnitude fainter and a factor of \(\sim 23\) shorter than previously analysed very bright Xray flares [2]}, [3]}.
Thanks to the frequent observations of Sgr A\(^\star \) 's Xray emission, more than a hundred Xray flares of Sgr A\(^\star \) have been detected so far by Chandra and XMMNewton [4]}, [5]}, [6]}, [7]}, [8]}). Figure REF highlights the fluence and duration of the Xray flare detected here and compared to previously detected flares.
 [7]  [
[
736,
739
]
]  https://openalex.org/W2160697532 
3d5bcb5532ca4e659fd0e92266518291  The SYN–SSC scenario has severe problems: First, it requires magnetic fields of \({\sim } 10^4~\mathrm {G}\) , source regions around \({\sim } 0.001 \mathrm {R_s}\) , and densities \({\sim } 10^{12}~\mathrm {cm^{3}}\) . These parameters are extreme compared to the submm ambient conditions. Even ignoring this, the synchrotron cooling time scale in such a strong magnetic field is on the order of \(0.1\) seconds in the IR and of the order of 1 millisecond in the Xray.
Despite flares of Sgr A* being highly variable, spikes on timescales shorter than tens of seconds have never been observed in the IR band. We attribute this lack of short timescale IR variability to the effects of the cooling time of the electrons, which smooth out any variation shorter than a few seconds.
We rule out, [1]} and [2]}, the scenario in which the IR flare is generated from synchrotron emission with a thermal distribution, and the Xray flare is synchrotron selfCompton. This is a direct consequence of the negative Xray spectral slope. If the observed Xray slope were flat or positive, the requirement of a \(\gamma _{max}<10^2\) would be relaxed. This is because for a positive or flat spectral slopes the emission can stem from the rising or flat part of SSC spectrum. In turn, this relaxes the requirement for very large magnetic fields, because the peak of the synchrotron component at \(\nu _{max,syn}\) can be shifted by \(\gamma _{max}\) as well and not only by the magnetic field.
 [2]  [
[
806,
809
]
]  https://openalex.org/W1500274797 
6a4b6a2f624349d9b90d986514f66f13  The expected runtime bound follows immediately from the proof of Theorem \(\ref {thm:confatom}\) above. For the utility, recall that for the original exponential mechanism [1]}:
\(\mu _X(S_\varepsilon ) \le \frac{1}{\nu (S_{\varepsilon /2})} \exp \left( \frac{\epsilon \varepsilon }{4 \Delta _L}\right)\)
 [1]  [
[
173,
176
]
]  https://openalex.org/W4234281613 
704f6e88871c42f0996596ac71a96ffb  Theorem B (Equivalent version of Beurling's Theorem, [1]}).
A closed subspace of \(H^{2}\) is shiftinvariant iff it is invariant under multiplication by every bounded analytic function in \(H^{\infty }\) .
 [1]  [
[
53,
56
]
]  https://openalex.org/W2118082066 
d2a572f9ab04416da9fa434fff66b42e  Our purpose in the theorem given below is to demonstrate that the Theorem 6.1 in [1]}, which is the key result that essentially characterizes the invariant subspaces on uniform algebras, can actually be proved without the use of Kolmogoroff's theorem on the \(\left( L^{p},L^{1}\right) \) boundedness of the conjugation operator \(\left( 0<p<1\right) \) as defined above on uniform algebras and used in [1]} for observing convergence in measure for the conjugate of a sequence of \(L^{1}\) functions. We also
eliminate the use of uniform integrability.
 [1]  [
[
81,
84
],
[
405,
408
]
]  https://openalex.org/W3038696418 
4cd056bb07df4bf689f422ac8ed3ff9d  In the present paper by means of DFT+\(U\) calculation the electronic and magnetic properties of bulk LaMnO\(_3\) and BaTiO\(_3\) , as well as LMO/BTO heterostructure have been demonstrated. Within the chosen approach and computational parameters the bulk components of the heterostructure were confirmed to be insulators. In the heterostructure geometry the decrease of the band gap with increasing the number of BTO overlayers was demonstrated. It was found that the curve tends to zero, but the system remains semiconductor up to six BTO overlayers. It means that the conducting state arises with more ferroelectric overlayers, that is consistent with experiment from Ref. [1]} where the thickness of ferroelectric film was much higher. It was shown that the LMO/BTO systen possesses relatively high total magnetization, which is expected to increase with increasing the electron doping.
 [1]  [
[
678,
681
]
]  https://openalex.org/W3120426488 
051c4a9c7f2e44fd93b2c78252917020  We compare the complexity of LISTACE with other channel estimators, including LDGEC[1]}, ISTA[2]}, ISTANet\(^+\)[3]}, SSD[4]} and orthogonal matching pursuit (OMP)[5]}. As shown in Table REF , the complexities of the SSD and the OMP algorithms are \(O(MN_{RF}QL^2\Omega ^2)\) and \(O(MN_{RF}QL^3\Omega ^3)\) , respectively, where \(\Omega \) is the beam windows and much smaller than \(N\) (\(\Omega \) is assumed as \(\Omega =4\) when \(N = 256\) ). Compared to the DLbased LDGEC network and ISTANet\(^+\) , the proposed LISTACE and LISTACEHyper have lower computational complexity because the complexity of LISTACE is mainly determined by matrix multiplication, which is \(O(QN_{RF}NM)\) , while the complexity of LDGEC network is \(O(MN^3)\) determined by matrix inversion. The computational complexity of ISTANet\(^+\) is \(\mathcal {O}(MNk^2C_{\text{in}}C_{\text{out}})\) , dominated by convolution operation, where \(k\) is the size of the filter and \(C_{\text{in}}\) and \(C_{\text{out}}\) are the numbers of input and output channels of convolution, respectively, and \(C_{\text{in}}=C_{\text{out}}=32 \) in [3]}. Although LISTACE has more trainable parameters than ISTANet\(^+\) , it can achieve better performance shown by simulation results in the next section.
 [4]  [
[
123,
126
]
]  https://openalex.org/W2966084980 
8efca86fb45543ec91fc4def5e00b663  With the condition \(\gamma =2\kappa \) , the lasing threshold requirement \(g = \gamma +4\frac{\kappa ^2}{\gamma }\) becomes \(g=2\gamma =4\kappa \) . Therefore, the exceptional point and lasing threshold conditions are satisfied at the same time. The photonic system is PTsymmetric with balanced total gain and loss. Then the Laurent decomposition of the transfer function gives [1]}
\(\begin{aligned}G(\theta )\sim \begin{bmatrix}1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\1 & 0 & 0 & 1\end{bmatrix}\cdot \frac{\gamma }{2} \cdot \frac{1}{\theta ^2}\end{aligned}\)
 [1]  [
[
383,
386
]
]  https://openalex.org/W3141574774 
a89382cd5d9d458590ba64ea1c353a40  Remark Here the dimension restriction is due to the trapping phenomenon, i.e. we need the flow \(\mathcal {M}\) to be trapped between two asymptotically conical selfexpanders, which is only known in low dimensions [1]}.
 [1]  [
[
216,
219
]
]  https://openalex.org/W3027442782 
cdcb4e5c55414eca859999f877d1f688 
A number \(\mu \in \mathbb {R}\) is an eigenvalue of \(L_\Sigma \) if there exists \(f \in W^{2}(\Sigma )\) such that \(L_\Sigma f = \mu f\) . By works of Bernstein–Wang [1]}, when \(\Sigma \) is smooth, \(L_\Sigma \) has a discrete spectrum and we can therefore order the eigenvalues of \(L_\Sigma \) . It follows that the index of \(\Sigma \) is equal to the number of negative eigenvalues of \(L_\Sigma \) . Let \(\mathrm {nul}(\Sigma )\) denote the nullity of the operator \(L_\Sigma \) , i.e. the multiplicity of eigenvalue 0. \(f \in W^2\) is called a Jacobi field if \(f \in \ker L_\Sigma \) .
 [1]  [
[
176,
179
]
]  https://openalex.org/W2883778827 
6e427cd614524081a1218b61d97ce2ee  We establish, using PDE method similar to [1]}, the existence of an \(I\) parameter family of ancient solutions to the RMCF starting from \(\Sigma \) . Each one of these solutions will correspond to a MCF coming out of \(\mathcal {C}\) that is not selfsimilar.
 [1]  [
[
42,
45
]
]  https://openalex.org/W2920651408 
8aa9c01ab75947c5937b3c00ad67b1b5  We now follow the ideas of [1]} to establish higher regularity of the solutions obtained above. First notice that for a given initial data \(a = (a_1,\ldots ,a_I)\) , \(\tau _{}(a_1,\ldots ,a_I)\) solves the linear homogeneous equation \(\frac{\partial }{\partial s} v = L_\Sigma v\) , hence by replacing \(v\) by \(v \tau _{}(a_1,\ldots ,a_I)\) we will WLOG assume that \(v\) is a solution to linearproblem with
\(\Pi _{}(v(\cdot ,0)) = 0.\)
 [1]  [
[
27,
30
]
]  https://openalex.org/W3122089952 
0e8c9dfd0c944c2c93557ae4ceb044d2 
for two hypersurfaces \(\Sigma _1\) and \(\Sigma _2\) , whenever the limit is defined (possibly \(\infty \) ). In particular, they showed in [1]} that when \(\Sigma _1\) is a hypersurface trapped between two selfexpanders asymptotic to the same cone \(\mathcal {C}\) , then \(E_{\mathrm {rel}}[\Sigma _1, \Gamma ]\) is welldefined (possibly \(\infty \) , but not \(\infty \) ) for any selfexpander \(\Gamma \) asymptotic to \(\mathcal {C}\) . Because of this, \(E_{\mathrm {rel}}\) is the natural and more suitable quantity to study in the trapped case (and, in fact, \(E_{\mathrm {rel}} = E_{\mathrm {rel}}^*\) in the trapped case  see entropyequivalence). Unfortunately, in order for a graph \(\Sigma _v\) to be trapped, \(v\) needs to have a very good spatial decay near infinity:
\(v(p) = O(\left\mathbf {x}(p)\right^{n1} e^{\frac{\left\mathbf {x}(p)\right^2}{4}}).\)
 [1]  [
[
143,
146
]
]  https://openalex.org/W2949545924 
93b1a500fe94458fb7fda44010ce6187  By Huisken's monotonicity formula, any singularity of the flow must have entropy less than \(\lambda [\mathbb {S} \times \mathbb {R}]\) . By [1]}, it must be a round sphere \(\mathbb {S}^2\) . However, as any tame ancient RMCF is asymptotically conical (as \(\Sigma \) is asymptotically conical), it cannot encounter a compact singularity at the first singular time. Thus, any such flow must remain smooth for all time. The second conclusion follows in view of maintheorem.
 [1]  [
[
141,
144
]
]  https://openalex.org/W770226755 
73baa18c512345388c8eec696346050f  The proof is similar except one uses [1]} instead of [2]}. Essentially, the same argument follows through until the conclusion \(\operatorname{supp}\nu \cap \mathbb {S}^3\) is a closed smooth minimal surface in \(\mathbb {S}^3\) . It follows from the resolution of Willmore conjecture [3]} that \(\operatorname{supp}\nu \cap \mathbb {S}^3\) must be an equatorial sphere as any other such minimal surface has (Gaussian) area ratio at least \(\frac{2\pi ^2}{4\pi } = \frac{\pi }{2}\) . Hence \(\nu \) is flat and \(\Sigma \) is smooth.
 [2]  [
[
53,
56
]
]  https://openalex.org/W2897785250 
5c8487bc90e0406ea85b8a2ee3858c68  Following [1]} we discretize the gauge field by introducing link and plaquette variables
\(U_{x,\mu } &= \exp (iga_\mu A^a_\mu (x+\hat{\mu } / 2) t^a) \, \, \in \, \, \mathrm {SL}(N_c,\mathbb {C}), \\U_{x,\mu \nu }(x) &= U_{x, \mu }U_{x+\mu , \nu }U_{x+\nu ,\mu }^{1} U_{x, \nu }^{1},\)
 [1]  [
[
10,
13
]
]  https://openalex.org/W2071844098 
83b684c691404e9d81f144f85f48b0a6  [htb]
The GAMP Algorithm[1]}
[1] Given measurement matrix \({\bf {\Phi }} \in {\mathbb {C}^{{M_\phi } \times {N_\phi }}}\) and sequence of measurement value \({\bf {y}} \in \mathbb {C} ^{{M_\phi } \times 1}\) .
Initialization: Set environment prior parameter \(\bf {q}\) . Defined \({g_{\rm {in}}}\left( \cdot \right)\) and \({g_{\rm {out}}}\left( \cdot \right)\) from (REF ), (REF ). Set \(t_i = 0\) , \({\bf {\hat{s}}}\left( {  1} \right) = 0\) , \({\hat{x}_{{n_\phi }}}\left( {{t_i}} \right) > 0\) , \(\sigma _{{n_\phi }}^{\rm {x}}\left( {{t_i}} \right) > 0\) .
 [1]  [
[
24,
27
]
]  https://openalex.org/W2166670884 
bade64aabd984eb6a7f98f81c1cf026d  We model the DYNAPSE neuromorphic hardware [1]} with the following configurations.
 [1]  [
[
44,
47
]
]  https://openalex.org/W2749476078 
69550c58bcd547dc92bf5ec4ab330559  A variety of effects, including spinodal dewetting and nucleation at impurities [1]}, [2]}, [3]}, can cause the dewetting of nematic films.
In particular, such dewetting can involve competition between many effects, including internal elastic forces, alignment forces on the interfaces, gravity, van der Waals forces, and in cases in which an external electromagnetic field is applied, electromagnetic forces [4]}.
Many experimental studies have considered delicate balances between a number of these effects in different situations, for instance close to the isotropic–nematic phase transition [5]}, [6]}, [7]}, near to a contact line [8]}, [9]}, [10]}, or in the presence of an external electromagnetic field [11]}, [12]}, [13]}.
Since in the present work we consider lengthscales greater than a nanometrescale, it is appropriate to neglect van der Waals forces [14]} and we consider only the competition between elastic forces, alignment forces on the interfaces, and gravity.
 [14]  [
[
866,
870
]
]  https://openalex.org/W2485244922 
8c005ceec31a42e4875429a34afd410a  Convergence rate. Earlier landscape analysis on the lowrank matrix recovery [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, combined with the convergence guarantee for the nonconvex optimization [7]}, [8]}, indicates polynomial convergence towards the secondorder stationary point. More recently, the authors in [9]} achieved nearly linear convergence for the rank1 phase retrieval problem.
 [1]  [
[
77,
80
]
]  https://openalex.org/W2963404710 
2e0a04fe71934a55a7f000235b8535d0  Convergence rate. Earlier landscape analysis on the lowrank matrix recovery [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, combined with the convergence guarantee for the nonconvex optimization [7]}, [8]}, indicates polynomial convergence towards the secondorder stationary point. More recently, the authors in [9]} achieved nearly linear convergence for the rank1 phase retrieval problem.
 [2]  [
[
83,
86
]
]  https://openalex.org/W2964156132 
fffd261d644144d6860c2d44b9de068f  Convergence rate. Earlier landscape analysis on the lowrank matrix recovery [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, combined with the convergence guarantee for the nonconvex optimization [7]}, [8]}, indicates polynomial convergence towards the secondorder stationary point. More recently, the authors in [9]} achieved nearly linear convergence for the rank1 phase retrieval problem.
 [3]  [
[
89,
92
]
]  https://openalex.org/W3113425034 
706901e9e7b541c59beee6121a0d7194  Convergence rate. Earlier landscape analysis on the lowrank matrix recovery [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, combined with the convergence guarantee for the nonconvex optimization [7]}, [8]}, indicates polynomial convergence towards the secondorder stationary point. More recently, the authors in [9]} achieved nearly linear convergence for the rank1 phase retrieval problem.
 [6]  [
[
107,
110
]
]  https://openalex.org/W2604130501 
abffa4a24f8d414484446a3191e21b2f  We point out that the result of [1]} is consistent with our global analysis results (Section REF ). In addition, we stress that the nearly linear convergence is common among other lowrank matrix recovery problems, at least from the manifold optimization perspective. Our work explores the following aspects: (1) whether the nearly optimal and fast convergence rate can be proved for the general rankr matrix recovery; (2) how a weak isometry property affects the results; and (3) what is the common mechanism behind many different kinds of lowrank matrix recovery problems.
 [1]  [
[
32,
35
]
]  https://openalex.org/W3123272904 
8005f3f3a4d944aca4c60e0a16c4f1f7 
Perturbed firstorder schemes. There are a few studies on the convergence of perturbed firstorder schemes towards secondorder stationary points both in the Euclidean and the Riemannian settings, see [1]}, [2]}, [3]}, [4]}, [5]}. These results show that general global convergence rate is polynomial and almost dimensionfree. Whereas the intermittent perturbations help perturbed schemes escape the saddles better than nonperturbed firstorder schemes in the worst case [3]}, they also prevent a very accurate approximation of the ground truth without further (and sometimes complicated) modifications.
Randomly initialized firstorder schemes. Though it has been proved that randomly initialized gradient descent asymptotically escapes saddles and only converges to the local minima [7]}, [8]}, [9]}, [10]}, its convergence rate is much less clear. In the worst case, when the initialization is close to the stable manifold of saddle points, the convergence towards the local minima slows down substantially. Indeed, the authors of the previous work [3]} show that, in the worst case, the randomly initialized gradient descent can take exponential time to escape from the saddles. Despite such worst case scenario, the optimal efficiency of saddle escape behavior in a more general sense remains unclear. A recent answer to this question is given by the authors of [12]}, who show that for the rank1 phase retrieval problem, gradient descent with random initialization has a nearly linear and almost dimensionfree convergence rate, improving upon the previous polynomial convergence rate. This motivates us to study the mechanism behind the fast convergence rate and establish similar results for general rankr matrix recovery problems.
 [1]  [
[
202,
205
]
]  https://openalex.org/W2964106499 
41db9a401f0c4b66b6fe109440a21757 
Perturbed firstorder schemes. There are a few studies on the convergence of perturbed firstorder schemes towards secondorder stationary points both in the Euclidean and the Riemannian settings, see [1]}, [2]}, [3]}, [4]}, [5]}. These results show that general global convergence rate is polynomial and almost dimensionfree. Whereas the intermittent perturbations help perturbed schemes escape the saddles better than nonperturbed firstorder schemes in the worst case [3]}, they also prevent a very accurate approximation of the ground truth without further (and sometimes complicated) modifications.
Randomly initialized firstorder schemes. Though it has been proved that randomly initialized gradient descent asymptotically escapes saddles and only converges to the local minima [7]}, [8]}, [9]}, [10]}, its convergence rate is much less clear. In the worst case, when the initialization is close to the stable manifold of saddle points, the convergence towards the local minima slows down substantially. Indeed, the authors of the previous work [3]} show that, in the worst case, the randomly initialized gradient descent can take exponential time to escape from the saddles. Despite such worst case scenario, the optimal efficiency of saddle escape behavior in a more general sense remains unclear. A recent answer to this question is given by the authors of [12]}, who show that for the rank1 phase retrieval problem, gradient descent with random initialization has a nearly linear and almost dimensionfree convergence rate, improving upon the previous polynomial convergence rate. This motivates us to study the mechanism behind the fast convergence rate and establish similar results for general rankr matrix recovery problems.
 [2]  [
[
208,
211
]
]  https://openalex.org/W2963092340 
6b0f78b6c97a419b943c7dc1c42eeac5  In this section, we introduce the optimization technique on the lowrank matrix manifold, namely the projected gradient descent (PGD) with soft retraction onto the manifold. This Riemannian gradient descent technique has been studied in [1]}, [2]}, [3]}, [4]}. For example, [3]} and [4]} use the Riemannian gradient descent to solve lowrank matrix recovery problems. These works also point out that the PGD enjoys light computational cost.
In this paper, we will focus on the global analysis of such manifold optimization technique with random initialization.
 [1]  [
[
237,
240
]
]  https://openalex.org/W1993468393 
842fc9b84a0747b2a8d171d0b50932be  The lottery ticket hypothesis suggests that using a larger original network size increases the number of subnetworks which may turn out to be winning tickets [1]}.
To investigate this hypothesis for the case of policy distillation, we analyzed the effect of the initial network size on the lottery ticket effect (figure REF , right column).
Against this initial intuition, we observe that smaller dense networks are capable of maintaining strong performance at higher levels of absolute sparsity as compared to their larger counterparts.
Furthermore, the initial network size does not have a strong effect on the relative performance gap between the ticket configuration (mask/weights) and the baseline (permuted/permuted).
We suspect that larger networks can not realize their combinatorial potential due to a an unfavorable layerwise pruning bias introduced by initialization schemes such as the Kaiming family [2]}. An imbalance between input size and hidden layer size can have strong impact on which weights are targeted by IMP. We further investigate this relationship in section REF .
 [1]  [
[
159,
162
]
]  https://openalex.org/W2963813662 
5f3baae7adf94c2285b61a979b3753ce  Providing more empirical evidence for our previous claims, we find that for most MLP and CNN agents trained on the MinAtar games the ticket effect is explained by the IMPdiscovered mask (see figure REF ).
Strengthening an observation in [1]}, we observe that the performance deteriorates at different levels of network sparsity depending on the considered game. Freeway agents keep performing well even for high levels of sparsity, while agents trained on Breakout and Space Invaders continually get worse as the sparsity level increases.
In general we find that the qualitative results obtained for MLP agents generalize well to CNNbased agents.
The only major difference is that unlike the Asterix CNN agent, the MLP agent improves their performance at moderate levels of sparsity.
In summary, we provide evidence for the strong contribution of the mask to the lottery ticket effect in DRL (both onpolicy and offpolicy algorithms). The results generalize between different architectures indicating that the strength of the overall lottery ticket effect is mainly dictated by the combination of the taskspecification of the environment and the DRL algorithm.
<FIGURE>  [1]  [
[
238,
241
]
]  https://openalex.org/W2948130861 
4f8ab44b9f0d43ebadf70fbcce799dae  The convergence process (REF ) holds by virtue of an application of the BanachAlaogluBourbaki theorem (cf., e.g., Theorem 3.6 of [1]}) to the estimates (REF ) and (REF ).
The nontivial part of the proof amounts to identifying the weakstar limits \(v_\kappa \) and \(w_\kappa \) and to showing that the weakstar limit \(u_\kappa \) solves Problem .
To begin with, we show that \(w_\kappa =u_\kappa ^{\alpha 2} u_\kappa \) . Given any \(u \in W_0^{1,p}(\Omega )\) , we look for a number \(\beta \in \mathbb {R}\) for which
\(\leftu^{\alpha 2} u\right^{\beta 2} u^{\alpha 2} u =u.\)
Simple algebraic manipulations transform the latter into
\(u^{(\alpha 2)(\beta 2) +(\beta 2)+(\alpha 2)} u = u,\)
and we observe that a sufficient condition insuring this is that \(\beta \) satisfies
\((\alpha 1)(\beta 2)+(\alpha 2)=0,\)
which is equivalent to writing
\(\beta = 2+\dfrac{2\alpha }{\alpha 1}=\dfrac{2\alpha 2+2\alpha }{\alpha 1}=\dfrac{\alpha }{\alpha 1}=\alpha ^{\prime }.\)
Let \(v:=u^{\alpha 2} u\) and observe that if \(\beta =\alpha ^{\prime }\) then \(v^{\beta 2} v \in W_0^{1,p}(\Omega )\) so that the set
\(S:=\lbrace v;(v^{\alpha ^{\prime }2}v) \in W_0^{1,p}(\Omega )\rbrace \)
is nonempty. define the seminorm
\(M(v):=\left\Vert \nabla (v^{\alpha ^{\prime }2}v)\right\Vert _{L^p(\Omega )}^\frac{1}{\alpha ^{\prime }1},\quad \textup { for all }v \in S,\)
and define the set
\({M}:=\lbrace v\in S; M(v)\le 1\rbrace .\)
An application of the PoincaréFriedrichs inequality gives that there exists a constant \(c_0=c_0(\Omega )>0\) such that
\(\begin{aligned}1 &\ge M(v) \ge c_0^\frac{1}{\alpha ^{\prime }1} \left\Vert v^{\alpha ^{\prime }2} v\right\Vert _{W_0^{1,p}(\Omega )}^\frac{1}{\alpha ^{\prime }1} \ge c_0^\frac{1}{\alpha ^{\prime }1} \left\Vert v^{\alpha ^{\prime }2} v\right\Vert _{L^p(\Omega )}^\frac{1}{\alpha ^{\prime }1}\\&=c_0^\frac{1}{\alpha ^{\prime }1} \left(\int _{\Omega } \leftv^{\alpha ^{\prime }2} v\right^p\, \mathrm {d}x\right)^{1/(p(\alpha ^{\prime }1))}=c_0^\frac{1}{\alpha ^{\prime }1} \left(\int _{\Omega } v^{(\alpha ^{\prime }1)p}\, \mathrm {d}x\right)^{1/(p(\alpha ^{\prime }1))}=c_0^\frac{1}{\alpha ^{\prime }1} \Vert v\Vert _{L^{(\alpha ^{\prime }1)p}(\Omega )}.\end{aligned}\)
Let \(\lbrace v_k\rbrace _{k=1}^\infty \) be a sequence in \({M}\) . Since, by the RellichKondrašov theorem, we have that \(W_0^{1,p}(\Omega ) \hookrightarrow \hookrightarrow L^p(\Omega )\) we obtain that, up to passing to a subsequence, there exists an element \(w \in L^p(\Omega )\) such that
\((v_k^{\alpha ^{\prime }2} v_k) \rightarrow w,\quad \textup { in } L^p(\Omega ),\quad \textup { as }k\rightarrow \infty .\)
Since \(1<\alpha <2\) and \(2.8\le p \le 5\) , then \(\alpha ^{\prime }>2\) and it thus results that \(1<p^{\prime }<p<(\alpha ^{\prime }1)p<\infty \) (so that \(L^{(\alpha ^{\prime }1)p}(\Omega )\) is uniformly convex; cf., e.g., [1]}) and that \(\lbrace v_k\rbrace _{k=1}^\infty \) is bounded in \(L^{p^{\prime }}(\Omega )\) . The reflexivity of \(L^{p^{\prime }}(\Omega )\) puts us in a position to apply the BanachEberleinSmulian theorem (cf., e.g., Theorem 5.144 of [3]}) and extract a subsequence, still denoted \(\lbrace v_k\rbrace _{k=1}^\infty \) , that weakly converges to an element \(v \in L^{(\alpha ^{\prime }1)p}(\Omega ) \hookrightarrow L^{p^{\prime }}(\Omega )\) .
Consider the mapping
\(v \in L^{p^{\prime }}(\Omega ) \mapsto (v^{\alpha ^{\prime }2} v) \in L^p(\Omega ),\)
and observe that this mapping is hemicontinuous and monotone, being the mapping \(\xi \in \mathbb {R} \rightarrow (\xi ^{\alpha ^{\prime }2}\xi ) \in \mathbb {R}\) , with \(\alpha ^{\prime }>2\) thanks to (REF ), continuous and monotone.
Therefore, an application of Theorem 9.132 of [3]} gives that \(w=v^{\alpha ^{\prime }2}v \in L^p(\Omega )\) .
Therefore, the convergence (REF ) reads:
\((v_k^{\alpha ^{\prime }2} v_k) \rightarrow w=(v^{\alpha ^{\prime }2} v),\quad \textup { in } L^p(\Omega ),\quad \textup { as }k\rightarrow \infty .\)
In order to show that \({M}\) is relatively compact in \(L^{(\alpha ^{\prime }1)p}(\Omega )\) , we have to show that every sequence \(\lbrace v_k\rbrace _{k=1}^\infty \subset {M}\) admits a convergent subsequence in \(L^{(\alpha ^{\prime }1)p}(\Omega )\) . Observe that we can extract a subsequence, still denoted \(\lbrace v_k\rbrace _{k=1}^\infty \) that weakly converges to an element \(v\) in \(L^{(\alpha ^{\prime }1)p}(\Omega )\) . Since \((\alpha ^{\prime }1)p>1\) , an application of Lemma REF gives:
\(\begin{aligned}&\left\Vert v_k\Vert _{L^{(\alpha ^{\prime }1)p}(\Omega )}\Vert v\Vert _{L^{(\alpha ^{\prime }1)p}(\Omega )}\right=\left\left(\int _{\Omega } v_k^{(\alpha ^{\prime }1)p} \, \mathrm {d}x\right)^{\frac{1}{(\alpha ^{\prime }1)p}}\left(\int _{\Omega } v^{(\alpha ^{\prime }1)p} \, \mathrm {d}x\right)^{\frac{1}{(\alpha ^{\prime }1)p}}\right\\&\le \left\int _{\Omega } v_k^{(\alpha ^{\prime }1)p} \, \mathrm {d}x\int _{\Omega } v^{(\alpha ^{\prime }1)p} \, \mathrm {d}x\right^{\frac{1}{(\alpha ^{\prime }1)p}}=\left\int _{\Omega } \leftv_k^{\alpha ^{\prime }2}v_k\right^p \, \mathrm {d}x\int _{\Omega } \leftv^{\alpha ^{\prime }2}v\right^p \, \mathrm {d}x\right^{\frac{1}{(\alpha ^{\prime }1)p}}\\&=\left\left\Vert v_k^{\alpha ^{\prime }2}v_k\right\Vert _{L^p(\Omega )}^p\left\Vert v^{\alpha ^{\prime }2}v\right\Vert _{L^p(\Omega )}^p\right^{\frac{1}{(\alpha ^{\prime }1)p}}.\end{aligned}\)
An application of (REF ) gives that the righthand side of the latter term tends to zero as \(k \rightarrow \infty \) , thus establishing that
\(\Vert v_k\Vert _{L^{(\alpha ^{\prime }1)p}(\Omega )}\rightarrow \Vert v\Vert _{L^{(\alpha ^{\prime }1)p}(\Omega )},\quad \textup { as }k\rightarrow \infty .\)
Since the space \(L^{(\alpha ^{\prime }1)p}(\Omega )\) is uniformly convex, an application of Theorem 5.123 of [3]} gives that
\(v_k \rightarrow v,\quad \textup { in }L^{(\alpha ^{\prime }1)p}(\Omega ),\)
thus establishing the sought relative compactness.
The established relative compactness of the set \({M}\) in \(L^{(\alpha ^{\prime }1)p}(\Omega )\) and the sixth convergence in the process (REF ) (which in turn implies that the timederivatives in the sense of distributions are uniformly bounded) allow us apply Dubinskii's compactness theorem (Theorem REF ) with \(A_0=L^{(\alpha ^{\prime }1)p}(\Omega )\) , \(A_1=W^{1,p^{\prime }}(\Omega )\) , \(q_0=q_1=2\) , so that
\(\Pi _\ell \mathbf {u}_{\kappa ,\ell }^{\alpha 2} \Pi _\ell \mathbf {u}_{\kappa ,\ell } \rightarrow w_\kappa ,\quad \textup { in } L^2(0,T;L^{(\alpha ^{\prime }1)p}(\Omega )) \textup { as } \ell \rightarrow 0,\)
where, once again, the monotonicity of \(\xi \in \mathbb {R} \mapsto \xi ^{\alpha 2} \xi \) , the first convergence in the process (REF ) and Theorem 9.132 of [3]} imply that
\(w_\kappa = u_\kappa ^{\alpha 2} u_\kappa .\)
Second, we show that \(v_\kappa =u_\kappa ^\frac{\alpha 2}{2} u_\kappa \) . Given any \(u \in W_0^{1,p}(\Omega )\) , we look for a number \(\beta \in \mathbb {R}\) for which
\(\leftu^\frac{\alpha 2}{2} u\right^\frac{\beta 2}{2} u^\frac{\alpha 2}{2} u =u.\)
We observe that a sufficient condition insuring this is that \(\beta \) satisfies
\(\left(\dfrac{\alpha 2}{2}+1\right)\dfrac{\beta 2}{2}+\dfrac{\alpha 2}{2}=0,\)
which is equivalent to writing
\(\beta = 2+2\left(\dfrac{2\alpha }{2}\dfrac{2}{\alpha }\right)=\dfrac{4}{\alpha }.\)
Let \(v:=u^\frac{\alpha 2}{2} u\) and observe that if \(\beta =4/\alpha \) then \(v^\frac{\beta 2}{2} v \in W_0^{1,p}(\Omega )\) so that the set
\(\tilde{S}:=\lbrace v;(v^\frac{(4/\alpha )2}{2}v) \in W_0^{1,p}(\Omega )\rbrace \)
is nonempty. define the seminorm
\(\tilde{M}(v):=\left\Vert \nabla (v^\frac{(4/\alpha )2}{2}v)\right\Vert _{L^p(\Omega )}^\frac{\alpha }{2}=\left\Vert \nabla (v^\frac{2\alpha }{\alpha } v)\right\Vert _{L^p(\Omega )}^\frac{\alpha }{2},\quad \textup { for all }v \in \tilde{S},\)
and define the set
\(\tilde{{M}}:=\lbrace v\in \tilde{S}; \tilde{M}(v)\le 1\rbrace .\)
An application of the PoincaréFriedrichs inequality gives that there exists a constant \(c_0=c_0(\Omega )>0\) such that
\(\begin{aligned}1 &\ge \tilde{M}(v) \ge c_0^\frac{\alpha }{2} \left\Vert v^\frac{2\alpha }{\alpha } v\right\Vert _{W_0^{1,p}(\Omega )}^\frac{\alpha }{2} \ge c_0^\frac{\alpha }{2} \left\Vert v^\frac{2\alpha }{\alpha } v\right\Vert _{L^p(\Omega )}^\frac{\alpha }{2}=c_0^\frac{\alpha }{2} \left(\int _{\Omega } \leftv^\frac{2\alpha }{\alpha } v\right^p\, \mathrm {d}x\right)^{\alpha /(2p)}\\&=c_0^\frac{\alpha }{2} \left(\int _{\Omega } v^\frac{2p}{\alpha }\, \mathrm {d}x\right)^{\alpha /(2p)}=c_0^\frac{\alpha }{2} \Vert v\Vert _{L^\frac{2p}{\alpha }(\Omega )}.\end{aligned}\)
Let \(\lbrace v_k\rbrace _{k=1}^\infty \) be a sequence in \(\tilde{{M}}\) . Since, by the RellichKondrašov theorem, we have that \(W_0^{1,p}(\Omega ) \hookrightarrow \hookrightarrow L^p(\Omega )\) we obtain that, up to passing to a subsequence, there exists an element \(w \in L^p(\Omega )\) such that
\((v_k^\frac{2\alpha }{2} v_k) \rightarrow w,\quad \textup { in } L^p(\Omega ),\quad \textup { as }k\rightarrow \infty .\)
Since \(1<\alpha <2\) and \(2.8\le p \le 5\) , it thus results that \(1 \le p^{\prime } <2<p<\frac{2p}{\alpha }<2p<\infty \) and that \(\lbrace v_k\rbrace _{k=1}^\infty \) is bounded in \(L^\frac{2p}{\alpha }(\Omega )\) . The reflexivity of \(L^\frac{2p}{\alpha }(\Omega )\) puts us in a position to apply the BanachEberleinSmulian theorem (cf., e.g., Theorem 5.144 of [3]}) and extract a subsequence, still denoted \(\lbrace v_k\rbrace _{k=1}^\infty \) , that weakly converges to an element \(v \in L^{p^{\prime }}(\Omega )\) .
Consider the mapping
\(v \in L^{p^{\prime }}(\Omega ) \mapsto (v^\frac{2\alpha }{2} v) \in L^p(\Omega ),\)
and observe that this mapping is hemicontinuous and monotone, being the mapping \(\xi \in \mathbb {R} \rightarrow (\xi ^\frac{2\alpha }{2}\xi ) \in \mathbb {R}\) continuous and monotone.
Therefore, an application of Theorem 9.132 of [3]} gives that \(w=v^\frac{2\alpha }{\alpha }v \in L^p(\Omega )\) .
Therefore, the convergence (REF ) reads:
\((v_k^\frac{2\alpha }{\alpha } v_k) \rightarrow w=(v^\frac{2\alpha }{\alpha } v),\quad \textup { in } L^p(\Omega ),\quad \textup { as }k\rightarrow \infty .\)
In order to show that \(\tilde{{M}}\) is relatively compact in \(L^\frac{2p}{\alpha }(\Omega )\) , we have to show that every sequence \(\lbrace v_k\rbrace _{k=1}^\infty \subset \tilde{{M}}\) admits a convergent subsequence in \(L^\frac{2p}{\alpha }(\Omega )\) .
Since \(2p/\alpha >1\) , an application of Lemma REF gives:
\(\begin{aligned}&\left\Vert v_k\Vert _{L^\frac{2p}{\alpha }(\Omega )}\Vert v\Vert _{L^\frac{2p}{\alpha }(\Omega )}\right=\left\left(\int _{\Omega } v_k^{\frac{2p}{\alpha }} \, \mathrm {d}x\right)^{\frac{\alpha }{2p}}\left(\int _{\Omega } v^{\frac{2p}{\alpha }} \, \mathrm {d}x\right)^{\frac{\alpha }{2p}}\right\\&\le \left\int _{\Omega } v_k^{\frac{2p}{\alpha }} \, \mathrm {d}x\int _{\Omega } v^{\frac{2p}{\alpha }} \, \mathrm {d}x\right^{\frac{\alpha }{2p}}=\left\int _{\Omega } \leftv_k^{\frac{2\alpha }{\alpha }}v_k\right^p \, \mathrm {d}x\int _{\Omega } \leftv^{\frac{2\alpha }{\alpha }}v\right^p \, \mathrm {d}x\right^{\frac{\alpha }{2p}}\\&=\left\left\Vert v_k^{\frac{2\alpha }{\alpha }}v_k\right\Vert _{L^p(\Omega )}^p\left\Vert v^{\frac{2\alpha }{\alpha }}v\right\Vert _{L^p(\Omega )}^p\right^{\frac{\alpha }{2p}}.\end{aligned}\)
An application of (REF ) gives that the righthand side of the latter term tends to zero as \(k \rightarrow \infty \) , thus establishing that
\(\Vert v_k\Vert _{L^\frac{2p}{\alpha }(\Omega )}\rightarrow \Vert v\Vert _{L^\frac{2p}{\alpha }(\Omega )},\quad \textup { as }k\rightarrow \infty .\)
Since the space \(L^\frac{2p}{\alpha }(\Omega )\) is uniformly convex, an application of Theorem 5.123 of [3]} gives that
\(v_k \rightarrow v,\quad \textup { in } L^\frac{2p}{\alpha }(\Omega ),\)
thus establishing the sought relative compactness.
The latter shows that
\(v_k \rightarrow v,\quad \textup { in } L^\frac{2p}{\alpha }(\Omega ) \textup { as } k\rightarrow \infty ,\)
in turn implying that the set \(\tilde{{M}}\) is relatively compact in \(L^\frac{2p}{\alpha }(\Omega )\) , as it was to be proved. The established relative compactness of the set \(\tilde{{M}}\) in \(L^\frac{2p}{\alpha }(\Omega )\) and the fourth convergence in the process (REF ) (which in turn implies that the timederivatives in the sense of distributions are uniformly bounded) allow us apply Dubinskii's compactness theorem (Theorem REF ) with \(A_0=L^\frac{2p}{\alpha }(\Omega )\) , \(A_1=L^2(\Omega )\) , \(q_0=q_1=2\) , so that
\(\Pi _\ell \mathbf {u}_{\kappa ,\ell }^\frac{\alpha 2}{2} \Pi _\ell \mathbf {u}_{\kappa ,\ell } \rightarrow v_\kappa ,\quad \textup { in } L^2(0,T;L^\frac{2p}{\alpha }(\Omega )) \textup { as } \ell \rightarrow 0,\)
where, once again, the monotonicity of \(\xi \in \mathbb {R} \mapsto \xi ^\frac{\alpha 2}{2} \xi \) , the first convergence in the process (REF ) and Theorem 9.132 of [3]} imply that
\(v_\kappa = u_\kappa ^\frac{\alpha 2}{2} u_\kappa .\)
We are left to show that the weakstar limit \(u_\kappa \) is a solution for Problem . Let \(v \in \mathcal {D}(\Omega )\) and let \(\psi \in \mathcal {C}^1([0,T])\) . For each \(0 \le n \le N1\) , multiply (REF ) by \(\lbrace v \psi (n\ell )\rbrace \) , getting
\(\begin{aligned}&\dfrac{\psi (n\ell )}{\ell }\int _{\Omega }\lbrace u_{\kappa ,\ell }^{n+1}^{\alpha 2} u_{\kappa ,\ell }^{n+1}  u_{\kappa ,\ell }^{n}^{\alpha 2}u_{\kappa ,\ell }^{n}\rbrace v \, \mathrm {d}x\\&\quad +\psi (n\ell )\int _{\Omega }\mu \nabla u_{\kappa ,\ell }^{n+1}^{p2} \nabla u_{\kappa ,\ell }^{n+1} \cdot \nabla v\, \mathrm {d}x\psi (n\ell )\int _{\Omega }\dfrac{\lbrace u_{\kappa ,\ell }^{n+1}\rbrace ^{}}{\kappa } v \, \mathrm {d}x\\&=\int _{\Omega }\left(\dfrac{1}{\ell } \int _{n\ell }^{(n+1)\ell } \tilde{a}(t) \, \mathrm {d}t\right) v \psi (n\ell ) \, \mathrm {d}x.\end{aligned}\)
Multiplying (REF ) by \(\ell \) and summing over \(0 \le n \le N1\) , we obtain
\(\begin{aligned}&\sum _{n=0}^{N1} \ell \int _{\Omega }\dfrac{u_{\kappa ,\ell }^{n+1}^{\alpha 2} u_{\kappa ,\ell }^{n+1}  u_{\kappa ,\ell }^{n}^{\alpha 2}u_{\kappa ,\ell }^{n}}{\ell } v \psi (n\ell ) \, \mathrm {d}x\\&\quad +\sum _{n=0}^{N1} \ell \int _{\Omega }\mu \nabla u_{\kappa ,\ell }^{n+1}^{p2} \nabla u_{\kappa ,\ell }^{n+1} \cdot \nabla (\psi (n\ell ) v)\, \mathrm {d}x\\&\quad \dfrac{1}{\kappa } \sum _{n=0}^{N1} \ell \int _{\Omega } \lbrace u_{\kappa ,\ell }^{n+1}\rbrace ^{} v \psi (n\ell ) \, \mathrm {d}x=\sum _{n=0}^{N1} \ell \int _{\Omega }\left(\dfrac{1}{\ell } \int _{n\ell }^{(n+1)\ell } \tilde{a}(t) \, \mathrm {d}t\right) v \psi (n\ell ) \, \mathrm {d}x.\end{aligned}\)
For the sake of brevity, define \(\psi _\ell (t):=\psi (n\ell )\) , \(n\ell \le t \le (n+1)\ell \) and \(0 \le n \le N1\) . Equation (REF ) can be thus rearranged as follows:
\(\begin{aligned}&\int _{0}^{T} \int _{\Omega } D_\ell (\Pi _\ell \mathbf {u}_{\kappa ,\ell }^{\alpha 2}\Pi _\ell \mathbf {u}_{\kappa ,\ell }) v \, \mathrm {d}x \psi _\ell (t) \, \mathrm {d}t\\&\quad \int _{0}^{T} \int _{\Omega } \nabla \cdot \left(\mu \nabla (\Pi _\ell \mathbf {u}_{\kappa ,\ell })^{p2} \nabla (\Pi _\ell \mathbf {u}_{\kappa ,\ell })\right) v\, \mathrm {d}x \psi _\ell (t) \, \mathrm {d}t\\&\quad \dfrac{1}{\kappa } \int _{0}^{T} \int _{\Omega } \lbrace \Pi _\ell \mathbf {u}_{\kappa ,\ell }\rbrace ^{} v\, \mathrm {d}x \psi _\ell (t) \, \mathrm {d}t=\int _{0}^{T} \left(\int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x\right) \psi _\ell (t) \, \mathrm {d}t.\end{aligned}\)
Letting \(\ell \rightarrow 0\) and exploiting the convergence process (REF ) and the Riemann integrability of \(\psi \) , we obtain:
\(\begin{aligned}&\int _{0}^{T} \left\langle \dfrac{\, \mathrm {d}}{\, \mathrm {d}t}\left(u_\kappa ^{\alpha 2} u_\kappa \right), v \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \psi (t) \, \mathrm {d}t+\int _{0}^{T} \int _{\Omega } g_\kappa (t) v \, \mathrm {d}x \psi (t) \, \mathrm {d}t\\&=\int _{0}^{T} \int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x \psi (t) \, \mathrm {d}t.\end{aligned}\)
Let us rearrange the first term on the lefthand side of equation (REF ) as follows:
\(\begin{aligned}&\dfrac{1}{\ell } \sum _{n=0}^{N1} \ell \int _{\Omega } \lbrace u_{\kappa ,\ell }^{n+1}^{\alpha 2} u_{\kappa ,\ell }^{n+1}  u_{\kappa ,\ell }^{n}^{\alpha 2} u_{\kappa ,\ell }^{n}\rbrace v \psi (n\ell ) \, \mathrm {d}x\\&=\int _{\Omega } \Bigg \lbrace \left[u_{\kappa ,\ell }^{1}^{\alpha 2} u_{\kappa ,\ell }^{1} u_0^{\alpha 2}u_0\right] v \psi (0)\\&\qquad + \left[u_{\kappa ,\ell }^{2}^{\alpha 2} u_{\kappa ,\ell }^{2} u_{\kappa ,\ell }^{1}^{\alpha 2}u_{\kappa ,\ell }^{1}\right] v \psi (\ell )\\&\qquad + \dots \\&\qquad +\left[u_{\kappa ,\ell }^{N1}^{\alpha 2} u_{\kappa ,\ell }^{N1} u_{\kappa ,\ell }^{N2}^{\alpha 2}u_{\kappa ,\ell }^{N2}\right] v \psi ((N1)\ell )\\&\qquad \left[u_{\kappa ,\ell }^{N}^{\alpha 2} u_{\kappa ,\ell }^{N} u_{\kappa ,\ell }^{N1}^{\alpha 2}u_{\kappa ,\ell }^{N1}\right] v \psi (T)\Bigg \rbrace \, \mathrm {d}x\\&=\int _{\Omega } u_0^{\alpha 2} u_0 v \psi (0) \, \mathrm {d}x\\&\quad +\int _{\Omega } \Bigg \lbrace \left[u_{\kappa ,\ell }^1^{\alpha 2} u_{\kappa ,\ell }^1 v (\psi (\ell )\psi (0))\right]+\left[u_{\kappa ,\ell }^2^{\alpha 2} u_{\kappa ,\ell }^2 v (\psi (2\ell )\psi (\ell ))\right]\\&\qquad + \dots \\&\qquad +\left[u_{\kappa ,\ell }^{N2}^{\alpha 2} u_{\kappa ,\ell }^{N2} v (\psi ((N1)\ell )\psi ((N2)\ell ))\right]\\&\qquad + \left[u_{\kappa ,\ell }^{N1}^{\alpha 2} u_{\kappa ,\ell }^{N1} v (\psi (T)\psi ((N1)\ell ))\right]\Bigg \rbrace \, \mathrm {d}x\\&\quad +\int _{\Omega } u_{\kappa ,\ell }^N^{\alpha 2} u_{\kappa ,\ell }^N v \psi (T) \, \mathrm {d}x\\&=\sum _{n=0}^{N1} \ell \int _{\Omega } u_{\kappa ,\ell }^n^{\alpha 2} u_{\kappa ,\ell }^n v \left[\dfrac{\psi (n\ell )\psi ((n1)\ell )}{\ell }\right] \, \mathrm {d}x\\&\quad +\int _{\Omega } u_{\kappa ,\ell }^N^{\alpha 2} u_{\kappa ,\ell }^N v \psi (T) \, \mathrm {d}x\int _{\Omega } u_0^{\alpha 2} u_0 v \psi (0) \, \mathrm {d}x.\end{aligned}\)
Therefore, equation (REF ) can be thoroughly rearranged as follows:
\(\begin{aligned}&\int _{\Omega } u_{\kappa ,\ell }^N^{\alpha 2} u_{\kappa ,\ell }^N v \psi (T) \, \mathrm {d}x\int _{\Omega } u_0^{\alpha 2} u_0 v \psi (0) \, \mathrm {d}x\\&\quad \sum _{n=0}^{N1} \ell \int _{\Omega } u_{\kappa ,\ell }^n^{\alpha 2} u_{\kappa ,\ell }^n v \left[\dfrac{\psi (n\ell )\psi ((n1)\ell )}{\ell }\right] \, \mathrm {d}x\\&\quad +\sum _{n=0}^{N1} \ell \int _{\Omega }\mu \nabla u_{\kappa ,\ell }^{n+1}^{p2} \nabla u_{\kappa ,\ell }^{n+1} \cdot \nabla (\psi (n\ell ) v)\, \mathrm {d}x\\&\quad \dfrac{1}{\kappa } \sum _{n=0}^{N1} \ell \int _{\Omega } \lbrace u_{\kappa ,\ell }^{n+1}\rbrace ^{} v \psi (n\ell ) \, \mathrm {d}x=\sum _{n=0}^{N1} \ell \int _{\Omega }\left(\dfrac{1}{\ell } \int _{n\ell }^{(n+1)\ell } \tilde{a}(t) \, \mathrm {d}t\right) v \psi (n\ell ) \, \mathrm {d}x.\end{aligned}\)
Letting \(\ell \rightarrow 0\) in (REF ) thus gives:
\(\begin{aligned}&\int _{0}^{T} \int _{\Omega } u_\kappa ^{\alpha 2} u_\kappa v \, \mathrm {d}x \dfrac{\, \mathrm {d}\psi }{\, \mathrm {d}t}\, \mathrm {d}t+\int _{0}^{T} \langle g_\kappa (t), v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \psi (t) \, \mathrm {d}t\\&\quad +\int _{0}^{T} \int _{\Omega } [\chi _\kappa \psi (T)u_0^{\alpha 2}u_0 \psi (0)] v \, \mathrm {d}x = \int _{0}^{T} \int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x\psi (t) \, \mathrm {d}t.\end{aligned}\)
Observe that an application of the Sobolev embedding theorem (cf., e.g., Theorem 6.61 of [3]}) and an integration by parts in (REF ) give:
\(\begin{aligned}&\int _{0}^{T} \int _{\Omega } u_\kappa ^{\alpha 2} u_\kappa v \, \mathrm {d}x \dfrac{\, \mathrm {d}\psi }{\, \mathrm {d}t} \, \mathrm {d}t+\langle u_\kappa (T)^{\alpha 2} u_\kappa (T), \psi (T) v\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )}\\&\quad \int _{\Omega } u_\kappa (0)^{\alpha 2} u_\kappa (0) \psi (0) v \, \mathrm {d}x + \int _{0}^{T} \langle g_k(t), v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \psi (t) \, \mathrm {d}t\\&=\int _{0}^{T}\int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x \psi (t) \, \mathrm {d}t.\end{aligned}\)
Comparing equations (REF ) and (REF ) gives
\(\begin{aligned}&\langle u_\kappa (T)^{\alpha 2} u_\kappa (T), \psi (T) v\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \int _{\Omega } u_\kappa (0)^{\alpha 2} u_\kappa (0) \psi (0) v \, \mathrm {d}x\\&=\int _{0}^{T} \int _{\Omega } [\chi _\kappa \psi (T)u_0^{\alpha 2}u_0 \psi (0)] v \, \mathrm {d}x\end{aligned}\)
Since \(\psi \in \mathcal {C}^1([0,T])\) is arbitrarily chosen, let us specialize \(\psi \) in (REF ) in a way such that \(\psi (0)=0\) . We obtain
\(\langle u_\kappa (T)^{\alpha 2} u_\kappa (T)  \chi _\kappa , \psi (T) v\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )}=0,\quad \textup { for all }v \in \mathcal {D}(\Omega ).\)
Since the duality in (REF ) is continuous with respect to \(v\) , and since \(\mathcal {D}(\Omega )\) is, by definition, dense in \(W_0^{1,p}(\Omega )\) , we immediately infer that:
\(u_\kappa (T)^{\alpha 2} u_\kappa (T) = \chi _\kappa \in L^{\alpha ^{\prime }}(\Omega ).\)
It is immediate to observe that
\(\chi _\kappa ^{\alpha ^{\prime }2} \chi _\kappa = \leftu_\kappa (T)^{\alpha 2} u_\kappa (T)\right^{\alpha ^{\prime }2} \chi _\kappa =u_\kappa (T)^{2\alpha } \left[u_\kappa (T)^{\alpha 2} u_\kappa (T)\right]=u_\kappa (T) \in L^\alpha (\Omega ).\)
Let us now specialize \(\psi \) in (REF ) in a way such that \(\psi (T)=0\) . We obtain
\(\int _{\Omega } \left(u_\kappa (0)^{\alpha 2} u_\kappa (0)  u_0^{\alpha 2}u_0 \right)\psi (0) v \, \mathrm {d}x=0,\quad \textup { for all }v \in \mathcal {D}(\Omega ).\)
Since the integration in (REF ) is continuous with respect to \(v\) , and since \(\mathcal {D}(\Omega )\) is, by definition, dense in \(W_0^{1,p}(\Omega )\) , we immediately infer that:
\(u_\kappa (0)^{\alpha 2} u_\kappa (0) = u_0^{\alpha 2}u_0,\)
so that the injectivity of the monotone and hemicontinuous operator \(\xi \mapsto \xi ^{\alpha 2} \xi \) in turn implies that:
\(u_\kappa (0)=u_0 \in K.\)
The last thing to check is that \(g_\kappa =B_\kappa (u_\kappa )\) . For each \(0 \le n \le N1\) , multiply (REF ) by \(u_{\kappa ,\ell }^{n+1}\) and apply Lemma REF , thus getting
\(\begin{aligned}&\dfrac{1}{\alpha ^{\prime }} \sum _{n=0}^{N1} \left\lbrace \left\Vert u_{\kappa ,\ell }^{n+1}^{\alpha 2}u_{\kappa ,\ell }^{n+1}\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}  \left\Vert u_{\kappa ,\ell }^{n}^{\alpha 2}u_{\kappa ,\ell }^{n}\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}\right\rbrace \\&\quad +\sum _{n=0}^{N1} \ell \left\langle B_\kappa (u_{\kappa ,\ell }^{n+1}), u_{\kappa ,\ell }^{n+1} \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )}\\&\le \sum _{n=0}^{N1} \ell \int _{\Omega } \left(\dfrac{1}{\ell }\int _{n\ell }^{(n+1)\ell } \tilde{a}(t\, \mathrm {d}t\right) u_{\kappa ,\ell }^{n+1} \, \mathrm {d}x,\end{aligned}\)
which in turn implies:
\(\begin{aligned}&\dfrac{1}{\alpha ^{\prime }} \left\Vert u_{\kappa ,\ell }^{N}^{\alpha 2}u_{\kappa ,\ell }^{N}\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}\\&\quad +\int _{0}^{T} \left\langle B_\kappa (\Pi _\ell \mathbf {u}_{\kappa ,\ell }), \Pi _\ell \mathbf {u}_{\kappa ,\ell } \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )}\\&\le \int _{0}^{T} \int _{\Omega } \tilde{a}(t) \Pi _\ell \mathbf {u}_{\kappa ,\ell } \, \mathrm {d}x \, \mathrm {d}t\dfrac{1}{\alpha ^{\prime }} \left\Vert u_0^{\alpha 2}u_0\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}.\end{aligned}\)
We now exploit a trick developed by Minty [12]} which is, by now, classical. Passing to the \(\liminf \) as \(\ell \rightarrow 0\) in (REF ) and keeping in mind the convergence process (REF ) as well as the identities (REF )–(REF ) gives, on the one hand:
\(\begin{aligned}&\dfrac{1}{\alpha ^{\prime }} \Vert u_{\kappa }(T)\Vert _{L^{\alpha }(\Omega )}^{\alpha }+\liminf _{\ell \rightarrow 0}\int _{0}^{T} \left\langle B_\kappa (\Pi _\ell \mathbf {u}_{\kappa ,\ell }), \Pi _\ell \mathbf {u}_{\kappa ,\ell } \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t\\&=\dfrac{1}{\alpha ^{\prime }} \left\Vert u_{\kappa }(T)^{\alpha 2}u_{\kappa }(T)\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}+\liminf _{\ell \rightarrow 0}\int _{0}^{T} \left\langle B_\kappa (\Pi _\ell \mathbf {u}_{\kappa ,\ell }), \Pi _\ell \mathbf {u}_{\kappa ,\ell } \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t\\&\le \int _{0}^{T} \int _{\Omega } \tilde{a}(t) u_\kappa \, \mathrm {d}x \, \mathrm {d}t+\dfrac{1}{\alpha ^{\prime }} \left\Vert u_0^{\alpha 2}u_0\right\Vert _{L^{\alpha ^{\prime }}(\Omega )}^{\alpha ^{\prime }}=\int _{0}^{T} \int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x \psi (t) \, \mathrm {d}t+\dfrac{1}{\alpha ^{\prime }}\Vert u_0\Vert _{L^\alpha (\Omega )}^\alpha .\end{aligned}\)
On the other hand, the specializations \(v=u_\kappa \) and \(\psi \equiv 1\) in (REF ), and an application of Lemma REF give:
\(\begin{aligned}&\dfrac{1}{\alpha ^{\prime }}\Vert u_\kappa (T)\Vert _{L^\alpha (\Omega )}^\alpha +\int _{0}^{T} \langle g_\kappa (t), v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \psi (t) \, \mathrm {d}t\\&=\int _{0}^{T} \int _{\Omega } \tilde{a}(t) v \, \mathrm {d}x \psi (t) \, \mathrm {d}t+\dfrac{1}{\alpha ^{\prime }}\Vert u_0\Vert _{L^\alpha (\Omega )}^\alpha .\end{aligned}\)
Combining (REF ) and (REF ) gives:
\(\liminf _{\ell \rightarrow 0}\int _{0}^{T} \left\langle B_\kappa (\Pi _\ell \mathbf {u}_{\kappa ,\ell }), \Pi _\ell \mathbf {u}_{\kappa ,\ell } \right\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )}\le \int _{0}^{T} \langle g_\kappa (t), v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \psi (t) \, \mathrm {d}t,\)
Let \(w\in L^p(0,T;W_0^{1,p}(\Omega ))\) . An application of the strict monotonicity of the operator \(B_\kappa \) defined in (REF ) and (REF ) gives:
\(\begin{aligned}&\int _{0}^{T} \langle g_\kappa B_\kappa w, u_\kappa w\rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t\\&\ge \liminf _{\ell \rightarrow 0} \int _{0}^{T} \langle B_\kappa (\Pi _\ell \mathbf {u}_{\kappa ,\ell })B_\kappa w, \Pi _\ell \mathbf {u}_{\kappa ,\ell }w \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t \ge 0.\end{aligned}\)
Let \(\lambda >0\) and specialize \(w=u_\kappa \lambda v\) in (REF ), where \(v\) is arbitrarily chosen in \(L^p(0,T;W_0^{1,p}(\Omega ))\) . We obtain that:
\(\int _{0}^{T} \langle g_\kappa B_\kappa (u_\kappa \lambda v)), \lambda v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t \ge 0.\)
Dividing (REF ) by \(\lambda >0\) and letting \(\lambda \rightarrow 0^+\) gives:
\(\int _{0}^{T} \langle g_\kappa B_\kappa u_\kappa ), v \rangle _{W^{1,p^{\prime }}(\Omega ), W_0^{1,p}(\Omega )} \, \mathrm {d}t \ge 0.\)
By the arbitrariness of \(v\in L^p(0,T;W_0^{1,p}(\Omega ))\) , we obtain that:
\(g_\kappa = B_\kappa u_\kappa \in L^\infty (0,T;W^{1,p^{\prime }}(\Omega )),\)
which thus implies that \(u_\kappa \) is a solution of Problem . This completes the proof.
 [3]  [
[
3212,
3215
],
[
3827,
3830
],
[
5988,
5991
],
[
6942,
6945
],
[
9706,
9709
],
[
10216,
10219
],
[
12105,
12108
],
[
13311,
13314
],
[
20094,
20097
]
]  https://openalex.org/W333643410 
670efa81f56d4569bc01e24359bc1ab6  By the BanachAlaogluBourbaki theorem (cf., e.g., Theorem 3.6 of [1]}) we infer that, up to passing to a subsequence still denoted by \(\lbrace u_\kappa \rbrace _{\kappa >0}\) , the following convergences hold:
\(\begin{aligned}u_\kappa \overset{\ast }{\rightharpoonup }u, &\textup { in } L^\infty (0,T;W_0^{1,p}(\Omega )),\\u_\kappa ^\frac{\alpha 2}{2} u_\kappa \overset{\ast }{\rightharpoonup }v, &\textup { in } L^\infty (0,T;L^2(\Omega )),\\\dfrac{\, \mathrm {d}}{\, \mathrm {d}t}\left(u_\kappa ^\frac{\alpha 2}{2} u_\kappa \right) \rightharpoonup \dfrac{\, \mathrm {d}v}{\, \mathrm {d}t}, &\textup { in } L^2(0,T;L^2(\Omega )),\\u_\kappa ^{\alpha 2} u_\kappa \overset{\ast }{\rightharpoonup }w, &\textup { in } L^\infty (0,T;L^{\alpha ^{\prime }}(\Omega )).\end{aligned}\)
 [1]  [
[
66,
69
]
]  https://openalex.org/W1545761024 
84460a30bae54eb191d6d0420fda9471  As in [1]} (see also [2]}, [3]}, [4]}, [5]}, [6]}, [7]}),
players optimize their expected terminal utility but are, also, concerned
with the performance of their peers. For an arbitrary but fixed
policy \(( \pi _{1}, \ldots , \pi _{i1}, \pi _{i+1}, \ldots ,\pi _{N})\) ,
player \(i\) , \(i\in \mathcal {I}\) , seeks to optimize
\(V^{i}\left( x_{1}, \ldots , x_{i}, \ldots , x_{N}\right) =\sup _{\pi ^{i}\in \mathcal {A}}E_{\mathbb {P}}\left[ \left. \exp \left( \frac{1}{\delta _{i}}\left(X_{T}^{i}c_{i}C_{T}\right) \right) \rightX_{0}^{1}=x_{1}, \ldots , X_{0}^{i}=x_{i}, \ldots , X_{0}^{N}=x_{N}\right],\)
 [1]  [
[
6,
9
]
]  https://openalex.org/W2606713240 
dbf38d6b71924df49b05c8b4d55b256e  To the best of our knowledge, NuClick [1]} is the only interactive segmentation approach for extracting objects in histology images in the literature that deals with these challenges by introducing the use of squiggle based guiding signals. In the original NuClick [1]}, a random point inside the GT mask and morphological skeleton of the GT mask was used for nucleus and gland segmentation tasks, respectively.
 [1]  [
[
38,
41
],
[
265,
268
]
]  https://openalex.org/W3040784645 
b4fd04b46d9f46a89f14156f1e328735  Our implementation of the abovementioned techniques allows us to incorporate a combination of them for automatic guiding signals generation (both inclusion and exclusion maps) during the training phase. In particular, we apply this ordered sequence of mask approximating, smoothing, partitioning, and distance transform thresholding techniques with probabilities of 0.75, 0.75, 0.5, and 0.5, respectively, and after all, we generate the morphological skeleton [1]} of the modified mask as the guiding signal. The combination of these mask modification techniques will guarantee the generation of unique minimalistic guiding signals in each epoch, as illustrated in fig:signal(f)(j). It is important to note that although a copy of the original mask is modified to generate the guiding signal, the original mask is used for network training as the expected output.
 [1]  [
[
461,
464
]
]  https://openalex.org/W2048733914 
8dcd2db8e6404c37a67852b1555748b2  Similar to [1]}, we first introduce a baseline model architecture and then scale its width (number of channels or feature maps in constructing blocks) and depth (number of block repetition in each stage of network) uniformly using \(w\) and \(d\) scaling factors, respectively. These factors are calculated using a compound scaling method and are directly adopted from [1]}. Note that unlike [1]} we did not scale the network for resolution because the concept of resolution in digital pathology depends on the optical magnification and changing image size or FOV will affect the problem dramatically.
 [1]  [
[
11,
14
],
[
371,
374
],
[
394,
397
]
]  https://openalex.org/W2955425717 
0019d145e49044e6b54cfe3a380700da  Following [1]}, images are stain normalized using Reinhard's method [2]}. The original images are captured at 0.25 micron per pixel (MPP) resolution (which is equal to 40x magnification) with various scanners. However, to keep enough context during the training of our interactive segmentation model, we
extract \(512\times 512\) patches from image regions and their corresponding masks at 10x magnification (1 MPP resolution). Having guiding signals in the input of the interactive segmentation model, we can use lower resolution images to speed up the region marking and processing time. However, extracted patches were confirmed by a pathologist to show enough contextual and detailed information for tissue region annotation.
 [1]  [
[
10,
13
]
]  https://openalex.org/W2922239620 
6171119fe73f47618670c809e0c1e367  Results in tab:results suggest that interactive segmentation models like NuClick [1]} and the proposed method can outperform SOTA automatic segmentation models like UNet [2]}, DeepLab v3 [3]}, and the baseline method [4]} by a large margin as they are provided with guiding signals in the input. Particularly, our best performing model, EfficientUnetB3, achieves overall Dice, accuracy, and AUC of 0.875, 0.984, 0.995, respectively. In comparison to the SOTA automatic segmentation models, our proposed approach performs about 14% and 11% better than UNet [2]} and DeepLab v3 [3]} in terms of overall Dice score, respectively. The same trend can be seen not only for overall accuracy and AUC metrics but also for all the metrics reported for different tissue types in tab:results. Note that the lower performance for some region types in comparison to the overall (average) performance can be associated with the higher noise in GT annotations of those regions. Higher noise in GT arises from ambiguity in the boundaries of these regions which makes it hard to separate them from other regions as reported in [4]}. Although the original NuClick [1]} performs better than all other automatic segmentation models (overall Dice score 0.773), it still shows lower performance metrics than the proposed method i.e., EfficientUnetB3 segmenting 10% better in terms of Dice score. Nevertheless, it can be seen that when we train the NuClick model with the proposed minimalistic signals (sec:skeleton), overall Dice scores rises to 0.835, which shows the effectiveness of the proposed minimalistic guiding signal generation.
 [2]  [
[
170,
173
],
[
558,
561
]
]  https://openalex.org/W1901129140 
f4a440c539bc49ff9c494869c4c85cbc  The storage of data and the hosting of numerous users present significant security vulnerabilities in the context of the cloud. The user data is now protected in the cloud by powerful technologies [1]}, [2]}, [3]}. In the cloud computing environment, this is becoming more complicated because to the increased security threats associated with the traffic transmitted through nodes. For instance, a hacker might introduce malicious software, which in turn could exploit a flaw and harm or lower the network's quality of service [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}. In reality, one hacked cloud user can operate as a possible entry point for a maninthemiddle attack (MITM), disrupt all connected users, leak data, utilise the service excessively, and harm the client's confidential data [18]}, [19]}, [20]}, [21]}, [1]}, [23]}, [24]}, , [25]}, [26]}, [27]}, [28]}, [29]}, [30]}, [31]}, [32]}.
Therefore, the main problem of cloudbased IoT network is the design and construction of a powerful system that can effectively guarantee security and privacy without compromising and losing performance. Since lives are at risk, defence against these attacks is crucial. Controlling user access and keeping an eye on the entire system may be the most efficient way to solve such issues. To enable highperformance computing, the security and privacy procedures need to be examined with efficient resource utilization.
 [21]  [
[
864,
868
]
]  https://openalex.org/W2900659926 
cb7bd5a4de47476693e900ddf62d5666  The choice of covariance functions is one of fundamental importance. Depending on the problem being solved, there are numerous covariance functions (also referred to as kernels in the literature) available to use. Examples include squaredexponential, Matérn, \(\gamma \) exponential, rational quadratic, and the Bayesian linear covariance functions [1]}. These functions can be combined using basic algebra (addition, multiplication, etc.) to form advanced covariance functions. In the work presented in this article, only the squaredexponential and Bayesian linear covariance functions are used, and hence the discussion will be focussed on these.
 [1]  [
[
351,
354
]
]  https://openalex.org/W1502922572 
196303eca0cc4e299917a22755202dbf  For such a pair, a presentation of \(\pi _1(\partial U)\) is given in [1]} (See also [2]}). As we shall need the notations, let us describe it.
 [1]  [
[
71,
74
]
]  https://openalex.org/W2054990988 
062bdb8a64e1437fbc83c90ee43f026d  Previous works have used the robustness of an STL formula, or the signed distance of a given trajectory from satisfying or violating a given formula as rewards to guide the RL algorithm [1]}, [2]}.
Here, we only provide an example of learning optimal policy from a given STL formula using those existing techniques.
We use Deep QLearning [3]} because of scalability to environments with a large number of states.
In our grid world environment (Fig. REF ), there are more than 8 billion states.Each state is a tuple of 16 elements consist of robot and each of the items' (door key, green and purple cube) positions, state of the lamp and fire (on or off), and state of the door (open or close).
The algorithm takes the STL specification of the task \(\varphi _{task}\) , the goal state and the hyperparameters (i.e., \(M\) , \(C\) , \(\gamma \) , and etc.) as input, and generates the optimal policy that respects \(\varphi _{task}\) as output.
The main RL loop runs for \(episodes\) .
In each episode, first, the state is set to initial state and the partial trajectory is set to \(\emptyset \) .
While the robot has not reached the final state or maximum number of states is not reached, the robot explores the grid environment and the reward is computed as robustness of the partial trajectory with respect to \(\varphi _{task}\) .
The robot experiences are recorded in replay memory to be used later for training the \(Q\) network.
Whenever the replay buffer size exceeds \(M\) , we start training the \(Q\) network using the bellman equation.
We update the weights of target actionvalue function \(\hat{Q}\) with the weights of \(Q\) in every \(C\) episodes.
For our running example, with \(\varphi _{task} = \mathbf {F}_{[0,15]}(\textit {lampOn} \,\wedge \,\mathbf {F}_{[0,10]}(\textit {itemOnRobot(purpleCube)}))\) , the reward converges in less than 15000 episodes, and the learned policy is illustrated in Fig. REF .
<FIGURE>  [3]  [
[
339,
342
]
]  https://openalex.org/W2145339207 
5b971dee430d4042b16f19a4cfdcf9b6  Prior work has used STL for reinforcement learning applications.
Quantitative semantics of STL can be used as reward functions for evaluating robotic behaviors [1]}.
STL formulas can be used to rank the quality of demonstrations in robotic domain and also computing the reward for RL problems [2]}.
However, those works put the burden of specifying the correct STL formulas on users, and can require \(3x\) more demose than DialogueSTL despite using a similar environment [2]}.
 [1]  [
[
160,
163
]
]  https://openalex.org/W3004091789 
6e48e9e052e84cb8aca00790560cd7e9  Following different constraints discussed in the Ref. [1]}, the limits on \(U(1)_{e\mu }\) ,\(U(1)_{e\tau }\) and \(U(1)_{\mu \tau }\) models are presented here.
The major constraints on these models come from various beam dump experiments [2]}, [3]}, [4]}. In the electron beam dump experiments like E137, E141 (SLAC), E774 (Fermilab) etc where electron beam falls on detector material and the dielectric state final state cross section is measured.
The electron production through light \(Z^{\prime }\) decay is possible in the models \(U(1)_{e \tau }, U(1)_{e \mu } \) where direct \(Z^{\prime }\) couplings to the electron are present.
For models like \(U(1)_{\mu  \tau }\) where the light boson couples to the electron only through loop effects, the constraints from the electron beam dump experiments become less stringent.
For the leptophilic models like these, due to absence of direct quark interaction, cannot be constrained by the proton beam dump experiments.
Borexino[5]} and TEXONO[6]} experiments measure the cross sections of the processes where neutrinos scatter off the electron i.e. the \(\nu _{\alpha }  e\) process.
These processes will be significantly modified where the light \(Z^{\prime }\) couples to the electron along with different neutrinos, while for the \(U(1)_{\mu \tau }\) ,
these interaction only happen through a \(ZZ^{\prime }\) mixing, and therefore constraints are less stringent.
In the neutrino trident [7]} production process like \(\nu _{\mu } Z \rightarrow \nu _{\mu } \mu ^{+} \mu ^{} \) which is measured in the neutrino experiments like CCFR, CharmII [8]}, nuTEV etc can provide not so suppressed contributions through the light \(Z^{\prime }\) for the U(1) models having direct \(\mu \) couplings i.e. \(U(1)_{\mu  \tau }, U(1)_{\mu  e}\) , while the constraint will be way weaker for \(U(1)_{e  \tau }\) . Presence of new leptonic forces [9]} can contribute to matter effects for neutrino oscillations. Due to this effect SuperK provides additional constrains for \(U(1)_{e\mu }\) ,\(U(1)_{e\tau }\) , while \(U(1)_{\mu \tau }\) remains insensitive. COHERENT experiment currently only has preliminary CE\(\nu \) NS measurement which does not put stringent constraints. In addition to this, for an ultra light \(Z^{\prime }\) (m\(_{Z^{\prime }} \le 1 \) eV), constraints derived from astrophysical observations and meson decays have been studied in Ref. [10]}.
 [1]  [
[
54,
57
]
]  https://openalex.org/W3105619082 
e8035264a2ab4742ab83ab0dd8f159ff  It is worthwhile to note that future projection of the exclusion plots from SuperCDMS HV [1]} and XENONnT [2]} experiments have an overlap with the modified neutrino floor in the \(U(1)_{\mu  \tau }\) model. The enhancement in the neutrino floor will enable to observe neutrino signal events in these detectors, even in the absence of any DM signal. These events due to the overlap could have been erroneously attributed to DMnucleon scattering, which are CE\(\nu \) NS events in reality. Any future signal in that range should be probed with more vigor and from alternative experiments to ascertain the presence of DM. If DM is not present, then the signal can lead to observable BSM effects in neutrino sector, which inadvertently shows up in the DM experiments.
 [2]  [
[
106,
109
]
]  https://openalex.org/W3105201648 
84b024ea13c1469f868d79361d4bc1af  3D Point Cloud Understanding.
There are mainly two streams of research lines for point cloud modeling. One is projecting a point cloud into 3D voxels [1]}, [2]} and then using 2D/3D convolutions for feature extraction. PointNet [3]} explores ingesting 3D point clouds directly. It extracts permutationinvariant feature from the point cloud that significantly impacts pointbased 3D networks. PointNet++ [4]} proposes a hierarchical neural network that extracts local features with increasing contextual scales. Recently, PointMLP [5]} proposes a pure residual MLP network and achieves competitive results without integrating sophisticated local geometrical extractors.
Moreover, selfsupervised learning for 3D point clouds has also shown promising performance in 3D understanding field. PointBERT [6]} adopts mask language modeling from BERT [7]} to the 3D field, where it tokenizes 3D patches using an external model, randomly masks out 3D tokens, and predicts them back during pretraining. A more recent work, PointMAE [8]}, directly operates the point cloud by masking out 3D patches and predicting them back using L2 loss. Our method is orthogonal to the above 3D encoders. Their performance on 3D recognition can be potentially improved by ULIP with no/minor modification.
 [6]  [
[
799,
802
]
]  https://openalex.org/W3217247671 
2b67d9bc8cd74fc8acf056e9c65e752b  3D Point Cloud Understanding.
There are mainly two streams of research lines for point cloud modeling. One is projecting a point cloud into 3D voxels [1]}, [2]} and then using 2D/3D convolutions for feature extraction. PointNet [3]} explores ingesting 3D point clouds directly. It extracts permutationinvariant feature from the point cloud that significantly impacts pointbased 3D networks. PointNet++ [4]} proposes a hierarchical neural network that extracts local features with increasing contextual scales. Recently, PointMLP [5]} proposes a pure residual MLP network and achieves competitive results without integrating sophisticated local geometrical extractors.
Moreover, selfsupervised learning for 3D point clouds has also shown promising performance in 3D understanding field. PointBERT [6]} adopts mask language modeling from BERT [7]} to the 3D field, where it tokenizes 3D patches using an external model, randomly masks out 3D tokens, and predicts them back during pretraining. A more recent work, PointMAE [8]}, directly operates the point cloud by masking out 3D patches and predicting them back using L2 loss. Our method is orthogonal to the above 3D encoders. Their performance on 3D recognition can be potentially improved by ULIP with no/minor modification.
 [7]  [
[
844,
847
]
]  https://openalex.org/W2896457183 
77d3d23dfdab40a4ab987d8b27e9a9e8  We build our dataset of triplets from ShapeNet55 [1]}, which is one of the most extensive public 3D CAD datasets.
ShapeNet55 is the publiclyavailable subset of ShapeNet.
It contains around 52.5K CAD models, each of which is associated with metadata that textually describes the semantic information of the CAD model.
For each CAD model \(i\) in the dataset, we create a triplet \(T_i:(I_i, S_i, P_i)\) of image \(I_i\) , text description \(S_i\) and point cloud \(P_i\) . ULIP will then use these triplets for pretraining.
 [1]  [
[
49,
52
]
]  https://openalex.org/W2190691619 
16291495909a45b799fbf199f086f25f  PointNet++ [1]} is an advanced version of PointNet [2]}. It uses a hierarchical structure to better capture the local geometry of the point cloud, and becomes the cornerstone of many point cloud applications.
 [1]  [
[
11,
14
]
]  https://openalex.org/W2963121255 
f9ac80b064bb4a8ba14301abf2227869  PointMLP [1]} is the SOTA method on standard 3D classification task. It uses a residual MLP network with a lightweight geometric affine module to better capture local geometric features.
<TABLE>  [1]  [
[
9,
12
]
]  https://openalex.org/W4221160819 
f85eff61d7fe43e186795a0206b8937e  ScanObjectNN is a dataset of scanned 3D objects from the real world.
It contains 2,902 objects that are categorized into 15 categories. It has three variants: OBJ_ONLY includes ground truth segmented objects extracted from the scene meshes datasets; OBJ_BJ has objects attached with background noises and Hardest introduces perturbations such as translation, rotation, and scaling to the dataset[1]}.We used the variants provided by [2]} in our experiments.
 [1]  [
[
395,
398
]
]  https://openalex.org/W2981440248 
237c1668f56d48c2b004492763a5c697  Following [1]}, zeroshot 3D classification is conducted by measuring distances between the 3D features of an object and the text features of category candidates. The category that introduces the smallest distance is selected as the predicted category, as shown in Figure REF . We use our pretrained models as they are when performing zeroshot classification. There is no finetuning stage involved. We keep using the same prompt strategy as it is during pretraining when constructing text features for each category candidate in this task.
 [1]  [
[
10,
13
]
]  https://openalex.org/W4308538276 
348b4c0cbc614e029a7a7f948562e268  We qualitatively compared our approach with related works on the quality of the interpolated frames. As can be seen from Fig. REF , our approach is relatively robust to heavily blurred inputs and interpolates visually sharper images with clearer contents compared to other related methods [1]}, [2]}, [3]}, [4]}.
 [2]  [
[
295,
298
]
]  https://openalex.org/W2949258649 
3121d4ffdd0648ee86ec5d8bcd57b8ab  Our paper is motivated by the recent work of [1]}, who made notable progress on the task of learning a DPP kernel from data. This task is conjectured to be NPHard [2]}. [1]} presented a carefully designed EMstyle procedure, which, unlike several previous approaches (e.g., [4]}, [5]}, [6]}) learns a full DPP kernel nonparameterically.
 [6]  [
[
287,
290
]
]  https://openalex.org/W2890912593 
ea068b56dc474b919cdbcb6c7c7319e6  In the limit where cargo can only exhibit small displacements (\(x_J = xx^{\prime } \ll h\) ) in a time \(dt\) , such that \(q_m(x_Jx^{\prime },n)\) decays quickly as a function of \(x_J\) , eq.(REF ) can be simplified to include only the \(l=1,0,+1\) terms of the infinite sum. Provided that the boundaries to the periodic domain at \(x=\pm h\) are far from any fluctuations in the \(S_q(x_Jx)\) distribution away from zero, and that the characteristic unbinding timescales of cargo are much smaller than the average time it would take for them to cross the domain, the solution to a simplified FokkerPlanck equation in the limit \(h \rightarrow \infty \) including only the \(l=0\) term of eq.(REF ) will be a good approximation to the solution of the complete equation. This is equivalent to neglecting the periodicity of the system. In this case, a KramersMoyal expansion can be used to simplify the third term of eq.(REF ) by defining \(x^{\prime }=xx_J\) , and assuming that the displacements \(x_J\) due to binding or unbinding events are small [1]}. This results in the recognisable FokkerPlanck equation [2]},
\(\frac{\partial P(x,t)}{\partial t} = \frac{\partial }{\partial x} \left[ D_{eff}(x) \frac{\partial P(x,t)}{\partial x} \right]  \frac{\partial }{\partial x} \left[ v_{eff}(x) P(x,t) \right],\)
 [1]  [
[
1066,
1069
]
]  https://openalex.org/W4240788980 
b7fd7e6e3ea44d9fa40300e3532647b9  The probability distribution \(P_n(x)\) has been previously derived for cargo that can rebind from the \(n=0\) state [1]}, [2]}. Using these previously published formulae [1]}, [2]}, \(P_n(x)\) has been defined in this work by the distributions,
\(\begin{aligned}P_n(x) & = \left( \frac{P_0(x)}{1P_0(x)} \right) \prod \limits _{i=0}^{n1} \left( \frac{\bar{k}_1(x,n)}{\bar{k}_2(x,n+1)} \right), \\P_0(x) & = \left( 1 + \sum \limits _{n=0}^{N1} \prod \limits _{i=0}^n \left( \frac{\bar{k}_1(x,n)}{\bar{k}_2(x,n+1)} \right) \right)^{1}.\end{aligned}\)
 [1]  [
[
119,
122
],
[
173,
176
]
]  https://openalex.org/W2102787760 
51f9ab675162433b960861b7cf672b41  The simulations were implemented in MATLAB using an adapted form of the Gillespie algorithm [1]}, [2]} dubbed the `directfamily' method. Following cargo initialisation in the \(n=1\) state at time \(t_0=0\) , this form of the Gillespie algorithm has been implemented as follows:
 [1]  [
[
92,
95
]
]  https://openalex.org/W2042321087 
820f7e78f9fc4cdf9c7eff5dc0005c03  The simulations were implemented in MATLAB using an adapted form of the Gillespie algorithm [1]}, [2]} dubbed the `directfamily' method. Following cargo initialisation in the \(n=1\) state at time \(t_0=0\) , this form of the Gillespie algorithm has been implemented as follows:
 [2]  [
[
98,
101
]
]  https://openalex.org/W2167154952 
f054a4db1c824f229632b47fc1a8ab53  Methods based on semantic features
Current methods of comparing generative models based on their samples rely on the semantic features of the samples.
Fréchet Inception Distance (FID) [1]} approximates the Wasserstein metric between distributions using the features of images extracted from a pretrained network such as the Inception v3 [2]}.
FID makes an assumption that the underlying distributions are unimodal Gaussians, and uses the estimated mean and covariance matrices of the semantic features.
Despite its wide use in benchmarking generative models, FID is prone to inaccurate comparisons due to its biased nature with large variance and its Gaussianity assumptions.
[3]}, [4]}, [5]}, [6]}.
An alternative metric that is shown to be unbiased with smaller variance is the KernelInception Distance (KID) [7]}.
KID computes a polynomial kernel \(k(x,y) = (\frac{1}{d} x^{T} y+ 1)^3\) and measures the associated Kernel Maximum Mean Discrepancy (kernel MMD) between samples from the two distributions under comparison.
It is motivated from the Kernelbased twosample tests [8]} and thus is suitable for testing which of the two models is closer to the true data distribution [9]}, [10]}.
Since KID replies on a kernelized feature representation of the samples in infinitely many dimensional space, it is hard to interpret the features unlike our fingerprints in finite dimensions.
 [7]  [
[
813,
816
]
]  https://openalex.org/W2962919088 
074caa598a0346d8aeea778915a4559e  In Section 2, we intend to study the finiteness of Gorenstein cohomological dimension of groups. Recall that a ring is Gorenstein regular [1]}, [2]} if it has finite global Gorenstein projective dimension, which contains strictly the rings of finite global dimension (e.g. \(\mathbb {Z}\) ), as well as IwanagaGorenstein rings (e.g. \(\mathbb {Z}/4\mathbb {Z}\) and \(k[x]/(x^2)\) ). Let \(G\) be a group, \(R\) be a Gorenstein regular ring.
We show in Lemma REF that if \({\rm Gcd}_{R}G\) is finite, then there exists an \(R\) split \(RG\) exact sequence \(0\rightarrow R\rightarrow \Lambda \) , where \(\Lambda \) is an \(R\) projective \(RG\) module with \({\rm pd}_{RG}\Lambda = {\rm Gcd}_RG\) ; if \(R\) is commutative, then the converse holds by Lemma REF . Moreover, a characterization of finiteness of \({\rm Gcd}_RG\) is given (see Theorem REF ), which generalizes the results in [3]} and [4]} for coefficient rings of finite global dimension to Gorenstein regular rings. Also, we rediscover [5]} by letting \(R = \mathbb {Z}\) .
 [5]  [
[
1014,
1017
]
]  https://openalex.org/W2066014515 
edb86df8f2794e2c939173bb2a58522a  The “Gcd” can be considered as an assignment of invariants for the pairs of groups and coefficient rings \((G, R)\) . In Section 3 and 4, we will study the assignment Gcd under changes of groups and coefficient rings, respectively. We define an order for such pairs; see Definition REF . Using Lemma REF and REF , we show in Proposition REF that if \(R\) is a commutative Gorenstein regular ring and \((H, R)\le (G, R)\) , then \(\mathrm {Gcd}_{R}H\le \mathrm {Gcd}_{R}G\) ; by specifically taking \(R = \mathbb {Z}\) we reobtain [1]}. If \(R\) is commutative Gorenstein regular and \((G, S) \le (G, R)\) , then \(\mathrm {Gcd}_{S}G\le \mathrm {Gcd}_{R}G\) ; see Proposition REF . We apply this to recover [2]} and [3]}, that is, \(\mathrm {Gcd}_{R}G\le \mathrm {Gcd}_{\mathbb {Z}}G\) for any commutative ring \(R\) , and particularly \(\mathrm {Gcd}_{\mathbb {Q}}G \le \mathrm {Gcd}_{\mathbb {Z}}G\) . Consequently, “Gcd” preserves the order of pairs of groups and commutative Gorenstein regular rings, that is, \(\mathrm {Gcd}_{S}H\le \mathrm {Gcd}_{R}G\) provided that \((H, S) \le (G, R)\) ; see Corollary REF .
 [2]  [
[
711,
714
]
]  https://openalex.org/W2791610753 
6ff720e5f7854ca1a6c478e4bf076caa  Let \({\rm Gcd}_{R}G = {\rm Gpd}_{RG}R = n\) . It follows from [1]} that there exists an exact sequence \(0\rightarrow K\rightarrow M\rightarrow R\rightarrow 0\) , where \(M\) is a Gorenstein projective \(RG\) module, and \({\rm pd}_{RG}K = n1\) . For \(M\) , there is an exact sequence of \(RG\) modules
\(0\rightarrow M\rightarrow P\rightarrow L\rightarrow 0\) , where \(L\) is Gorenstein projective and \(P\) is projective. We consider the following pushout of \(M\rightarrow R\) and \(M\rightarrow P\) :
\(@C=20pt@R=20pt{ & & 0[d] & 0[d] \\0 [r] &K @{=}[d] [r] & M [d][r] &R [d][r] &0 \\0 [r] &K [r] & P [r] [d] &\Lambda [r][d] & 0\\& & L [d] @{=}[r] & L[d]\\& & 0 & 0}\)
From the middle row we infer that \({\rm pd}_{RG}\Lambda = {\rm pd}_{RG}K + 1 = n\) . It follows from Lemma REF that \(L\) is also a Gorenstein projective \(R\) module, and then the sequence
\(0\rightarrow R\rightarrow \Lambda \rightarrow L\rightarrow 0\) is \(R\) split. Moreover, as an \(R\) module, \(\Lambda \cong L\oplus R\) is Gorenstein projective. By [2]}, which says that projective dimension of any Gorenstein projective module is either zero or infinity, we imply from \({\rm pd}_{R}\Lambda \le {\rm pd}_{RG}\Lambda = n\) that \(\Lambda \) is a projective \(R\) module. This completes the proof.
 [2]  [
[
1050,
1053
]
]  https://openalex.org/W2048512304 
2419d7fce47d4b2ca74901c4cf7bfa67  By Serre's Theorem, there is an equality between cohomological dimensions of a group and subgroups with finite index; see details in [1]} or [2]}. In this sense, the following result might be regarded as a Gorenstein version of Serre's Theorem. We remark that by specifying the ring to be \(\mathbb {Z}\) , the result recovers [3]}; while our proof is straightforward, and is quite different from that of [3]}. Note that the following equality was also proved in [5]} under the addition assumptions that the coefficient ring is of finite weak global dimension and \(H\) is a normal submodule of \(G\) .
 [1]  [
[
133,
136
]
]  https://openalex.org/W2009766514 
5cef7a8f4e5b40ca83d490d09e2054b7  By Serre's Theorem, there is an equality between cohomological dimensions of a group and subgroups with finite index; see details in [1]} or [2]}. In this sense, the following result might be regarded as a Gorenstein version of Serre's Theorem. We remark that by specifying the ring to be \(\mathbb {Z}\) , the result recovers [3]}; while our proof is straightforward, and is quite different from that of [3]}. Note that the following equality was also proved in [5]} under the addition assumptions that the coefficient ring is of finite weak global dimension and \(H\) is a normal submodule of \(G\) .
 [2]  [
[
141,
144
]
]  https://openalex.org/W2049550828 
2dfd48716d2747d69dadacfc0e717baa  The following characterization for Gorenstein projective modules is immediate from [1]}. For any ring \(A\) , we denote by \(\mathcal {P}(A)\) the class of all projective \(A\) module. The left orthogonal of \(\mathcal {P}(A)\) is defined as
\(^{\perp }\mathcal {P}(A) = \lbrace M\in {\rm Mod}(A)~~~~~~ {\rm Ext}^i_A(M, P) = 0, \text{ for any } P\in \mathcal {P}(A) \text{ and } i\ge 1 \rbrace .\)
 [1]  [
[
83,
86
]
]  https://openalex.org/W2114145385 
b7d71d9d01f643a38f2508f1549d2698  It is clear that \(\mathcal {P}\subseteq \mathcal {C}of\cap \mathcal {W}\) , that is, all projective \(RG\) modules are included in \(\mathcal {C}of\cap \mathcal {W}\) . We infer that \(\mathcal {C}of\cap \mathcal {W}\subseteq \mathcal {P}\) since any cofibrant module is Gorenstein projective, and projective dimension of any Gorenstein projective is either zero or infinity. Hence, \(\mathcal {C}of\cap \mathcal {W} = \mathcal {P}\) .
For any \(P\in \mathcal {P}\) and any \(RG\) module \(M\) , it is clear that \({\rm Ext}^{\ge 1}_{RG}(P, M) = 0\) , and furthermore, we have \(\mathcal {P}^{\perp } = \mathcal {F}ib\) and \(\mathcal {P} \subseteq {^{\perp }\mathcal {F}ib}\) by noting that “\(\perp \) ” is only calculated inside \(\mathcal {F}ib\) . Let \(M\in {^{\perp }\mathcal {F}ib}\) . There is an exact sequence \(0\rightarrow K\rightarrow P\rightarrow M\rightarrow 0\) in \(\mathcal {F}ib\) , where \(P\) is a projective \(RG\) module. Noting that
\({\rm Ext}^1_{RG}(M, K) = 0\) , we deduce that the sequence is split, and hence, as a direct summand of \(P\) , \(M\) is projective. This implies the inclusion \({^{\perp }\mathcal {F}ib}\subseteq \mathcal {P}\) , and consequently, we obtain a cotorsion pair \((\mathcal {C}of \cap \mathcal {W}, \mathcal {F}ib) = (\mathcal {P}, \mathcal {F}ib)\) . The completeness of this cotorsion pair is easy to see.
Next, we show that \((\mathcal {C}of, \mathcal {W}\cap \mathcal {F}ib) = (\mathcal {C}of, \mathcal {W})\) is a cotorsion pair. Since every cofibrant module is Gorenstein projective, \(\mathcal {C}of \subseteq {^{\perp }\mathcal {W}}\) and \(\mathcal {W}\subseteq \mathcal {C}of^{\perp }\) hold immediately. For any \(M\in {^{\perp }\mathcal {W}}\) , we have \(M\in \mathcal {C}of\) by Lemma REF since we only consider objects in \(\mathcal {F}ib\) . Hence, \({^{\perp }\mathcal {W}} \subseteq \mathcal {C}of\) , and then
\(\mathcal {C}of = {^{\perp }\mathcal {W}}\) . Let \(M\) be any object in \(\mathcal {C}of^{\perp }\) . Since we also have \(M\in \mathcal {F}ib\) , it follows from
Proposition REF that \({\rm Gpd}_{RG}M \le {\rm pd}_{RG}M\otimes _{R}B(G, R)\) is finite. Assume \({\rm Gpd}_{RG}M = n\) . By an argument analogous to that of Lemma REF , we have an exact sequence \(0\rightarrow M\rightarrow N\rightarrow L\rightarrow 0\) from a pushout diagram, where \(L\) is Gorenstein projective and
\({\rm pd}_{RG}N = n\) . Then, we infer from Lemma REF that \(L\) is cofibrant by noting \(L\in \mathcal {F}ib\) . Hence, \({\rm Ext}_{RG}^1(L, M) = 0\) for \(M\in \mathcal {C}of^{\perp }\) . Then, the above sequence is split, and \({\rm pd}_{RG}M\le {\rm pd}_{RG}N = n\) . This implies that \(\mathcal {C}of^{\perp } \subseteq \mathcal {W}\) , and finally, \((\mathcal {C}of, \mathcal {W})\) is a cotorsion pair.
For any \(M\in \mathcal {F}ib\) , we have an exact sequence \(0\rightarrow M\rightarrow N\rightarrow L\rightarrow 0\) with \(N\in \mathcal {W}\) and \(L\in \mathcal {C}of\) . Moreover, for \(N\in \mathcal {W}\) , there is an exact sequence \(0\rightarrow K\rightarrow P\rightarrow N\rightarrow 0\) , where \(P\) is projective and \(K\in \mathcal {W}\) . Now we consider the pullback of \(M\rightarrow N\) and \(P\rightarrow N\) , and obtain the following commutative diagram.
\(@C=20pt@R=20pt{& 0[d] & 0[d] \\& K @{=}[r][d] &K[d]\\0 [r] &X [d] [r] & P [d][r] &L @{=}[d][r] &0 \\0 [r] &M [r][d] & N [r] [d] &L [r] & 0\\& 0 & 0}\)
We infer \(X\in \mathcal {C}of\) from the middle row, where \(L\in \mathcal {C}of\) and \(P\in \mathcal {P}\) . Then, by the left column and the lower row, we infer that the cotorsion pair \((\mathcal {C}of, \mathcal {W})\) is complete.
Consequently, by using [1]} we have a model structure on \(\mathcal {F}ib\) as stated above, which is corresponding to the triple of classes of \(RG\) modules \((\mathcal {C}of, \mathcal {W}, \mathcal {F}ib)\) . The triple is usually referred to as a Hovey triple, since such a correspondence was obtained by Hovey in [2]}.
 [1]  [
[
3706,
3709
]
]  https://openalex.org/W2081330825 
53a5b43d8e1d411fab0dcfddfc9841ae  For model category \(\mathcal {F}ib\) , the associated homotopy category \(\mathrm {Ho}(\mathcal {F}ib)\) is obtained by formally inverting weak equivalences, that is, the localization of \(\mathcal {F}ib\) with respect to the class of weak equivalences. This category is equivalent to the category \(\pi \mathcal {C}of\) , whose objects are modules in \(\mathcal {C}of\) which are both cofibrant and fibrant, and where the morphisms are the homotopy classes of maps; see details in [1]}, [2]} or [3]}.
 [1]  [
[
486,
489
]
]  https://openalex.org/W4236256974 
655af8d133174f3cb207e1430e478dc5  First, we note that objects of \({\rm Ho}(\mathcal {F}ib)\) and \({\rm StMod}(RG)\) coincide. It suffices to prove that the natural functor from \({\rm Ho}(\mathcal {F}ib)\) to
\({\rm StMod}(RG)\) is fully faithful.
Let \(M\) and \(N\) be any fibrant \(RG\) modules. By the completeness of the cotorsion pair \((\mathcal {C}of, \mathcal {W})\) , there exists an exact sequence \(0\rightarrow K_M\rightarrow Q(M)\rightarrow M\rightarrow 0\) ,
where \(Q(M)\) is cofibrant and \(K_M\in \mathcal {W}\) . Hence, the cofibrant approximation \(Q(M)\rightarrow M\) is also a trivial fibration, and we refer it (or simply, \(Q(M)\) ) to be a cofibrant replacement of \(M\) . Then, we have the following isomorphisms
\({\rm Hom}_{{\rm Ho}(\mathcal {F}ib)}(M, N)\cong \underline{{\rm Hom}}_{RG}(Q(M), Q(N)) \cong {\rm Hom}_{{\rm StMod}(RG)}(Q(M), Q(N)),\)
where the first one follows by [1]} and Lemma REF , and the second one holds by Lemma REF .
By basic properties of cofibrant modules (see Proposition REF ), for fibrant \(RG\) modules \(M\) and \(N\) , there exists an integer \(r >> 0\) , such that both \(\Omega ^r(M)\) and \(\Omega ^r(N)\) are cofibrant modules, and moreover, the projective dimension of \(K_M\) and \(K_N\) are not more than \(r  1\) . For \(M\) , we have exact sequences
\(0\longrightarrow \Omega ^r(M)\longrightarrow P_{r1}\longrightarrow \cdots \longrightarrow P_1\longrightarrow P_0\longrightarrow M\longrightarrow 0,\)
\(0\longrightarrow P^{\prime }_{r}\longrightarrow P^{\prime }_{r1}\longrightarrow \cdots \longrightarrow P^{\prime }_1\longrightarrow Q(M)\longrightarrow M\longrightarrow 0,\)
where \(P_i\) and \(P^{\prime }_i\) are all projective \(RG\) modules; similarly, we obtain such exact sequences for \(N\) . Moreover, we get the following commutative diagram:
\(@C=20pt@R=20pt{0[r] & \Omega ^r(M)[d]_{\Omega ^r(f)}[r] & P_{r1}\oplus P^{\prime }_{r}[r][d] &\cdots [r] &P_0\oplus P^{\prime }_1 [r][d] & Q(M)[r][d]^{f} & 0 \\0[r] & \Omega ^r(N)[r] & Q_{r1}\oplus Q^{\prime }_{r}[r] &\cdots [r] &Q_0\oplus Q^{\prime }_1 [r] & Q(N)[r] & 0}\)
Analogous to Lemma REF , we can prove that there is an isomorphism
\(\underline{{\rm Hom}}_{RG}(Q(M), Q(N)) \cong \underline{{\rm Hom}}_{RG}(\Omega ^r(M), \Omega ^r(N)).\)
Moreover, it follows from Lemma REF that for all \(j > 0\) , we have isomorphisms
\(\underline{{\rm Hom}}_{RG}(\Omega ^r(M), \Omega ^r(N)) \cong \underline{{\rm Hom}}_{RG}(\Omega ^{r+j}(M), \Omega ^{r+j}(N)),\)
and consequently,
\({\rm Hom}_{{\rm StMod}(RG)}(M, N) = \mathop {\underrightarrow{\mathrm {lim}}}\limits _i\underline{{\rm Hom}}_{RG}(\Omega ^i(M), \Omega ^i(N)) = \underline{{\rm Hom}}_{RG}(\Omega ^r(M), \Omega ^r(N)).\)
Hence, we get the desired isomorphism \({\rm Hom}_{{\rm Ho}(\mathcal {F}ib)}(M, N)\cong {\rm Hom}_{{\rm StMod}(RG)}(M, N)\) . We are done with the proof.
 [1]  [
[
886,
889
]
]  https://openalex.org/W4230387122 
f3c7e68398b74d85b5325b27197a7d0e  In cases where the regression map \(z \mapsto \mu _z(x)\) for any feature \(x\) , can be traced with homotopy as in Ridge [1]} and Lasso [2]}, it takes \(O(n^2)\) to compute the exact conformal set. This can be reduced to \(O(n\log n)\) by sorting the roots of the instancewise scores \(E_i(z)  E_{n+1}(z)\) for \(i\) in \([n]\) and cleverly flattening the double loop when evaluating the ranks of the score functions [3]}. By relaxing the exactness, none of these two steps is needed in our approach. We obtain an asymptotic improvement to \(O(n\log _2(1/\epsilon ))\) and an easier to implement algorithm.
 [3]  [
[
426,
429
]
]  https://openalex.org/W1553101044 
06472e41f0e240ed8dd7e5d6857af3ca  The full conformal prediction set is computationally expensive since it requires knowing exactly the map \(z \mapsto \mu _z(\cdot )\) . The splitting approach does not use all the data in the learning phase but is computationally efficient since it requires a single model fit. Alternatively, it was proposed in [1]} to use an arbitrary discretization and its theoretical analysis in [2]} unfortunately failed to preserve the coverage guarantee. In this section, we argue that grid based strategy with an interpolation point of view, stands as an "inbetween" strategy that exploits full data with a restricted computational time while preserving the coverage guarantee. We propose to compute a conformal prediction set based on an interpolation of the model fit map given a finite number of query points. The main insight is that the underlying model fit plays a minor role in the coverage guarantee; the only requirement is to be symmetric with respect to permutation of the data. As such, the model path \(z \mapsto \hat{\mu }_z(\cdot )\) can be replaced by an interpolated map \(z \mapsto \tilde{\mu }_z(\cdot )\) based on query points \(z_1, \cdots , z_d\) . It reads to a valid prediction set as long as the interpolation preserves the symmetry Otherwise one can always perform a symmetrization using a model parameter \(\tilde{\beta }(z) = \frac{1}{(n+1)!} \sum _{\sigma \in \Sigma _{n+1}} \beta (w_{\sigma (1)}, \cdots , w_{\sigma (n)}, w_{\sigma (n+1)})\) ,
where \(w_i = (x_i, y_i)\) if \(i\) in \([n]\) , \(w_{n+1} = (x_{n+1}, z)\) and \(\Sigma _{n+1}\) is the group of permutation of \([n+1]\) ..
For instance, one can rely on a piecewise linear interpolation
\(\tilde{\mu }_{z} ={\left\lbrace \begin{array}{ll}\frac{z_1  z}{z_1  z_{\min }} \hat{\mu }_{z_{\min }} + \frac{z_{\min }  z}{z_1  z_{\min }} \hat{\mu }_{z_1} &\text{ if } z \le z_{\min } \hspace{5.0pt}, \\\frac{z  z_{t+1}}{z_t  z_{t+1}} \hat{\mu }_{z_t} + \frac{z  z_{t}}{z_{t+1}  z_t} \hat{\mu }_{z_{t+1}} & \text{ if } z \in [z_t, z_{t+1}]\hspace{5.0pt}, \\\frac{z  z_d}{z_{\max }  z_d} \hat{\mu }_{z_{\max }} + \frac{z_{\max }  z}{z_{\max }  z_d} \hat{\mu }_{z_d} &\text{ if } z \ge z_{\max }\hspace{5.0pt},\end{array}\right.}\)
 [1]  [
[
312,
315
]
]  https://openalex.org/W2964060211 
Dataset Card for unarXive citation recommendation
Dataset Summary
The unarXive citation recommendation dataset contains 2.5 Million paragraphs from computer science papers and with an annotated citation marker. The paragraphs and citation information is derived from unarXive.
Note that citation infromation is only given as the OpenAlex ID of the cited paper. An important consideration for models is therefore if the data is used as is, or if additional information of the cited papers (metadata, abstracts, fulltext, etc.) is used.
The dataset can be used as follows.
from datasets import load_dataset
citrec_data = load_dataset('saier/unarXive_citrec')
citrec_data = citrec_data.class_encode_column('label') # assign target label column
citrec_data = citrec_data.remove_columns('_id') # remove sample ID column
Dataset Structure
Data Instances
Each data instance contains the paragraph’s text as well as information on one of the contained citation markers, in the form of a label (cited document OpenAlex ID), citation marker, and citation marker offset. An example is shown below.
{'_id': '7c1464bb1f0f4b38b1a385754eaf6ad1',
'label': 'https://openalex.org/W3115081393',
'marker': '[1]',
'marker_offsets': [[316, 319]],
'text': 'Data: For sentiment analysis on HindiEnglish CM tweets, we used the '
'dataset provided by the organizers of Task 9 at SemEval2020.\n'
'The training dataset consists of 14 thousand tweets.\n'
'Whereas, the validation dataset as well as the test dataset contain '
'3 thousand tweets each.\n'
'The details of the dataset are given in [1]}.\n'
'For this task, we did not use any external dataset.\n'}
Data Splits
The data is split into training, development, and testing data as follows.
 Training: 2,043,192 instances
 Development: 225,084 instances
 Testing: 225,348 instances
Dataset Creation
Source Data
The paragraph texts are extracted from the data set unarXive.
Who are the source language producers?
The paragraphs were written by the authors of the arXiv papers. In file license_info.jsonl
author and text licensing information can be found for all samples, An example is shown below.
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
'license': 'http://creativecommons.org/licenses/by/4.0/',
'paper_arxiv_id': '2011.09852',
'sample_ids': ['cc375518347c43d0bfb2f88564d66df8',
'18dc073ea48e488eb34ce5fc3cb8a4ca',
'0c2e89b3d8634bc29e118f6c48d867cb',
'd85e46cfb11d49b6801b089aa2dd037d',
'92915cea17ab4a98aad2417f6cdd53d2',
'e88cb42247b74f699b0bfbddf8140d98',
'4f5094a40e6e46aea34de15ce0b9803c',
'59003494096f4a7cad65342b74eed561',
'6a99b3f5217e4d3da770693483ef8670']}
Annotations
Citation information in unarXive is automatically determined (see implementation).
Additional Information
Licensing information
The dataset is released under the Creative Commons AttributionShareAlike 4.0.
Citation Information
@inproceedings{Saier2023unarXive,
author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
title = {{unarXive 2022: All arXiv Publications PreProcessed for NLP, Including Structured FullText and Citation Network}},
booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
year = {2023},
series = {JCDL '23}
}
 Downloads last month
 5