_id
stringlengths
36
36
text
stringlengths
5
665k
marker
stringlengths
3
6
marker_offsets
sequence
label
stringlengths
28
32
3e103bb3-3ae6-44c5-8655-cf8687f04fe4
Since the above conclusions with respect to Nijenhuis operators on commutative associative algebras ([1]}) and Lie algebroids ([2]}) simultaneously hold, by Theorem REF , the conclusions follow immediately.
[1]
[ [ 101, 104 ] ]
https://openalex.org/W1995545731
61c3fc17-a553-4e0c-a9b6-657ec01bec91
Example 5.3 ([1]}) Let \(\mathfrak {g}\) be the algebra of polynomials in \(n\) variables. Define \(\cdot :\mathfrak {D}_n\times \mathfrak {D}_n\longrightarrow \mathfrak {D}_n \) and \(\ast :\mathfrak {D}_n\times \mathfrak {D}_n\longrightarrow \mathfrak {D}_n\) by \((p\partial _{u^i})\cdot (q\partial _{u^j})&=&(pq)\delta _{ij}\partial _{u^i},\\(p\partial _{u^i})\ast (q\partial _{u^j})&=&p\partial _{u^i}(q)\partial _{u^j},\quad \forall ~p,q\in \mathfrak {g}.\)
[1]
[ [ 13, 16 ] ]
https://openalex.org/W3023032865
fee7b83d-bf46-4e18-91ad-808f57c6ea98
In this paper, motivated by [1]}, [2]}, [3]}, we aim to study the existence, the local uniqueness and the periodicity of the bubble solutions for equation (REF ), where we mainly want to study the impact of the linear term \(Q(y)u\) in equation (REF ) to them. Here we call the solutions are local unique if two sequences of solutions blow up at the same set. This uniqueness implies certain kind of symmetry. We will prove that the two solutions are the same by obtaining some useful estimates and applying some local Pohozaev identities.
[2]
[ [ 34, 37 ] ]
https://openalex.org/W2592628544
68f343fb-dce3-494d-9d8d-15d0b2bedfd0
We argue by a contradiction argument. Suppose that there exist \(L\rightarrow +\infty \) , and \(\varphi _L\) solving (REF ) for \(h=h_L\) , \(\mu =\mu _L\) with \(\Vert h_L\Vert _{**}\rightarrow 0\) and \(\Vert \varphi _L\Vert _{*}\ge C>0\) . We may assume that \(\Vert \varphi _L\Vert _{*}=1\) . For simplicity, we drop the subscript \(L.\) Since \(Q(y)\) is non-negative, we have \(|\varphi (y)|\le & C\int _{\mathbb {R}^N}\frac{1}{|z-y|^{N-2}}W_{\textbf {x},\mu _{L}}^{2^*-2}|\varphi (z)|dz\cr &+C\int _{\mathbb {R}^N}\frac{1}{|z-y|^{N-2}}(|h|+|\sum \limits _{j=1}^{N+1}\sum \limits _{i=0}^mc_{ij}W_{x_{iL},\mu _{L}}^{2^*-2}Z_{ij}|)dz.\) As in [1]}, using Lemma B.2 and B.3, we can prove \(\int _{\mathbb {R}^N}\frac{1}{|z-y|^{N-2}}W_{\textbf {x},\mu _{L}}^{2^*-2}|\varphi |dz\le C\Vert \varphi \Vert _{*}\sum \limits _{j=0}^m\frac{\mu _{L}^{\frac{N-2}{2}}}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau +\theta }},\) \(\int _{\mathbb {R}^N}\frac{1}{|z-y|^{N-2}}|h(z)|dz\le c\Vert h\Vert _{**}\sum \limits _{j=0}^m\frac{\mu _{L}^{\frac{N-2}{2}}}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau }},\) and \(\int _{\mathbb {R}^N}\frac{1}{|z-y|^{N-2}}|\sum \limits _{j=1}^mW_{x_{jL},\mu _{L}}^{2^*-2}Z_{j,l}|dz\le C \sum \limits _{j=0}^m\frac{\mu _{L}^{\frac{N-2}{2}+n_l}}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau }},\) where \(n_j=1,j=0,1,2,\cdots ,N\) , \(n_{N+1}=-1\) , \(\tau ,\theta \) small enough. To estimate \(c_{ij}\) , \(i=0,1,2,\cdots ,m\) , \(j=1,2,\cdots ,N+1\) , multiplying (REF ) by \(Z_{ij}\) and integrating, we see that \(c_{ij}\) satisfies \(\sum \limits _{j=1}^{N+1}\sum \limits _{i=0}^m\int _{\mathbb {R}^N}c_{ij}W_{x_{iL},\mu _{L}}^{2^*-2}Z^2_{ij}=\langle -\Delta \varphi +Q(y)\varphi -&(2^*-1)W_{\textbf {x},\mu _{L}}^{2^*-2}\varphi ,Z_{ij}\rangle -\langle h, Z_{ij}\rangle .\) It follows from Lemma B.1 that \(|\langle h,Z_{i,j}\rangle |\le C\mu _{L}^{n_j}\Vert h\Vert _{**}.\) By direct computation, we have \(|\langle Q(y)\varphi ,Z_{il}\rangle |\le & C\Vert \varphi \Vert _{*}\int _{\mathbb {R}^N}\frac{\xi \mu _{L}^{\frac{N-2}{2}+n_l}}{(1+\mu _{L}|y-x_{iL}|)^{N-2}}\sum \limits _{j=0}^m\frac{1}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau }}\cr &=\frac{\mu _{L}^{n_l}}{\mu _{L}^{1+\epsilon }}\Vert \varphi \Vert _{*}.\) On the other hand, we have \(|\langle -\Delta \varphi -(2^*-1)W_{\textbf {x},\mu _{L}}^{2^*-2}\varphi ,Z_{i,l}\rangle |=O(\frac{\mu _{L}^{n_l}\Vert \varphi \Vert _{*}}{\mu _{L}^{1+\epsilon }}).\) Combining (REF )-(REF ), we have \(\bigl \langle -\Delta \varphi +Q(y)\varphi -(2^*-1)W_{\textbf {x},\mu _{L}}^{2^*-2}\varphi ,Z_{i,l}\bigr \rangle -\bigr \langle h,Z_{i,l}\bigr \rangle =O\Big (\mu _{L}^{n_l}(\frac{\Vert \varphi \Vert _{*}}{\mu _{L}^{1+\epsilon }}+\Vert h\Vert _{*})\Big ).\) It is easy to check that \(\sum \limits _{j=1}^m\langle W_{x_{jL},\mu _{L}}^{2^*-2}Z_{j,h},Z_{i,l}\rangle =(\bar{c}+o(1))\delta _{hl}\mu _{L}^{2n_l}\) for some constant \(\bar{c}>0\) . Now inserting (REF ) and (REF ) into (REF ), we find \(c_{il}=\frac{1}{\mu _{L}^{n_l}}(o(\Vert \varphi \Vert _{*}+o(\Vert h\Vert _{**}))),\) so \(\Vert \varphi \Vert _{*}\le c\Big (o(1)+\Vert h\Vert _{**}+\frac{\sum \limits _{j=0}^m\frac{1}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau +\theta }}}{\sum \limits _{j=0}^m\frac{1}{(1+\mu _{L}|y-x_{jL}|)^{\frac{N-2}{2}+\tau }}}\Big ).\) We can finish the proof of this lemma by using (REF ) as in [2]}.
[1]
[ [ 654, 657 ] ]
https://openalex.org/W2779541808
ac96e6b4-94e9-40fe-b483-a9a4705263d9
In this paper, we argue that the temporal geometric consistency in event videos may require not only a sequential temporal dependency but also a non-sequential information accumulation across frames in a long range to recover the whole human bodies. For instance, as shown in  REF , in some cases there are some key frames that contains all body parts, which can be used to complete the information of neighboring frames. In some cases, we cannot always have key frames in a video so that we need to accumulate the information from a set of frames in a time window to recover the full human body. Therefore, we adopt a basic recurrent architecture with a newly proposed temporal dense connection across a sequence of time steps to capture the geometric consistency of human poses across frames in both local and long range to complete the lost information in event frames, as illustrated in  REF . Specifically, we incorporate a set of dense connections between the current frame and its all preceding frames into the recurrent network built by using a Long Short-Term Memory (LSTM) module to link an encoder-decoder CNN [1]} temporally. This new architecture allows for both the sequential and non-sequential temporal dependency modeling thanks to such skipped dense connections rather than only sequential connections between two neighboring frames in [2]}, [3]}, [4]}, [5]}. Moreover, we introduce a spatio-temporal attention mechanism into the dense connections to pay different importance to the preceding frames and their spatial joints when fusing their information to the current frame. In addition, we found in the experiments that the existing event-based human pose datasets [6]}, [7]} are normally captured under indoor environments with clean background and controlled lighting conditions. Therefore, our method is easily saturated in the performance with these datasets. Therefore, to evaluate our method, we collect a new event-based human pose dataset, referred as CDEHP, to provide the benchmarks for event-based pose estimation.
[1]
[ [ 1121, 1124 ] ]
https://openalex.org/W2963402313
3a1e9462-4fce-4543-968b-dced778e415e
In this paper, we argue that the temporal geometric consistency in event videos may require not only a sequential temporal dependency but also a non-sequential information accumulation across frames in a long range to recover the whole human bodies. For instance, as shown in  REF , in some cases there are some key frames that contains all body parts, which can be used to complete the information of neighboring frames. In some cases, we cannot always have key frames in a video so that we need to accumulate the information from a set of frames in a time window to recover the full human body. Therefore, we adopt a basic recurrent architecture with a newly proposed temporal dense connection across a sequence of time steps to capture the geometric consistency of human poses across frames in both local and long range to complete the lost information in event frames, as illustrated in  REF . Specifically, we incorporate a set of dense connections between the current frame and its all preceding frames into the recurrent network built by using a Long Short-Term Memory (LSTM) module to link an encoder-decoder CNN [1]} temporally. This new architecture allows for both the sequential and non-sequential temporal dependency modeling thanks to such skipped dense connections rather than only sequential connections between two neighboring frames in [2]}, [3]}, [4]}, [5]}. Moreover, we introduce a spatio-temporal attention mechanism into the dense connections to pay different importance to the preceding frames and their spatial joints when fusing their information to the current frame. In addition, we found in the experiments that the existing event-based human pose datasets [6]}, [7]} are normally captured under indoor environments with clean background and controlled lighting conditions. Therefore, our method is easily saturated in the performance with these datasets. Therefore, to evaluate our method, we collect a new event-based human pose dataset, referred as CDEHP, to provide the benchmarks for event-based pose estimation.
[6]
[ [ 1686, 1689 ] ]
https://openalex.org/W2963127485
9e241565-98f8-4230-a7ca-35065598ba4a
Human pose estimation from still images in early works usually starts from building the parts-based graphic models or pictorial structure models [1]}, [2]}, [3]}, [4]} to learn spatial relationships between articulated body parts. Recently, the performance of these earlier works have been surpassed largely thanks to the great success of deep convolutional networks [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]} that provide dominant solutions nowadays. Apart from image-based pose estimation, many efforts [6]}, [14]}, [15]}, [16]} has also been made to exploit temporal and motion information for human pose estimation from videos by using optical flow or 3D CNNs, which are related to our work since that we consider to accumulate event signals into a sequence of event frames. However, these methods have the limited ability to extract temporal contexts explicitly. More recently, recurrent architectures [6]}, [18]}, [19]}, [20]}, [21]}, [22]} is normally integrated with an encoder-decoder CNN framework to model the temporal dependency across frames to refine pose predictions, which have been pioneering frameworks for video-base human pose estimation. Such kind of frameworks share a general structure, where CNNs are normally used to encode and decode every frame sequentially, and a recurrent mechanism is then introduced to temporally link the encoder-decoder streams along time steps to propagate temporal dynamics between neighboring frames. However, they all just model the temporal dependency between two consecutive frames, which are not always effective for human pose estimation from event signals, because in an event video we also need to consider a long-range geometric consistency across a set of frames in a time window. DCPose [22]} leverages the temporal cues between past, current, and next frames to facilitate keypoint prediction. However, it still models a short-range temporal dependency, and meanwhile it has to depends on future frames.
[7]
[ [ 379, 382 ] ]
https://openalex.org/W3034399482
8a1bbf23-4494-4c70-838d-76c8e0d4abc1
Human pose estimation from still images in early works usually starts from building the parts-based graphic models or pictorial structure models [1]}, [2]}, [3]}, [4]} to learn spatial relationships between articulated body parts. Recently, the performance of these earlier works have been surpassed largely thanks to the great success of deep convolutional networks [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]} that provide dominant solutions nowadays. Apart from image-based pose estimation, many efforts [6]}, [14]}, [15]}, [16]} has also been made to exploit temporal and motion information for human pose estimation from videos by using optical flow or 3D CNNs, which are related to our work since that we consider to accumulate event signals into a sequence of event frames. However, these methods have the limited ability to extract temporal contexts explicitly. More recently, recurrent architectures [6]}, [18]}, [19]}, [20]}, [21]}, [22]} is normally integrated with an encoder-decoder CNN framework to model the temporal dependency across frames to refine pose predictions, which have been pioneering frameworks for video-base human pose estimation. Such kind of frameworks share a general structure, where CNNs are normally used to encode and decode every frame sequentially, and a recurrent mechanism is then introduced to temporally link the encoder-decoder streams along time steps to propagate temporal dynamics between neighboring frames. However, they all just model the temporal dependency between two consecutive frames, which are not always effective for human pose estimation from event signals, because in an event video we also need to consider a long-range geometric consistency across a set of frames in a time window. DCPose [22]} leverages the temporal cues between past, current, and next frames to facilitate keypoint prediction. However, it still models a short-range temporal dependency, and meanwhile it has to depends on future frames.
[9]
[ [ 391, 394 ] ]
https://openalex.org/W2962730651
6109937b-6fcd-4f30-91ce-ca776a27d231
Human pose estimation from still images in early works usually starts from building the parts-based graphic models or pictorial structure models [1]}, [2]}, [3]}, [4]} to learn spatial relationships between articulated body parts. Recently, the performance of these earlier works have been surpassed largely thanks to the great success of deep convolutional networks [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]} that provide dominant solutions nowadays. Apart from image-based pose estimation, many efforts [6]}, [14]}, [15]}, [16]} has also been made to exploit temporal and motion information for human pose estimation from videos by using optical flow or 3D CNNs, which are related to our work since that we consider to accumulate event signals into a sequence of event frames. However, these methods have the limited ability to extract temporal contexts explicitly. More recently, recurrent architectures [6]}, [18]}, [19]}, [20]}, [21]}, [22]} is normally integrated with an encoder-decoder CNN framework to model the temporal dependency across frames to refine pose predictions, which have been pioneering frameworks for video-base human pose estimation. Such kind of frameworks share a general structure, where CNNs are normally used to encode and decode every frame sequentially, and a recurrent mechanism is then introduced to temporally link the encoder-decoder streams along time steps to propagate temporal dynamics between neighboring frames. However, they all just model the temporal dependency between two consecutive frames, which are not always effective for human pose estimation from event signals, because in an event video we also need to consider a long-range geometric consistency across a set of frames in a time window. DCPose [22]} leverages the temporal cues between past, current, and next frames to facilitate keypoint prediction. However, it still models a short-range temporal dependency, and meanwhile it has to depends on future frames.
[11]
[ [ 404, 408 ] ]
https://openalex.org/W2964221239
3d80b194-20c4-4bcc-b5f3-a3ab5c9f1c0a
Thus, unlike conventional cameras, event cameras produce a sequence of asynchronous events because they asynchronously sample light of each pixel independently. As a result, the events are spatially much sparser in comparison with conventional frame-based cameras, where each frame is generated by densely sampling the entire pixels at the same time. Hence, event camera is able to best capture local motions in the scene as a stream of sparse and asynchronous events. Event cameras have unique advantages of very high temporal resolution, high dynamic range, low latency, and low power consumption. Hence, event cameras have stimulated a variety of research activities and applications in computer vision [1]}, including visual SLAM [2]}, optical flow estimation [3]}, object tracking [4]}, object recognition [5]}, [6]}, [7]}, gait recognition [8]}, and high-speed maneuvers [9]}, among others.
[1]
[ [ 706, 709 ] ]
https://openalex.org/W4226051885
7a771b55-d0da-4d54-bfb4-7df6e3de1b23
In the event capturing, event camera provides a stream of asynchronous event signals. In this work, we divide the event stream into a sequence of \(T\) event packets, where each event packet consists of a set of event signals collected from a fixed time interval \((t_{i-1},t_i)\) , as shown in  REF . We then accumulate the events of each packet by following the strategy in [1]} to integrate an event frame. We here have a little abuse of notation \(t\) that is used to represent the temporal index of integrated event frames rather than a time instance. Therefore, we address the event-based human pose estimation by detecting the keypoints from a sequence of \(T\) consecutive event frames \(\lbrace I_t\rbrace _{t=0}^{T-1}\) , i.e., \(I \in \mathbb {R}^{W \times H \times T}\) . Most of the existing methods transform this problem to predict a set of heatmaps \(\mathbf {b}=\lbrace \mathbf {b}_t\rbrace _{t=0}^{T-1}\) for all frames. The \(\mathbf {b}_t \in \mathbb {R}^{W^{\prime } \times H^{\prime } \times K}\) is of spatial size \(W^{\prime } \times H^{\prime }\) , where K represents the number of keypoints in a human body, and each \(\mathbf {b}_t(k)\) indicates the location confidence of the \(k\) -th keypoint at the \(t\) -th time step (frame).
[1]
[ [ 377, 380 ] ]
https://openalex.org/W3102178346
3ad5df80-954f-482a-9cf0-5dbf9719a290
SMPL [1]} has been widely used for 3D human mesh reconstruction. To boost its power in practice, a number of deep learning frameworks have been proposed by using SMPL as regression targets [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. [2]} regresses SMPL parameters directly from input images by end-to-end training. Following this research direction, [6]} add spherical Gaussian attention joint based on initial joint estimation, and the use the the attended feature to learn the vertices location. [3]} combine learning and optimization[4]} in the same framework but cannot handle occlusions. [5]} uses the template UV mapping from SMPL and transforms 3d mesh reconstruction to decomposed UV estimation and position map inpainting problems. However, the way to get 3d human joint from SMPL mesh is based on the pre-trained joint regressor, which will induce intrinsic errors and usually does not generalize to other datasets.
[5]
[ [ 207, 210 ], [ 585, 588 ] ]
https://openalex.org/W3035501466
90a701f7-8998-408b-9289-0be6c0c2b477
SMPL [1]} has been widely used for 3D human mesh reconstruction. To boost its power in practice, a number of deep learning frameworks have been proposed by using SMPL as regression targets [2]}, [3]}, [4]}, [5]}, [6]}, [7]}. [2]} regresses SMPL parameters directly from input images by end-to-end training. Following this research direction, [6]} add spherical Gaussian attention joint based on initial joint estimation, and the use the the attended feature to learn the vertices location. [3]} combine learning and optimization[4]} in the same framework but cannot handle occlusions. [5]} uses the template UV mapping from SMPL and transforms 3d mesh reconstruction to decomposed UV estimation and position map inpainting problems. However, the way to get 3d human joint from SMPL mesh is based on the pre-trained joint regressor, which will induce intrinsic errors and usually does not generalize to other datasets.
[6]
[ [ 213, 216 ], [ 342, 345 ] ]
https://openalex.org/W3107167007
d6103f80-5cae-490a-a908-c4348d0196be
Even though each human pixel contributes to joint prediction, there are still cases that some joints have no assigned vertex/pixel available from the image evidence. Thus we propose the joint inpainting module to inpaint these missing joints. This network is pretty flexible and can be MLP [1]}, GCN [2]} or even modern transformers [3]}. For the ease of implementation we use simple multi-layer perceptron. Our joint inpainting net is inspired by [1]}, which is simple, deep and a fully-connected network with six linear layer with 256 output features. It includes dropout after every fully connected layer, batch-normalization and residual connections. The model contains approximately 400k training parameters. The goal of this network is not only to inpaint the joints but also to refine the joints prediction that is not occluded. It takes the \(J_{initial}\) as input and the output of the network is the joint in root-relative coordinates \(J_{refine}\) . We use L1 loss \(L_{ji}\) to train joint inpaint and refine module. The structure of the joint inpainting and refine module is shown in Fig REF .
[3]
[ [ 333, 336 ] ]
https://openalex.org/W3175199633
28077b19-c82f-4c15-a160-a15b9e77ddac
After getting the sparse 3d human keypoints. We want to repose the template SMPL meshes based on the predicted joints location. To solve this problem we leverage inverse kinematics (IK). Typically, the IK task is tackled with iterative optimization methods [1]}, [2]}, [3]}, which requires a good initialization, more time and case-by-case optimization method. Here we propose a global inverse kinematics neural network GIK-Net. This network is constructed by the basic fully connected neural network module with residual connection, batch normalization and relu activation similar to [4]}. In particular, GIK-Net takes the refined keypoint coordinates \(J_{refine}\) in root-relative space and outputs joint rotations \(\theta \) and \(\beta \) which serve as the input for SMPL layer. As we also use the Mocap dataset (AMASS [5]}, SPIN[6]} and AIST++ [7]}), our GIK-Net can implicitly learn the realistic distribution of human kinematics rotation and human shape. The use of the additional Mocap dataset serves the same purpose as the factorized adversarial prior [8]}, variational human pose prior [9]} and motion discriminator [10]}. We use L1 loss \(L_{\theta }\) and \(L_{\beta }\) to train GIK-Net. The structure of GIK-Net is shown in Fig REF .
[4]
[ [ 585, 588 ] ]
https://openalex.org/W2612706635
f43a4f78-0639-49ec-98db-04b74fd7c5ff
After getting the sparse 3d human keypoints. We want to repose the template SMPL meshes based on the predicted joints location. To solve this problem we leverage inverse kinematics (IK). Typically, the IK task is tackled with iterative optimization methods [1]}, [2]}, [3]}, which requires a good initialization, more time and case-by-case optimization method. Here we propose a global inverse kinematics neural network GIK-Net. This network is constructed by the basic fully connected neural network module with residual connection, batch normalization and relu activation similar to [4]}. In particular, GIK-Net takes the refined keypoint coordinates \(J_{refine}\) in root-relative space and outputs joint rotations \(\theta \) and \(\beta \) which serve as the input for SMPL layer. As we also use the Mocap dataset (AMASS [5]}, SPIN[6]} and AIST++ [7]}), our GIK-Net can implicitly learn the realistic distribution of human kinematics rotation and human shape. The use of the additional Mocap dataset serves the same purpose as the factorized adversarial prior [8]}, variational human pose prior [9]} and motion discriminator [10]}. We use L1 loss \(L_{\theta }\) and \(L_{\beta }\) to train GIK-Net. The structure of GIK-Net is shown in Fig REF .
[6]
[ [ 840, 843 ] ]
https://openalex.org/W2981637078
a63b26f5-cfd9-42ae-b2c5-ff0988d3e80a
After getting the sparse 3d human keypoints. We want to repose the template SMPL meshes based on the predicted joints location. To solve this problem we leverage inverse kinematics (IK). Typically, the IK task is tackled with iterative optimization methods [1]}, [2]}, [3]}, which requires a good initialization, more time and case-by-case optimization method. Here we propose a global inverse kinematics neural network GIK-Net. This network is constructed by the basic fully connected neural network module with residual connection, batch normalization and relu activation similar to [4]}. In particular, GIK-Net takes the refined keypoint coordinates \(J_{refine}\) in root-relative space and outputs joint rotations \(\theta \) and \(\beta \) which serve as the input for SMPL layer. As we also use the Mocap dataset (AMASS [5]}, SPIN[6]} and AIST++ [7]}), our GIK-Net can implicitly learn the realistic distribution of human kinematics rotation and human shape. The use of the additional Mocap dataset serves the same purpose as the factorized adversarial prior [8]}, variational human pose prior [9]} and motion discriminator [10]}. We use L1 loss \(L_{\theta }\) and \(L_{\beta }\) to train GIK-Net. The structure of GIK-Net is shown in Fig REF .
[8]
[ [ 1069, 1072 ] ]
https://openalex.org/W2963995996
e3a68f09-ef54-4a8a-97b5-838802fcc423
After getting the sparse 3d human keypoints. We want to repose the template SMPL meshes based on the predicted joints location. To solve this problem we leverage inverse kinematics (IK). Typically, the IK task is tackled with iterative optimization methods [1]}, [2]}, [3]}, which requires a good initialization, more time and case-by-case optimization method. Here we propose a global inverse kinematics neural network GIK-Net. This network is constructed by the basic fully connected neural network module with residual connection, batch normalization and relu activation similar to [4]}. In particular, GIK-Net takes the refined keypoint coordinates \(J_{refine}\) in root-relative space and outputs joint rotations \(\theta \) and \(\beta \) which serve as the input for SMPL layer. As we also use the Mocap dataset (AMASS [5]}, SPIN[6]} and AIST++ [7]}), our GIK-Net can implicitly learn the realistic distribution of human kinematics rotation and human shape. The use of the additional Mocap dataset serves the same purpose as the factorized adversarial prior [8]}, variational human pose prior [9]} and motion discriminator [10]}. We use L1 loss \(L_{\theta }\) and \(L_{\beta }\) to train GIK-Net. The structure of GIK-Net is shown in Fig REF .
[9]
[ [ 1104, 1107 ] ]
https://openalex.org/W2978956737
a3903b82-2838-4b9d-9ef0-ccff8c9e8019
SMPL [1]} represents the body pose and shape by pose \(\theta \in R^{72}\) and shape \(\beta \in R^{10}\) parameter. Here we use the gender-neural shape model following previous work [2]}, [3]}, [4]}. Given these parameters, the SMPL module is a differentiable function that outputs a posed 3D mesh \(M(\theta ,\beta ) \in R^{6890 \times 3}\) . The 3D joint locations \(J_{3D} = WM \in R^{J \times 3}\) , while J are computed with a pretrained linear regressor \(W\) . After getting the \(\theta \) and \(\beta \) from the GIK-Net we send them to SMPL layer to get the body mesh prediction.
[2]
[ [ 185, 188 ] ]
https://openalex.org/W3202344038
22d7a22a-4a52-475e-9a60-02b09eba8d07
3DOH [1]} utilize multi-view SMPLify-X [2]} to get the 3d ground truth. The dataset is designed to have object occlusion for subjects. It contains 50,310 training images and 1,290 test images. It provides 2D, 3D annotations and SMPL parameters to generate meshes. We use the test set for evaluation purposes and the training set to train the UVI module.
[1]
[ [ 5, 8 ] ]
https://openalex.org/W3035186639
09c8725a-8991-4a66-a3ff-c52a3111c477
For AMASS [1]} data, we only get SMPL-H [2]} fitting instead of SMPL fitting data, however, SMPL-H does not included hands rotations as in SMPL. We sample random rotations from SPIN [3]} fitting or the predictions from our DMP stages for its training data. For AIST++ [4]}, it does not included \(\beta \) parameters, we sample \(\beta \) from SPIN [3]} fitting or the predictions from our DMP stages for its training data. We use the original rotation representation from SMPL [6]} (axis-angle representation) for the fast training purpose.
[1]
[ [ 10, 13 ] ]
https://openalex.org/W2971856312
7844ca31-71be-4ab9-b350-c07d209731f9
For AMASS [1]} data, we only get SMPL-H [2]} fitting instead of SMPL fitting data, however, SMPL-H does not included hands rotations as in SMPL. We sample random rotations from SPIN [3]} fitting or the predictions from our DMP stages for its training data. For AIST++ [4]}, it does not included \(\beta \) parameters, we sample \(\beta \) from SPIN [3]} fitting or the predictions from our DMP stages for its training data. We use the original rotation representation from SMPL [6]} (axis-angle representation) for the fast training purpose.
[4]
[ [ 268, 271 ] ]
https://openalex.org/W3204221554
46566c5f-e97e-4785-b19b-82e260e7f9bc
The central idea is to employ a model-based causal surrogate [1]} to play the role of a value function [2]}, and attribute the difference of the value function prior to and post any action at step \(t\) as the pseudo-reward received at \(t\) .
[2]
[ [ 103, 106 ] ]
https://openalex.org/W2121863487
a032c00c-91de-4059-8a43-ee03889a28fe
The theoretical underpinning of our method has been established in [1]} and a special case of front-door criteria in [2]}. [1]} also noted the efficiency gain, although their main objective was to study long-term outcomes that are not observable in a short experiment period (e.g. effect of job training on employment). The focus of our method is to exploit the variance reduction, even when the delayed outcome is short-term and can be observed in a normal experiment period. In particular, we use the model prediction even for subjects with an observed outcome \(Y\) . We further use the causal surrogate model as a critic model to approximate the value function in the context of reinforcement learning [4]} for incremental reward attribution. [5]} presented a similar work with the variance reduction angle, but focused on predicting a future value of the same metric that is already continuously observed, e.g. sessions-per-user after 2 weeks using 1 week's outcome. Their work shares the same source of variance reduction as ours and causal surrogacy in general — due to smoothing effect of conditional expectation, but without the surrogate modeling and cannot be used for episodical outcome. In A/B testing literature, most variance reduction work has been focused on exploiting pre-assignment covariates[6]}, [7]}, [8]}.
[1]
[ [ 67, 70 ], [ 123, 126 ] ]
https://openalex.org/W2986211311
95432efd-8837-4924-8d52-82ac6c614e32
The theoretical underpinning of our method has been established in [1]} and a special case of front-door criteria in [2]}. [1]} also noted the efficiency gain, although their main objective was to study long-term outcomes that are not observable in a short experiment period (e.g. effect of job training on employment). The focus of our method is to exploit the variance reduction, even when the delayed outcome is short-term and can be observed in a normal experiment period. In particular, we use the model prediction even for subjects with an observed outcome \(Y\) . We further use the causal surrogate model as a critic model to approximate the value function in the context of reinforcement learning [4]} for incremental reward attribution. [5]} presented a similar work with the variance reduction angle, but focused on predicting a future value of the same metric that is already continuously observed, e.g. sessions-per-user after 2 weeks using 1 week's outcome. Their work shares the same source of variance reduction as ours and causal surrogacy in general — due to smoothing effect of conditional expectation, but without the surrogate modeling and cannot be used for episodical outcome. In A/B testing literature, most variance reduction work has been focused on exploiting pre-assignment covariates[6]}, [7]}, [8]}.
[2]
[ [ 117, 120 ] ]
https://openalex.org/W2143891888
ed7a916e-0047-450a-8d2d-875c10ae6ffa
In this case, since the matrix \(D^1+D^2\) is positive definite, it follows that \(z(t)\) converges to \(\mathbf {0}\) exponentially fast, and thus the system will eventually become a competitive bi-virus model which has been studied in [1]}, [2]}, [3]}, [4]}.
[4]
[ [ 259, 262 ] ]
https://openalex.org/W2962976446
dde582f0-37e1-43f3-9c78-d7a5c670ef22
It is well known that when we route any number of particles in a rotor graph, the final configuration does not depend on the order in which the particles move (which can alternate between particles) as long as we route the particles to the sinks (see [1]}).
[1]
[ [ 251, 254 ] ]
https://openalex.org/W2165513768
858cf2c2-b26c-4200-b4b2-d76b8d8ae346
Finally, we compare our results for the hot pion matter conductivity to results from the literature in Fig. REF . Our conductivity is significantly larger than the results from kinetic theory using Breit-Wigner cross sections [1]}, chiral perturbation theory [2]}, and a relaxation time approximation [3]}, but it is smaller than the \(K\) -matrix results of Ref. [4]}. However, our calculation agrees well with the real-time field theory results of Ref. [5]}. In Refs. [1]}, [2]}, [4]}, [5]} expressions for the conductivity are provided in terms of either the pion width or the relaxation time (sometimes equated to the collisions time), which are similar to our Eq. REF ; however, the inputs for the pion width vary considerably. For example, in Ref. [3]}, the momentum averaged charged-pion relaxation time at \(T\) =150 MeV (using vacuum \(\rho \) and \(\sigma \) channel cross sections) amounts to ca. 2 fm/\(c\) (3 fm/\(c\) for neutral pions), which translates into a reaction rate of \(\sim \) 100 MeV, substantially larger than our optical potential of \(\Gamma =-2 {\rm Im}U_\pi \simeq 20-30\)  MeV. Figure REF also indicates that the pion gas results are significantly larger than lQCD calculations, with most lQCD results falling below a proposed lower bound from a calculation for a strongly coupled supersymmetric Yang-Mills plasma using AdS/CFT duality [11]} (which, however, depends on the number of degrees of freedom in the calculation and therefore may not be appropriate to be compared to pion matter; we will return to this issue Sec. REF below). Furthermore, in Ref. [12]} it is cautioned that the extraction of the conductivity at low temperature from lQCD computations of Euclidean vector-current correlators faces difficulties in extracting narrow transport peaks created by hadronic interactions. <FIGURE>
[2]
[ [ 259, 262 ], [ 476, 479 ] ]
https://openalex.org/W2083598615
e609c0ee-f55e-4b50-8279-e7c04fb6a107
Our results support a pion matter conductivity significantly higher than the lower bound proposed in Ref. [1]}. Furthermore, our calculations indicate that the effects of the vertex corrections are rather small (at the \(\sim 10\%\) level), whereas the conductivity is dominated by the Landau cut of the \(\rho \) self-energy, which is related to the pion's collisional width. As demonstrated in Eq. REF , consistent with kinetic theory, the conductivity is essentially inversely proportional to the pion's width. Therefore, the conductivity is sensitive to pionic interactions and a robust calculation of the pion's width is required in order to reliably extract the conductivity.
[1]
[ [ 106, 109 ] ]
https://openalex.org/W1689445748
a05ff842-ea66-4239-bb2b-ea363bff5fee
The original XLM-RoBERTa embeddings [1]} are trained on the filtered CommonCrawl data (General domain), whereas the data of the shared task comprises documents from scientific and legal domains. In order to better adapt the contextualized representation to the target scientific and legal domain, we further pretrained the original XLM-RoBERTa model on the corpus data (see Figure REF ). Our experiments demonstrate improved performance on the task of acronym extraction due to the domain adaptive pretraining across all the languages.
[1]
[ [ 36, 39 ] ]
https://openalex.org/W3035390927
59c924c2-bd61-441a-bbf4-786efbcee203
There exist several extensions of the present work worth pursuing in future endeavors. A straightforward generalization would be to study the quench dynamics in a \(^7\) Li spin-1 BEC where the strong ferromagnetic spin-interaction would certainly enhance the spin-mixing processes which might be possibly associated with a richer pattern formation. Additionally, exploring the interaction effects of vortex lattices as well as their stability and dynamics in spinor setups is of direct relevance, due to the potential of inclusion of external rotation [1]}, [2]}. Indeed, it is already of significant recent interest to explore the interaction of two multi-component vortical patterns, as has been explored recently in two-component settings, e.g., in [3]}, [4]} (see also references therein). Moreover, in the current setup the inclusion of three-body recombination processes as a dissipative mechanism in selective spin-channels constitutes a situation that accounts for possible experimental imperfections [5]}. Yet another fruitful perspective is to consider domain-walls formed by two out of the three spin-components with the remaining one being a nonlinear excitation of different flavor, e.g. a vortex [6]}. This setting will enable one to devise particular spin-mixing channels and consequently study dynamical pattern formation.
[5]
[ [ 1010, 1013 ] ]
https://openalex.org/W3034688823
c551d8b4-f961-4ae3-a83f-04c1165e49d1
To construct the neutron star models, which become the background models for linear analysis, one has to prepare the EOS for neutron star matter. In this study, we adopt the same EOSs as in Ref. [1]}, i.e., the EOSs based on the relativistic framework, DD2 [2]}, Miyatsu [3]}, and Shen [4]}; the EOSs with the Skyrm-type interaction, FPS [5]}, SKa [6]}, SLy4 [7]}, and SLy9 [8]}; and the EOS constructed with the variational method, Togashi [9]}. We remark that all EOSs adopted here is the unified EOS, i.e., the EOS for the crustal and core region of the neutron star can be constructed with the same framework. The EOS parameters for the EOSs adopted in this study are listed in Table REF together with the maximum mass of the neutron star constructed with each EOS, where \(K_0\) and \(L\) are the incompressibility of the symmetric nuclear matter and the density-dependence of the nuclear symmetry energy, and \(\eta \) is the combination of \(K_0\) and \(L\) as \(\eta \equiv (K_0L^2)^{1/3}\) [10]}. With the auxiliary parameter \(\eta \) , one can estimate the properties of the low-mass neutron stars [10]}, [12]} and also discuss the maximum mass [13]}, [14]}. <TABLE>
[1]
[ [ 195, 198 ] ]
https://openalex.org/W3164490304
484abd5e-d9b8-4fe4-914a-323cb250aa39
Since human actors are primary subjects in most trending videos, H2V video conversion crops horizontal videos around the primary subject to reduce the loss of information and produce meaningful vertical content during the conversion process. To correctly identify the most primary human subject, we first discover all human objects in the scene using the DSFD face detector [1]} and FreeAnchor body detector [2]}. Meanwhile, we prefer to utilizing a face detector since it is easier to maintain the completeness of a face than a body in the cropped area. Indeed, the ablation experiment in Section REF proved that the face detector is more effective than a body detector. Then, selecting the primary subject from all is a highly empirical and subjective task, for which we summarize the following criteria under guidance from professional video editors:
[2]
[ [ 408, 411 ] ]
https://openalex.org/W2970575838
b3d18d8a-60c0-4d9e-91aa-75bac4ed52b6
In addition to the salient feature, blur feature and bounding box size and position information as our selection basis, we design a RCNN-like [1]}, [2]} module (shown on the left-side in Fig. REF ) with deep semantic embedding. To better optimize the Sub-Select module in Rank-SS, we develop a new pairwise ranking-based supervision paradigm as illustrated on the right-side in Fig. REF , and the Siamese architecture [3]} has two identical Sub-Select module branches and is valid for pair-wise inputs. On top of a Siamese architecture, bounding boxes for subject \(i\) and \(j\) are simultaneously passed onto the Rank-SS module, together with the extracted feature \(\mathcal {F}\) for the scene. Both branches in the Siamese architecture instantiate the same Sub-Select module, feature map \(\mathcal {F}_{i}\) is pooled from bounding box \(c_{i}\) on \(\mathcal {F}\) with RoIAlign operation [4]}. \(\mathcal {F}_i = RoIAlign(\mathcal {F}, c_i),\)
[1]
[ [ 142, 145 ] ]
https://openalex.org/W1536680647
6108fe77-42d4-416f-9432-7e840fc90d6f
In addition to the salient feature, blur feature and bounding box size and position information as our selection basis, we design a RCNN-like [1]}, [2]} module (shown on the left-side in Fig. REF ) with deep semantic embedding. To better optimize the Sub-Select module in Rank-SS, we develop a new pairwise ranking-based supervision paradigm as illustrated on the right-side in Fig. REF , and the Siamese architecture [3]} has two identical Sub-Select module branches and is valid for pair-wise inputs. On top of a Siamese architecture, bounding boxes for subject \(i\) and \(j\) are simultaneously passed onto the Rank-SS module, together with the extracted feature \(\mathcal {F}\) for the scene. Both branches in the Siamese architecture instantiate the same Sub-Select module, feature map \(\mathcal {F}_{i}\) is pooled from bounding box \(c_{i}\) on \(\mathcal {F}\) with RoIAlign operation [4]}. \(\mathcal {F}_i = RoIAlign(\mathcal {F}, c_i),\)
[2]
[ [ 148, 151 ] ]
https://openalex.org/W639708223
c0d18ecc-1d10-4e2c-9430-4e9124e424a4
We compare our ranking-based module with the state-of-the-art salient object detection CPD [1]}, fixation prediction-based competitors [2]}, image cropping [3]}, [4]}, as well as our naive and deep selection-based baselines, on the image subset of the H2V-142K dataset. For both SOD and FP methods, probability maps are generated for input images with pre-trained released models due to a lack of annotated data for our task. Then, the biggest contour in binarized probability maps is selected as the subject. The result position is represented as a bounding box and a centroid of the contour. As for image cropping, the traditional methods discard the irrelevant content and remain the enjoyable part of the image, but cannot output a fixed-size image. After transformation, the detected bounding box closest to the center of the cropped image is regarded as our selected subject.
[1]
[ [ 91, 94 ] ]
https://openalex.org/W2963112696
57a8850e-c0c0-418c-abfa-fcbe346e63ca
Implementation Details. In our Rank-SS module and N-SS as well as D-SS baseline modules, we deploy DSFD [1]} and FreeAnchor [2]} as the face and torso detectors. As for the integrated feature extraction described above, CPD [3]} is attached to produce the saliency detection response, and Tenengrad algorithm [4]}, [5]} is used to produce blur response in three proposed subject selection modules. In the Rank-SS module, an ImageNet pre-trained Resnet50 backbone is implemented to extract deep semantic embedding, and all feature maps are resized to stride 16 consistent with the embedding feature size. Input images are resized such that their shorter side is 600 pixels during training and testing. Regional feature size pooled by RoIAlign layer is 14\(\times \) 14.
[1]
[ [ 104, 107 ] ]
https://openalex.org/W2962766044
c5ffd9b4-b7bb-4ef8-bac1-ce86dc8b2d0c
In addition to testing on H2V-142K Image Subset, we conduct experiments on the ASR dataset [1]} to evaluate the modules' generalization ability. As shown in Table REF , there are three kinds of training settings based on our training data, which include only H2V-142K Image Subset, only ASR dataset, and both two datasets. Our Rank-SS trained with H2V-142K Image Subset achieves 59.59% in mAP, which significantly outperforms SOD [2]}, FP [3]} and image cropping [4]}, [5]} competitors. The Rank-SS trained with ASR dataset further improves upon the module trained with H2V-142K Image Subset by 5.72% in max-IoU and 7.57% in mAP, which benefits from the homogeneity of the dataset. The Rank-SS reports the best overall selection accuracy with both two datasets training data, surpassing the module trained with ASR dataset by 0.09%, and largely outperforming the SOD based model by 19.77% in term of mAP. <TABLE>
[3]
[ [ 439, 442 ] ]
https://openalex.org/W2976087789
b7b3ba14-cae4-481b-8b1f-469b92bcdf2e
We evaluate and compare the H2V framework on the video subset of our H2V-142K dataset, employing different Sub-Select variants, with other H2V frameworks based on SOD and FP subject selections, as well as video cropping framework [1]}. The video-based SOD anchor-diff [2]} and FP Aclnet [3]} modules are executed on each frame to obtain the video results. The implementation is similar to the image-based SOD and FP methods.
[3]
[ [ 287, 290 ] ]
https://openalex.org/W2955060956
77534060-c018-4550-b13b-89738039fa93
As illustrated in the first row of Fig. REF , our H2V framework successfully selects the primary subject from background distractors (1a and 1c). It can also discard the pseudo-subject, who is not facing the camera directly (1d). Moreover, H2V can incorporate human closely-located with the selected primary subject. More visualization of results in the MSCOCO 2017 Val dataset [1]}, proposed H2V-142K dataset, and ASR dataset [2]} is shown in Fig. REF . As it can be seen that our method generalizes well on common objects not limited to humans.
[2]
[ [ 427, 430 ] ]
https://openalex.org/W3034965397
6fbe6a0a-96c1-4aac-8520-10c123962a6b
The paper LARS[1]} focuses on speeding up deep neural network training. Their approach focuses on increasing batch size using Layer-wise Adaptive Rate Scaling(LARS) for efficient use of massive resources. They train Alexnet and Resnet-50 using the Imagenet-1k dataset while maintaining its state of the art accuracy. However, they successfully increase the batch size to larger than 16k, thereby, reducing the training time of 100 epoch Alexnet from hours to 11 minutes and the 90-epoch ResNet-50 from hours to 20 minutes. Their system, for 90 epoch Resnet 50 returns an accuracy of 75.4% with a batch size of 32k but reduces to 73.2% when it is 64k. Their system clearly shows the extent to which large scale computers are capable of accelerating the training of a DNN by using massive resources with standard Imagenet-1k.
[1]
[ [ 14, 17 ] ]
https://openalex.org/W2962747323
2a8c8ba4-3684-4260-ade6-a3b04162f92d
The theory is described by the Lagrangian (employing the notation of Ref. [1]}), \(\mathcal {L} = -\frac{1}{4} F_{\mu \nu } F^{\mu \nu } + \text{Tr} \bar{\psi } i {D} \psi + \text{Tr} D_\mu \phi ^\dagger D^\mu \phi - \tilde{h} \text{Tr} \phi ^\dagger \phi \phi ^\dagger \phi - \tilde{f} \text{Tr} \phi ^\dagger \phi \text{Tr} \phi ^\dagger \phi ,\)
[1]
[ [ 74, 77 ] ]
https://openalex.org/W3004965668
690b7266-1907-44e3-9894-5953f604e984
In this work, we use the MP-dataset of 1864 dielectric tensors[1]}, [2]} to train statistical models and subsequently identify dielectrics from the set of stable materials in the OQMD. Thus the MP-data forms the training-data and the set of materials from OQMD forms the search-space for the materials design. This work is a successful demonstration of the scenario where the data obtained from the multiple sources can be utilized to discover new compounds. The negligible difference found between the representation vectors, which are also called as feature vectors in machine learning, generated for equivalent materials in MP and OQMD made the cross-database design possible in this work. Overall, we conducted three design cycles which required us to perform dielectric calculations for just 17 materials using DFPT. We report the dielectric constant values of all the 17 materials among which three of them (HoClO, Eu\(_5\) SiCl\(_6\) O\(_4\) , and Tl\(_3\) PbBr\(_5\) ) have very large \(\epsilon \) (69 \( < \epsilon < \) 101) and \(E_{\text{g}}\) (2.9 eV \( < E_{\text{g}} < \) 5.5 eV) values making them part of the Pareto-front of the known data, and four other materials (Sr\(_2\) LuBiO\(_6\) , Bi\(_5\) IO\(_7\) , Bi\(_3\) ClO\(_4\) , and Bi\(_3\) BrO\(_4\) ) have moderately large \(\epsilon \) (20 \( < \epsilon < \) 40) and \(E_{\text{g}}\) (2.3 eV \( < E_{\text{g}} < \) 2.7 eV) values.
[1]
[ [ 62, 65 ] ]
https://openalex.org/W2801006326
5159194a-bf3b-4a49-99b9-b6856c631f88
A dataset containing information about crystal structures, chemical compositions, band-gap energy values and dielectric-tensors of 1864 stable materials was obtained from the MP[1]}, [2]}, [3]} data repository. This dataset was used to generate the training-data. The target-property, \(\epsilon \) , was obtained for each material in this database from its calculated dielectric-tensor. Another dataset consisting of 11,102 stable, non-metallic materials containing information about crystal structures, chemical compositions, band-gap energy values was obtained from OQMD[4]}, [5]}. This OQMD dataset was used to generate the search-space in which the search to find dielectrics was conducted. The dielectric tensor data of all crystals included in the search-space were unknown at the beginning of this work.
[4]
[ [ 573, 576 ] ]
https://openalex.org/W1976492731
c07f6e35-dd57-4493-811d-d8c8451cb8ed
A dataset containing information about crystal structures, chemical compositions, band-gap energy values and dielectric-tensors of 1864 stable materials was obtained from the MP[1]}, [2]}, [3]} data repository. This dataset was used to generate the training-data. The target-property, \(\epsilon \) , was obtained for each material in this database from its calculated dielectric-tensor. Another dataset consisting of 11,102 stable, non-metallic materials containing information about crystal structures, chemical compositions, band-gap energy values was obtained from OQMD[4]}, [5]}. This OQMD dataset was used to generate the search-space in which the search to find dielectrics was conducted. The dielectric tensor data of all crystals included in the search-space were unknown at the beginning of this work.
[5]
[ [ 579, 582 ] ]
https://openalex.org/W2278970271
cfb1fe26-b112-41d6-a5cf-d81fdf32da10
Recent progress in computer vision and natural language processing has enabled a wide range of possible applications to generative models. One of the most promising applications is text-guided image generation (text-to-image models). Solutions like DALL-E 2 [1]} and Stable Diffusion [2]} use the recent advances in joint image and text embedding learning (CLIP [3]}) and diffusion models [4]} to produce photo-realistic and aesthetically-appealing images based on a textual description.
[1]
[ [ 258, 261 ] ]
https://openalex.org/W4224035735
9728fe70-ec11-42ca-aa1d-f324df1d4582
Recent progress in computer vision and natural language processing has enabled a wide range of possible applications to generative models. One of the most promising applications is text-guided image generation (text-to-image models). Solutions like DALL-E 2 [1]} and Stable Diffusion [2]} use the recent advances in joint image and text embedding learning (CLIP [3]}) and diffusion models [4]} to produce photo-realistic and aesthetically-appealing images based on a textual description.
[4]
[ [ 389, 392 ] ]
https://openalex.org/W2129069237
e287ad19-cbea-409e-a410-4527a816a29f
It follows from [1]} that we can equivalently put \(\varepsilon =0\) in (REF ) if \(\varphi \) is lower semicontinuous (l.s.c.) around \(\bar{x}\) and the space \(X\) is Asplund, i.e., a Banach space where each separable subspace has a separable dual. This class of spaces is fairly large including, e.g., every reflexive Banach space and every Banach space which dual is separable; see [2]}, [3]}, [1]}, [5]} with more details and the references therein. It has been well recognized in variational analysis that the limiting subdifferential (REF ) and the associated constructions for sets and set-valued mappings enjoy full calculi in Asplund spaces with a variety of applications presented in the two-volume book [1]}, while finite-dimensional specifications can be found in [7]}, [8]}. Some useful results for (REF ) hold in general Banach spaces; see, e.g., [1]}.
[8]
[ [ 788, 791 ] ]
https://openalex.org/W4249513058
b55e63c8-4865-425f-b535-fbd9f7a2be61
A monotone (resp. strongly monotone) operator \(T\) is maximal monotone (resp. strongly maximal monotone) if \(\mbox{\rm gph}\,T = \mbox{\rm gph}\,S\) for any monotone operator \(S:X\rightrightarrows X^*\) with \(\mbox{\rm gph}\,T \subset \mbox{\rm gph}\,S\) . We refer the reader to the monographs [1]}, [2]}, [3]}, [4]} for various properties and applications of monotone and maximal monotone operators in finite and infinite dimensions. Note, in particular, that the graph of any maximal monotone mapping is nonempty and closed.
[2]
[ [ 308, 311 ] ]
https://openalex.org/W345014192
f659d1da-da33-451d-ba1f-8a68367ad11d
A monotone (resp. strongly monotone) operator \(T\) is maximal monotone (resp. strongly maximal monotone) if \(\mbox{\rm gph}\,T = \mbox{\rm gph}\,S\) for any monotone operator \(S:X\rightrightarrows X^*\) with \(\mbox{\rm gph}\,T \subset \mbox{\rm gph}\,S\) . We refer the reader to the monographs [1]}, [2]}, [3]}, [4]} for various properties and applications of monotone and maximal monotone operators in finite and infinite dimensions. Note, in particular, that the graph of any maximal monotone mapping is nonempty and closed.
[3]
[ [ 314, 317 ] ]
https://openalex.org/W1492700492
3b612c72-92f7-4752-bbe2-473cd0e4a106
Now we present two lemmas on epi-convergence in minimization used in what follows. The first one is taken from [1]}.
[1]
[ [ 111, 114 ] ]
https://openalex.org/W1517403807
4f99e03e-ab6c-4704-818f-717a52c5d2b5
Deep neural network models [1]}[2]}[3]}[4]} have promoted the performance of prior Natural Language Processing (NLP) techniques, including emotion analysis and recognition. CNN (Convolutional NN) [5]}[3]} and RNN (Recurrent NN)[5]}[8]} are two common deep learning architectures that are often integrated at the top of embedding modules, e.g., GloVe and Word2Vec, to infer emotion-pertinent cues in textual contents. While the convnets can effectively extract n-gram features, they are not as productive as the RNN schemes in attaining correlation within long-term sequences. Nonetheless, CNN models are distorted toward subsequent context and neglect previous words. To address the issue, LSTM models [2]} exploit the intensity of emotions out of brief contents in a bidirectional manner which results in preferable outputs in single and multi-label classification tasks. The Deep Rolling model [10]} combines LSTM and CNN into an ensemble to create a non-linear emotion-prediction model. Absorbed by the appealing performance of the deep neural network models [11]}, we devise a novel ensemble classifier equipped with dynamic dropout convnets that further leverages individual latent aspects, known as cognitive cues. Moreover, we propose a nontrivial method to extract features from emotion and semantic contents to feed the convnets in ensembles.
[5]
[ [ 196, 199 ], [ 227, 230 ] ]
https://openalex.org/W2791506524
ec5e7275-fe54-4e6e-b0eb-7e1e38c7ffee
We conducted extensive experiments on multiple datasets [1]}[2]} to compare our proposed unified framework to other novel approaches in emotion detection. Taking advantage of various Python libraries and interfaces for neural networks, we ran the experiments on a server with a 4.20 GHz Intel Core i7-7700K CPU and 64GB of RAM. The codes are available to download https://sites.google.com/view/EmoDNN.
[2]
[ [ 61, 64 ] ]
https://openalex.org/W2153803020
0ccbf2fc-2749-4e04-ae7d-ac68fdf69b89
\(Uunison\) : This baseline [1]} leverages a variety of deep learning modules, such as word and character-based RNN and CNN, to improve traditional classifiers including BOW and latent semantic indexing. \(Senti_{HC}\) : This model is a hierarchical classification scheme that comprises three levels in the learning process: neutrality(neutrality versus emotionality), polarity, and emotions(five basic emotions) [2]}. \(SVM-Behavior\) : Similar to [3]}, it combines unigrams and emotion lexicons and uses SVM-Behavior to classify text contents according to emotion cues. \(lexicon based\) : Instead of word embedding[4]}, this model is performed by emotion lexicon. \(EmoDNN_{SVM}\) : This model is based on our proposed categorization method, but the learning component employs an SVM classifier on unigrams. \(EmoDNN_{wd}\) : This method replaces multichannel feature learning with text embedding [5]}[6]}. \(EmoDNN\) : Our proposed framework in Sec. REF .
[5]
[ [ 906, 909 ] ]
https://openalex.org/W3034090986
e0ef3f9e-8d13-485f-92ea-8668b850889c
We choose the multi-class WASSA-2017 [1]} emotion recognition dataset to evaluate mutual co-existed labels. We compare our method with unison and SVM-behavior methods that address the multi-class challenge. Table REF shows that EmoDNN overcomes both competitors and unison, equipped with DL modules, can overpass SVM classifiers. Since we include the cognitive cues in emotion features, EmoDNN can upgrade unison by up to 2% in F1-measure. <TABLE><FIGURE>
[1]
[ [ 38, 41 ] ]
https://openalex.org/W2963177779
1390107a-7ba8-467d-87c2-6452e5cda510
With this we obtain the gauge independent Abelian decomposition of \({\vec{B}}_\mu \) adding the valence part \({\vec{W}}_\mu \) which was excluded by the isometry. Introducing a right-handed orthonormal SU(2) basis \((\hat{n}_1,\hat{n}_2,\hat{n}_3=\hat{n})\) , we can express \({\vec{B}}_\mu \) by [1]}, [2]} \({\vec{B}}_\mu = {\hat{B}}_\mu + {\vec{W}}_\mu ,~~~{\vec{W}}_\mu =W_\mu ^1 \hat{n}_1+W_\mu ^2 \hat{n}_2.\)
[2]
[ [ 308, 311 ] ]
https://openalex.org/W2052609175
e0a7b0d3-b15e-4845-8379-40b4ae08265a
In this section we compare neuron interference as an editing method to various generating methods: a regularized autoencoder, a standard GAN, a ResnetGAN, and a CycleGAN. For the regularized autoencoder, the regularization penalized differences in the distributions of the source and target in a latent layer using maximal mean discrepancy [1]}, [2]}. The image experiment used convolutional layers with stride-two filters of size four, with 64-128-256-128-64 filters in the layers. All other models used fully connected layers of size 500-250-50-250-500. In all cases, leaky ReLU activation was used with \(0.2\) leak. Training was done with minibatches of size 100, with the adam optimizer [3]}, and a learning rate of \(0.001\) . <FIGURE>
[1]
[ [ 340, 343 ] ]
https://openalex.org/W2978541146
1ba7337c-5b88-46d5-80b4-7759f98cc587
We will use in this article a definition of bimodule of a commutative Hom-associative algebras including Hom-module map condition (), while we note that there are also other definitions of Hom-modules and Hom-bimodules of Hom-associative algebras, for example the more general notions requiring only (REF ), [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}.
[4]
[ [ 326, 329 ] ]
https://openalex.org/W1827507246
e7ed047b-577f-41cf-9718-5fcc464cd654
Lemma 2.25 ([1]}) Let \((A,\cdot ,\lbrace \cdot ,\cdot \rbrace )\) be a transposed Poisson algebra. Then, for all \(x,y,z,t\in A,\) \(\lbrace xz,yt\rbrace +\lbrace xt,yz\rbrace =2zt\lbrace x,y\rbrace .\)
[1]
[ [ 12, 15 ] ]
https://openalex.org/W3023500865
6725b02d-d740-4484-a638-75d5cbacaf83
Definition 5.4 ([1]}) Let \(\mathcal {A}=(A, [\cdot ,\cdot ], \alpha )\) be a Hom-Lie algebra, and \((\rho , \beta ,V)\) be a representation of \(\mathcal {A}\) . Then, a linear map \( T : V \rightarrow A \) is called an \( \mathcal {O} \) -operator associated to \((\rho , \beta , V)\) , if \( T \) satisfies for all \(u, v \in V\) , \(\alpha T &= T\beta ,\\[T(u), T(v)] &= T(\rho (T(u))v - \rho (T(v))u).\)
[1]
[ [ 16, 19 ] ]
https://openalex.org/W2962730465
bdc87336-4dba-42fc-b994-9fa200eba728
where \(m_{0}\) is the magnitude of the bulk magnetic moment [1]}, \(\sigma _{z}\) is the \(z\) Pauli matrix for the spin degree of freedom, \( \tau _{0}\) is a \(2\times 2\) unit matrix for the orbital degree of freedom, and the magnetization energy along the \(z\) direction is given by \(m_{0}\) , i.e., \(m_{0}\) is the exchange field from the magnetic dopants.
[1]
[ [ 63, 66 ] ]
https://openalex.org/W1970188810
4e066214-fee2-42d7-afdc-98fde0c800d8
For the \(2\times 2\) Hamiltonian in terms of the \(\textbf {d}(k_{\perp })\) vectors and Pauli matrices, the Kubo formula for the Hall conductance can be generally expressed as [1]}, [2]} \(\sigma _{xy} = \frac{e^{2}}{2\hbar } \int \frac{d^{2}\textbf {k}}{(2\pi )^{2}} \frac{(f_{k,c} - f_{k,\nu })}{d^{3}} \epsilon _{\alpha \beta \gamma } \frac{\partial d_{\alpha }}{\partial k_{x}} \frac{\partial d_{\beta }}{\partial k_{y}} d_{\gamma },\)
[1]
[ [ 180, 183 ] ]
https://openalex.org/W2029095318
a252fd48-6c97-4ae6-8802-a4e3cab82f2f
We propose an experimental scheme to manipulate the topological phases in Cr-doped Bi\(_{2}\) Se\(_{3}\) with high-frequency pumping light. Our proposal can be realized in an experimentally accessiable range. Particularly, to realize the light driven topological phases, the frequency and intensity of the light are both within the experimental accessibility [1]}, [2]}. In most of the recent experiments [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, people focuses on using the magnetic fields to manipulate the topological phases of the system. However, the topological phases obtained in this way may be confused with the quantum Hall effect. Luckily, our proposal avoids this. Therefore, the theoretically investigations we put forward will be helpful to the future experiments.
[1]
[ [ 360, 363 ] ]
https://openalex.org/W2964187990
7276a805-e6f9-4c3b-802e-b0b6849f174c
Conjecture 1.2 (see [1]}) If \(G\) is a cubic graph with girth at least 6, then \(i(G) \le \frac{|V(G)|}{3}\) .
[1]
[ [ 20, 23 ] ]
https://openalex.org/W2077326803
418b3447-ccfe-47b9-a9a5-3fc2c14cd65c
Definition: Since each \(v \in V\) can belong to several layers, we can consider vertices as pairs \(V_M \subseteq V \times L\) , where \(L\) is the set of associated layers. Edges \(E_M \subseteq V_M \times V_M\) indicate the connectivity of pairs \((v_i,l_p)\) , \((v_j,l_q)\) . An edge is considered as an intra-layer edge when \(l_p=l_q\) or an inter-layer edge when \(l_p \ne l_q\) , respectively. In biological networks, we would have \(L = \lbrace l_1, l_2, l_3, ..., l_p\rbrace \) , where \(l_1\) could be metabolites occurring in mitochondria and \(l_2\) could be metabolites existing in cytoplasm, and so on. Note that some metabolites, such as \(H_2O\) , which occurs in both mitochondria and cytoplasm, can be connected using an interlayer edge. This formulation becomes powerful in the sense that it covers existing concepts and can be further used as an intermediate form to transform one concept to another, not only model wise but also visually [1]}, [2]}.
[1]
[ [ 967, 970 ] ]
https://openalex.org/W3037826211
05b24594-b59e-4565-af70-275a7d400ec5
An example of many-nucleon systems described by the aforementioned hadronic models, in addition to finite nuclei and infinite nuclear matter, is the composing matter of some astrophysical objects as neutron stars, for instance. These objects are formed by protons, neutrons, leptons, and other exotic particles, interacting in such a way to ensure the \(\beta \) -equilibrium condition. According to several theoretical studies, the interior of a neutron star is composed by a solid crust, at low-density, surrounding a liquid homogeneous core at several times \(\rho _0\) . The crust, estimated to contain around \(1\%\) percent of the total mass of the star, has a complex structure and is extremely important for the understanding of some astrophysical observations [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]} such as X-rays bursts [10]}, and the abrupt spin-up in the rotational frequency of pulsars [11]}, [12]}.
[1]
[ [ 770, 773 ] ]
https://openalex.org/W4235382658
f612165d-ffe6-4186-b4d4-e2b9a0210a4c
The original relativistic mean-field model was developed in 1974 by [1]}. In this model, based on quantum filed theory, nucleons are described by the Dirac spinor \(\psi \) , and the exchanged mesons by the scalar and vector fields \(\sigma \) and \(\omega _\mu \) , responsible by attractive and repulsive nuclear interaction, respectively, in symmetric matter. In order to take into account also the isospin asymmetry (different numbers of protons and neutrons), the inclusion of the \(\rho \) meson, represented by the isovector field \(\vec{\rho }_\mu \) , is also needed. Here we study parametrizations of a generalized version of the Walecka model. The fundamental quantity that describes this improved model is the Lagrangian density given by [2]} and [3]} L = (i- Mnuc)+ g- g
[3]
[ [ 761, 764 ] ]
https://openalex.org/W2065119767
2391ae02-b7b6-4cea-bca0-0fbecd816802
In this work we analyze the outcomes related to the crustal properties of neutron stars provided by a set of relativistic hadronic mean-field (RMF) model parametrizations consistent with different constraints coming from symmetric and asymmetric nuclear matter, pure neutron matter, and some astrophysical data related studied in Refs. [1]}, [2]}. We use the approach in which it is possible to calculate crust mass (\(M_{\rm crust}\) ) and radius (\(R_{\rm crust}\) ) without a specific treatment for this part of the neutron star [3]}. Since this quantities are obtained, it is possible to determine analytically the ratio between the crustal fraction of the moment of inertia and the total moment of inertia, \(\Delta I/I\) given in Eq. (REF ), and verify which parametrizations satisfy the constraints of \(\Delta I/I \geqslant 1.4\%\) and \(\Delta I/I \geqslant 7\%\) , found to be important to correctly explain the glitching mechanism observed in pulsars, such as the Vela one [4]}, with and without entrainment effects included.
[2]
[ [ 342, 345 ] ]
https://openalex.org/W2257969343
aa032b73-300d-4f37-9ec9-c6b368455a72
In this work we analyze the outcomes related to the crustal properties of neutron stars provided by a set of relativistic hadronic mean-field (RMF) model parametrizations consistent with different constraints coming from symmetric and asymmetric nuclear matter, pure neutron matter, and some astrophysical data related studied in Refs. [1]}, [2]}. We use the approach in which it is possible to calculate crust mass (\(M_{\rm crust}\) ) and radius (\(R_{\rm crust}\) ) without a specific treatment for this part of the neutron star [3]}. Since this quantities are obtained, it is possible to determine analytically the ratio between the crustal fraction of the moment of inertia and the total moment of inertia, \(\Delta I/I\) given in Eq. (REF ), and verify which parametrizations satisfy the constraints of \(\Delta I/I \geqslant 1.4\%\) and \(\Delta I/I \geqslant 7\%\) , found to be important to correctly explain the glitching mechanism observed in pulsars, such as the Vela one [4]}, with and without entrainment effects included.
[3]
[ [ 532, 535 ] ]
https://openalex.org/W4245652956
11c75476-d074-4d18-a38b-d7f5cde1a182
Another investigation performed in this work was the analysis of how the nuclear matter bulk parameter, namely, incompressibility, effective mass, symmetry energy and its slope, affect the crustal properties of the neutron star. We verify that the symmetry energies is the quantity that produces the higher variation in \(M_{\rm crust}\) , \(R_{\rm crust}\) , and \(\Delta I/I\) , according to the results presented in Figs. REF , REF , and REF . Furthermore, we were able to construct a particular parametrization in which the \(\Delta I/I \geqslant 7\%\) constraint is satisfied for neutron stars masses of \(M=1.4M_\odot \) . This particular value of \(M\) was used in Ref. [1]} to properly fit data from the softer component of the Vela pulsar X-ray spectrum. The bulk parameters found for this purpose are \(\rho _0=0.15\)  fm\(^{-3}\) , \(B_0=-16.0\)  MeV, \(m^*=0.575\) , \(K_0=260\)  MeV, \(J=35\)  MeV, and \(L_0=70\)  MeV for the model with \(\alpha _1=\alpha ^{\prime }_1=\alpha _2=\alpha ^{\prime }_2=0\) . The mass-radius diagram for this specific RMF parametrization, displayed in Fig. REF , shows that it is compatible with data from pulsars PSR J1614-2230, \(M=1.97\pm 0.04M_{\odot }\)  [2]}, PSR J0348+0432, \(M=2.01\pm 0.04M_{\odot }\)  [3]}, and MSP J0740+6620, \(M=2.14{{+0.20}\atop {-0.18}}M_{\odot }\)  [4]}. We also verify agreement with data from the NICER mission, namely, \(M=1.44^{+0.15}_{-0.14}M_{\odot }\) with \(R=13.02^{+1.24}_{-1.06}\)  km [5]}, \(M=1.34^{+0.15}_{-0.16}M_{\odot }\) with \(R=12.71^{+1.14}_{-1.19}\)  km [6]}, and \(R_{1.44}>10.7\)  km [7]}. The radius for the Vela pulsar with entrainment effects included was estimated to satisfy \(R \geqslant 3.24293 + 4.43579(M/M_\odot ) - 0.39817(M/M_\odot )^2\) .
[2]
[ [ 1205, 1208 ] ]
https://openalex.org/W4230859992
27988a94-6175-4b67-b010-2ba0d40b9549
We used data from fMRI experiment 1 [1]}, where authors conducted experiments with multiple subjects by showing different categories of words as stimuli. Each category might correspond to activation of distinct brain regions. In the experiment, the target word was presented with a picture that depicted some aspect(s) of the relevant meaning. This fMRI dataset was collected from a total of 9 participants. For each participant in the experiment, a total set of 60 words (12 categories) were used as stimuli in multi-modal form (word, picture). The fMRI dataset constitutes \(51 \times 61\) voxel windows arranged as 23 slices, per subject per stimulus. We use the publicly available Mitchell's 25-feature vector data [1]} of 60 words as input and the corresponding brain response of each participant (containing 21000 voxels) as output to train the model. <FIGURE>
[1]
[ [ 36, 39 ], [ 720, 723 ] ]
https://openalex.org/W2168217710
866ef147-3f2d-4169-8b04-ff0a4ccc1f94
where \(d/d\mathbf {n}\) denote differentiation along the outward normal to the slope of a wedge-shaped sample [1]}. However, according to our numerical simulations the imposing of Eq. (REF ) lead to significant increase of the calculation time. To avoid such a problem and speed up numerical solution the inclined edge is treated as the stress free (see details of the numerical simulation in Appendix A). This assumption does not change essentially the results on qualitative and quantitative level and preserves our further conclusions (see below) regardless of the boundary conditions selection for the inclined edge of a wedge.
[1]
[ [ 112, 115 ] ]
https://openalex.org/W1588835142
8e4c9b45-1898-4764-8716-dde8d6b42832
(1) Suppose that \({\rm cp}(G)>\frac{1}{4}\) . If \(G\ne F\) , then \(\frac{1}{|G:F|^2}\le \frac{1}{4}\) and so \({\rm cp}(H)>1\) which is impossible. Thus \(G=F\) . Now [1]} follows that both \(G^{\prime }\) and \(G/Z(G)\) are finite.
[1]
[ [ 173, 176 ] ]
https://openalex.org/W2133647997
cb7b40c8-0ca4-4a3a-b16a-ff2c6b1911dd
(2) Suppose that \({\rm cp}(G)>\frac{3}{40}\) . It follows that \({\rm cp}(H)>\frac{3}{40}\) and \(|G:F|\in \lbrace 1,2,3\rbrace \) so that \(G^{\prime }\le F\) . Now [1]} implies that \(H\) is solvable or \(H\cong A_5 \times T\) for some abelian group \(T\) . If \(H\) is solvable, by isoclinism, \(H^{\prime }\cong F^{\prime }\) and so \(F\) is solvable and so is \(G\) , since \(G/F\) is cyclic. Now assume that \(H\cong A_5 \times T\) for some abelian group \(T\) so that \({\rm cp}(H)=\frac{1}{12}\) . Now (1) implies that \(G=F\) . By isoclinism, \(G/Z(G)\cong A_5\) and \(G^{\prime }\cong A_5\) . Therefore \(G=G^{\prime }Z(G)\) and so \(G\cong A_5 \times Z(G)\) .
[1]
[ [ 170, 173 ] ]
https://openalex.org/W2070677709
06350a21-e776-4481-9a80-e04e33b2d8ba
Low-light image enhancement is a very challenging low-level computer vision task because, while enhancing the brightness, we also need to control the color bias, suppress amplified noise, preserve details and texture information and restore blurred edges. Images captured in insufficient lighting conditions often suffer from several types of degradation, such as poor visibility, low contrast, color distortion and severe ISO noise, which have negative effects on other computer vision tasks, such as image recognition [1]} [2]} [3]}, object detection [4]} [5]} [6]} and image segmentation [7]} [8]} [9]}. Therefore, there is a huge demand for low-light image enhancement. According to [10]}, although adjusting camera settings (e.g., increasing ISO, extending exposure time and using flash) can enhance the brightness of the image and improve the visibility, it can also bring about specific problems, making the image suffer from degradation to varying degrees. For example, increasing the ISO may introduce additional noise and cause parts of the image to be overexposed. Extending the exposure time may blur the objects in the image. Using flash enhances the brightness of captured images, but it may also lead to an unnatural image with color bias and uneven brightness. In recent years, a great number of approaches have been proposed and achieved remarkable results in low-light image enhancement, but, to the best of our knowledge, there are few successful methods for simultaneously dealing with all degradations contained in low-light images (such as low brightness, color bias, noise pollution, detail and texture loss, edge blurring, halo artifacts and contrast distortion).
[8]
[ [ 596, 599 ] ]
https://openalex.org/W2963150697
85da76cd-8160-41f9-9cb6-f8a40526bafa
With the advent of deep learning, a great number of state-of-the-art methods have been developed for low-light image enhancement. LLNet [1]} is a deep auto-encoder model for enhancing lightness and denoising simultaneously. LLCNN [2]} is a CNN-based method utilizing multi-scale feature maps and SSIM loss for low-light image enhancement. MSR-net [3]} is a feedforward CNN with different Gaussian convolution kernels to simulate the pipeline of MSR for directly learning end-to-end mapping between dark and bright images. GLADNet [4]} is a global illumination-aware and detail-preserving network that calculates global illumination estimation. LightenNet [4]} serves as a trainable CNN by taking a weakly illuminated image as the input and outputting its illumination map. MBLLEN [6]} uses multiple subnets for enhancement and generates the output image through multi-branch fusion. RetinexNet [7]} decomposes low-light input into reflectance and illumination and enhances the lightness over illumination. EnlightenGan [8]} trains an unsupervised generative adversarial network (GAN) without low/normal-light pairs. KinD [9]} first decomposes low-light images into a noisy reflectance and a smooth illumination and then uses a U-Net to recover reflectance from noise and color bias. RDGAN [10]} proposes a Retinex decomposition based GAN for low-light image enhancement. SID [11]} uses a U-Net to enhance the extremely dark RAW image. RetinexDIP [12]} provides a unified deep framework using a novel ”generative” strategy for Retinex decomposition. Zhang et al. [13]} presented a self-supervised low-light image enhancement network, which is only trained with low-light images. Zero-DCE [14]} estimates the brightness curve of the input image without any paired or unpaired data during training.
[7]
[ [ 894, 897 ] ]
https://openalex.org/W2887817889
039f2cb9-aa81-4d97-a698-4683fd076dc7
With the advent of deep learning, a great number of state-of-the-art methods have been developed for low-light image enhancement. LLNet [1]} is a deep auto-encoder model for enhancing lightness and denoising simultaneously. LLCNN [2]} is a CNN-based method utilizing multi-scale feature maps and SSIM loss for low-light image enhancement. MSR-net [3]} is a feedforward CNN with different Gaussian convolution kernels to simulate the pipeline of MSR for directly learning end-to-end mapping between dark and bright images. GLADNet [4]} is a global illumination-aware and detail-preserving network that calculates global illumination estimation. LightenNet [4]} serves as a trainable CNN by taking a weakly illuminated image as the input and outputting its illumination map. MBLLEN [6]} uses multiple subnets for enhancement and generates the output image through multi-branch fusion. RetinexNet [7]} decomposes low-light input into reflectance and illumination and enhances the lightness over illumination. EnlightenGan [8]} trains an unsupervised generative adversarial network (GAN) without low/normal-light pairs. KinD [9]} first decomposes low-light images into a noisy reflectance and a smooth illumination and then uses a U-Net to recover reflectance from noise and color bias. RDGAN [10]} proposes a Retinex decomposition based GAN for low-light image enhancement. SID [11]} uses a U-Net to enhance the extremely dark RAW image. RetinexDIP [12]} provides a unified deep framework using a novel ”generative” strategy for Retinex decomposition. Zhang et al. [13]} presented a self-supervised low-light image enhancement network, which is only trained with low-light images. Zero-DCE [14]} estimates the brightness curve of the input image without any paired or unpaired data during training.
[9]
[ [ 1121, 1124 ] ]
https://openalex.org/W2943838036
0ac7cccb-3913-4961-a329-01d733e9bfb1
Based on the assumption of Retinex Theory [1]}, a natural image (S) can be decomposed into two components: reflectance (R) and illumination (L). \(S=R*L\)
[1]
[ [ 42, 45 ] ]
https://openalex.org/W2076205488
6ab4de23-b47a-4f6f-b361-9e27f5e35645
where * represents a pixel-wise product operator. Reflectance is usually a three-channel image that contains color and the most of the high-frequency components, such as details and texture information. Illumination is a very smooth single-channel image that only contains low-frequency components, such as the intensity and distribution of lumination. In the process of decomposition, illumination is smooth enough to be regarded as noise-free since it only contains low-frequency information. However, noise hidden in the dark is amplified in the reflectance, which results in a very low peak signal-to-noise ratio (PSNR) for the reflectance. Previous approaches, such as LIME [1]}, RetinexNet [2]} and KinD[3]}, used additional well-designed denoisers, such as BM3D [4]}, CBDNet [5]} or an embedded denoiser, to denoise the reflectance. However, there may be some problems such as color bias and loss of high-frequency details in the reflectance after applying extra denoisers. Furthermore, additional denoisers can significantly reduce the forward inferencing speed of the whole pipeline. In the enhancement phase, the brightness of the illumination is enhanced, but if the intensity of brightness and the distribution of light are not restored correctly, the result will be overexposed or underexposed. The color information of the image depends not only on the reflectance but also on the brightness information of the illumination. Incorrectly predicted illumination maps can also result in color bias. A variety of degradations may arise after enhancing the brightness of the low-light image. Many of the previous methods used multiple sub-methods or sub-networks to tackle some of these problems in numerous steps [1]}[2]}[3]}, which can slow down the speed of the pipeline. Unlike previous methods, we aim to enhance the lightness of low-light images without introducing extra networks to deal with real-world noise and color distortion. We use two simple but effective U-Nets and the NCBC Module by carefully adjusting a series of well-designed loss functions rather than designing multiple sub-networks to deal with these problems individually. We are inspired by RetinexNet [2]} and we improve the shortcomings of RetinexNet. <FIGURE>
[1]
[ [ 679, 682 ], [ 1723, 1726 ] ]
https://openalex.org/W2566376500
b109ab62-c465-4a7e-9a58-182ce58cbd17
where * represents a pixel-wise product operator. Reflectance is usually a three-channel image that contains color and the most of the high-frequency components, such as details and texture information. Illumination is a very smooth single-channel image that only contains low-frequency components, such as the intensity and distribution of lumination. In the process of decomposition, illumination is smooth enough to be regarded as noise-free since it only contains low-frequency information. However, noise hidden in the dark is amplified in the reflectance, which results in a very low peak signal-to-noise ratio (PSNR) for the reflectance. Previous approaches, such as LIME [1]}, RetinexNet [2]} and KinD[3]}, used additional well-designed denoisers, such as BM3D [4]}, CBDNet [5]} or an embedded denoiser, to denoise the reflectance. However, there may be some problems such as color bias and loss of high-frequency details in the reflectance after applying extra denoisers. Furthermore, additional denoisers can significantly reduce the forward inferencing speed of the whole pipeline. In the enhancement phase, the brightness of the illumination is enhanced, but if the intensity of brightness and the distribution of light are not restored correctly, the result will be overexposed or underexposed. The color information of the image depends not only on the reflectance but also on the brightness information of the illumination. Incorrectly predicted illumination maps can also result in color bias. A variety of degradations may arise after enhancing the brightness of the low-light image. Many of the previous methods used multiple sub-methods or sub-networks to tackle some of these problems in numerous steps [1]}[2]}[3]}, which can slow down the speed of the pipeline. Unlike previous methods, we aim to enhance the lightness of low-light images without introducing extra networks to deal with real-world noise and color distortion. We use two simple but effective U-Nets and the NCBC Module by carefully adjusting a series of well-designed loss functions rather than designing multiple sub-networks to deal with these problems individually. We are inspired by RetinexNet [2]} and we improve the shortcomings of RetinexNet. <FIGURE>
[4]
[ [ 769, 772 ] ]
https://openalex.org/W2003884262
0255cfbe-e774-44b7-8613-e9d53ec26183
Our NCBC Module consists of a plain CNN, whose architecture is shown in Fig. REF , and two loss functions: noise loss and color loss. The input images of our NCBC Module are: (1) the reflectance with noise and color distortion, which is decomposed from the low-light input images and the reflectance without noise; and (2) the color distortion, which is decomposed from normal-light GroundTruth. We apply TV loss to the output of the reflectance with noise and color distortion to smooth the \(R_{low}\) . Total variance loss (TV loss) [1]} is as follows: \(L_{TV}^{low}=\left\Vert \triangledown _{h}\phi (R_{low})] \right\Vert _{2}^{2}+\left\Vert \triangledown _{v}\phi (R_{low})] \right\Vert _{2}^{2}\)
[1]
[ [ 536, 539 ] ]
https://openalex.org/W2963725279
bd7dd90a-3eb4-4c9f-908a-2b4e82bc6a50
We evaluate our method on the LOL [1]} validation dataset and test it on several widely used datasets, including DICM [2]} dataset, LIME [3]} dataset and MEF [4]} dataset. We adopt PSNR, SSIM [5]}, LPIPS [6]}, FSIM [7]} and UQI [8]} as the quantative metrics to measure the performance of our method. In addition, we use Angular Error [9]} and DeltaE[10]} as the metrics of color distortion to calculate the color bias between our results and GroundTruth. The Angular Error is as follows: \(Angular Error=arcos(\frac{<S_{output},S_{high}>}{\left\Vert S_{output} \right\Vert \cdot \left\Vert S_{high} \right\Vert })\)
[6]
[ [ 204, 207 ] ]
https://openalex.org/W2962785568
60ae2603-186f-4057-829f-bb39c43ba156
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[1]
[ [ 232, 235 ] ]
https://openalex.org/W2150721269
5172f569-c0fa-4caf-a608-90131a3e8799
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[8]
[ [ 306, 309 ] ]
https://openalex.org/W2791710889
7c545daa-7292-410e-8fef-0a48f4068b60
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[10]
[ [ 329, 333 ] ]
https://openalex.org/W2963228457
d705e049-bbdb-4fe5-9f3b-0d8a5e27c78b
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[13]
[ [ 401, 405 ] ]
https://openalex.org/W2893333553
4962d717-20ba-4f3d-b18e-90799e59e0c2
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[15]
[ [ 431, 435 ] ]
https://openalex.org/W2807563922
dca9a4f2-3200-443a-9f95-e3b3930ea239
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[17]
[ [ 459, 463 ] ]
https://openalex.org/W3035731588
686d263f-3e0c-470a-ab7a-f2248bcef0e9
Higher values of PSNR, SSIM, FSIM and UQI and lower value of LPIPS indicate better quality of images. We compared our method with other state-of-the-art methods in terms of these metrics, including traditional methods such as MSRCR [1]}, BIMEF [2]}, LIME [3]}, Dong [4]}, SRIE [5]}, MF [6]}, NPE [7]}, RRM [8]}, LECARM [9]}, JED [10]}, PLM [11]} and DIE [12]} and deep learning methods such as MBLLEN [13]}, RetinexNet [14]}, GLAD [15]}, RDGAN[16]}, Zero-DCE [17]}, Zhang [18]} and EnlightenGan [19]}. As shown in Table REF , our method achieves the best performance in PSNR, SSIM, LPIPS, FSIM and UQI.
[19]
[ [ 495, 499 ] ]
https://openalex.org/W3121661546
206dd70a-a3f2-4c10-a861-5d4730be71cc
Equipped with the product topology, \(X\) is a Cantor set. We note that the space \(X\) along with the left shift is called a one-sided subshift of finite type, see [1]} and references therein regarding relations with the symbolic dynamics.
[1]
[ [ 167, 170 ] ]
https://openalex.org/W2964342717
760995f2-49f5-42b5-9a8a-0ae4777a6a38
We tackle the intractability problem that [1]} faced, we use Graph R-CNN [2]} and Detectron2 [3]} to extract textual information about objects and their properties i.e., types and attributes, and relations to other objects. This step vastly reduces the search space when generating utterances.
[1]
[ [ 42, 45 ] ]
https://openalex.org/W2964183327
55c2e2c6-a555-4973-8833-7bb86aaa6245
We provide a detailed analysis of the result, specifically on the types of error based on the human evaluation. This deviates from standard evaluation process where they key metric is the comprehension accuracy (i.e is the expression distinctively describe the target) and provides a new angle in analysing expression quality. Background RSA RSA, first introduced by [1]} encapsulates the idea that pragmatic reasoning is essentially Bayesian. In the reference game scenario studied by [1]}, the domain consists of a set of objects with various qualities that are fully available to two players. The speaker will describe one targeted object unknown to the listener by creating a referring expression and the listener needs to reason about which object the expression is referring to. As laid out by [3]}, RSA is a simple Bayesian inference model with three components: literal listener, pragmatic speaker and pragmatic listener. For a given object \(o\) and utterance \(u\) : \(\textit {literal listener}\ P_{L_0}(o|u) \propto \llbracket u\rrbracket (o) \cdot P(o)\) \(\textit {pragmatic speaker}\ P_{S_1}(u|o) \propto \alpha U(u,o)\) \(\textit {pragmatic listener}\ P_{L_1}(o|u) \propto P_{S_1}(u|o) \cdot P(o)\) where \(\llbracket u\rrbracket \) is the literal meaning of \(u\) , either true (1) or false (0). The literal listener thus interprets an utterance at face value, modulo the prior probability of referring to that object \(P(o)\) , which we take to correspond to the object's salience. The pragmatic speaker decides which utterance to make by using the utility function \(U(u,o)\) , which is a combination of literal listener score and a cost function and the \(\alpha \) term denotes the rationality scale of the speaker. Lastly, the pragmatic listener infers the targeted object by estimating the likelihood that the speaker would use the given utterance to describe it. [1]} showed that RSA can accurately model human listener behavior for one-word utterances in controlled contexts with few objects and few relevant properties. Since then, a wealth of evidence has accumulated in support of the framework; see [3]} for some examples. Still, most RSA models use a very constrained utterance space, each utterance being a single lexical item. [6]} explore RSA models with two-word utterances where each utterance is associated with its own (continuous) semantics. But it remains a major open question how to scale up RSA models for large-scale natural language processing tasks. <FIGURE> Detectron2 and Graph R-CNN The RSA framework requires prior knowledge about the images and targets in order to generate expressions. Most approaches that use RSA and the speaker/listener model acquire this knowledge through a deep learning model that learns an embedding of the image and the target object, represented as a bounded box or bounded area inscribed on the image then use these embeddings to generate expressions. Instead of using embeddings, we decided to take a different route by generating the symbolic knowledge in the form of scene graph obtained from the image using Detectron2 and Graph R-CNN, which contains objects, properties, and relations, all in a lingual format, which is the ideal input for an RSA model. Detectron2 is the state-of-the-art object detection model developed by [7]} that utilizes multiple deep learning architecture such as Faster-RCNN [8]} and Mask-RCNN [9]} and is applicable to multiple object detection tasks. Graph R-CNN [10]} is a scene graph generation model capable of detecting objects in images as well as relations between them using a graph convolutional neural network inspired by Faster-RCNN with a relation proposal network (RPN). RPN and Graph R-CNN is among the state-of-the-art architecture in objects' relation detection and scene graph generation. Method As discussed in [3]} and [1]}, RSA requires a specification of the utterance space and background knowledge about the state of the `world' under consideration. Thus, we view the problem of generating referring expressions as a two-step process where, given an image and a targeted region, we: (1) Acquire textual classifications (e.g. car) of the objects inside the image and the relations between objects in the image; (2) Generate a referring expression from the knowledge acquired from step (1). In step (1), most previous work falls into two categories. [1]} and [6]} assume the information about objects and their properties are known to the agent generating the expression. On the other hand, [15]} and [16]} use deep learning to obtain embeddings of the image and the targeted region. [17]} combine the embedding extraction step with the referring expression in one single model. In step (1), we neither assume the availability of descriptive knowledge of the images like [1]} nor do we use an image and region embedding like [15]}. Instead, we generate both the utterance space and the literal semantics of the input image by applying Graph R-CNN to obtain objects' relations and Detectron2 to obtain objects' properties. This idea is motivated by the intractable problem that [15]} face when considering a vast number of utterances at every step. By extracting the symbolic textual information from images, we vastly reduce the number of utterances per step since the number of objects, their relations, and properties are limited in each image. Specifically, Detectron2 outputs objects and the probability that some property is applicable to those objects. For example, a given object categorized as an elephant might have a high probability of having the property big and a lower probability of having the property pink. Graph R-CNN outputs pairs of objects and probabilities of how true some predefined relation is to some pair of objects. One challenge in merging computer vision systems with datasets like RefCOCO is matching the target referent in the dataset to the right visually detected object (assuming it is found). RefCOCO provides a bounding box around the target referent, and Detectron2 and Graph R-CNN may or may not identify an object with the same position and dimensions. One simple approach is to use the most overlapped detected object with the target box as the subject for the generation algorithm. However, there is no guarantee that the most overlapped detected object is the target. We overcome this problem by combining feature extraction with target feature extraction from Detectron2. We first let Detectron2 identify all the objects it can in the image (call this the context). We then instruct Detectron2 to consider the target box an object and classify it. If there is an object in the context that overlaps at least 80% with the target box and is assigned the same class, then we leave the context as is; otherwise we add the target box to the context. <FIGURE>To enrich object relations beyond binary relations in Graph R-CNN, we also implemented a simple algorithm to generate ordinal relations. We do so by sorting detected objects of the same category (e.g all dogs in an image) by the \(x\) -axis and assign predefined ordinal relations such as left, right, or second from left. The product of these image analysis methods are used in the literal semantics, which are categorical, although they are based on the gradient output of Detectron2 and Graph R-CNN, which assigns objects to properties and relations with varying degrees of certainty. Since Detectron2 and Graph-RCNN output likelihood values for attributes and types for each object as shown in Figure REF , the last step in the textual extraction process is using a cutoff threshold to decide what level of likelihood make one attribute belongs to a particular object. If the threshold is too low, then objects would contain many irrelevant attributes; if the threshold is too high, there may not be enough attributes to uniquely describe some objects. Currently, we use a hard-coded value that is slightly higher than the minimum value where most of the irrelevant attributes and types are, as examined by hand. Thus, in the spirit of [21]}, we assume a threshold \(\theta \) to decide whether a given type or attribute holds of a given object. Let \(F\) be a function that assigns: to each attribute and type, a function from \(D\) to [0,1]; and to each relation, a function from \(D\times D\) to [0,1], where \(D\) is the set of objects in the image. \(F\) represents the output of the Detectron2 and Graph R-CNN. For each type, attribute, and relation symbol \(u\) , \(\theta (u)\) is a threshold between 0 and 1 serving as the cutoff for the truthful application of the type, attribute, or relation to the object(s). Then \(\llbracket u\rrbracket (o) = 1\) iff \(F(u)(o) \ge \theta (u)\) , etc. Ultimately we plan to learn these thresholds from referring expression training datasets such as RefCOCO. Currently, they are fixed by hand: one uniform threshold for types/attributes and relations, respectively. Using categorical semantics rather than the gradient semantics that would be obtained directly from the Detectron2 avoids the well-known problems of modification in fuzzy semantics, a proper solution to which would require conditional probabilities that are unknown [22]}. Our key contribution with respect to step (2) is at the speaker level. We introduce iterative RSA, described in the Algorithm REF below. Iterative RSA takes as input the domain of all objects \(D\) , a prior \(P(d)\) over all objects \(d\in D\) , the referent object \(o\) and list of possible `utterances' \(U\) . Although an utterance may consist of multiple words, each `utterance' here is a single predicate (e.g. dog, second from left, wearing black polo). We will use the word `descriptor' instead of `utterance' in this setting, because the strings in question may be combined into a single output that the speaker pronounces once (a single utterance, in the proper sense of the word). Again, we take the prior over objects to be proportional to salience (which we define as object size). Our RSA speaker will iteratively generate one descriptor at a time and update the listener's prior over objects at every step until either (i) the entropy of the probability distribution over objects reaches some desirable threshold \(K\) , signifying that the listener has enough information to differentiate \(o\) among objects in \(D\) , or (ii) the maximum utterance length \(T\) has been reached. Inputinput Outputoutput initialization: \(E=[]\) \(t < T\) & Entropy\((P_{D}^{t-1}) < K\) \(u\) = sample(Speaker \(P_{S_1}(u|o,P_{D}^{t-1},U_E)\) ) \(P_{D}^t\) = Literal listener \(P_{L_0}(o|u, P_{D}^{t-1})\) add \(u\) to \(E\) \(E\) Iterative RSA In standard RSA, the utility function \(U(u,o)\) is defined as \(U = \log (P_{L_0}(o|u)) + \text{cost}(u)\) [3]}. We define ours as: \(U_E = \log (P_{L_0}(o|u) + P_{ngram}(u|E)) + \text{cost}(u)\) where \(P_{ngram}\) is the probability of \(u\) following the previous \(n\) words in \(E\) . Specifically, we use a 3-gram LSTM model (\(n\) =3). Figure REF outlines our overall workflow. Experiment and Result The framework is implemented in Python and will be made publicly available. In the implementation of Algorithm REF , we set \(T=4\) . This value for maximum utterances per expressions come from the average length of the expressions from our target dataset, both RefCOCO and RefCOCO+ have average length less than 4 utterances per expression. We evaluate our framework on the test set of RefCOCO and RefCOCO+ datasets released by [24]}. For these two datasets, each data point consists of one image, one bounding box for a referent (the target box) and some referring expressions for the referent. We used pre-trained weights from the COCO dataset for Graph R-CNN and Detectron2. Additionally, we experiment separately with finetuning Detectron on RefCOCO referring expressions. Finally, we test the framework with RefCOCO Google split test set and RefCOCO+ UNC split test set. We evaluate the generated expressions on the test dataset with both automatic overlap-based metrics (BLEU, ROUGE and METEOR) and accuracy (human evaluation) (Table REF ). Specifically, we run human evaluation through crowdsourcing site Prolific on the following scheme: our IterativeRSA, RecurrentRSA [16]} and SLR [26]} trained on \(0.1\%, 1\%\) and \(10\%\) of the training sets of RefCOCO and RefCOCO+. For each scheme, we collected survey results for 1000 randomly selected instance from the RefCOCO test dataset from 20 participants and 3000 instances from RefCOCO+ test dataset from 60 participants. Each image is preprocessed by adding 6 bounding boxes on some objects in the image, one of which is the true target. The boxes are chosen from 5 random objects detected by Detectron2 an the true target object. Each participant is asked to find the matching object given expression for 50 images through multiple choice questions. In addition, we also manually insert 5 extra instances where the answer is fairly obvious and use those instances as a sanity check. Data from participants who failed more than half of the sanity checks (i.e \(3/5\) ) was not included in the analysis. Since our referring expressions are generated based on extracted textual information about individual objects and not the raw image as a whole, there are cases where Detectron2 does not recognize the object in the target box or the suggested bounding box from Detectron2 is different in size compared to the target box. In such cases, our algorithm ended up generating an expression for a different observable object than the targeted one. To understand the different types of errors our model makes, we also included additional options in cases where the testers cannot identify a box that matched the expression. Specifically, we added three categories of error when no (unique) matching object is identified: nothing in the picture matches the description several things match this description equally well the thing that matches the description best is not highlighted Despite the simplicity of our proposed method, it achieves comparable performance in terms of METEOR score to the Speaker-Listener-Reinforcer(SLR) [26]}. More importantly, our method outperforms SLR in human comprehension under low training data scheme and RecurrentRSA with both RefCOCO and RefCOCO+. <TABLE>Beside raw accuracy, we also report the accuracy rate using the formula \( adjusted-accuracy = True/(True+False+Underinformative)\) where Underinformative counts instances where the expressions correctly refer to the referent objects but are not distinctive enough. Our human evaluation accuracy is slightly less than that of MMI [24]} and while our METEOR score is higher. However, our performance measures fall short when compared to the state-of-the-art extensively trained end-to-end deep neural network model by SLR [29]}. This is to be expected as our method was not trained and does not require training on the specific task of referring expression generation or comprehension. Further performance analysis will be given in the next sections. <TABLE><TABLE>Comparison with Recurrent RSA and SLR trained with limited data As discussed above, to see the advantages and drawbacks of Iterative RSA, we run human evaluation on generated expressions from RefCOCO and RefCOCO+ datasets and compare Iterative RSA with RecurrentRSA-another RSA approach as well as SLR. From Table REF , Iterative RSA outperforms RecurrentRSA with \(28\%\) compared to \(26.9\%\) . On the other hand, to make a fair comparison with a deep learning end-to-end approach like SLR, we decided to train SLR with limited training data as Iterative RSA does not require any direct training process. From Table REF , the Iterative RSA (no training) outperforms all SLR models trained with \(0.1\%, 1\%\) and \(10\%\) training data for refCOCO+ dataset and outperform SLR model trained with highly limited training data (\(0.1\%\) ) on RefCOCO. Furthermore, when examining the SLR-generated expressions, we observed that for the model trained and tested on RefCOCO dataset, a lot of the expressions contains positional property of objects such as left, right, which makes identifying the target easier when the expression is low quality and incomplete (as a result of training on limited data). Thus, we can see that SLR performs better on RefCOCO than RefCOCO+. On the other hand, IterativeRSA performs more consistently, especially when used without any training or observation of the data. Finetuning the Detectron2 model for object detection with RefCOCO expressions improve the performance on the corresponding dataset, however, using the same model on the RefCOCO+ dataset does not show any significant change in accuracy. <FIGURE>Figure REF is an example of referring expression generated with RSA compared to SLR trained with limited data. For the RSA expression, it clearly shows that the model explains Gricean maxim of quantity by generating the shortest possible word to describe the target which are the jeans, whereas SLR shows the overfitting behavior when generating unrelated expression to the target. Analysis of the human evaluation As mentioned above, in our study, aside from letting users choose one of the objects surrounded by bounding boxes given the generated expression, we also give additional options to handle the case where survey participants cannot find a sensible object to match the description. Overall, we observe that incorrect responses can be divided into the following categories: under-informative expression, not highlighted, no match and false. These categories of error help in identifying the sources of deficiency in our approach. If the expression is under-informative, there are two possibilities. The first is that the textual data extraction step (i.e., Detectron2) was able to identify multiple objects of the same type, but the algorithm is unable to differentiate between the target and the rest of the objects. In this case the problem is on the linguistic side of our model. Another possibility is that not all objects of the relevant type were detected, which is the deficiency of our visual system (Detectron2). Another type of visual system deficiency happens when the described object is not the highlighted one or if there is no match. In these cases, the visual system (Detectron2) mis-classified the object in the bounding box. As shown in Table REF , about \(48\%\) of the recorded instances belong to these two categories. Under-informative expressions One type of error is when the generated expression is under-informative. This occurs when the expression correctly indicated the type of the target object but failed to differentiate between the target and other objects of the same type in the picture. For example, in Figure REF , the algorithm was able to correctly identify the type of object in the bounding box but the modifier (cooking) failed to differentiate the target from the other instance of that type. <FIGURE> Object not highlighted Another type of errors revealed through human evaluation is when the matching object is not highlighted as the target. <FIGURE>This type of deficiency is due to the textual extraction component (Detectron2) not observing all objects of the same type. In Figure REF , Detectron2 can only observe four instances of the category man, which are all highlighted in this image with box \(1,2,3,5\) . When comparing the available attributes for these mans, target man in box 2 (i.e., the light green box at the bottom left of the image) is assigned a distinctive attribute that others do not have: laying down (although he is sitting, not laying down). The use of this modifier increases the salience of the target relative to the other individuals that are detected. It is quite possible that participants assumed laying down man refers to the only person at the bottom center of the image who is actually laying down. However, that individual is not detected by Detectron2 and thus there is no highlighted box. High quality expression When the participants correctly identify the target object by choosing the right bounding box, we observe that the textual extraction step provides sufficient information for the algorithm to work correctly. Figure REF is an example where we observe that the system works well when the extracted textual information is accurate and sufficient. Specifically, Detectron2 found all the objects of the type train in box 4 and 5. Furthermore, the train objects have fairly sensible attributes, including the left and the right. <FIGURE> Discussion The Iterative RSA introduced in this paper is able to generate multiple-modifier descriptions, which goes far beyond the vanilla RSA speaker described by [3]} and [1]}, and our RSA speaker has even gone past the two-word stage of [6]}. While the result is not at the level of the state-of-the-art end-to-end model, Iterative RSA outperforms Recurrent RSA and SLR trained under limited data. We can clearly explain how our model comes up with the referring expressions it generates. The explainability of our model is a contrast feature when compare with RecurrentRSA. While RecurrentRSA also applies the RSA model to generate expressions, its expression generation by recursively generate characters makes it hard to explain why at each step, why one character is a feasible choice that helps identify a target object. Furthermore, to our knowledge, we are the first attempt to apply pure probabilistic RSA model without any neural network components in the expression generation step of the referring expression generation from image task. From the analysis of the human evaluation and concrete examples, it is clear that the performance of Iterative RSA is tightly coupled with the performance of the textual extraction model, particularly Detectron2. When Detectron2 detects enough information, including the objects in a given image as well as their probable attributes, we observe that our proposed Iterative RSA can create high quality expressions with distinctive modifiers. Another key strength and also a weakness of our proposed iterative RSA is the size of the vocabulary of descriptors. Currently, this vocabulary is limited to the attributes and types vocabulary that Detectron2 possesses. While this vastly reduces the search space of all possible descriptors, it also limits the possible descriptors that RSA can choose from, given a target. The textual extraction step (Detectron2 in this case) can be analogized to the act of “observing” and the Iterative RSA algorithm to “reasoning”. One cannot reason about objects or aspects of objects that are not observed. On the other hand, in terms of efficiency, our proposed method is fast because Iterative RSA does not require training data and can be applied directly on the fly with any given textual extraction system. In addition, our application of Detectron2 and Graph-RCNN also does not require training as it utilizes pre-trained weights. Experiments with fine-tuning Detectron2 with RefCOCO data does show better accuracy on the test set of RefCOCO dataset but does not show any major improvement when tested on RefCOCO+ as shown in Table REF . Thus, the base Iterative RSA is more generalized and consistent across different datasets. Minimal reliance on training data has other advantages: That property makes our approach a promising one for low-resource languages where labeled data for training, especially for vision-language tasks such as referring expression generation/comprehension, are virtually non-existent [33]} for languages other than English. Conclusion In this paper, we have explored the possibility of decomposing referring expression generation into a two-component process of symbolic knowledge acquisition and expression generation, adapting the RSA framework to real world scenes where textual information is not available. We also introduce two promising innovations that help to address the intractability problem of applying RSA to real world scenes in previous work, which includes (1) constraining the utterance space using the output of object recognition and scene graph generation systems, and (2) proposing a simple yet intuitive and explainable model for referring expression generation called iterative RSA, which incrementally outputs referring expression one predicate at a time. Lastly, our method allows for easy analysis and understanding of each individual expression, and provides clear explanations as to why the system generates the expressions it does.
[1]
[ [ 367, 370 ], [ 486, 489 ], [ 1895, 1898 ], [ 3856, 3859 ], [ 4389, 4392 ], [ 4810, 4813 ], [ 20795, 20798 ] ]
https://openalex.org/W1993979041
d3168163-d8dc-4bec-9315-61a728616bd9
Detectron2 is the state-of-the-art object detection model developed by [1]} that utilizes multiple deep learning architecture such as Faster-RCNN [2]} and Mask-RCNN [3]} and is applicable to multiple object detection tasks. Graph R-CNN [4]} is a scene graph generation model capable of detecting objects in images as well as relations between them using a graph convolutional neural network inspired by Faster-RCNN with a relation proposal network (RPN). RPN and Graph R-CNN is among the state-of-the-art architecture in objects' relation detection and scene graph generation.
[4]
[ [ 236, 239 ] ]
https://openalex.org/W2886970679
7f412f32-ce16-4c2b-9628-c3bb2912f070
In the third possible scenario, two WD merge or collide. This process occurs on a dynamical timescale, much faster than the slow accretion timescales from the previous processes [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}. In simulations of this process, the ejecta show large-scale density asymmetries.
[1]
[ [ 178, 181 ] ]
https://openalex.org/W1522553235
f912098b-fee6-4b6e-84ea-c48988cf6b95
The simulations begin with a spherical hydrostatic C/O WD, identical to that of Stage 1 [1]}. The simulations utilize a nuclear network of 218 isotopes during the early phases of the explosion; detailed, time-dependent non-LTE models for atomic level populations; and \(\gamma -\) ray and positron transport and radiation-hydrodynamics to calculate low-energy LCs and spectra [2]}, [3]}, [4]}.
[3]
[ [ 382, 385 ] ]
https://openalex.org/W3037302264
daadef93-2b61-4311-8490-9a3deb69a5ea
Coupling of radiation transport, statistical and hydro equations: We use the well established method of accelerated lambda iteration (ALI, e.g. [1]}, [2]}, [3]}, [4]}, [5]}). We employ several concepts to improve the stability, and convergence rate/control including the concept of leading elements, the use of net rates, level locking, reconstruction of global photon redistribution functions, equivalent-2-level approach [6]}, [7]}, [8]}, and predictor-corrector methods.
[7]
[ [ 429, 432 ] ]
https://openalex.org/W4234289204
f33e0898-6ee1-4c1f-9bf1-4198ecc4bc3b
Success in self-supervised training of speech encoder [1]} enables significant advancements in various speech processing tasks, ranging from ASR [2]}, [3]}, speech-to-text translation (S2T) [4]}, [5]}, which are critical components in building speech-to-speech translation systems [6]}, [7]}. Such pretrained encoder produces speech hidden representations that can be discretized into units that condense semantic and prosodic information [3]}. Specifically, the self-supervised HuBERT model [3]} is trained to encode input speech into discrete units by performing \(k\) -means clustering over the hidden vectors by a pretrained \(k\) -means model. <FIGURE>
[1]
[ [ 54, 57 ] ]
https://openalex.org/W3099782249
c0930354-361f-4187-a95b-895b9200998c
Recent S2ST models [1]}, [2]}, [3]} make use of such \(k\) -means units that, instead of directly generating an audio signal, which is considerably slow, they generate shorter unit sequences with a heavy speech-to-unit translation (S2UT) model and use a lightweight vocoder [4]} to convert units to output audios. Specifically, the S2UT model is an attention-based Seq2Seq model [5]}. Its encoder is a Wav2vec 2.0 [6]} model that is pretrained to encode speech representations from unlabeled audios. It consists of a multi-layer convolutional network to encode raw audio signal, followed by a Transformer [5]} (or Conformer [8]}) encoder to produce contextual representations for the audio. Meanwhile, the decoder is a unit-mBART [9]}, which is pretrained with masked language modeling on the unsupervised reduced discrete unit data derived from unlabeled speech via the HuBERT-\(k\) -means model [10]}. During S2S training, the S2UT model is initialized with the pretrained models [6]}, [9]} and then finetuned with speech-to-unit data, which is in fact the S2S data where the target speech is converted to discrete units. During finetuning, Popuri et al [3]} suggest that it is most beneficial to freeze the decoder parameters, except its layer-norm layers [14]} in the attention modules. fig:s2umodel depicts the architecture of the direct speech-to-speech translation system. More importantly, Popuri et al [3]} also make use of intensive data augmentation, where extra supervised speech recognition (ASR) data is used with pretrained MT [5]} and TTS [17]} models to synthetically generate more speech-to-speech data for training, which profoundly improves the performance. Our approach is built on top of this work in that, in addition to speech-based data augmentation, we introduce an effective way to convert the existing massive unlabeled text data into prosodically diverse speech-to-speech data to add into the training data pool.
[1]
[ [ 19, 22 ] ]
https://openalex.org/W2972495969
5868e0b7-6309-4150-85bd-c2b595d96534
In terms of model setup, similar to [1]}, we use the multilingual HuBERT and \(k\) -means model [2]}, which was pretrained from unlabeled VoxPopuli speech data [3]}. The S2U model's encoder is a large Conformer Wav2Vec 2.0 [4]}, [5]} pretrained with Libri-light [6]} for En and VoxPopuli [3]} for Es. The decoder is the unit mBART [8]} that was pretrained from reduced units derived from the aforementioned unlabeled speech with the HuBERT-\(k\) -means models.
[8]
[ [ 331, 334 ] ]
https://openalex.org/W3107826490
7a551eea-0e90-45fd-aac6-608909fd9738
Regarding setups relating to our method, to produce the Text-aug dataset, we use the pretrained unsupervised MT model CRISS [1]} to translate \(\sim \) 12M En and 12M Es monolingual sentences, which are randomly sampled from the CC25 [2]} corpora, into Es and En for respectively. After further filtering and TTS speech conversion [3]}, we obtain \(\sim \) 14K and 21K hours of audio Text-aug data for Es\(\rightarrow \) En and En\(\rightarrow \) Es tasks, which are almost 10x the original data [4]}. Despite its much larger size, during S2U finetuning, we sample the original and Text-aug data at a 50:50 sampling ratio to ensure that the model has sufficient exposure to real audios to avoid further distribution shift. We compare our method with the state of the art [4]}, along with related baselines such as the cascaded S2T+TTS and ASR+MT+TTS systems or [4]} with back-translation data from unlabeled speech [7]}. In terms of Effects-aug settings, we randomly select at \(p=50\%\) chance: (i) speed variations by 0.95-1.05 ratio, (ii) pitch variations by 0.95-1.05 ratio, (iii) low-pass filter with cut-off frequency in 300-1000Hz, (iv) up to 4 noise utterances chosen in the Musan corpus [8]} at SNR ratio between 25-35. To stabilize the model, we average the best 10 checkpoints after training for 50K updates.
[3]
[ [ 331, 334 ] ]
https://openalex.org/W3169905056