context: string (lengths 115–2.43k)
A: string (lengths 102–2.17k)
B: string (lengths 103–1.71k)
C: string (lengths 105–1.57k)
D: string (lengths 102–2.43k)
label: string (4 classes)
3: output the new weight vector $\mathbf{w}_{k}$ in (41)
we next present the connection between $\textrm{C}^{2}$-WORD and
and have discussed the connection between $\textrm{C}^{2}$-WORD
($\textrm{C}^{2}$-WORD).
IV Connection Between $\textrm{C}^{2}$-WORD and
D
Here we also refer to a CNN as a neural network consisting of alternating convolutional layers, each followed by a Rectified Linear Unit (ReLU) and a max-pooling layer, with a fully connected layer at the end; the term ‘layer’ denotes the number of convolutional layers.
The one-layer module consists of one 1D convolutional layer (kernel size of 3 with 8 channels).
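A minimal PyTorch sketch of such a one-layer module is given below, assuming single-channel EEG segments of 178 samples as input; the pooling size and the binary output are illustrative assumptions, not taken from the original setup.

```python
import torch.nn as nn

# Minimal sketch of the one-layer module described above (assumed input: single-channel
# EEG segments of 178 samples; pooling size 2 and two classes are illustrative assumptions).
class OneLayerCNN(nn.Module):
    def __init__(self, n_samples=178, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),  # 1D convolution: 8 channels, kernel size 3
            nn.ReLU(),                                   # ReLU nonlinearity
            nn.MaxPool1d(2),                             # max pooling
        )
        self.classifier = nn.Linear(8 * (n_samples // 2), n_classes)  # fully connected layer at the end

    def forward(self, x):                                # x: (batch, 1, n_samples)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))
```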
Although we choose the EEG epileptic seizure recognition dataset from the University of California, Irvine (UCI) [13] for EEG classification, the implications of this study could be generalized to any kind of signal classification problem.
The UCI EEG epileptic seizure recognition dataset [13] consists of 500 signals, each with 4097 samples (23.5 seconds).
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
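The segmentation described above amounts to a simple reshape; a toy sketch with random placeholder data standing in for the raw recordings is:

```python
import numpy as np

# Toy illustration of the segmentation: 500 signals of 4097 samples are cut into
# 23 non-overlapping segments of 178 samples (the trailing 3 samples are dropped),
# giving 500 * 23 = 11500 segments.
raw = np.random.randn(500, 4097)
n_seg, seg_len = 23, 178
segments = raw[:, :n_seg * seg_len].reshape(-1, seg_len)
print(segments.shape)  # (11500, 178)
```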
C
We establish a multi-factor system model based on large-scale UAV networks in highly dynamic post-disaster scenarios. Considering the limitations of existing algorithms, we devise a novel algorithm capable of updating strategies simultaneously to fit highly dynamic environments. The main contributions of this paper are as follows:
We formulate the UAV ad-hoc network game in a large-scale post-disaster area as a multi-aggregator aggregative game [27], where we adapt its definition to our UAV network model as follows.
We propose a novel UAV ad-hoc network model based on the aggregative game that is compatible with large-scale, highly dynamic environments and in which several influences are coupled together. In the aggregative game, the interference from other UAVs can be regarded as an aggregate influence, which makes the model more practical and efficient.
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicality. Energy consumption, SNR, and coverage size are the key factors deciding the performance of a UAV network [6]. Power control determines the energy consumption and signal-to-noise ratio (SNR) of a UAV, while altitude decides the number of users that can be supported [7] and also determines the minimum required SNR: the higher a UAV flies, the more users it can support and the higher the SNR it requires. Therefore, power control and altitude are two essential factors. There has been extensive research on models focusing on various network influence factors. For example, the work in [8] established a system model with channel and time-slot selection. The authors of [9] constructed a coverage model that considered each agent's coverage size on a network graph. However, such models usually consider only one specific characteristic of networks and ignore the system's multiplicity, which can cause substantial losses in practice, since UAVs may consume too much power to improve SNR or to increase coverage size. Even though UAV systems in a 3D scenario with the multiple factors of coverage and charging strategies have been studied in [7], that work overlooks power control, which means that UAVs might waste a lot of energy. To sum up, for UAV ad-hoc networks in post-disaster scenarios, power control and altitude, which determine energy consumption, SNR, and coverage size, ought to be considered to make the model credible [10].
We design a model that jointly considers multiple factors such as coverage and power control in a multi-channel scenario. Incorporating more network influence factors makes the model more reliable.
D
Pascal VOC datasets: The PASCAL Visual Object Classes (VOC) Challenge (Everingham et al., 2010) was an annual challenge that ran from 2005 through 2012 and had annotations for several tasks such as classification, detection, and segmentation. The segmentation task was first introduced in the 2007 challenge and featured objects belonging to 20 classes. The last offering of the challenge, the PASCAL VOC 2012 challenge, contained segmentation annotations for 2,913 images across 20 object classes (Everingham et al., 2015).
Table 2: A summary of papers for semantic segmentation of natural images applied to the PASCAL VOC 2012 dataset.
Cityscapes: The Cityscapes dataset (Cordts et al., 2016) contains annotated images of urban street scenes. The data was collected during daytime from 50 cities and exhibits variation in the season of the year and in traffic conditions. Semantic, instance-wise, and dense pixel-wise annotations are provided, with ‘fine’ annotations for 5,000 images and ‘coarse’ annotations for 20,000 images.
PASCAL Context: The PASCAL Context dataset (Mottaghi et al., 2014) extended the PASCAL VOC 2010 Challenge dataset by providing pixel-wise annotations for the images, resulting in a much larger dataset with 19,740 annotated images and labels belonging to 540 categories.
D
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not involved. In particular, each UAV is equipped with a cylindrical conformal array (CCA), and a novel-codebook-based mmWave beam tracking scheme is proposed for such a highly dynamic UAV network. More specifically, the codebook consists of the codewords corresponding to various subarray patterns and beam patterns. Based on the joint UAV position-attitude prediction, an efficient codeword selection scheme is further developed with tracking error (TE) awareness, which achieves fast subarray activation/partition and array weighting vector selection. It is verified that our proposed scheme achieves a higher spectrum efficiency, lower outage probability and stronger robustness for inter-UAV mmWave communications. In summary, the key contributions of this paper are listed as follows.
When considering UAV communications with a UPA or ULA, a UAV is typically modeled as a point in space, without considering its size and shape. In fact, the size and shape can be utilized to support a more powerful and effective antenna array. Inspired by this basic consideration, the conformal array (CA) [16] is introduced to UAV communications. A CA usually has a cylindrical or spherical shape conforming to a predefined surface, e.g., a part of an airplane or UAV, and can achieve full spatial coverage with proper array designs. Compared with multiple surface-mounted UPAs, a CA conforming to the surface of a UAV can compact the UAV design, reduce the extra drag and fuel consumption, and also accommodate an array of larger size [16]. Furthermore, directional radiating elements (DREs) are commonly integrated with antenna arrays to enhance the beamforming ability [16, 17, 18]. In such a case, the coverage capability of a CA is far stronger than that of a UPA or ULA with proper array designs, owing to the exploitation of size and shape. Specifically, a CA can enlarge (roll up) the surface of the antenna array. This advantage not only yields a larger array gain to combat path loss but also sustains full-spatial transmission/reception to facilitate fast beam tracking for mobile UAV mmWave networks [19]. Note that in mission-driven UAV networks, agile and robust beam tracking is very challenging yet critical for inter-UAV mmWave communications [10], because UAV position and attitude may vary very quickly. By carefully exploiting the CA's full spatial transmission/reception property, the stringent constraints on beam tracking for highly dynamic moving UAVs can be relieved considerably. So far, however, the CA-enabled UAV mmWave network is almost untouched in the literature. Regarding mmWave CAs, there are only a few recent works, on the radiation patterns and beam scanning characteristics [20] and on the performance evaluation of CA-based beamforming for static mmWave cellular networks [21]. These works validate the potential advantage of CAs in static mmWave networks, but they are not applicable to mobile UAV mmWave networks.
The first study on the beam tracking framework for CA-enabled UAV mmWave networks. We propose an overall beam tracking framework to exemplify the idea of the DRE-covered CCA integrated with UAVs, and reveal that CA can offer full-spatial coverage and facilitate beam tracking, thus enabling high-throughput inter-UAV data transmission for mission-driven UAV networking. To the best of our knowledge, this is the first work on the beam tracking framework for CA-enabled UAV mmWave networks.
The specialized codebook design of the DRE-covered CCA for multi-UAV mobile mmWave communications. Under the guidance of the proposed framework, a novel hierarchical codebook is designed to encompass both the subarray patterns and beam patterns. The newly proposed CA codebook can fully exploit the potentials of the DRE-covered CCA to offer full spatial coverage. Moreover, the corresponding codeword selection scheme is also carefully designed to facilitate fast multi-UAV beam tracking/communication in the considered CA-enabled UAV mmWave network.
Therefore, dynamic subarray localization and activation are tightly coupled and critical for the efficient utilization of the DRE-covered CA. Note that conventional ULA/UPA-oriented codebook designs mainly focus on beam direction/width control via random-like subarray activation/deactivation without specific subarray localization. In contrast, the codebook design for a DRE-covered CA should emphasize the location of the activated subarray to achieve the promise of full-spatial coverage of the CA in UAV networks. Nevertheless, such work is still missing in the literature. These points motivate us to study a new beam tracking framework with a well-tailored codebook for CA-enabled UAV mmWave networks.
B
Though the above works have studied distributed stochastic optimization in depth, practical cases may be more complex.
In distributed statistical machine learning algorithms, the (sub)gradients of local loss functions cannot be obtained accurately, the graphs may change randomly, and the communication links may be noisy. There are many excellent results on distributed optimization with multiple uncertain factors [11]-[15].
In [12]-[14], the (sub)gradient measurement noises are martingale difference sequences and their second-order conditional moments depend on the states of the local optimizers. The random graph sequences in [12]-[15] are i.i.d. with connected and undirected mean graphs. In addition, additive communication noises are considered in [14]-[15].
previous time and the consensus error. However, this cannot be obtained for the case with linearly growing subgradients. Also, different from [15], the subgradients are not required to be bounded, and inequality (28) in [15] does not hold.
Besides, the network graphs may change randomly with spatial and temporal dependence (i.e., both the weights of different edges in the network graph at the same time instant and the network graphs at different time instants may be mutually dependent), rather than being i.i.d. graph sequences as in [12]-[15],
D
More precisely, the force input applied to the system is $u=K_{v}(K_{p}e_{p}+e_{v})$, where $(K_{p},K_{v})$ are the nominal gains of the controller, and $e_{p}$ and $e_{v}$ are the error signals in tracking the desired position and velocity trajectories, respectively.
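As a small illustration, this control law can be written as a one-line function; the gain values below are placeholders, not the nominal gains identified for the real system.

```python
# Minimal sketch of the cascaded control law u = Kv * (Kp * e_p + e_v); the gains
# are illustrative placeholders.
def cascade_control(p_ref, v_ref, p, v, Kp=10.0, Kv=5.0):
    e_p = p_ref - p                 # position tracking error
    e_v = v_ref - v                 # velocity tracking error
    return Kv * (Kp * e_p + e_v)    # force input applied to the system
```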
In the low level control of the plant, a cascade controller is employed for tracking the position and velocity reference trajectories
We model the system as two uncoupled axes with identical parameters. According to (1), the plant can be described by the transfer function $G(s)$, from the force input to the position of the system, $p$, defined as
To bring the model close to the real system, we unify the terms required for the contour-control formulation with the velocity and acceleration of each axis from the identified, discretized state-space model in (4).
One can easily obtain the transfer function from the reference trajectories to the actual position and velocity as
D
It is worth noting that for both CPP and B-CPP, the choices $b=2$ for quantization or $k=5$ for Rand-$k$ are more communication-efficient than $b=4,6$ or $k=10,20$.
The compression and the communication are applied on the difference $(\bm{x}_{i}-\bm{u}_{i})$ and its compressed version, respectively.
This indicates that as the compression accuracy becomes smaller, its impact exhibits “marginal effects”.
To reduce the error from compression, some works [48, 49, 50] increase the compression accuracy as the iterations grow to guarantee convergence. However, they still need high communication costs to obtain highly accurate solutions. Techniques to remedy this increased communication cost include gradient difference compression [34, 51, 52] and error compensation [37, 53, 54], which enjoy better performance than direct compression.
When $b=6$ or $k=20$, the trajectories of CPP are very close to those of the exact Push-Pull/$\mathcal{AB}$ method, which indicates that when the compression errors are small, they are no longer the bottleneck of convergence.
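For concreteness, the two compressors referred to above ($b$-bit quantization and Rand-$k$ sparsification) can be sketched as follows; the exact scaling and normalization conventions used in CPP/B-CPP may differ, so this only illustrates the operators applied to a difference vector such as $x_i - u_i$.

```python
import numpy as np

# Illustrative compressors; the scaling conventions of CPP/B-CPP may differ.
def rand_k(v, k, rng=None):
    rng = rng or np.random.default_rng()
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)   # keep k random coordinates
    out[idx] = v[idx] * (v.size / k)                  # unbiased rescaling (a common convention)
    return out

def quantize(v, b):
    levels = 2 ** b - 1                               # b-bit uniform quantizer
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale
```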
B
For the sequence-level tasks, which require only a prediction for an entire sequence, we follow \textcite{emopia} and choose the Bi-LSTM-Attn model from \textcite{lin2017structured} as our baseline, which was originally proposed for sentiment classification in NLP.
Inspired by the Bi-LSTM-Attn model \parencite{lin2017structured}, we employ an attention-based weighted-average mechanism to convert the sequence of 512 hidden vectors for an input sequence into one single vector before feeding it to the classifier layer, which comprises two dense layers.
Table 2: The testing classification accuracy (in %) of different combinations of MIDI token representations and models for four downstream tasks: three-class melody classification, velocity prediction, style classification and emotion classification. “CNN” represents the ResNet50 model used by \textcite{lee20ismirLBD}, which only supports sequence-level tasks. “RNN” denotes the baseline models introduced in Section 5, representing the Bi-LSTM model for the first two (note-level) tasks and the Bi-LSTM-Attn model \parencite{lin2017structured} for the last two (sequence-level) tasks.
The model combines an LSTM with a self-attention module for temporal aggregation. Specifically, it uses a Bi-LSTM layer to convert the input sequence of tokens into a sequence of embeddings, which can be considered feature representations of the tokens, and then fuses these embeddings into one sequence-level embedding according to the weights assigned by the attention module to each token-level embedding. The sequence-level embedding then goes through two dense layers for classification.
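A hedged PyTorch sketch of this architecture is shown below; the hidden size and class count are illustrative assumptions, and only the overall structure (Bi-LSTM, attention-weighted average, two dense layers) follows the description above.

```python
import torch
import torch.nn as nn

# Sketch of a Bi-LSTM-Attn classifier; dimensions and class count are illustrative.
class BiLSTMAttn(nn.Module):
    def __init__(self, in_dim=256, hidden=256, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # one attention score per token embedding
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, x):                             # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)                           # token-level embeddings
        w = torch.softmax(self.attn(h), dim=1)        # attention weights over the sequence
        seq_emb = (w * h).sum(dim=1)                  # weighted average -> sequence-level embedding
        return self.classifier(seq_emb)
```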
Instead of feeding the token embedding of each of them individually to the Transformer, we can combine the token embeddings of either the four tokens for MIDI scores or the six tokens for MIDI performances in a group by concatenation and let the Transformer model
C
A. Balatsoukas-Stimming, M. B. Parizi, and A. Burg, “LLR-based successive cancellation list decoding of polar codes,” IEEE Trans. Signal Process., vol. 63, no. 19, pp. 5165–5179, Jun. 2015.
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. 23rd Int. Conf. Mach. Learning (ICML), Pittsburgh, USA, Jun. 2006, pp. 369–376.
D. Amodei, S. Ananthanarayanan, R. Anubhai, et al., “Deep Speech 2: End-to-end speech recognition in English and Mandarin,” in Proc. 33rd Int. Conf. Mach. Learning (ICML), New York, NY, USA, Jun. 2016, pp. 173–182.
B
The performance of all the models on the PCAM and IDC datasets is reported in Tables 3 and 4. All the indicators are measured on the test sets. Most of the models show very good performance, with AUC scores around 0.90 or above. However, when we look at the details, there are clear differences. For instance, the AUC of the simple two-layer B2 model is 0.85, increasing to 0.91 with the B6 model, which is slightly more complex. If we take the best performing models in terms of AUC score, for the PCAM dataset we have VGG19 (0.95), followed by MobileNet (MN, 0.93) and EfficientNet V2 B2 (ENB2, 0.93). For the IDC dataset, the best performing models are again VGG19 (0.95), together with MobileNet (MN, 0.95), followed by the B6 (0.94), DenseNet 121 (DN121, 0.94) and EfficientNet V2 B2 (ENB2, 0.94). Looking at the AUC values versus the depth and number of parameters of the models for the PCAM dataset (Figure 3), we can see that the VGG19 model has a high AUC value but a low depth, while the ENB0 model has a very high depth but a very low AUC. Regarding the number of parameters, VGG19 has a very high number of parameters and the best AUC, but models with far fewer parameters, such as MN or ENB2, also have high AUC values, in fact very close to the value for VGG19. Taken together, these results show that the more complex models perform better than very simple models (such as the B2 model), but the relationship is not entirely straightforward.
For each network architecture tested in this study, the same procedure is used: the model is trained on the training set for 15 epochs, with an evaluation on the validation set after each epoch. Depending on the accuracy value of the model, the weights are saved after each epoch so as to keep the best model, which is then evaluated on the test set. For some models, I used the KerasTuner framework with the Hyperband algorithm to optimize some hyperparameters [17].
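A minimal Keras sketch of this procedure is given below; `build_model`, `train_ds`, `val_ds`, and `test_ds` are hypothetical placeholders for the model factory and data pipelines, and only the keep-the-best-validation-accuracy checkpointing mirrors the procedure described.

```python
import tensorflow as tf

# Placeholder model factory and datasets; only the training/checkpointing logic is shown.
model = build_model()
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_accuracy", save_best_only=True, save_weights_only=True
)
model.fit(train_ds, validation_data=val_ds, epochs=15, callbacks=[checkpoint])
model.load_weights("best_model.h5")        # restore the best epoch before the final evaluation
test_metrics = model.evaluate(test_ds)     # evaluate the best model on the test set
```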
Precise staging by expert pathologists of breast cancer axillary nodes, a tissue commonly used for the detection of early signs of tumor spreading, is an essential task that determines the patient's treatment and their chances of recovery. However, it is a difficult task that has been shown to be prone to misclassification. Algorithms, and in particular deep learning based convolutional neural networks, can help the experts in this task by analyzing fully digitized slides of microscopic stained tissue sections. In this study, I evaluated twelve different CNN architectures and different hardware acceleration devices for breast cancer classification on two different public datasets consisting of hundreds of thousands of images. Hardware acceleration devices can improve the training time by a factor of five to twelve, depending on the model used. On the other hand, increasing the convolutional depth increases the training time by a factor of four to six, depending on the acceleration device used. More complex models tend to perform better than very simple ones, especially when fully retrained on the digital pathology dataset, but the relationship between model complexity and performance is not straightforward. Transfer learning from ImageNet always performs worse than fully retraining the models. Fine-tuning the hyperparameters of the model improves the results, with the best model tested in this study showing very high performance, comparable to current state-of-the-art models.
Table 4: Performance of the models on the invasive ductal carcinoma (IDC) breast cancer test set. AUC: area under the ROC curve.
C
The uniform random expander is constructed by assigning each pixel a phase that is uniformly randomly chosen within $[0,2\pi]$. To ensure at least $2\pi$ phase is available for all wavelengths, the $[0,2\pi]$ phase range is defined at 660 nm. Conventional holography is subject to a low display étendue that is limited by the SLM native resolution, thus resulting in a low FOV. Photon sieves, binary random expanders, and uniform random expanders have low reconstruction fidelity, resulting in severe noise and low contrast in the generated holograms. In the case of trichromatic holograms, neither uniform nor binary random expanders facilitate consistent étendue expansion at all wavelengths, which results in chromatic artifacts. Although the uniform random expander provides at least $2\pi$ phase coverage for all wavelengths, the variation in refractive index across wavelengths results in differing phase profiles. Thus, although the uniform random expander has the same degree of quantization as neural étendue expanders, it does not enable étendue-expanded trichromatic holograms. Photon sieves scatter light equally across wavelengths, but their randomized amplitude-only modulation does not allow for high-fidelity reconstruction of natural images; see Fig. 3d for quantitative metrics and Supplementary Note 2 for qualitative examples and additional metrics. Neural étendue expansion is the only technique that facilitates high-fidelity reconstructions for both trichromatic and monochromatic setups. We quantitatively verify this by evaluating the reconstruction fidelity on an unseen test dataset, where fidelity is measured in peak signal-to-noise ratio (PSNR). Fig. 3b shows that neural étendue expanders achieve over 14 dB PSNR improvement over other expanders when generating 64× étendue-expanded trichromatic holograms. For monochromatic holograms, neural étendue expansion achieves over 10 dB PSNR improvement. Thus, neural étendue expansion allows for an order of magnitude improvement over existing étendue expansion methods. See Fig. 3d for quantitative evaluations at different étendue expansion factors, and Supplementary Note 2 for further simulation details and more comparison examples.
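As a toy illustration, a uniform random expander and its wavelength dependence can be sketched as follows; the resolution is arbitrary and the simple $1/\lambda$ phase scaling ignores dispersion, so this is only a schematic of why the same surface yields different phase profiles at other colors.

```python
import numpy as np

# Toy uniform random expander with the [0, 2*pi] phase range defined at 660 nm.
# The 1/lambda scaling below neglects refractive-index dispersion (a simplification).
rng = np.random.default_rng(0)
phase_660 = rng.uniform(0.0, 2 * np.pi, size=(1080, 1920))  # per-pixel phase at 660 nm
phase_520 = phase_660 * (660.0 / 520.0)                     # same surface seen at 520 nm
```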
In addition to the field-of-view, we also investigate the eyebox that is produced with neural étendue expansion. By initializing the learning process with a uniform random expander, we bias the optimized solution towards expanders that distribute energy throughout the eyebox, in contrast to quadratic phase profiles [28] that concentrate the energy at fixed points. Thus, the viewer's eye pupil can freely move within the eyebox and observe the wide field-of-view hologram at any location. We incorporate pupil-aware optimization [37] to preserve the perceived hologram quality at different eye pupil locations. See Supplementary Note 5 for findings.
The experimental findings on the display prototype verify that conventional non-étendue-expanded holography can produce high-fidelity content but at the cost of a small FOV. Increasing the étendue via a binary random expander will increase the FOV but at the cost of low image fidelity, even at the design wavelength of 660 nm, and chromatic artifacts. The étendue expanded holograms produced with the neural étendue expanders are the only holograms that showcase both ultra-wide FOV and high fidelity. The captured holograms demonstrate high contrast and are free from chromatic aberrations. Fig. 2d reports the étendue expanded hologram produced with both expanders at each color wavelength. Since the binary random expander is, by design, only tailored to a single wavelength, in this case 660 nm, the étendue expanded holograms that are generated with it exhibit severe chromatic artifacts. In contrast, holograms generated with neural étendue expansion show consistent high-fidelity performance at all illumination wavelengths. Notably, even at the wavelength of 660 nm the hologram fidelity is higher for the holograms generated with neural étendue expansion; see Fig. 2c. For comparisons against a uniform random expander, where the phase profile is uniformly randomly selected from within $[0,2\pi]$, see Fig. 3 and Supplementary Note 2.
While our experimental prototype was built for a HOLOEYE PLUTO, which possesses a 1K-pixel resolution, corresponding to a 1 mm eyebox with 75.6° horizontal and vertical FOV, the improvement in hologram fidelity persists across resolutions. Irrespective of the resolution of the SLM, performing 4×, 16×, or 64× étendue expansion with neural étendue expanders results in a similar margin of improvement over uniform and binary random expanders. This is because the improvement in fidelity depends only on the étendue expansion factor. To validate this, we simulate an 8K-pixel SLM with 64× étendue expansion and verify that the improvement in fidelity is maintained. See Supplementary Note 6 for results and further details. Thus, neural étendue expansion enables high-fidelity 64× étendue expansion for 8K-pixel SLMs [30], providing étendue to cover 85% of the human stereo FOV [31] with an 18.5 mm eyebox size; see Supplementary Note 3 for details.
A
A GAN inversion framework that utilizes the powerful generative ability of StyleGAN-XL, which shows favorable quantitative and qualitative results in SISR.
Cycle Consistency: Cycle consistency assumes that there exist some underlying relationships between the source and target domains, and tries to provide supervision at the domain level. To be precise, we want to capture some special characteristics of one image collection and figure out how to translate these characteristics into the other image collection. To achieve this, Zhu et al. (Zhu et al., 2018) use the test image and its downscaled versions, together with data augmentation approaches, to build the “training dataset”, and then apply the loss function to optimize the model. In addition, weakly-supervised learning also belongs to the unsupervised learning strategy. Among these approaches, some researchers first learn the HR-to-LR degradation and use it to construct datasets for training the model, while other researchers design cycle-in-cycle models to learn the LR-to-HR and HR-to-LR mappings simultaneously. For instance, CinCGAN (Yuan
Although a series of models have been proposed for domain-specific applications, most of them directly transfer generic SISR methods to these specific fields. This is the simplest and most feasible approach, but it also limits model performance, since it ignores the data structure characteristics of the domain-specific images. Therefore, fully mining and using the potential priors and data characteristics of domain-specific images is beneficial for building efficient and accurate domain-specific SISR models. In the future, it will be a trend to further optimize existing SISR models based on prior knowledge and the characteristics of domain-specific images.
In SISR, the idea of cycle consistency has also been widely discussed. Given the LR images domain X𝑋Xitalic_X and the HR images domain Y𝑌Yitalic_Y, we not only learn the mapping from LR to HR but also the backward process. Researchers have shown that learning how to perform image degradation first without paired data can help generate more realistic images (Bulat
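A hedged sketch of such a cycle-consistency objective is shown below; the L1 penalty and the generator names `G` (LR to HR) and `H` (HR to LR) are illustrative choices, not the exact losses of the cited works.

```python
import torch.nn.functional as F

# Both round trips should return the input image; the specific penalty is illustrative.
def cycle_consistency_loss(G, H, lr, hr):
    loss_lr = F.l1_loss(H(G(lr)), lr)   # LR -> HR -> LR should recover the LR image
    loss_hr = F.l1_loss(G(H(hr)), hr)   # HR -> LR -> HR should recover the HR image
    return loss_lr + loss_hr
```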
A
A second visualisation focusing on this specific region is displayed in Fig. 1(d). Ignoring for now whether the SHAP values are positive or negative, it exhibits a high degree of correlation with the fundamental frequency and harmonics in the spectrogram, indicating the focus of the classifier on these same components. Last, while the presence of dark blue traces in Fig. 1(d) indicates components of the spectrogram which favour the negative class, the overall dominance of red colours (though not all dark red) indicates greater support for the positive class (the classifier output correctly indicates bona fide speech).
This paper demonstrates how DeepSHAP can be applied to explain what influences the outputs produced by a spoofing detection model. The examples presented show how SHAP analysis can be used to highlight the attention applied by a given classifier at low-level spectro-temporal intervals. Nonetheless, the tool offers the basis for what is needed to explore higher-level explanations. It will be of interest to explore, e.g., whether we can make the link between SHAP results, classifier outputs and specific speech units or spoofing attack algorithms (e.g., synthetic speech, converted voice and replay) and their algorithmic components (e.g., waveform models, recording devices and microphones). Other future directions include the use of SHAP to explore classifier differences in an attempt to explain why some classifiers perform better than others, and under which conditions. In this context, it will be of interest to develop SHAP-based approaches that can be used to compare systems that operate upon hand-crafted spectro-temporal decompositions to those that operate directly upon raw waveforms. Ultimately, of course, the goal is to exploit what we can learn from SHAP analysis to design better performing, more reliable models.
In the remainder of this paper we describe our use of DeepSHAP to help explain the behaviour of spoofing detection systems. We show a number of illustrative examples for which the input utterances, all drawn from the ASVspoof 2019 LA database [13], are chosen specially to demonstrate the potential insights which can be gained. Given the difficulty in visualising true SHAP values, in the following we present average temporal or spectral results. Given our focus on spoofing detection, we present results for both bona fide and spoofed utterances and the temporal or spectral regions which favour either bona fide or spoofed classes. Results hence reflect where, either in time or frequency, the model has learned to focus attention and hence help to explain its behaviour in terms of how the model responds to a particular utterance.
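A hedged sketch of this workflow with the `shap` package is given below, assuming a PyTorch spoofing detector `model` operating on spectrogram inputs of shape (batch, 1, freq, time); `background` and `utterance` are placeholder tensors, and only the frequency/time averaging mirrors the summaries described above.

```python
import numpy as np
import shap

# Placeholder model and tensors; the return structure of shap_values can vary by shap version.
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(utterance)
vals = np.asarray(shap_values[0] if isinstance(shap_values, list) else shap_values)
attn_map = vals[0, 0]                       # (freq, time) attribution map for one utterance
temporal_profile = attn_map.mean(axis=0)    # frequency-averaged SHAP values against time
spectral_profile = attn_map.mean(axis=1)    # time-averaged SHAP values against frequency
```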
Fig. 4 shows an example for which SHAP analysis reveals differences in classifier behaviour. The two plots show frequency-averaged SHAP values against time for the 2D-Res-TSSDNet classifier (middle) and the PC-DARTS classifier (bottom) and, in both cases, the support for the spoof class (blue) and bona fide class (red). The classifiers are shown to apply attention to different intervals. The PC-DARTS model derives its output mostly from non-speech segments whereas the 2D-Res-TSSDNet model applies greater attention to speech intervals. We observed other, more subtle differences in classifier behaviour. SHAP values for the PC-DARTS model are notably noisier than those of the 2D-Res-TSSDNet model. We believe that these differences stem from the relative simplicity of the PC-DARTS model which contains only 0.1 M network parameters, 0.87 M fewer than the 2D-Res-TSSDNet model, possibly implying that the former has insufficient learning capacity, leading to noisier outputs. These characteristics might also be caused by the high number of dilated convolutions in the learned architecture. We observed more stable SHAP values when dilated convolution operations are excluded from the architecture search space. SHAP analysis can hence also be used to explore lower-level classifier behaviours. Instances where different classifiers exhibit different behaviour might also help to improve performance in cases where single system solutions are preferred to system fusion.
Plots of SHAP values such as those shown in Fig. 1(c) are not easily visualised without the use of dilation operations or some other such smoothing operations which distort the results. While they offer interesting insights, we need more easily visualised means with which to explore results.
D
We summarize our algorithm to learn safe ROCBFs $h(x)$ in Algorithm 1. We first construct the set of safe datapoints $Z_{\text{safe}}$ from the expert demonstrations $Z_{\text{dyn}}$ (line 3). We construct the set of datapoints labeled as unsafe, $Z_{\mathcal{N}}$, from $Z_{\text{safe}}$ (line 4), i.e., $Z_{\mathcal{N}}\subseteq Z_{\text{safe}}$, by identifying boundary points in $Z_{\text{safe}}$ and labeling them as unsafe (details can be found in Section 5.2). We then re-define $Z_{\text{safe}}$ by removing the unsafe-labeled datapoints $Z_{\mathcal{N}}$ from $Z_{\text{safe}}$ (line 5). Following our discussion in Section 4.3, we obtain $\underline{Z}_{\text{safe}}$ according to equation (9) (line 6). We then solve the constrained optimization problem (7) via the unconstrained relaxation defined in (13) (line 7), as discussed in Section 5.3. Finally, we check whether the constraints (7b)-(7d) and the constraints (8), (10), (12) are satisfied by the learned function $h(x)$ (line 8). If the constraints are not satisfied, the hyperparameters are adjusted and the process is repeated (line 9). We discuss Lipschitz constant estimation of $h$ and $q$ and the hyperparameter selection in Section 5.4.
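A high-level sketch of this loop is given below; every helper function is a hypothetical placeholder for the corresponding step of Algorithm 1, and equations (7), (9) and (13) are not implemented here.

```python
# Hypothetical helpers only; the datapoint sets are assumed to be Python sets.
def learn_rocbf(Z_dyn, hyperparams):
    Z_safe = extract_safe_states(Z_dyn)                     # line 3: safe datapoints
    Z_N = detect_boundary_points(Z_safe, hyperparams)       # line 4: boundary points labeled unsafe
    Z_safe = Z_safe - Z_N                                   # line 5: remove unsafe-labeled points
    Z_safe_lower = shrink_safe_set(Z_safe, hyperparams)     # line 6: reduced set from equation (9)
    while True:
        h = train_h(Z_safe_lower, Z_N, Z_dyn, hyperparams)  # line 7: unconstrained relaxation (13)
        if constraints_satisfied(h, Z_safe_lower, Z_N, Z_dyn, hyperparams):
            return h                                        # line 8: all constraints verified
        hyperparams = adjust(hyperparams)                   # line 9: retune and repeat
```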
Finally, we discuss what behavior expert demonstrations in $Z_{\mathrm{dyn}}$ should exhibit.
Let the system in (1) and the set of safe expert demonstrations $Z_{\mathrm{dyn}}$ be given. Under Assumptions 1 and 2, learn a function $h:\mathbb{R}^{n}\to\mathbb{R}$ from $Z_{\mathrm{dyn}}$ so that the set
1: Input: Set of expert demonstrations $Z_{\text{dyn}}$,
For collecting safe expert demonstrations $Z_{\text{dyn}}$, we use an “expert” PID controller $u(x)$ that uses full state knowledge of $x$. Throughout this section, we use the parameters $\alpha(r):=r$, $\gamma_{\text{safe}}:=\gamma_{\text{unsafe}}:=0.05$, and $\gamma_{\text{dyn}}:=0.01$ to train safe ROCBFs $h(x)$. For the boundary point detection algorithm in Algorithm 2, we select $k:=200$ and $\eta$ such that 40 percent of the points in $Z_{\text{safe}}$ are labeled as boundary points.
C
As $L_{t}$ increases to reach $N_{t}=8$, the empirical histogram converges to a chi-square distribution with 4 degrees of freedom. It is noteworthy that the channel gain, i.e., the squared envelope of the channel coefficient, has a chi-square distribution with 2 degrees of freedom in conventional MIMO systems, whereas it has 4 degrees of freedom in PR-MIMO systems without hybrid antenna selection. The reason is that PR-MIMO systems exploit the polarization domain, where 2 degrees of freedom can be supported by orthogonal polarizations, e.g., vertical and horizontal polarization.
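A toy numerical check of this degrees-of-freedom claim, using independent complex Gaussian coefficients for two orthogonal polarization branches as a simplified stand-in for the effective channel, is:

```python
import numpy as np

# The gain of one complex Gaussian coefficient behaves like a (scaled) chi-square
# variable with 2 degrees of freedom; summing two orthogonal polarization branches
# (a simplified stand-in for the PR-MIMO effective channel) gives 4 degrees of freedom.
rng = np.random.default_rng(0)
h_v = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)
h_h = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)
gain_siso = np.abs(h_v) ** 2                      # ~ chi-square with 2 degrees of freedom
gain_pol = np.abs(h_v) ** 2 + np.abs(h_h) ** 2    # ~ chi-square with 4 degrees of freedom
```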
where $R_{H^{\mathrm{eff}}}$ is the rank of the matrix $H^{\mathrm{eff}}$, and $P_{l}$ is
element-wise squared envelopes in $H^{\mathrm{eff}}$. The optimum polarization
the singular value of $H^{\mathrm{eff}}$; therefore
Without loss of generality, an element in $H^{\mathrm{eff}}$ has the following description of its squared envelope.
D
$$f(x)=\begin{cases}1 & \text{if }x\in[-1,1]\setminus\{0\},\\ 0 & \text{if }x=0,\\ +\infty & \text{otherwise,}\end{cases}$$
Unfortunately, this is not the case in the sparse and low-rank examples. We observe that for fixed $k,n$ we have in both cases
Unfortunately, this construction is harder to generalize on an unbounded domain or in higher dimension.
It is an open question to generalize our framework for low-dimensional recovery in more general settings such as Banach spaces (e.g., for off-the-grid super-resolution).
a random kernel of fixed dimension. This measure for kernels of dimension $\ell$ and a descent cone $K$ is the following:
B
It is also a public dataset, including 909 X-ray images of hands. The setting of this dataset follows [25]. The first 609 images are used for training and the rest for testing. The image size varies within a small range, so all images are resized to 384×384.
Figure 4: Visual comparison of templates from our policy and random selection. The column “Template/Test 1/Test 2” refers to the templates and two test images. The rows “Ours” and “Random” refer to the templates selected by our method and by random selection, respectively.
It is a widely-used public dataset for cephalometric landmark detection, containing 400 radiographs, and is provided in the IEEE ISBI 2015 Challenge [14, 37]. There are 19 landmarks of anatomical significance labeled by 2 expert doctors in each radiograph. The average of the annotations by the two doctors is used as the ground truth. The image size is 1935×2400 and the pixel spacing is 0.1 mm. The dataset is split into 150 images for training and 250 for testing, following the official division.
Few-shot medical landmark detection: First, experiments are conducted with different numbers of templates on the Cephalometric dataset. In Table 1, $M$ denotes the number of templates used in the experiment. The columns “ours” refer to the results achieved by the proposed method, while “random” refers to the average results of multiple rounds of training (we use 1,000 trials), with standard deviations. The columns “best” and “worst” refer to the best and worst results over the multiple trials, respectively.
This dataset is from [38] containing 10,000 faces with 7500 and 2500 in training and test sets, respectively. All images are collected from the WIDER FACE dataset [40] and manually labeled with 98 landmarks. The dataset contains different test subsets where the image appearances vary due to variations in pose, expression, and/or illumination or the presence of blur, occlusion, and/or make-up.
D
Fast automatic segmentation of hippocampal subfields and medial temporal lobe subregions in 3 Tesla and 7 Tesla T2-weighted MRI.
Zeineldin, R.A., Karar, M.E., Elshaer, Z., Schmidhammer, M., Coburger, J., Wirtz, C.R., Burgert, O., Mathis-Ullrich, F., 2021.
A
$(X,\mathcal{B},\mu)$ be a measure space where $X$ is a set,
on the probability space $(R^{1},\mathcal{B},\mu)$. Then the
$\mathcal{B}=\mathcal{B}(X)$ is the Borel $\sigma$-algebra on the set
$X$ and $\mu$ is a measure on the measurable space $(X,\mathcal{B})$.
C
Now we present the theorem that prescribes the design requirements on the controller gains in order to guarantee both pISSf and ISSt for the PDE system (4)-(7).
Consider the system (4) with boundary conditions (8). Let us also consider the unsafe set for this system to be (12) and the metric measuring the distance from this unsafe set to be given by (13). If the controller gains are chosen such that the following inequalities are satisfied,
We also note here that if the gains satisfy the pISSf conditions given in (24), then (49) will be automatically satisfied and $BT_{2}<0$.
then the system (4) satisfies the two conditions of Proposition 1, and is considered to be practical Input-to-State Safe (pISSf) with respect to the unsafe set 𝒰𝒰\mathscr{U}script_U.
Consider the system (4) with boundary conditions (8). If there exist controller gains that satisfy the pISSf inequality conditions given in (24), then the system (4) is both pISSf and ISSt.
D
In this setting, the PUs' parameters are available for determining the spectrum allocation. A PU's parameters include its location, transmit power, and its PURs' locations.
based on locations improved the performance of our models significantly compared to placing the SSs based on received powers.
For each SS, its parameters may include its location and aggregate received power from the PUs, and in general, may also include the mean and variance of the Gaussian distribution of the received power.
Allocation based on SS parameters is implicitly based on real-time channel conditions, which is important for accurate and optimized spectrum allocation, as the conditions affecting signal attenuation (e.g., air, rain, vehicular traffic) may change over time.
In such a crowdsourced sensing architecture, the allocation decision is based on SS parameters, which include each sensor's location and received (aggregated)
D
The following result states that, under Assumption 1, if the stepsize at each iteration is chosen by the doubling-trick scheme, there is an upper bound on the static regret defined in (4). Moreover, the upper bound is of order $O(\sqrt{T})$ for convex costs.
Suppose Assumptions 1(i) and 2 hold. Furthermore, if the stepsize is chosen as $\alpha_{t}=\frac{P}{\mu t}$, then the static regret (4) achieved by Algorithm 1 satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_{t}=\sqrt{\frac{C_{T}}{T}}$, the dynamic regret (5) achieved by Algorithm 1 satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_{t}=\sqrt{\frac{C_{T}}{T}}$, the dynamic regret achieved by the online gradient descent algorithm (32) satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen according to Definition 1, then the static regret (4) achieved by Algorithm 1 satisfies
D
Deep learning models should not be considered as a replacement for clinical diagnosis by medical professionals. These models should be used as complementary tools to aid medical professionals in making more accurate diagnoses. It is also crucial to validate the accuracy and reliability of these models on diverse and representative datasets. Interpretability and explainability of deep learning models in medical imaging tasks remain challenging research areas.
Finally, we conclude our paper with limitations in Section 5 and conclusions in Section 6, summarizing our key findings and contributions. We also discuss the implications of our work and highlight future directions for research in the field of thoracic disease prediction using deep learning techniques.
By analyzing these metrics, we gain insights into the model’s performance in terms of sensitivity, specificity, predictive values, discrimination power (ROC curve), and overall classification accuracy (F1 score). These evaluations help us understand the strengths and limitations of the model in accurately predicting different pathologies.
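For illustration, these metrics can be computed per pathology with scikit-learn as sketched below; the labels and scores are placeholder arrays, not results from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

# Placeholder labels and probability scores for one pathology.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)            # true positive rate (recall)
specificity = tn / (tn + fp)            # true negative rate
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
auc = roc_auc_score(y_true, y_score)    # discrimination power (ROC curve)
f1 = f1_score(y_true, y_pred)           # F1 score
```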
The deep learning model presented in this study has several limitations that should be acknowledged. These limitations include:
D
The most straightforward use of this database is the classification of the emotions felt by a woman, using machine and deep learning algorithms with unimodal and multimodal approaches.
Familiarity with the emotion felt, the situation displayed in the clip, and the specific clip: annotated in three different questions. The first two use a 9-point Likert scale, whereas the last one uses a binary yes-no option.
As introduced before, only 8 of the 12 emotions initially selected were included in WEMAC (see the Stimuli Section), although all 12 emotions were considered for the discrete emotion labeling (see the Measures Section). This means that the number of targeted emotions is smaller than the number of reported ones in this matrix. Analyzing this figure, it can be seen that the non-included emotions (attraction, contempt, hope and tedium) are very scarcely selected, with the exception of the 17% of times a stimulus expected to represent anger is taken as contempt. It is also observed that sadness, calm, joy and fear are the emotions best identified, the agreement on fear being especially relevant for the use case. Tenderness and disgust are also quite well portrayed by the stimuli, while anger is often taken as disgust or contempt, and amusement as joy or disgust.
Moreover, the physiological signals were recorded during the entire experiment, so that synchronization can be made with the physiological and audio signals, leading to a multi-modal or fusion scheme. On this basis, a series of experiments carried out in mono- and multi-modal emotion recognition can be found in "Supplementary Material".
First, the physiological signals can be used together or separately to analyze their relationship with the annotated discrete or dimensional emotions.
D
In retinal imaging, GANs have been used to create synthetic data. Li et al. [27] highlighted the importance of enhancing the quality of synthetic retinal images in their review, emphasizing that using synthetic images in training can improve performance and help mitigate overfitting.
In the field of Optical Coherence Tomography (OCT) imaging, super-resolution GANs (like ESRGAN [24]) have demonstrated their value as a tool to enhance image quality and improve AMD detection [25]. Das et al. [26] proposed a quick and reliable super-resolution approach for OCT imagery using GANs, achieving a classification accuracy of 96.54%.
Bellemo et al. [28] described the possible advantages and limitations towards synthetic retina image generation using GANs. The authors highlighted the potential clinical applications of GANs concerning early- and late-stage AMD classification.
We have employed a retinal image quality assessment model in preprocessing step. We have compared a number of synthetic medical image generation techniques and found StyleGAN2-ADA to be the most suitable using which we have developed a method to generate synthetic images. We have investigated the use of the synthetic images obtained using examples from publicly available databases to train the model for distinguishing between AMD and healthy eyes. We found that experienced clinical experts were unable to differentiate between synthetic and real images. We have tested the model for generalisability by training the model using images from three databases and validated it using a fourth database. We also have demonstrated that the classification accuracy of deep learning networks marginally outperformed clinical experts in separating the AMD and Non-AMD retinal images.
C
We show how to guarantee the (uniform, in a set) ultimate boundedness property 6 of a discrete-time polytopic system when the ReLU approximation replaces a traditional stabilizing controller. Specifically, by focusing on the approximation error between the NN-based and traditional controller-based state-to-input mappings, we establish a sufficient condition involving the worst-case approximation error;
We now characterize a stabilizing control law $\Phi(\cdot)$ from a geometrical perspective. While both of the vertex-based policies $\Phi(\cdot)$ defined in (3) or (4) are known to produce a controller with PWA structure, the structure underlying a selection-based controller $\Phi(\cdot)$ as defined in §2.2 is less clear in view of the nonlinear constraints in (5).
While the variable structure controller amounts to a continuous piecewise-affine (PWA) mapping by construction, we characterize the geometric properties of the selection-based controller. Specifically, for the resulting nonlinear multi-parametric program, we show that:
In fact, the continuous PWA structure of (3) comes by construction, since it is defined directly over a simplicial partition, while for (4) that structure can be proved by recognizing that the controller’s definition amounts to that of a strictly convex multi-parametric quadratic program (mp-QP), so that available results from [11, Ch. 6.2, 6.3] can be applied.
We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes a measure of the approximation error. While a variable structure controller enjoys a nice geometric structure by construction, this is not immediate for a (minimal) selection-based policy. We have hence shown that the latter also induces a state-to-input PWA mapping, and provided a systematic way to encode both its output and the maximal gain through binary and continuous variables subject to MI constraints. This optimization-based result is compatible with existing results from the machine learning literature on computing the output of a trained ReLU network. Taken together, they provide a sufficient condition to assess the reliability, in terms of uniform ultimate boundedness of the closed-loop system, of a given ReLU-based approximation of traditional controllers for uncertain systems.
B
Note that the existence of such a sequence can be determined via breadth-first search [20] on the verifier with complexity $O(|X_{v}|\times|E_{o}|)$.
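As a rough illustration of how such a reachability check can be carried out (not taken from [20] or the original paper; the verifier encoding and names below are assumed for the example), a breadth-first search visits each verifier state at most once, which is consistent with the stated $O(|X_{v}|\times|E_{o}|)$ complexity:

```python
from collections import deque

def reachable_via_bfs(verifier, start, targets):
    """Breadth-first search over a verifier's state graph.

    `verifier` is assumed to map a state to its successors, e.g.
    {state: [(event, next_state), ...]} -- a hypothetical encoding,
    not the paper's exact data structure. Returns a shortest event
    sequence reaching any target state, or None if none exists.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, seq = queue.popleft()
        if state in targets:
            return seq
        for event, nxt in verifier.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, seq + [event]))
    return None  # no qualifying sequence exists
```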
This paper deals with the problem of event concealment for concealing secret events in a system modeled as an NFA under partial observation.
Due to partial observation of the system, the event set $E$ can be partitioned into a set of observable events $E_{o}$ and a set of unobservable events $E_{uo}=E-E_{o}$.
In this paper, we focus on the case when certain events of a system are deemed secret and study (i) the circumstances under which their occurrences get revealed to an external eavesdropper, and (ii) ways to conceal (as long as needed) the secret events using an obfuscation mechanism.
To answer these questions, the privacy of a system is considered in terms of concealing secret events and the concept of event concealment is proposed.
A
Table 3, comparing Clear and PPIR(MPC), PPIR(FHE)-v1 and v2, showcases the metrics resulting from spline-based non-linear registration between grey matter density images without the application of gradient approximation. Additionally, the table includes results for the registration between whole-body PET images when the gradient approximation is applied.
Point Cloud Data. In Supplementary Table A1 we present the registration metrics for PPIR(MPC) and PPIR(FHE)-v1. The registration shows that PPIR(MPC) achieves the best results compared to PPIR(FHE), which not only exhibits a longer computation time but also requires higher bandwidth, thanks to its non-iterative algorithm. However, to carry out MatMul, a sufficiently large $N$ (4096) is required, and in this scenario it leads to a significant loss of ciphertext slots compared to the dimension of the point set $n=193$. Finally, the qualitative results reported in Figure A7 show negligible differences between point clouds transformed with Clear, PPIR(MPC) and PPIR(FHE)-v1.
Here, the limitations of PPIR(FHE)-v1 on the bandwidth size are even more evident than in the affine case, since the bandwidth increases with the number of parameters. This result places a non-negligible burden on $party_{1}$, due to the repeated sending of the flattened and encrypted submatrices of updated parameters. Furthermore, in this case, PPIR(FHE)-v1 performs slightly worse than PPIR(MPC) in terms of execution time.
Regarding the registration accuracy, we draw conclusions similar to those of the affine case, where PPIR(MPC) leads to minimum differences with respect to Clear, while PPIR(FHE)-v1 seems slightly superior.
Incorporating gradient approximation for handling whole-body PET data leads to similar conclusions as for the experiments on brain data. Qualitative results, reported in Supplementary Figure A6, show negligible differences between images transformed with Clear+GMS, PPIR(MPC)+GMS, and PPIR(FHE)-v1+GMS.
C
$\mathbb{Y}^{\theta,\pi}_{h}(w_{h-1},a_{h},o_{h})=\mathbb{P}^{\theta,\pi}(w_{h-1},a_{h},o_{h},z_{h+1}=\cdot),\quad\forall(w_{h-1},a_{h},o_{h})\in\mathcal{A}^{\ell+1}\times\mathcal{O}^{\ell+1}$
Based on the two density mappings defined in (3.6) and (3.7), respectively, we have the following identity for all $h\in[H]$ and $\theta\in\Theta$,
An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings defined in (3.6) and (3.7) under the true parameter $\theta^{*}$ based on the interaction history. Second, we estimate the Bellman operators $\{\mathbb{B}^{\theta^{*}}_{h}(a_{h},o_{h})\}_{h\in[H]}$ based on the identity in (3.8) and the estimated density mappings from the first step. Finally, we recover the embedding $\Phi(\tau^{H}_{1})$ by assembling the Bellman operators according to Lemma 3.8.
Under Assumptions 3.1 and 3.5, it holds for all parameters $\theta\in\Theta$ that
where $f\in L^{1}(\mathcal{A}^{k}\times\mathcal{O}^{k+1})$ is the input of the linear operator $\mathbb{U}^{\theta,\dagger}_{h}$ and $g^{\theta}_{h}$ is the mapping defined in Assumption 3.5. Under Assumptions 3.1 and 3.5, it holds for all $h\in[H]$, $\theta\in\Theta$, and $\pi\in\Pi$ that
A
V𝑉Vitalic_V, γ′superscript𝛾′\gamma^{\prime}italic_γ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT, χ𝜒\chiitalic_χ, β𝛽\betaitalic_β and μ𝜇\muitalic_μ are constants.
$\left.\frac{\partial P}{\partial\xi}\right|_{P\in\Sigma_{1};\,\xi\in\Xi_{1}}=-V^{2}\cos(\gamma);\qquad\left.\frac{\partial P}{\partial\xi}\right|_{P\in\Sigma_{3};\,\xi\in\Xi_{3}}=\frac{1}{\cos(\beta)},$
We denote by $\hat{\xi}(t)$ the planned function for any state variable
the choice of $\hat{x}$, $\hat{y}$, $\hat{z}$ and $\hat{\beta}$ (or $\hat{\mu}$). We also denote by $\updelta\xi$ the difference $\xi-\hat{\xi}$
red correspond to the planned trajectory $\hat{\xi}$, while curves in
B
Model settings. The local observation data $y_{i}(k)$ of node $i$ is given by $y_{i}(k)=H_{i}(k)x_{0}+v_{i}(k)$, $i=1,\cdots,10$. Here, the regression matrices are taken as
$\begin{pmatrix}0&0&0\\ \bar{h}_{s,t,k}&0&0\end{pmatrix},\;\begin{pmatrix}0&0&\bar{h}_{s+2,t,k}\\ 0&0&0\\ 0&0&0\end{pmatrix},\;\begin{pmatrix}0&\bar{h}_{5,t,k}&0\\ 0&0&0\\ \bar{h}_{6,t,k}&0&0\end{pmatrix},\quad s=1,2,\ t=1,2,$
$\{\mathcal{G}(k)=\{\mathcal{V}=\{1,2\},\,\mathcal{A}_{\mathcal{G}(k)}=[w_{ij}(k)]_{2\times 2}\},\,k\geq 0\}$ with $w_{12}(k)=\begin{cases}1,&k=2m;\\ \frac{1}{k},&k=2m+1\end{cases}$
$\Phi_{Z}(j,i)=\begin{cases}Z(j)\cdots Z(i),&j\geq i\\ I_{n},&j<i,\end{cases}\qquad\prod_{k=i}^{j}Z(k)=\Phi_{Z}(j,i).$
$-\lambda(k)x_{i}(k),\quad k\geq 0,\ i\in\mathcal{V}.$
A
As motivation, we first argue that the conventional notion of “graph gradients”, computed using the Laplacian (footnote: a similar argument can be made using the normalized Laplacian $\mathbf{L}_{n}$ to compute gradients for a positive connected graph);
$\mathbf{L}\triangleq\mathbf{D}-\mathbf{W}$, is ill-suited to define planar graph signals.
The most common graph smoothness prior to regularize an inherently ill-posed signal restoration problem is the graph Laplacian regularizer (GLR) [3] $\mathbf{x}^{\top}\mathbf{L}\mathbf{x}$, where $\mathbf{L}$ is a graph Laplacian matrix corresponding to a similarity graph kernel for signal $\mathbf{x}$.
$\mathbf{L}_{n}\triangleq\mathbf{D}^{-1/2}\mathbf{L}\mathbf{D}^{-1/2}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}$, where $\mathbf{D}$ and $\mathbf{W}$ are the degree and adjacency matrices, respectively, and $\mathbf{I}$ is the identity matrix.
A combinatorial graph Laplacian matrix $\mathbf{L}$ is defined as $\mathbf{L}\triangleq\mathbf{D}-\mathbf{W}$.
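For concreteness, here is a small sketch of the quantities defined above (the 4-node graph is an assumed toy example, not from the cited work): the combinatorial Laplacian $\mathbf{L}=\mathbf{D}-\mathbf{W}$, the GLR $\mathbf{x}^{\top}\mathbf{L}\mathbf{x}$, and the normalized Laplacian $\mathbf{L}_{n}$.

```python
import numpy as np

# Toy 4-node graph: symmetric edge weights W, degree matrix D, Laplacian L = D - W.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))                  # degree matrix
L = D - W                                   # combinatorial graph Laplacian

x = np.array([0.9, 1.0, 1.1, 1.0])          # a signal on the graph
glr = x @ L @ x                             # small value => smooth w.r.t. the graph

# Normalized Laplacian L_n = I - D^{-1/2} W D^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_n = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
print(glr, np.round(L_n, 3))
```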
A
In contrast, recent works employ advanced platforms such as MRiLab [7] and Brainweb [13], which rely on biophysical models that use complex non-linearities to estimate MR images in different parameters. MRiLab is an MR image simulator equipped with the generalized multi-pool exchange model for accurate MRI simulations.
In contrast, recent works employ advanced platforms such as MRiLab [7] and Brainweb [13], which rely on biophysical models that use complex non-linearities to estimate MR images in different parameters. MRiLab is an MR image simulator equipped with the generalized multi-pool exchange model for accurate MRI simulations.
For our training, we require the MRI scans in two different parameter settings of {TE, TR}. One serves as input to the model, and the other as the ground truth corresponding to the desired parameter setting to compute the loss. We use MRiLab [7], an MRI simulator, to generate these synthetic brain scans in different parameter settings of {TE, TR}. We generated these brain MRI scans for 200 random pairs of {TE, TR}. The TR values were chosen uniformly at random in the range 1.2 s to 10 s. The TE values ranged from 20 ms to 1 s non-uniformly; the distribution was such that lower TE values were selected with higher probability, because the scans were more sensitive to changes in lower values of TE. The T1 and T2 relaxation times used by MRiLab were matrices of size 108 × 90 × 90 with values in the range 0 s to 4.5 s for T1 and 0 s to 2.2 s for T2. For each pair of {TE, TR}, we generated 24 different 2D axial MR slices of a 3D brain volume, so in total we obtained 4800 MR slices. We used 1500 of these slices for training, while the rest were kept for testing. The generated scans were rescaled to a 256 × 256 matrix.
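The exact sampling script is not given above; the following sketch only illustrates one plausible way to draw the 200 {TE, TR} pairs with the stated ranges and a low-TE bias (the log-uniform choice is an assumption, not the authors' distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200
TR = rng.uniform(1.2, 10.0, size=n_pairs)                      # seconds, uniform
# log-uniform draw skews TE toward low values (assumed, for illustration only)
TE = np.exp(rng.uniform(np.log(0.020), np.log(1.0), n_pairs))  # seconds
pairs = list(zip(TE, TR))  # each pair would drive one MRiLab simulation run
```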
These works also utilize the multi-pool modeling capabilities of MRiLab to simulate the effects of fat-water interference in macromolecular-rich tissues and validate them in a physical phantom. Brainweb is a Simulated Brain Database generated using an MRI simulator developed at the McConnell Brain Imaging Centre. This simulator uses first-principles modeling based on the Bloch equations to implement a discrete-event simulation of NMR signal production, and it realistically models noise and partial volume effects of the image production process. Building these simulators requires physical modeling of MR imaging. Our work, in contrast, explores a DL-based method to learn the non-linearities that govern the re-parameterization of MR scans from one parameter setting to another of our choice.
In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization, mainly for the Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image reconstruction auto-encoder as input instead of directly using the raw image. This technique makes the proposed model more robust to potential overfitting. Based on our preliminary experiments, DL-based methods hold the potential to simulate MRI scans with a new set of parameters. Our deep learning model also performs the task considerably faster than simple biophysical models. To generate our data, we rely on MRiLab [7], which is a conventional MR image simulator. Source code is publicly available at https://github.com/Abhijeet8901/Deep-Learning-Based-MR-Image-Re-parameterization.
C
The previous literature showed that the summation method with DFE has the advantage of a higher OOK data rate [10, 12, 25]. However, at data rates where the pulses are countable and the output pulses do not overlap, the thresholding method for single photon counting is still necessary, especially in scenarios where the incident optical power at the receiver is extremely low. In addition, the data rate cannot be reduced flexibly, hence operating at a specific data rate that is expected to be twice the recovery time in OOK [18].
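As a toy illustration of the thresholding idea (with assumed photon rates, not the experimental values reported in this work), OOK bits can be recovered by comparing per-bit Poisson photon counts against a fixed threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=10_000)
# assumed mean detected photons per bit for "1" (signal) and "0" (background)
mean_one, mean_zero = 20.0, 2.0
counts = rng.poisson(np.where(bits == 1, mean_one, mean_zero))
threshold = 8                               # chosen between the two mean levels
decisions = (counts > threshold).astype(int)
ber = np.mean(decisions != bits)
print(f"BER = {ber:.2e}")
```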
Moreover, since the current commercially available SiPMs have a higher PDE in the visible blue-green spectrum, used for example in UWOC, VLC and Li-Fi applications, it is expected that a lower optical power is required to achieve the same BER at a longer wavelength. However, these SiPMs are not yet suitable for near-infrared (NIR) communication, such as infrared data association (IrDA), which typically operates at 850 nm. NIR SiPMs are expected in the near future due to progress in SPAD fabrication technology [43, 44]. The theory and experimental results demonstrated in this work remain valid for any wavelength range since they are based on the detected photons.
In this work, the detector is a commercially available $1~\mathrm{mm}^{2}$ C-Series SiPM from On Semiconductor [30]. The technical parameters for the SiPM are listed in Table I.
Considering the bandwidth limitation of the PMOD connector on the FPGA evaluation board, we focused on the SiPM standard output to present the SiPM’s dynamic range and Poisson-limited BER performance, as well as the bandwidth limitation of the SiPM readout circuit. We have experimentally verified that the Poisson limit can be approached at data rates between 10 kbps and 1 Mbps when the optical power was below -74 dBm for the standard output. However, the high sensitivity of the SiPM was achieved by counting individual pulses in each bit period, which needs a high-GBP analog readout circuit and hence a higher-power-consumption receiver circuit [38]. In order to practically demonstrate this concept, the power consumption of 35 commercially available op-amps from TI and ADI was calculated from their respective data sheets and is shown in Fig. 13. According to the simulation results in Fig. 11, a minimum GBP of 120 MHz is needed to preserve the photon counting capability while maintaining a power consumption of 50 mW, as shown in Fig. 13. This 50 mW value represents a significant reduction in power consumption compared to the prototype amplifiers used in the real-time setup.
In this paper, we have demonstrated a novel real-time SiPM-based receiver with a low bit rate and high sensitivity, which has the potential for low transmitter power consumption. The work provides the evaluations of the analog chain of the receiver to show the potential for lower power consumption. The numerical simulation proves that the required power consumption of the amplifier is approximately 50 mW at 120 MHz GBP. In addition, to further reduce the complexity and power consumption in the digital circuit design, the FPGA implemented an asynchronous photon detection method. Finally, the implementation of interleaved counters in the receiver allows it to receive streaming data without dead time. This design is being implemented on an FPGA and conventional SiPM for the first time to the best of our knowledge, making it more beneficial for utilizing SiPM in IoT applications than previous offline approaches.
B
If the autonomy is restricted to the operation around the asteroid, that is when the transition from ground-based to autonomous operation takes place. In this case, the spacecraft would rely on the ground up to the moment when the asteroid is found as a point source in its optical cameras. After that, a hybrid approach could take place, as the autonomous spacecraft could rapidly approach the body to within thousands or hundreds of kilometers under the supervision of the ground, for a safe delegation of navigation responsibility. On the other hand, an onboard algorithm to search for the asteroid - considering its ephemeris uncertainty - should be used if the spacecraft has been autonomously operating since its deep-space cruise phase.
We have purposely selected those specific asteroids to underscore the fact that our proposed guidance, navigation, and control (GN&C) approach is not reliant on the size or shape of the asteroid. The mission profile can be customized based on the specific objectives of the mission and the available information about the asteroid’s environment and properties. Once the initial assessment of the environment is conducted, the spacecraft can consider various profiles based on the overall characteristics of the asteroid as observed from a distance. This preliminary analysis aids in determining whether solar radiation pressure or the asteroid’s elongated shape is the predominant factor influencing the mission. For scenarios where solar radiation pressure dominates, a sun-terminator orbit would be suitable, while for elongated asteroids, a retrograde equatorial orbit in the asteroid’s inertial frame would be generally preferable [40, 38, 39]. However, it is important to note that selecting the most suitable approach may vary depending on the specific mission objectives and the characteristics of the individual asteroid.
By “far-approach”, we consider the period of the mission when the spacecraft changes from heliocentric to relative navigation about the small-body, which is the same as phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary assessment of the environment is made, such as identifying small moons, coarse asteroid shape reconstruction, asteroid’s attitude determination, constraining the parameter of mass, and others. Depending on the asteroid’s size and mass, the relative distance in that mission period should vary from a few million or hundred thousand kilometers up to thousands or hundreds of kilometers. From a guidance and control standpoint, this is the least critical phase of the mission. The dynamics should remain nearly constant throughout the operation, with the main forces being the sun’s gravitational pull and the solar radiation pressure.
The more or less constant dynamics in the far-approach phase make it easy to use simple guidance laws, such as an LQG or a ZEM/ZEV that considers the state’s uncertainty, for approaching the body. As it will become apparent soon, with our future assumptions, there is no need for an exact orbit determination at this point. The critical aspect is to get close enough to the asteroid for the preliminary environment assessment. For current research describing embedded algorithms for making the asteroid’s preliminary attitude, shape, and environment assessment, we refer the reader to other works [15, 6, 5, 32].
A mission could have different profiles depending on the mission’s goals and the availability of prior knowledge about the asteroid’s environment and properties. We consider that after the preliminary environment assessment at the end of the far-approach phase, the spacecraft could opt between different profiles, depending on the general properties of the asteroid assessed at a distance. It is safe to assume that in the preliminary environment characterization, the spacecraft would be able to determine if the environment tends or not to be a solar radiation pressure-dominated one or if the body is elongated. We can then rely on consolidated results in the literature [40, 38, 39] to propose that in the first case, the spacecraft would adopt a sun-terminator orbit, while in the latter, the choice would be for a retrograde equatorial orbit in the asteroid’s inertial frame. Of course, this is a simplification to aid in the argument of this work. Other operational profiles could be embedded in the spacecraft, accounting for other types of systems (e.g., binaries) and mission goals.
C
The study of the motion of aerial vehicles is a complex subject that has been investigated since the early appearance of the first airplanes. There is a large body of literature on the aerodynamic aspects of these vehicles and their modeling. In this section, we will discuss the quadrotor aerial robot, which is a basic type of multirotor UAV.
The study of the motion of aerial vehicles is a complex subject that has been investigated since the early appearance of the first airplanes. There is a large body of literature on the aerodynamic aspects of these vehicles and their modeling. In this section, we will discuss the quadrotor aerial robot, which is a basic type of multirotor UAV.
In [131], the communications-related term is the outage probability of the communication link. In [119], the optimization target is the number of users served by a UAV operating as an aerial BS. In [132], the communications-related term is the coverage radius of a Low-altitude aerial platform (LAP) acting as a BS for ground users. In [121], the communications-related terms are the time that the UAV takes to transmit a certain amount of data, and the total amount of bits transmitted by the UAV. In [120], the optimization target includes the expected value of the channel gain. In [118], the communications-related term considered in the optimization target is the energy spent in data transmission.
As we have explained above, the oversimplification of MR models can have serious consequences, thus the importance of selecting an adequate model complexity. In order to help researchers with no (or little) robotics background, the rest of this section provides a general description of mathematical models describing the motion and energy consumption for three popular MRs: ground wheeled robots, rotary-wing(s) aerial robots, and fixed-wing aerial robots. We will present an overview of the different types of models, their implications, and their limitations. This does not constitute an exhaustive list of all the mathematical models associated with these MRs, but rather an introductory presentation for common models used in trajectory planning. Readers familiar with this research area can skip to section III where we discuss the modelling of communication systems and the wireless channel.
Multirotor aerial robots (also called rotary-wing aerial robots) are one of the most popular types of aerial robots nowadays. One of the most common types of these UAVs is the quadrotor, which is the subject of this section.
D
Theorem 20 requires the existence of storage functions that satisfy several properties. Therefore, in practice, the conditions in
The IQC based Theorem 28 makes use of a quadratic form involving incrementally bounded multipliers, while the supply rate in the
In both Examples 6 and 7, even though the systems may be described by dissipativity with respect to some static supply rates, the advantage of using dynamic supply rates lies in offering great flexibility in system characterisation as well as reducing conservatism in feedback stability analysis, similarly to the benefit of using dynamic multipliers, or IQCs, in an input-output setting.
Theorem 28 may be easier to verify. In particular, the use of dynamic multipliers is both natural and well known in the theory of
In contrast to Assumption 12, the lower and upper bounds in Assumption 14 depend on both $x$ and $z$. A byproduct of this assumption is that the stability with respect to both $x=0$ and $z=0$ in (17) and (18) may be established, even though we are only concerned with the former. This resembles the theory of dynamic Lyapunov functions proposed in [Sassano and Astolfi, 2013, Def. 1].
C
Because the compensator diverges at $\partial\tilde{\chi}$, it may have the potential to cage the solution $x$ in $\tilde{\chi}$ with probability one. The answer will be given in a later section.
On the other hand, the CBF approach is closely related to a control Lyapunov function (CLF), which immediately provides a stabilizing control law from the CLF, as in Sontag [16] for deterministic systems and Florchinger [17] for stochastic systems. Therefore, in the CBF approach, the derivation of a safety-critical control law immediately from the CBF is also important. For this discussion, the problem setting in which the safe set is coupled with the CBF is appropriate, as in Ames et al. [2]. The stochastic version of the Ames’s et al.’s result is recently discussed by Clark [12]; he insists that his RCBF and ZCBF guarantee the safety of a set with probability one. At the same time, Wang et al. [13] analyze the probability of a time when the sample path leaves a safe set under conditions similar to Clark’s ZCBF. Wang et al. also claim that a state-feedback law achieving safety with probability one often diverges toward the boundary of the safe set; the inference is also obtained from the fact that the conditions for the existence of an invariance set in a stochastic system are strict and influenced by the properties of the diffusion coefficients [18]. This argument is in the line of stochastic viability by Aubin and Prato [20]. For CBFs, Tamba et al. [19] provides sufficient conditions for safety with probability one, which require difficult conditions for the diffusion coefficients. Therefore, we need to reconsider a sufficient condition of safety with probability one, and we also need to rethink the problem setup to compute the safety probability obtained by a bounded control law.
For a stochastic system, a subset of the state space is generally hard to make (almost surely) invariant because the diffusion coefficient is required to be zero at the boundary of the subset (footnote: the details are discussed in [18], which aims to make the state of a stochastic system converge to the origin with probability one and confine the state in a specific subset with probability one; this aim is somewhat similar to that of a control barrier function. Tamba et al. make a similar argument for CBFs in [19], but their sufficient condition is more stringent). To avoid the tight condition on the coefficient, we should design a state-feedback law whose value is massive, namely diverges in general, at the boundary of the subset so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset probably diverges at the boundary of the set, as with a global stochastic Lyapunov function [22, 23, 24] and an RCBF.
The above discussion also implies that if a ZCBF is defined for a stochastic system and ensures “safety with probability one,” the good robustness property of the ZCBF probably does not materialize. The reason is that the related state-feedback law generally diverges at the boundary of the safe set. Hence, the previous work in [13] proposes a ZCBF with analysis of the exit time of
In the context of a CBF, the control objective is to make a specific subset of the state space, called a safe set, invariant forward in time (namely, forward invariance [2]). There are various types of CBFs; the most commonly used currently are the reciprocal control barrier function (RCBF) [2, 4, 5] and the zeroing control barrier function (ZCBF) [2, 3, 6]: the RCBF is a positive function that diverges from the inside of the safe set toward the boundary, while the ZCBF is a function that is zero at the boundary of the safe set. The RCBF has a form that is easy to imagine as a barrier, while the ZCBF is also defined outside the safe set, allowing the design of control laws with robustness.
B
In practice, it is also possible that the wind farm (GFL converter) and the GFM converter are connected to one common 35 kV bus, as shown in Fig. 6. The equivalent inductance of the transformer is 0.08 pu. Hence, the typical value of $Z_{\rm local}$ is 0.08 pu under VSMs (without reactive power droop control) and power synchronization control, but the typical value of $Z_{\rm local}$ becomes 0.12 pu under VSMs with reactive power droop control.
In this case, the electrical distance between the GFL converter and the GFM converter becomes smaller, and one may need fewer GFM converters to enhance the equivalent power grid strength, as illustrated in the next example.
Combining the power grid strength quantified by gSCR in this section and the analysis of the voltage source behaviors of GFM converters in Section II, it is once again emphasized that it is necessary to install GFM converters to provide effective voltage source behaviors and thus enhance the power grid strength, which can be quantified by gSCR. On this basis, we will show in the next section that the integration of GFM converters has a similar effect to installing ideal voltage sources (i.e., infinite buses) in series with an equivalent internal impedance in the network. Further, we will derive the closed-form relationship between the gSCR and the capacity ratio between the GFM and the GFL converters to simplify the analysis of how large the capacity should be to meet certain stability margins.
Moreover, one important question is: since GFL converters can perform constant AC voltage magnitude control, do they also have effective voltage source behaviors to enhance the power grid strength? To be specific, one can introduce the terminal voltage magnitude as a feedback signal to generate the reactive current reference and regulate the voltage magnitude to a reference value [3, 4]. In this case, though the terminal voltage magnitude is well regulated, it remains unclear if the GFL converters can be considered as effective voltage sources to enhance the power grid (voltage) strength. We believe that it is essential to answer the above question before studying how many GFM converters we will need to enhance the power grid strength, as one may simply resort to modifying GFL converters to enable voltage source behaviors if they can be used to enhance the power grid strength.
Intuitively, since GFM converters behave like voltage sources, installing a GFM converter near a GFL converter should improve the local power grid strength of the GFL converter and thus improve its small signal stability margin (as GFL converters may become unstable in weak grids). This intuition was confirmed in our previous work [9], where we investigated the impact of GFM converters on the small signal stability of power systems integrated with GFL converters. We demonstrated that replacing GFL converters with GFM converters is equivalent to enhancing the power grid strength, characterized by the so-called generalized short-circuit ratio (gSCR). However, the approach [9] can only be used to determine the optimal locations to replace GFL converters with GFM converters, but it still remains unclear how to configure newly installed GFM converters in the grid and more importantly, how to decide their capacities (or equivalently, how many GFM converters we will need) to ensure the system’s small signal stability. Furthermore, the analysis in  [9] only considers one type of GFM control (i.e., VSM) and directly approximates a VSM as an ideal voltage source (without deriving the equivalent impedance as will be done in this paper). Such an approach might not apply to other GFM methods once they have weaker voltage source behaviors than VSMs in [9], as it remains unclear how to quantify the voltage source behaviors of different GFM methods and analyze their interaction with GFL converters.
A
Motivated by these issues, we propose multi-scale large kernel attention (MLKA) that combines the classical multi-scale mechanism and the emerging LKA to build various-range correlations with relatively few computations. The multi-scale kernel can implicitly encode features from coarse to fine, which allows the model to mimic both CNNs and transformers. Moreover, to avoid potential block artifacts arising from dilation, we adopt the gate mechanism to recalibrate the generated attention maps adaptively. To maximize the benefits of MLKA, we place it on the MetaFormer [53]-style (Norm-TokenMixer-Norm-MLP) structure rather than the RCAN-style (Conv-Act-Conv-TokenMixer) to construct a multi-attention block (MAB). Although the transformer-style MAB can deliver higher performance, the MLP feed-forward module is too heavy for large images. Inspired by recent work [5, 47], we propose a simplified gated spatial attention unit (GSAU) by applying spatial attention and a gate mechanism to reduce calculations and include spatial information. Armed with the simple yet striking MLKA and GSAU, the MABs are stacked to build the multi-scale attention network (MAN) for the SR task. In Fig. 1, we present the superior performance of our MAN. To summarize, our contributions are as follows:
Motivated by these issues, we propose multi-scale large kernel attention (MLKA) that combines the classical multi-scale mechanism and the emerging LKA to build various-range correlations with relatively few computations. The multi-scale kernel can implicitly encode features from coarse to fine, which allows the model to mimic both CNNs and transformers. Moreover, to avoid potential block artifacts arising from dilation, we adopt the gate mechanism to recalibrate the generated attention maps adaptively. To maximize the benefits of MLKA, we place it on the MetaFormer [53]-style (Norm-TokenMixer-Norm-MLP) structure rather than the RCAN-style (Conv-Act-Conv-TokenMixer) to construct a multi-attention block (MAB). Although the transformer-style MAB can deliver higher performance, the MLP feed-forward module is too heavy for large images. Inspired by recent work [5, 47], we propose a simplified gated spatial attention unit (GSAU) by applying spatial attention and a gate mechanism to reduce calculations and include spatial information. Armed with the simple yet striking MLKA and GSAU, the MABs are stacked to build the multi-scale attention network (MAN) for the SR task. In Fig. 1, we present the superior performance of our MAN. To summarize, our contributions are as follows:
This paper proposes a multi-scale attention network (MAN) for super-resolution under multiple complexities. MAN adopts transformer-style blocks for better modeling representation. To effectively and flexibly establish long-range correlations among various regions, we develop multi-scale large kernel attention (MLKA) that combines large kernel decomposition and multi-scale mechanisms. Furthermore, we propose a simplified feed-forward network (GSAU) that integrates gate mechanisms and spatial attention to activate local information and reduce model complexity. Extensive experiments have demonstrated that our CNN-based MAN can achieve better performance than previous SOTA ConvNets and keep pace with transformer-based methods in a more efficient manner.
The attention mechanism can force networks to focus on crucial information and ignore irrelevant ones. Previous SR models adopt a series of attention mechanisms, including channel attention (CA) and self-attention (SA), to obtain more informative features. However, these methods fail to simultaneously uptake local information and long-range dependence, and they often consider the attention maps at a fixed reception field. Enlightened by the latest visual attention research [15], we propose multi-scale large kernel attention (MLKA) to resolve these problems by combining large kernel decomposition and multi-scale learning. Specifically, the MLKA consists of three main functions, large kernel attentions (LKA) for establishing interdependence, the multi-scale mechanism for obtaining heterogeneous-scale correlation, and gated aggregation for dynamic recalibration.
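As a rough sketch of the LKA building block referenced above (the kernel sizes and dilation are illustrative defaults in the spirit of [15], not the exact MLKA configuration), a large kernel is decomposed into a depth-wise, a depth-wise dilated, and a point-wise convolution, and the resulting attention map recalibrates the input features:

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Sketch of large kernel attention; parameter choices are assumptions."""
    def __init__(self, dim, k_dw=5, k_dil=7, dilation=3):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, k_dw, padding=k_dw // 2, groups=dim)
        self.dw_dil = nn.Conv2d(dim, dim, k_dil, padding=(k_dil // 2) * dilation,
                                dilation=dilation, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dil(self.dw(x)))  # long-range attention map
        return attn * x                          # recalibrate input features

x = torch.randn(1, 32, 48, 48)
print(LKA(32)(x).shape)  # torch.Size([1, 32, 48, 48])
```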
We propose multi-scale large kernel attention (MLKA) for obtaining long-range dependencies at various granularity levels by combining large kernel with gate and multi-scale mechanisms, which significantly increases model representation capability.
D
For example, a UAV might experience stronger wind during its flight than it anticipated, or the off-shore landing pad of a rocket might have drifted away from its original position.
The agent can use these parameter-conditioned reachable sets online to activate the safety function corresponding to the current environment and system factors, leading to a real-time adaptation of safety assurances.
Thus, during the run time, the system can sample the environmental factors and other parameters and activate the corresponding safety function via a simple DNN query, leading to a real-time adaptation of safety assurances.
In such situations, it is hard to provide safety guarantees prior to the system deployment; instead, the system needs to perform an efficient evaluation and adaptation of safety assurances online in light of the environment and system evolution.
Various simulation studies are presented to demonstrate the utility of the proposed method in maintaining safety despite the system and environment evolution.
C
$N(\theta,\varphi)=N_{F}^{\infty}N_{0}\cdot\max\big(\boldsymbol{s}_{\alpha\beta}(\theta,\varphi)^{\mathsf{T}}(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-1}$
$(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-1}=\sum_{k=0}^{\infty}(\boldsymbol{S}_{L}\boldsymbol{S}_{\alpha\alpha})^{k}\boldsymbol{S}_{L}=\boldsymbol{S}_{L}+\boldsymbol{S}_{L}\boldsymbol{S}_{\alpha\alpha}\boldsymbol{S}_{L}+\ldots,$
$\boldsymbol{B}(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-\mathsf{H}}\boldsymbol{s}_{\alpha\beta}(\theta,\varphi)^{*}.$
$\boldsymbol{S}_{L}^{-\mathsf{H}}(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-\mathsf{H}}\alpha\boldsymbol{H}_{\mathrm{UE-RIS}}^{\mathsf{H}}\big(\alpha\boldsymbol{H}_{\mathrm{UE-RIS}}(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-1}$
$\boldsymbol{B}(\boldsymbol{S}_{L}^{-1}-\boldsymbol{S}_{\alpha\alpha})^{-\mathsf{H}}\boldsymbol{s}_{\alpha\beta}(\theta,\varphi)^{*}-A\lambda^{2}\cos\theta,\;0\big),$
D
Haptic communication has been incorporated by industries to perform grasping and manipulation, where the robot transmits the haptic data to the manipulator. The shape and weight of the objects to be held are measured using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions, which should be transmitted within stringent latency requirements to guarantee industrial operation safety.
Haptic communication has been incorporated by industries to perform grasping and manipulation, where the robot transmits the haptic data to the manipulator. The shape and weight of the objects to be held are measured using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions, which should be transmitted within stringent latency requirements to guarantee industrial operation safety.
Due to the difficulty in supporting massive haptic data with stringent latency requirements, JND can be identified as important goal-oriented semantic information to ignore the haptic signal that cannot be perceived by the manipulator. Two effectiveness-aware performance metrics including SNR and SSIM have been verified to be applicable to vibrotactile quality assessment.
Difference (JND) is identified as valuable semantic information to filter the haptic signal that cannot be perceived by the human, where Weber’s law serves as an important semantic information extraction criterion.
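A toy sketch of Weber's-law-based JND filtering (the Weber fraction and test signal are assumptions for illustration, not values from the cited work): a haptic sample is transmitted only when it deviates from the last transmitted value by more than the just noticeable difference.

```python
import numpy as np

def jnd_filter(samples, weber_fraction=0.1):
    """Keep a sample only if it differs from the last transmitted one by more
    than weber_fraction * |last| (Weber's law), i.e. it is perceivable."""
    transmitted = [samples[0]]
    last = samples[0]
    for s in samples[1:]:
        if abs(s - last) > weber_fraction * abs(last):  # exceeds the JND
            transmitted.append(s)
            last = s
    return transmitted

t = np.linspace(0, 20, 500)
signal = 1.0 + 0.05 * np.sin(t) + 0.3 * (t > 10)   # small ripple + one large step
print(len(jnd_filter(signal)), "of", len(signal), "samples transmitted")
```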
In the Augmented Reality (AR) display task, the central server transmits the rendered 3D model of a specific virtual object to the user. It is noted that the virtual object identification and its pose information related to the real world is the key to achieving alignment between virtual and physical objects. Therefore, the virtual object identification and pose information can be extracted as goal-oriented semantic information to reduce the data size. Then, by sharing the same 3D model library, the receiver can locally reconstruct the 3D virtual object model based on the received goal-oriented semantic information. To evaluate the 3D model transmission, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) can be adopted as effectiveness-aware performance metrics. However, how to quantify the alignment accuracy among the virtual objects and physical objects as a performance metric remains to be solved.
B
Distributed energy resources (DERs) are being rapidly deployed in distribution systems. Fluctuations in DER power outputs and varying load demands can potentially cause violations of voltage limits, i.e., voltages outside the bounds imposed in the ANSI C84.1 standard. These violations can cause equipment malfunctions, failures of electrical components, and, in severe situations, power outages.
In this paper, we consider a sensor placement problem which seeks to locate the minimum number of sensors and determine corresponding sensor alarm thresholds in order to reliably identify all possible violations of voltage magnitude limits in a distribution system. We formulate this sensor placement problem as a bilevel optimization with an upper level that minimizes the number of sensors and chooses sensor alarm thresholds and a lower level that computes the most extreme voltage magnitudes within given ranges of power injection variability. This problem additionally aims to reduce the number of false positive alarms, i.e., violations of the sensors’ alarm thresholds that do not correspond to an actual voltage limit violation.
To address challenges associated with power flow nonlinearities, we employ a linear approximation of the power flow equations that is adaptive (i.e., tailored to a specific system and a range of load variability) and conservative (i.e., intended to over- or under-estimate a quantity of interest to avoid constraint violations). These linear approximations are called conservative linear approximations (CLAs) and were first proposed in BUASON2022 . As a sample-based approach, the CLAs are computed using the solution to a constrained regression problem across all samples within the range of power injection variability. They linearly relate the voltage magnitude at a particular bus to the power injections at all PQ buses. These linear approximations can also effectively incorporate the characteristics of more complex components (e.g., tap-changing transformers, smart inverters, etc.), only requiring the ability to apply a power flow solver to the system. Additionally, in the context of long-term planning, the CLAs can be readily computed with knowledge of expected DER locations and their potential power injection ranges. The accuracy and conservativeness of our proposed method are based on information about the locations of DERs and the variability of their power injections. As inputs, our method uses the net load profiles, including the size of PVs, when computing the CLAs. In practice, this data can be obtained by leveraging the extensive existing research on load modeling and monitoring to identify the locations and capabilities of behind-the-meter devices (refer to, e.g., Grijalva2021 ; Schirmer2023 ).
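As an illustrative sketch of the sample-based idea (a simplified over-estimating variant with synthetic data and assumed names, not the exact formulation of BUASON2022), a conservative linear approximation can be obtained from a linear program that keeps the linear fit above all sampled voltage values while minimizing the total over-estimation:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, size=(200, 3))           # sampled power injections
# stand-in for power flow solutions: voltage magnitude samples at one bus
v = 1.0 + P @ np.array([0.02, -0.01, 0.03]) + 0.005 * np.sin(4 * P[:, 0])

n = P.shape[1]
c = np.concatenate([P.sum(axis=0), [len(P)]])        # minimize total over-estimation
A_ub = -np.hstack([P, np.ones((len(P), 1))])         # -(a^T p_i + b) <= -v_i
b_ub = -v
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
a, b = res.x[:n], res.x[n]
assert np.all(P @ a + b >= v - 1e-6)                 # conservative over all samples
```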
To mitigate the impacts of violations, distribution system operators (DSOs) must identify when power injection fluctuations lead to voltages exceeding their limits. To do so, sensors are placed within the distribution system to measure and communicate the voltage magnitudes at their locations. Due to the cost of sensor hardware and communication infrastructure and the structure of distribution systems, sensors are not placed at all buses. The question arises whether a voltage violation at a location without a sensor can nevertheless be detected.
In contrast to previous work, this problem does not attempt to ensure full observability of the distribution system. Rather, we seek to locate (a potentially smaller number of) sensors that can nevertheless identify all voltage limit violations for any power injections within a specified range of power injection variability. With a small number of sensors, the proposed formulation also provides a simple means to design corrective actions if voltage violations are encountered in real-time operations. By restoring voltages at these few critical locations to within their alarm thresholds, the system operator can guarantee feasibility of the voltage limits for the full system. This guarantee is obtained by our sensor placement method purely by analyzing the geometric properties of the feasible set. We do not consider the design details of the feedback control protocol and thus dynamic properties of the sensors such as latency are not relevant in our approach.
C
Table X lists the MDE results of the comparison 2D speaker localization systems with the linear arrays on the real-world data. From the table, we see that, although the CNN-based methods were only trained on the simulated data, they generalize well on the real-world data, and consistently outperform the conventional methods.
From the table, we see that the performance of all DOA algorithms in the multi-speaker localization scenarios drops compared to that in the single-speaker scenarios. However, the CNN-based methods still outperform conventional methods. CNN-ULD performs the best in the CNN-based methods, while CNN-LBT outperforms CNN-PIT. Finally, the DOA methods with the circular array perform worse than those with the linear arrays.
Table X lists the MDE results of the comparison 2D speaker localization systems with the linear arrays on the real-world data. From the table, we see that, although the CNN-based methods were only trained on the simulated data, they generalize well on the real-world data, and consistently outperform the conventional methods.
TABLE X: MDE (in meters) of the comparison 2D speaker localization methods on the real-world data. Note that in the single-source scenario, CNN-LBT degrades into CNN-Mask.
Due to the strong interference from ghost speakers, the MDEs produced by the conventional methods seem too large, which indicates that they essentially make random guesses of the speaker positions. Therefore, our focus is on the CNN-based methods. (i) For single-source localization, the MDE is controlled to a sufficiently low level. Comparing Tables X and V, we see that the MDE on the real-world data is close to that on the simulated data with 30 dB, since the real-world data is nearly ambient-noise-free. (ii) For multi-source localization, all CNN-based methods show significant performance degradation due to the strong interference from the ghost speakers. This negative effect will be further discussed in Section VI.
D
If a more refined modeling of p⁢(x)𝑝𝑥p(x)italic_p ( italic_x ) is necessary, we can increase the output resolution N𝑁Nitalic_N of the IPU.
The primary drawback of this approach is that neither of the two optimization goals is achieved optimally.
A pragmatic method to achieve the two goals is to find a suitable compromise, akin to the approach taken by sparse coding methods [63].
This is referred to as the node-wise loss function, in contrast to the original sample-wise loss function.
This approach to balancing the two objectives in our discrete IPU model is referred to as even coding.
D
(b) The actual image that the CNN sees at “A” (yellow star in Fig. 3(a)). The CNN confuses the runway marking with the centreline. (c) Modified image with an artificial patch over the runway marking.
The observed images along the aircraft trajectory (Fig. 5(c),(d)) show that at night the CNN is indeed unable to properly see the centreline due to illumination issues, guiding the aircraft off the runway (blue trajectory from location A to B in Fig. 5(b)). However, such errors are avoided in the morning (red trajectory from location A to C in Fig. 5(b)) due to better visibility.
We simulate the aircraft trajectory from a state in the BRT (marked with the yellow star in Fig. 1(a)) and query the images observed by the aircraft along the trajectory.
Figure 3: (a) Top-view of the runway in the morning. The trajectory followed by the aircraft under the CNN policy (red line) takes it off the runway. The successful trajectory (in green) takes the aircraft from “A” to “C”, on adding the patch over the runway marking during ablation. The trajectory (in cyan) from “A” to “D” is followed at night.
(a) The morning (red shaded) and night (blue shaded) BRTs overlaid for $p_{y}$ = 190 m. The state, shown with a yellow star, is only included in the night BRT. (b) Top view of the runway. In the morning, the CNN policy accomplishes the taxiing task by taking the red trajectory from “A” (yellow star in (a)) to “C.” At night, the policy takes the aircraft outside the runway along the blue trajectory from “A” to “B”. (c) The centreline in the image cannot be vividly seen by the CNN at location “A” at night due to poor illumination, whereas it can be seen clearly in the morning (d).
B
An upward pointing arrow leaving node $(t,u)$ represents $y(t,u)$, the probability of outputting an actual label; and a rightward pointing arrow represents $\varnothing(t,u)$, the probability of outputting a blank at $(t,u)$.
introduces big blank symbols. Those big blank symbols could be thought of as blank symbols with explicitly defined durations – once emitted, the big blank advances $t$ by more than one, e.g. two or three.
Note that when outputting an actual label, $u$ would be incremented by one; and when a blank is emitted, $t$ is incremented by one.
With the multi-blank models, when a big blank with duration $m$ is emitted, the decoding loop increments $t$ by exactly $m$.
In standard decoding algorithms for RNN-Ts, the emission of a blank symbol advances input by one frame.
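A simplified greedy-decoding sketch of this behaviour (the step function and toy schedule below are stand-ins for illustration, not the referenced models' interface): emitting a regular label advances $u$, a standard blank advances $t$ by one, and a big blank of duration $m$ advances $t$ by exactly $m$.

```python
def greedy_decode_multi_blank(step_fn, num_frames):
    """step_fn(t, u) is assumed to return (symbol, duration): duration 0 for an
    actual label, or the number of frames consumed by a (big) blank."""
    t, u, hypothesis = 0, 0, []
    while t < num_frames:
        symbol, duration = step_fn(t, u)
        if duration == 0:      # actual label: advance u, consume no frames
            hypothesis.append(symbol)
            u += 1
        else:                  # (big) blank: skip `duration` frames at once
            t += duration
    return hypothesis

# toy schedule: one label per multiple-of-3 frame, otherwise a big blank of 3
toy_step = lambda t, u: ("a", 0) if (t % 3 == 0 and u <= t // 3) else (None, 3)
print(greedy_decode_multi_blank(toy_step, num_frames=9))  # ['a', 'a', 'a']
```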
B
The data structure and the detailed configurations of acoustic scene manipulation in the SceneFake dataset are illustrated in Figure 4.
The statistics of the SceneFake dataset are shown in Table 3, where #Speakers, #SE, #Scenes, #Real, #Fake, and #Total denote the number of speakers, speech enhancement methods, acoustic scene types, real utterances, fake utterances, and all utterances in the SceneFake dataset.
Table 7: The results of the fake utterances using different speech enhancement models in terms of PESQ on our SceneFake dataset. “Avg.” denotes the average PESQ of the fake utterances using all speech enhancement models on the corresponding sets. “BeforeSE” denotes the results of the original versions of the fake utterances, which are not enhanced by speech enhancement models. The BeforeSE utterances are mixed with acoustic scenes. “Total” refers to the performance of the models over all SNRfake values.
Table 6 illustrates the statistical distribution of the noisy LA dataset, where #Speakers, #Genuine, #Spoofed, and #Total denote the number of speakers, genuine utterances, spoofed utterances, and all utterances in the noisy LA dataset of ASVspoof 2019.
The description of the 10 acoustic scenes in the DCASE 2022 Challenge is reported in Table 1. The statistics of the LA dataset of ASVspoof 2019 are listed in Table 2, where #Speakers, #Genuine, #Spoofed, and #Total denote the number of speakers, genuine utterances, spoofed utterances, and all utterances in the three sets of the LA dataset.
A
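To make the SNR-controlled mixing behind SNRfake concrete, here is a small, hedged sketch of adding an acoustic-scene waveform to an utterance at a target SNR. It is not the SceneFake generation pipeline (which manipulates scenes using speech enhancement models); the function name and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(speech, scene, snr_db):
    """Add an acoustic-scene waveform to speech at a target SNR in dB.

    Both inputs are 1-D float arrays at the same sampling rate; the scene is
    tiled or truncated to the speech length before scaling.
    """
    scene = np.resize(scene, speech.shape)
    p_speech = np.mean(speech ** 2) + 1e-12
    p_scene = np.mean(scene ** 2) + 1e-12
    # Scale the scene so that 10*log10(p_speech / p_scaled_scene) == snr_db.
    scale = np.sqrt(p_speech / (p_scene * 10.0 ** (snr_db / 10.0)))
    return speech + scale * scene

# Toy usage with synthetic signals (1 s at 16 kHz, assumed).
sr = 16000
speech = 0.1 * np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)
scene = 0.05 * np.random.default_rng(0).standard_normal(sr)
noisy = mix_at_snr(speech, scene, snr_db=5.0)
```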
$+\beta^{*}_{t,t-1}u_{t-1}+\cdots+\beta^{*}_{t,t-q}u_{t-q}.$
We note here that the LS-solution (30) and the “fundamental solution” (7) need not be the same; nonetheless, both can predict the initial + forced response of the linear system (2) after time $t$ s.t. $mt\geq n$, under the observability assumption 3.1.
An immediate consequence of the above result is that the LS-ARMA model (30) also predicts the response of the linear system (2).
This paper describes a new system realization technique for the system identification of linear time-invariant as well as time-varying systems. The system identification method proceeds by modeling the current output of the system using an ARMA model comprising the finite past outputs and inputs. A theory based on linear observability is developed to justify the usage of an ARMA model, which also provides the minimum number of inputs and outputs required from history for the model to fit the data exactly. The method uses the information-state, which simply comprises the finite past inputs and outputs, to realize a state-space model directly from the ARMA parameters. This is shown to be universal for both linear time-invariant and time-varying systems that satisfy the observability assumption. Further, we show that feedback control based on the minimal information state is optimal for the underlying state-space system, i.e., the information state is indeed a lossless representation for the purpose of control. The method is tested on various systems in simulation, and the results show that the models are accurately identified.
The results show that the information-state model can predict the responses accurately. The TV-OKID approach can also predict the response well in the oscillator experiment when the experiments have zero initial conditions, but it suffers from inaccuracy if the experiments have non-zero initial conditions, as seen in Fig. 5b. In the case of the fish and the cart-pole, TV-OKID fails with the observer in the loop. We found that the identified open-loop Markov parameters predict the response well, but the prediction diverges from the truth when the observer is introduced, making the predictions useless. This observation further validates the hypothesis that the ARMA model cannot be explained by an observer-in-the-loop system. Hence, we use only the estimated open-loop Markov parameters, without the observer, to show the performance of the TV-OKID prediction. The last $q$ steps in OKID are ignored, as there is not sufficient data to calculate models for the last few steps, as discussed in Sec. 6.3. There is also the potential for numerical errors to creep in due to the additional steps taken in TV-OKID: determining the time-varying Markov parameters from the time-varying observer Markov parameters, calculating the SVD of the resulting Hankel matrices, and computing the system matrices from these SVDs, as mentioned in [11]. On the other hand, the effort required to identify systems using the information-state approach is negligible compared to other techniques, as the state-space model can be set up by just using the ARMA parameters. More examples can be found in [1], where the authors use the information-state model for optimal feedback control synthesis in complex nonlinear systems.
B
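As a hedged illustration of the ARMA-based identification idea in the row above (regressing the current output on the last $q$ outputs and inputs, i.e., the information state, and using the fitted parameters for prediction), a minimal single-input single-output sketch follows. The helper names, the plain least-squares fit, and the use of only past inputs are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def fit_arma_ls(y, u, q):
    """Least-squares fit of y_t ≈ sum_i a_i*y_{t-i} + sum_i b_i*u_{t-i} (SISO sketch)."""
    rows, targets = [], []
    for t in range(q, len(y)):
        # Regressor: the information state [y_{t-1..t-q}, u_{t-1..t-q}].
        rows.append(np.concatenate([y[t - q:t][::-1], u[t - q:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta                      # stacked [a_1..a_q, b_1..b_q]

def predict_next(theta, y_hist, u_hist, q):
    """One-step prediction from the last q outputs and inputs."""
    z = np.concatenate([y_hist[-q:][::-1], u_hist[-q:][::-1]])
    return float(theta @ z)

# Toy usage: identify a simple second-order difference equation from data.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + 0.5 * u[t - 1]
theta = fit_arma_ls(y, u, q=2)
print(predict_next(theta, y[:100], u[:100], q=2), y[100])   # prediction should match
```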