Dataset fields (type, observed range):
citing_id: string (9–16 chars)
cited_id: string (9–16 chars)
section_title: string (0–2.25k chars)
citation: string (52–442 chars)
text_before_citation: sequence
text_after_citation: sequence
keywords: sequence
citation_intent: string (3 classes)
citing_paper_content: dict
cited_paper_content: dict
1905.06625
1801.10340
V. IMPLEMENTATION
The Infrastructure services consist of the following services: Message Broker: In #REFR , the authors used Lightweight M2M based on the Constrained Application Protocol (CoAP) as the main message transport protocol.
[ "Compared to individual instances of Representation Service, this service has a global knowledge of APs' locations and coverage areas, which then used to correlate with the robot's predicted path to find out the next AP the robot may connect.", "This service outputs recommendations, which includes the robot ID and a list of maximum three possible APs ranked by the recommendation's confidence.", "Knowledge Base Service: This service is responsible for keeping track of recommendations.", "It maintains two connections with (1) the Web-based Interface to display recommendations received from the Path Prediction Service, and (2) the fog infrastructures to form a feedback loop for location updates.", "Web-based Interface: Provides a portal with visualized network of microservices, together with runtime metrics such as health status, memory, cache, system and environment properties, etc. The recommendations are also displayed here." ]
[ "In contrast to this work, we propose to use Message Queue Telemetry Transport (MQTT) as the main protocol for microservices' interactions.", "Compared to CoAP, MQTT has more sophisticated reliability and congestion control mechanisms, which becomes significant when data is exchanged frequently #OTHEREFR .", "In addition, when the packet loss rate is low, MQTT performs better in terms of latency #OTHEREFR .", "However, it should be noted that CoAP, which is based on UDP, is more lightweight compared to a TCP-based protocol like MQTT.", "The Message Broker manages several MQTT message queues: Representation service's queue: Each instance of the Representation Service has a separate queue to receive updates from the Gateway Service; Aggregation queue: delivers messages from the Representation Service to the Path Prediction Service; Knowledge Based queue: delivers recommendations from the Path Prediction Service to the Knowledge Base Service; Data event queue: delivers data update events created by the Gateway Service to other services, thus allowing services to synchronized data about registered robots and access points." ]
[ "Lightweight M2M" ]
method
{ "title": "MAIA: A Microservices-based Architecture for Industrial Data Analytics", "abstract": "Abstract-In recent decades, it has become a significant tendency for industrial manufacturers to adopt decentralization as a new manufacturing paradigm. This enables more efficient operations and facilitates the shift from mass to customized production. At the same time, advances in data analytics give more insights into the production lines, thus improving its overall productivity. The primary objective of this paper is to apply a decentralized architecture to address new challenges in industrial analytics. The main contributions of this work are therefore two-fold: (1) an assessment of the microservices' feasibility in industrial environments, and (2) a microservicesbased architecture for industrial data analytics. Also, a prototype has been developed, analyzed, and evaluated, to provide further practical insights. Initial evaluation results of this prototype underpin the adoption of microservices in industrial analytics with less than 20ms end-to-end processing latency for predicting movement paths for 100 autonomous robots on a commodity hardware server. However, it also identifies several drawbacks of the approach, which is, among others, the complexity in structure, leading to higher resource consumption." }
{ "title": "Cyber-physical microservices: An IoT-based framework for manufacturing systems", "abstract": "Recent advances in ICT enable the evolution of the manufacturing industry to meet the new requirements of the society. Cyber-physical systems, Internet-of-Things (IoT), and Cloud computing, play a key role in the fourth industrial revolution known as Industry 4.0. The microservice architecture has evolved as an alternative to SOA and promises to address many of the challenges in software development. In this paper, we adopt the concept of microservice and describe a framework for manufacturing systems that has the cyber-physical microservice as the key construct. The manufacturing plant processes are defined as compositions of primitive cyber-physical microservices adopting either the orchestration or the choreography pattern. IoT technologies are used for system integration and model-driven engineering is utilized to semiautomate the development process for the industrial engineer, who is not familiar with microservices and IoT. Two case studies demonstrate the feasibility of the proposed approach." }
1904.07414
1111.1055
Discussion
The fact that the spectrum of a graph provides a natural partitioning #REFR aligns with our result that the first few eigenvalues provide sufficient differentiation if the number of communities is low.
[ "In this discussion, as we have done throughout the paper, we will emphasize a distinction between local and global graph structure.", "Global structures include community separation as seen in the stochastic blockmodel, while local structures include the high density of triangles in the Watts-Strogatz model.", "In general, we find that when examining global structure, the adjacency spectral distance and DeltaCon distance both provide good performance.", "When examining community structure in particular, one need not employ the full spectrum when using a spectral distance." ]
[ "When one is interested in both global and local structure, we recommend use of the adjacency spectral distance.", "When the full spectrum is employed, the adjacency spectral distance is effective at differentiating between models even if the primary structural differences occur on the local level (e.g. the Watts-Strogatz graph).", "The use of the entire spectrum here is essential; much of the most important information is contained in the tail of the distribution, and the utility of the adjacency spectral distance decreases significantly when only the dominant eigenvalues are compared.", "It is important to remember that these experiments represent only one way that pairwise graph comparison might be used.", "In particular, we are here comparing a sample to a known population." ]
[ "graph", "eigenvalues" ]
result
{ "title": "Metrics for Graph Comparison: A Practitioner’s Guide", "abstract": "Comparison of graph structure is a ubiquitous task in data analysis and machine learning, with diverse applications in fields such as neuroscience [1], cyber security [2] , social network analysis [3] , and bioinformatics [4], among others. Discovery and comparison of structures such as modular communities, rich clubs, hubs, and trees in data in these fields yields insight into the generative mechanisms and functional properties of the graph. Often, two graphs are compared via a pairwise distance measure, with a small distance indicating structural similarity and vice versa. Common choices include spectral distances (also known as λ distances) and distances based on node affinities (such as DeltaCon [5]). However, there has of yet been no comparative study of the efficacy of these distance measures in discerning between common graph topologies and different structural scales. In this work, we compare commonly used graph metrics and distance measures, and demonstrate their ability to discern between common topological features found in both random graph models and empirical datasets. We put forward a multi-scale picture of graph structure, in which the effect of global and local structure upon the distance measures is considered. We make recommendations on the applicability of different distance measures to empirical graph data problem based on this multi-scale view. Finally, we introduce the Python library NetComp which implements the graph distances used in this work." }
{ "title": "Multiway Spectral Partitioning and Higher-Order Cheeger Inequalities", "abstract": "A basic fact in spectral graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue zero in the Laplacian matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger's inequality and its variants provide an approximate version of the latter fact; they state that a graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero. It has been conjectured that an analogous characterization holds for higher multiplicities: There are k eigenvalues close to zero if and only if the vertex set can be partitioned into k subsets, each defining a sparse cut. We resolve this conjecture positively. Our result provides a theoretical justification for clustering algorithms that use the bottom k eigenvectors to embed the vertices into R k , and then apply geometric considerations to the embedding. We also show that these techniques yield a nearly optimal quantitative connection between the expansion of sets of size ≈ n/k and λ k , the kth smallest eigenvalue of the normalized Laplacian, where n is the number of vertices. In particular, we show that in every graph there are at least k/2 disjoint sets (one of which will have size at most 2n/k), each having expansion at most O( λ k log k). Louis, Raghavendra, Tetali, and Vempala have independently proved a slightly weaker version of this last result. The log k bound is tight, up to constant factors, for the \"noisy hypercube\" graphs." }
1812.10140
1111.1055
C. Multiple Clusters and Higher-order Cheeger Inequalities of MOSC
However, by replacing k-means with a different clustering algorithm, MOSC-GL can derive a theoretical performance guarantee #REFR .
[ "To cluster a network into k > 2 clusters based on mixedorder structures, MOSC-GL and MOSC-RW follow the conventional SC #OTHEREFR .", "Specifically, MOSC-GL treats the first k row-normalised eigenvectors of L X as the embedding of nodes that can be clustered by k-means.", "Similarly, MOSC-RW uses the first k eigenvectors of H as the node embedding to perform k-means.", "Regarding performance guarantee, following #OTHEREFR and #OTHEREFR , MOSC-GL and MOSC-RW do not have performance guarantee with respect to higher-order Cheeger inequalities." ]
[]
[ "different clustering algorithm" ]
background
{ "title": "Mixed-Order Spectral Clustering for Networks", "abstract": "Abstract-Clustering is fundamental for gaining insights from complex networks, and spectral clustering (SC) is a popular approach. Conventional SC focuses on second-order structures (e.g., edges connecting two nodes) without direct consideration of higher-order structures (e.g., triangles and cliques). This has motivated SC extensions that directly consider higher-order structures. However, both approaches are limited to considering a single order. This paper proposes a new Mixed-Order Spectral Clustering (MOSC) approach to model both second-order and third-order structures simultaneously, with two MOSC methods developed based on Graph Laplacian (GL) and Random Walks (RW). MOSC-GL combines edge and triangle adjacency matrices, with theoretical performance guarantee. MOSC-RW combines first-order and second-order random walks for a probabilistic interpretation. We automatically determine the mixing parameter based on cut criteria or triangle density, and construct new structure-aware error metrics for performance evaluation. Experiments on real-world networks show 1) the superior performance of two MOSC methods over existing SC methods, 2) the effectiveness of the mixing parameter determination strategy, and 3) insights offered by the structure-aware error metrics." }
{ "title": "Multiway Spectral Partitioning and Higher-Order Cheeger Inequalities", "abstract": "A basic fact in spectral graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue zero in the Laplacian matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger's inequality and its variants provide an approximate version of the latter fact; they state that a graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero. It has been conjectured that an analogous characterization holds for higher multiplicities: There are k eigenvalues close to zero if and only if the vertex set can be partitioned into k subsets, each defining a sparse cut. We resolve this conjecture positively. Our result provides a theoretical justification for clustering algorithms that use the bottom k eigenvectors to embed the vertices into R k , and then apply geometric considerations to the embedding. We also show that these techniques yield a nearly optimal quantitative connection between the expansion of sets of size ≈ n/k and λ k , the kth smallest eigenvalue of the normalized Laplacian, where n is the number of vertices. In particular, we show that in every graph there are at least k/2 disjoint sets (one of which will have size at most 2n/k), each having expansion at most O( λ k log k). Louis, Raghavendra, Tetali, and Vempala have independently proved a slightly weaker version of this last result. The log k bound is tight, up to constant factors, for the \"noisy hypercube\" graphs." }
1902.10424
1808.00449
Colorization
For example, in the bottom example of Figure 6 the pixel values #REFR plotted for the baseline CNN are in many cases close to 0, and occasionally spike to high values.
[ "This also goes for comparison to the flow-based postprocessing network by Lai et al.", "The transform invariance formulation with α = 0.95 gives the best smoothness, and with a PSNR close to the other regularization settings.", "Examples of the impact of the regularization techniques are demonstrated in Figure 6 .", "The baseline CNN can exhibit large frame to frame differences, which is much less likely after performing the regularized training.", "Also, there is an overall increase in the reconstruction performancewhereas the baseline has a tendency to fail in many of the frames, this is less likely to happen when accounting for the differences between frames in the loss evaluation." ]
[ "This problem is alleviated by the regularization, resulting in both overall better reconstruction and smoother changes between frames." ]
[ "baseline CNN" ]
background
{ "title": "Single-Frame Regularization for Temporally Stable CNNs", "abstract": "Convolutional neural networks (CNNs) can model complicated non-linear relations between images. However, they are notoriously sensitive to small changes in the input. Most CNNs trained to describe image-to-image mappings generate temporally unstable results when applied to video sequences, leading to flickering artifacts and other inconsistencies over time. In order to use CNNs for video material, previous methods have relied on estimating dense frame-to-frame motion information (optical flow) in the training and/or the inference phase, or by exploring recurrent learning structures. We take a different approach to the problem, posing temporal stability as a regularization of the cost function. The regularization is formulated to account for different types of motion that can occur between frames, so that temporally stable CNNs can be trained without the need for video material or expensive motion estimation. The training can be performed as a fine-tuning operation, without architectural modifications of the CNN. Our evaluation shows that the training strategy leads to large improvements in temporal smoothness. Moreover, for small datasets the regularization can help in boosting the generalization performance to a much larger extent than what is possible with naïve augmentation strategies." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1905.02882
1808.00449
Dataset and Experimental Settings
Checking the columns marked as "CombCN" and "Ours", it can be seen that our results outperform CombCN in all mask and dataset combinations, especially on the complex natural dataset DAVIS+VIDEVO #REFR and random masks.
[ "We test our network on two datasets and three types of masks.", "FaceForensics #OTHEREFR is a human face dataset containing 1,004 video clips with near frontal pose and neutral expression changed across frames.", "To fully excavate the potential of our framework, we also test on the DAVIS+VIDEVO dataset #OTHEREFR which has 190 videos and contains a variety of moving objects and motion types. The three types of masks include: Figure 4 .", "Results: The first 2 samples are from FaceForensics #OTHEREFR and the rest are from DAVIS+VIDEVO #OTHEREFR .", "The rows shows different frames in a video, and the column shows the results comparison under different masks." ]
[ "1) Fixed rectangles: the rectangle masks are the same across all frames in a video;", "2) Random rectangles: each frame in a video have rectangle mask of changing size and locations;", "3) Random walker: the masks have random streaks and holes of arbitrary shapes as in #OTHEREFR For the two rectangle masks, we follow the setting in #OTHEREFR which generates masks of size between [0.375l, 0.5l], where l is the frame size.", "In all experiments, we use FlowNet2 #OTHEREFR for online optical flow generation.", "As mentioned, we also pre-train the frame inpainting network H s and the flow inpainting network H c to boost the training process." ]
[ "random masks" ]
result
{ "title": "Frame-Recurrent Video Inpainting by Robust Optical Flow Inference", "abstract": "In this paper, we present a new inpainting framework for recovering missing regions of video frames. Compared with image inpainting, performing this task on video presents new challenges such as how to preserving temporal consistency and spatial details, as well as how to handle arbitrary input video size and length fast and efficiently. Towards this end, we propose a novel deep learning architecture which incorporates ConvLSTM and optical flow for modeling the spatial-temporal consistency in videos. It also saves much computational resource such that our method can handle videos with larger frame size and arbitrary length streamingly in real-time. Furthermore, to generate an accurate optical flow from corrupted frames, we propose a robust flow generation module, where two sources of flows are fed and a flow blending network is trained to fuse them. We conduct extensive experiments to evaluate our method in various scenarios and different datasets, both qualitatively and quantitatively. The experimental results demonstrate the superior of our method compared with the state-of-the-art inpainting approaches." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1905.01639
1808.00449
Recurrence and Memory
We adopt a convolutional LSTM (ConvLSTM) layer and a warping loss as suggested in #REFR .
[ "Our formulation encourages the current output to be conditional to the previous output frame.", "The knowledge from the previous output encourages the traceable features to be kept unchanged, while the untraceable (e.g. occlusion) points to be synthesized.", "This not only helps the output to be consistent along the motion trajectories but also avoids ghosting artifacts at occlusions or motion discontinuities.", "While the recurrent feedback connects the consecutive frames, filling in the large holes requires more long-term (e.g. 5 frames) knowledge.", "At this point, the temporal memory layer can help to connect internal features from different time steps in the long term." ]
[ "In particular, we feed the composite feature F c at the scale 1/8 to the ConvLSTM at every time step." ]
[ "warping loss" ]
method
{ "title": "Deep Video Inpainting", "abstract": "Video inpainting aims to fill spatio-temporal holes with plausible content in a video. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighbor frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to the prior video completion method which relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we applied our framework to video retargeting task, and obtain visually pleasing results." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1912.04950
1808.00449
Experimental Setup
We denote D_LPIPS and E_warp as the means of d_LPIPS and e_warp over all test frames, and use the evaluation code from Lai et al. #REFR to compute them.
[ "The first metric, warping error, quantifies smoothness between output frames by measuring flow-based photometric consistency between consecutive frames in the final prediction o a and o a+1 :", "is the estimated flow between input frames v a and v a+1 ); p indexes the pixels in the frame; and M f a indicates pixels with reliable flow (1 for reliable, 0 for unreliable).", "The flow reliability mask is computed based on flow consistency and motion boundaries as defined by Ruder et al. #OTHEREFR .", "The second metric quantifies the similarity between a frame-wise translated video and the output by computing LPIPS dis-", "where p a and o a respectively are frames obtained via framewise translation and the model to evaluate, and φ LPIPS is a distance between several layers of feature activations extracted from a perceptual distance network θ LPIPS ." ]
[ "D LPIPS is used in blind video consistency to measure adherence to the intended task and video content #OTHEREFR ; however, it has a crucial limitation that highlights the need for a more meaningful metric.", "Specifically, lower is supposed to be better, but an excessively low value indicates that the evaluated method is reproducing the frame-wise translated video instead of resolving the flickering issue that our work tries to address.", "On the other hand, D LPIPS has merit in that a very high value indicates blurriness and/or incongruity with the intended stylization.", "Proposing a new metric is outside the scope of this work; instead, we report D LPIPS for the sake of completeness and conformity with prior work, and preface our analysis with the aforementioned issues.", "Datasets." ]
[ "warp", "test frames" ]
method
{ "title": "HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks", "abstract": "Video-to-video translation for super-resolution, inpainting, style transfer, etc. is more difficult than corresponding image-to-image translation tasks due to the temporal consistency problem that, if left unaddressed, results in distracting flickering effects. Although video models designed from scratch produce temporally consistent results, training them to match the vast visual knowledge captured by image models requires an intractable number of videos. To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning. HyperCon works by translating a synthetic temporally interpolated video frame-wise and then aggregating over temporally localized windows on the interpolated video. It handles both masked and unmasked inputs, enabling support for even more video-to-video tasks than prior image-to-video model transfer techniques. We demonstrate HyperCon on video style transfer and inpainting, where it performs favorably compared to prior state-ofthe-art video consistency and video inpainting methods, all without training on a single stylized or incomplete video." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1912.04950
1808.00449
Experimental Setup
D_LPIPS is used in blind video consistency to measure adherence to the intended task and video content #REFR ; however, it has a crucial limitation that highlights the need for a more meaningful metric.
[ "is the estimated flow between input frames v a and v a+1 ); p indexes the pixels in the frame; and M f a indicates pixels with reliable flow (1 for reliable, 0 for unreliable).", "The flow reliability mask is computed based on flow consistency and motion boundaries as defined by Ruder et al. #OTHEREFR .", "The second metric quantifies the similarity between a frame-wise translated video and the output by computing LPIPS dis-", "where p a and o a respectively are frames obtained via framewise translation and the model to evaluate, and φ LPIPS is a distance between several layers of feature activations extracted from a perceptual distance network θ LPIPS .", "We denote D LPIPS and E warp as the mean of d LPIPS and e warp over all test frames, and use the evaluation code from Lai et al. #OTHEREFR to compute them." ]
[ "Specifically, lower is supposed to be better, but an excessively low value indicates that the evaluated method is reproducing the frame-wise translated video instead of resolving the flickering issue that our work tries to address.", "On the other hand, D LPIPS has merit in that a very high value indicates blurriness and/or incongruity with the intended stylization.", "Proposing a new metric is outside the scope of this work; instead, we report D LPIPS for the sake of completeness and conformity with prior work, and preface our analysis with the aforementioned issues.", "Datasets.", "For evaluation, we use the YouTube-VOS #OTHEREFR and DAVIS #OTHEREFR video datasets, which primarily consist of dynamic outdoor scenes of animals, dancers, bikers, etc." ]
[ "blind video consistency" ]
method
{ "title": "HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks", "abstract": "Video-to-video translation for super-resolution, inpainting, style transfer, etc. is more difficult than corresponding image-to-image translation tasks due to the temporal consistency problem that, if left unaddressed, results in distracting flickering effects. Although video models designed from scratch produce temporally consistent results, training them to match the vast visual knowledge captured by image models requires an intractable number of videos. To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning. HyperCon works by translating a synthetic temporally interpolated video frame-wise and then aggregating over temporally localized windows on the interpolated video. It handles both masked and unmasked inputs, enabling support for even more video-to-video tasks than prior image-to-video model transfer techniques. We demonstrate HyperCon on video style transfer and inpainting, where it performs favorably compared to prior state-ofthe-art video consistency and video inpainting methods, all without training on a single stylized or incomplete video." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1912.10687
1808.00449
Related Works
However, the limitation is that the required dense correspondence information differs for each image processing method, and the approach depends heavily on the quality of that information. Lai et al. #REFR proposed a temporal consistency scheme for learning-based methods.
[ "In this paper, we deal with the synthesis of light field images, especially the synthesis of light field videos.", "Different from conventional methods, no camera position information and no two or more input images are required to synthesize one light field image.", "We also propose a deep learningbased framework that synthesizes a light field video from a monocular video rather than a single image for static objects.", "Video Temporal Consistency Research has been conducted for a long time to solve the temporal inconsistency that occurs when an image processing method is applied to a video. Bonnel et al.", "#OTHEREFR proposed a method that provides temporal consistency to a video over general image processing methods, rather than specific image processing methods." ]
[ "The method adds a long short term memory (LSTM) layer to various learning-based methods, such as colorization, enhancement, style transfer, and intrinsic decomposition of the image, to provide temporal consistency.", "In this paper, we propose a method that provides temporal consistency for light field videos, not monocular videos." ]
[ "dense correspondence information" ]
method
{ "title": "5D Light Field Synthesis from a Monocular Video", "abstract": "Commercially available light field cameras have difficulty in capturing 5D (4D + time) light field videos. They can only capture still light filed images or are excessively expensive for normal users to capture the light field video. To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video. We propose a new synthetic light field video dataset that renders photorealistic scenes using UnrealCV rendering engine because no light field dataset is avaliable. The proposed deep learning framework synthesizes the light field video with a full set (9×9) of sub-aperture images from a normal monocular video. The proposed network consists of three sub-networks, namely, feature extraction, 5D light field video synthesis, and temporal consistency refinement. Experimental results show that our model can successfully synthesize the light field video for synthetic and actual scenes and outperforms the previous frame-byframe methods quantitatively and qualitatively. The synthesized light field can be used for conventional light field applications, namely, depth estimation, viewpoint change, and refocusing." }
{ "title": "Learning Blind Video Temporal Consistency", "abstract": "Style transfer Intrinsic decomposition Fig. 1 : Applications of the proposed method. Our algorithm takes perframe processed videos with serious temporal flickering as inputs (lower-left) and generates temporally stable videos (upper-right) while maintaining perceptual similarity to the processed frames. Our method is blind to the specific image processing algorithm applied to input videos and runs a high frame-rates. This figure contains animated videos, which are best viewed using Adobe Acrobat. Abstract. Applying image processing algorithms independently to each frame of a video often leads to undesired inconsistent results over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to specific image processing algorithms applied to the original video. We train the proposed network by minimizing both shortterm and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition. Extensive objective evaluation and subject study demonstrate that the proposed approach performs favorably against the state-of-the-art methods on various types of videos." }
1902.06359
1808.00093
V. RELATED WORK
However, the proposed hybrid architectures in #REFR rely on a Trusted Third Party (TTP) as an oracle and are thus not completely decentralized.
[ "Recently, off-chain resources of blockchain have been widely studied by researchers to improve the performance of blockchains #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "Among them, #OTHEREFR and #OTHEREFR are most relevant to our work.", "In #OTHEREFR , smart contracts are implemented using hybrid architectures similar to the hybrid-on/off-chain execution model proposed in our work." ]
[ "In #OTHEREFR , a smart contract system named Arbitrum is developed, where smart contracts are designed to be executed off-chain.", "As the system is specially designed for this purpose, it is hard to generalize the system-level design to existing systems such as Ethereum.", "In this paper, instead of treating the use of offchain contracts as a system-level design goal, we consider the hybrid-on/off-chain computation model as an applicationlevel smart contract design pattern and also as a building block for enhancing blockchain scalability and privacy.", "Thus, the proposed approach is a plug-and-play solution that is compatible with existing smart contract systems and their time-tested infrastructure and community.", "In addition, the combination of the proposed approach and other system-level or application-level solutions, such as sharding #OTHEREFR and zero knowledge proof #OTHEREFR , can further enhance the scalability and privacy of the smart contract systems." ]
[ "Trusted Third Party" ]
background
{ "title": "Scalable and Privacy-Preserving Design of On/Off-Chain Smart Contracts", "abstract": "The rise of smart contract systems such as Ethereum has resulted in a proliferation of blockchain-based decentralized applications including applications that store and manage a wide range of data. Current smart contracts are designed to be executed solely by miners and are revealed entirely on-chain, resulting in reduced scalability and privacy. In this paper, we discuss that scalability and privacy of smart contracts can be enhanced by splitting a given contract into an off-chain contract and an on-chain contract. Specifically, functions of the contract that involve high-cost computation or sensitive information can be split and included as the off-chain contract, that is signed and executed by only the interested participants. The proposed approach allows the participants to reach unanimous agreement off-chain when all of them are honest, allowing computing resources of miners to be saved and content of the off-chain contract to be hidden from the public. In case of a dispute caused by any dishonest participants, a signed copy of the offchain contract can be revealed so that a verified instance can be created to make miners enforce the true execution result. Thus, honest participants have the ability to redress and penalize any fraudulent or dishonest behavior, which incentivizes all participants to honestly follow the agreed off-chain contract. We discuss techniques for splitting a contract into a pair of on/off-chain contracts and propose a mechanism to address the challenges of handling dishonest participants in the system. Our implementation and evaluation of the proposed approach using an example smart contract demonstrate the effectiveness of the proposed approach in Ethereum. 7" }
{ "title": "Implementation of Smart Contracts Using Hybrid Architectures with On and Off–Blockchain Components", "abstract": "Decentralised (on-blockchain) and centralised (offblockchain) platforms are available for the implementation of smart contracts. However, none of the two alternatives can individually provide the services and quality of services (QoS) imposed on smart contracts involved in a large class of applications. The reason is that blockchain platforms suffer from scalability, performance, transaction costs and other limitations. Likewise, off-blockchain platforms are afflicted by drawbacks emerging from their dependence on single trusted third parties. We argue that in several applications, hybrid platforms composed from the integration of on and off-blockchain platforms are more adequate. Developers that informatively choose between the three alternatives are likely to implement smart contracts that deliver the expected QoS. Hybrid architectures are largely unexplored. To help cover the gap and as a proof of concept, in this paper we discuss the implementation of smart contracts on hybrid architectures. We show how a smart contract can be split and executed partially on an off-blockchain contract compliance checker and partially on the rinkeby ethereum network. To test the solution, we expose it to sequences of contractual operations generated mechanically by a contract validator tool." }
0812.4963
0704.0608
Introduction
Hong, Simis and Vasconcelos #REFR had identified the ideal A for m = 3 and n ≤ 5, and they proposed a conjectural, inductive procedure for finding a generating set of A when n is arbitrary.
[ "In the present paper, the column degrees of j are ð1; . . . ; 1; nÞ.", "In other words, the entries of one column of j have arbitrary degree n, all of the other entries of j are linear; we say that the ideal I is almost linearly presented.", "In this setting we are able to identify homogeneous generators of the defining ideal A of the Rees ring RðI Þ.", "We can safely assume that n f 2, for otherwise I ¼ ðx; yÞ d and the answer is well known (see, for instance, #OTHEREFR ).", "Incidentally, except when n ¼ 1, the Rees ring RðI Þ is never Cohen-Macaulay." ]
[ "Their conjecture was proved in #OTHEREFR , thus solving the case of arbitrary almost linearly presented almost complete intersection ideals in two variables.", "Whereas the method of #OTHEREFR and #OTHEREFR is based on iterations of 'Jacobian duals' and 'Sylvester determinants', our approach is entirely different and allows closed formulas for all defining equations at once, besides avoiding the need to restrict the number of generators of I .", "To determine the defining ideal A of the Rees ring one often uses the fact that its presentation map P factors through the symmetric algebra SymðI Þ.", "It then remains to determine the kernel of the natural epimorphism SymðI Þ ! RðI Þ; since the defining ideal of SymðI Þ can be described easily.", "On the downside however, SymðI Þ does not have good ring-theoretic properties in general, for instance, it is hardly ever a domain." ]
[ "ideal" ]
background
{ "title": "Rational normal scrolls and the defining equations of Rees algebras", "abstract": "Consider a height two ideal, I , which is minimally generated by m homogeneous forms of degree d in the polynomial ring R ¼ k½x; y. Suppose that one column in the homogeneous presenting matrix j of I has entries of degree n and all of the other entries of j are linear. We identify an explicit generating set for the ideal A which defines the Rees algebra R ¼ R½It; so R ¼ S=A for the polynomial ring S ¼ R½T 1 ; . . . ; T m . We resolve R as an S-module and I s as an R-module, for all powers s. The proof uses the homogeneous coordinate ring, A ¼ S=H, of a rational normal scroll, with H L A. The ideal AA is isomorphic to the n th symbolic power of a height one prime ideal K of A. The ideal K ðnÞ is generated by monomials. Whenever possible, we study A=K ðnÞ in place of A=AA because the generators of K ðnÞ are much less complicated then the generators of AA. We obtain a filtration of K ðnÞ in which the factors are polynomial rings, hypersurface rings, or modules resolved by generalized Eagon-Northcott complexes. The generators of I parameterize an algebraic curve C in projective m À 1 space. The defining equations of the special fiber ring R=ðx; yÞR yield a solution of the implicitization problem for C." }
{ "title": "On the homology of two-dimensional elimination", "abstract": "We study birational maps with empty base locus defined by almost complete intersection ideals. Birationality is shown to be expressed by the equality of two Chern numbers. We provide a relatively effective method of their calculation in terms of certain Hilbert coefficients. In dimension two the structure of the irreducible ideals leads naturally to the calculation of Sylvester determinants via a computer-assisted method. For degree at most 5 we produce the full set of defining equations of the base ideal. The results answer affirmatively some questions raised by D. Cox ([9])." }
0906.1591
0704.0608
Binary Ideals
Here we give a general form of the elimination equation of I up to a power, thus answering several questions raised in #REFR .
[ "In this section we take d = 2 and write R = k[x, y] (instead of the general notation R = k[x 1 , x 2 ]).", "Let I ⊂ R = k[x, y] be an (x, y)-primary ideal generated by three forms of degree n.", "Suppose that I has a minimal free resolution 0 → R", "We will assume throughout the section that the first column of ϕ has degree r, the other degree s ≥ r. We note n = r + s." ]
[]
[ "elimination equation" ]
background
{ "title": "Equations of Almost Complete Intersections", "abstract": "In this paper we examine the role of four Hilbert functions in the determination of the defining relations of the Rees algebra of almost complete intersections of finite colength. Because three of the corresponding modules are Artinian, some of these relationships are very effective, opening up tracks to the determination of the equations and also to processes of going from homologically defined sets of equations to higher degrees ones assembled by resultants." }
{ "title": "On the homology of two-dimensional elimination", "abstract": "We study birational maps with empty base locus defined by almost complete intersection ideals. Birationality is shown to be expressed by the equality of two Chern numbers. We provide a relatively effective method of their calculation in terms of certain Hilbert coefficients. In dimension two the structure of the irreducible ideals leads naturally to the calculation of Sylvester determinants via a computer-assisted method. For degree at most 5 we produce the full set of defining equations of the base ideal. The results answer affirmatively some questions raised by D. Cox ([9])." }
1704.04227
1309.3189
Assumption 2.2
We are interested in strong approximations (mean-square) of (3.1), in the case of super- or sub-linear drift and diffusion coefficients, and cover cases not included in the previous work #REFR .
[ "Let W t,ω : [0, T ] × Ω → R be a one-dimensional Wiener process adapted to the filtration {F t } 0≤t≤T . Consider the following stochastic differential equation (SDE), (3.1)", "where the coefficients a, b : [0, T ] × R → R are measurable functions such that (3.1) has a unique strong solution and x 0 is independent of all {W t } 0≤t≤T . SDE (3.1) has nonautonomous coefficients, i.e. a(t, x), b(t, x) depend explicitly on t.", "To be more precise, we assume the existence of a predictable stochastic process", "and", "SDEs of the form (3.1) have rarely explicit solutions, thus numerical approximations are necessary for simulations of the paths x t (ω), or for approximation of functionals of the form EF (x), where F :" ]
[ "The purpose of this section is to further generalize the semi-discrete (SD) method covering cases of sub-linear diffusion coefficients such as the Cox-Ingersoll-Ross model (CIR) or the Constant Elasticity of Variance model (CEV) (cf.", "[3, (1.2) and (1.4)]), where also an additive discretization is considered.", ", where f 1 , f 2 , g satisfy the following conditions", "for some appropriate 0 < l (we take 0 < l ≤ p/2 in Theorem 3.8) ) the quantity C R depends on R and x ∨ y denotes the maximum of x, y.(By the fact that we want the problem (3.1) to be well-posed and by the conditions on f 1 , f 2 and g we get that f 1 , f 2 , g are bounded on bounded intervals.) ✷ Let the equidistant partition 0 = t 0 < t 1 < ...", "< t N = T and ∆ = T /N. We propose the following semi-discrete numerical scheme" ]
[ "diffusion coefficients", "super-or sub-linear drift" ]
background
{ "title": "A boundary preserving numerical scheme for the Wright-Fisher model", "abstract": "Abstract. We are interested in the numerical approximation of non-linear stochastic differential equations (SDEs) with solution in a certain domain. Our goal is to construct explicit numerical schemes that preserve that structure. We generalize the semi-discrete method Halidias N. and Stamatiou I.S. (2016), On the numerical solution of some non-linear stochastic differential equations using the Semi-Discrete method, Computational Methods in Applied Mathematics,16(1) and propose a numerical scheme, for which we prove a strong convergence result, to a class of SDEs that appears in population dynamics and ion channel dynamics within cardiac and neuronal cells. We furthermore extend our scheme to a multidimensional case." }
{ "title": "On the Numerical Solution of Some Non-Linear Stochastic Differential Equations Using the Semi-Discrete Method", "abstract": "We are interested in the numerical solution of stochastic di erential equations with non-negative solutions. Our goal is to construct explicit numerical schemes that preserve positivity, even for super-linear stochastic di erential equations. It is well known that the usual Euler scheme diverges on super-linear problems and the tamed Euler method does not preserve positivity. In that direction, we use the semi-discrete method that the rst author has proposed in two previous papers. We propose a new numerical scheme for a class of stochastic di erential equations which are super-linear with non-negative solution. The Heston /model appearing in nancial mathematics belongs to this class of stochastic di erential equations. For this model we prove, through numerical experiments, the \"optimal\" order of strong convergence at least / of the semi-discrete method. and The drift coe cient a is the in nitesimal mean of the process x t and the di usion coe cient b is the in nitesimal standard deviation of the process x t . SDEs of the form ( . ) have rarely explicit solutions, thus numerical approximations are necessary for simulations of the paths x t (ω), or for approximation of functionals of the form F(x), where F : C([ , T], ℝ) → ℝ can be for example in the area of nance, the discounted payo of European type derivative. We are interested in strong approximations (mean-square) of ( . ), in the case of super-or sub-linear drift and di usion coe cients. These kinds of numerical schemes have applications in many areas, such as simulating scenarios, ltering or visualizing stochastic dynamics (see for instance [ , Section ] and references therein), they have theoretical interest (they provide fundamental insight for weak-sense schemes), and they generally do not involve simulations over long-time periods or of a signi cant number of trajectories. We present some models that are not linear both in the drift and di usion coe cient: • The following linear drift model was initially proposed for the dynamics of the in ation rate by Cox, Ingersoll and Ross [ , ( )] and is thus named CIR. It is used in the eld of nance as a description of the stochastic volatility procedure in the Heston model [ ], but also belongs to the fundamental family of SDEs that approximate Markov jump processes [ ]. The CIR model is described by the following SDE:" }
1909.03057
1903.10057
INTRODUCTION
RP directly supports the use of supercomputers or is used as a runtime system by third-party workflow or workload management systems #REFR .
[ "Tasks are thus executed within time and space boundaries set by the resource scheduler.", "By implementing multi-level scheduling and late-binding, Pilot systems lower task scheduling overhead, enable higher task execution throughput, and allow greater control over the resources acquired to execute workloads.", "The pilot must interact with and is dependent on system software to manage the task execution.", "RADICAL-Pilot (RP) is a Pilot system that implements the pilot paradigm as outlined in Ref. #OTHEREFR .", "RP is implemented in Python and provides a well defined API and usage modes, and is being used by applications drawn from diverse domains, from earth sciences and biomolecular sciences to high-energy physics." ]
[ "In this paper, we characterize the performance of executing many tasks using RP when it is interfaced with JSM and PRTTE on Summit -a DOE leadership class machine and currently the top ranked supercomputer on the Top 500 list.", "Summit has 4,608 nodes IBM POWER9 processors and each node has 6 NVIDIA Volta V100s, with a theoretical peak performance of approximately 200 petaFLOPS.", "JSM is part of LSF and provides services for starting tasks on compute resources; PRTTE provides the server-side capabilities for a reference implementation of the process management interface for ExaScale (PMIx).", "Specifically, we describe and investigate the baseline performance of the integration of RP #OTHEREFR with JSM and PRRTE.", "We experimentally characterize the task execution rates, various overheads, and resource utilization rates." ]
[ "workload management systems", "third party workflow" ]
method
{ "title": "Characterizing the Performance of Executing Many-tasks on Summit", "abstract": "Many scientific workloads are comprised of many tasks, where each task is an independent simulation or analysis of data. The execution of millions of tasks on heterogeneous HPC platforms requires scalable dynamic resource management and multi-level scheduling. RADICAL-Pilot (RP) -an implementation of the Pilot abstraction, addresses these challenges and serves as an effective runtime system to execute workloads comprised of many tasks. In this paper, we characterize the performance of executing many tasks using RP when interfaced with JSM and PRRTE on Summit: RP is responsible for resource management and task scheduling on acquired resource; JSM or PRRTE enact the placement of launching of scheduled tasks. Our experiments provide lower bounds on the performance of RP when integrated with JSM and PRRTE. Specifically, for workloads comprised of homogeneous single-core, 15 minutes-long tasks we find that: PRRTE scales better than JSM for > O(1000) tasks; PRRTE overheads are negligible; and PRRTE supports optimizations that lower the impact of overheads and enable resource utilization of 63% when executing O(16K), 1-core tasks over 404 compute nodes." }
{ "title": "Middleware Building Blocks for Workflow Systems", "abstract": "Abstract-This paper describes a building blocks approach to the design of scientific workflow systems. We discuss RADICALCybertools as one implementation of the building blocks concept, showing how they are designed and developed in accordance with this approach. Four case studies are presented, discussing how RADICAL-Cybertools are integrated with existing workflow, workload, and general purpose computing systems to support the execution of scientific workflows. This paper offers three main contributions: (i) showing the relevance of the design principles of self-sufficiency, interoperability, composability and extensibility for middleware to support scientific workflows on high performance computing machines; (ii) illustrating a set of building blocks that enable multiple points of integration, which results in design flexibility and functional extensibility, as well as providing a level of \"unification\" in the conceptual reasoning across otherwise very different tools and systems; and (iii) showing how these building blocks have been used to develop and integrate workflow systems." }
1904.03085
1903.10057
Software description
Airflow, Oozie, Azkaban, Spark Streaming, Storm, and Kafka are examples of tools whose design is consistent with the building blocks approach and that have been integrated with RCT #REFR .
[ "EnTK promotes ensembles of tasks to a high-level abstraction, providing a programming interface and execution model specific to ensemble-based applications.", "EnTK is engineered for scale and a diversity of computing platforms and runtime systems, agnostic of the size, type and coupling of the tasks comprising the ensemble.", "RCT are designed to work both individually and as an integrated system, with or without third party systems.", "This requires a \"Building Block\" approach to their design and development, based on applying the traditional notions of modularity at system level.", "The Building Block approach derives from the work on Service-oriented Architecture and its Microservice variants, and the component-based software development approaches where computational and compositional elements are explicitly separated #OTHEREFR ." ]
[]
[ "Spark Streaming", "building blocks approach" ]
method
{ "title": "RADICAL-Cybertools: Middleware Building Blocks for Scalable Science", "abstract": "RADICAL-Cybertools (RCT) are a set of software systems that serve as middleware to develop efficient and effective tools for scientific computing. Specifically, RCT enable executing many-task applications at extreme scale and on a variety of computing infrastructures. RCT are building blocks, designed to work as stand-alone systems, integrated among themselves or integrated with third-party systems. RCT enables innovative science in multiple domains, including but not limited to biophysics, climate science and particle physics, consuming hundreds of millions of core hours. This paper provides an overview of RCT components, their impact, and the architectural principle and software engineering underlying RCT." }
{ "title": "Middleware Building Blocks for Workflow Systems", "abstract": "Abstract-This paper describes a building blocks approach to the design of scientific workflow systems. We discuss RADICALCybertools as one implementation of the building blocks concept, showing how they are designed and developed in accordance with this approach. Four case studies are presented, discussing how RADICAL-Cybertools are integrated with existing workflow, workload, and general purpose computing systems to support the execution of scientific workflows. This paper offers three main contributions: (i) showing the relevance of the design principles of self-sufficiency, interoperability, composability and extensibility for middleware to support scientific workflows on high performance computing machines; (ii) illustrating a set of building blocks that enable multiple points of integration, which results in design flexibility and functional extensibility, as well as providing a level of \"unification\" in the conceptual reasoning across otherwise very different tools and systems; and (iii) showing how these building blocks have been used to develop and integrate workflow systems." }
1108.3845
quant-ph/0108010
Main results
Third, we compute the storage time numerically using the Monte Carlo technique developed by Terhal and DiVincenzo #REFR for simulating quantum dynamics and measurements for non-interacting fermions (see Section 7).
[ "We prove that the required localization condition is satisfied whenever the entries of the orthogonal matrix describing the time evolution of the Majorana modes decay exponentially away from the diagonal with an N -independent localization length ξ 1 , see Lemma 3 in Section 6.3.", "The localization length ξ 1 may diverge in the limit η → 0, as is the case in the standard 1D Anderson model, but it must be upper bounded as ξ 1 = O(η −γ ) for some sufficiently small constant γ > 0.", "Second, we give supporting evidence that the desired scaling of the localization length can be achieved by computing Lyapunov exponents of the one-particle eigenfunctions, see Section 6.2.", "This suggests a scaling ξ s ∼ log (η −1 ) in the limit η → 0 when the ratio µ/η is kept constant.", "We note that the logarithmic divergence of the localization length at the band center is a common feature of systems with disorder in the hopping amplitudes (so called off-diagonal disorder), see e.g. #OTHEREFR ." ]
[ "The running time of our algorithm grows as N 3 /δ 2 , where δ is the precision up to which one needs to estimate the storage fidelity.", "It allows us to compute the storage time for chains with a few hundred sites (up to N = 256) in the regime of strong perturbations 2 , that is, µ ∼ 1 and η = 0 (clean case), and η ∼ µ ∼ 1 (disordered case).", "The simulation shows that in the absence of disorder the storage time grows as a logarithm of the system size:", "This scaling has been recently predicted by Kay #OTHEREFR based on mean-field arguments, see Section 7 for details.", "In the presence of disorder we observe an approximately linear scaling E[T storage ] ∼ N ." ]
[ "quantum dynamics", "non-interacting fermions" ]
method
{ "title": "Disorder-assisted error correction in Majorana chains", "abstract": "It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions -the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. We also simulate the storage process for chains with a few hundred sites. Our numerical results indicate that in the absence of disorder, the storage time grows only as a logarithm of the system size. We provide numerical evidence for the beneficial effect of disorder on storage times and show that suitably chosen pseudorandom potentials can outperform random ones." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1108.3845
quant-ph/0108010
Simulation of the syndrome measurement
For example, applying the algorithm of #REFR to our settings, we can reduce the problem of sampling s from π(s) to a series of simpler tasks: sample a bit s_j from the conditional distribution of s_j given s_1, . . . , s_{j-1}, where j = 1, . . . , N − 1.
[ "Let t be some fixed time and let π(s) be the probability of measuring a syndrome s = (s_1, . . . , s_{N-1}) on the time-evolved state |g(t)⟩.", "Our first goal is to describe an efficient algorithm that allows one to sample s from the distribution π(s).", "An important fact is that both time evolution and the stabilizer measurements belong to a class of operations known as fermionic linear optics for which efficient simulation algorithms have been described by Knill #OTHEREFR as well as Terhal and DiVincenzo #OTHEREFR ." ]
[ "Using the techniques of #OTHEREFR , the conditional probability of, say, s_j = 0 can be computed as a ratio of two determinants representing probabilities of outcomes s_1, . . . , s_{j-1}, 0 and s_1, . . . , s_{j-1}.", "Once this conditional probability is known, the bit s_j can be set by tossing a coin with an appropriate bias.", "Setting the bits of s one by one starting from s_1, the computational cost of generating one full syndrome sample s in this fashion is O(N^4) since this involves the computation of O(N) determinants of matrices of size O(N), see #OTHEREFR for details.", "Here we propose a simplified version of this algorithm in which the computational cost of generating one full syndrome sample is only O(N^3)." ]
[ "π(s" ]
method
{ "title": "Disorder-assisted error correction in Majorana chains", "abstract": "It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions -the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. We also simulate the storage process for chains with a few hundred sites. Our numerical results indicate that in the absence of disorder, the storage time grows only as a logarithm of the system size. We provide numerical evidence for the beneficial effect of disorder on storage times and show that suitably chosen pseudorandom potentials can outperform random ones." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1801.01231
quant-ph/0108010
. (6.5)
Fermionic linear optical computing using only beam splitters and phases is efficiently classically simulatable #REFR .
[ "First of all note that we can obtain a Hadamard gate using a beam splitter and two binary black vertices (also known as X-gates):", "(6.6) Remark 6.2.", "Note that the two instances of an X-gate are used in order to perform the Hadamard on the even sector of two LFMs: without those, the beam splitter would give a Hadamard gate on the odd sector.", "Then, Z-phases with angle ϑ are obtained as follows:", "(6.7)" ]
[ "As mentioned in Section 2, the swap gate allows to recover universal quantum computation, but it is not physically implementable because linear optical setups are restricted to two dimensions.", "The even projector, also known as \"parity gate\" in the literature, has been discussed as a powerful resource for fermionic linear optical computing #OTHEREFR .", "It is known that this gate can be performed using charge measurements on fermions and that this allows universal quantum computation #OTHEREFR .", "We now recover this universality result in our framework: we will see that even projectors on the physical qubits allow to form \"holes\" as in (6.4) at the logical level.", "Our notation exhibits the topological nature of universal quantum computation with fermions. Remark 6.3." ]
[ "Fermionic linear optical" ]
background
{ "title": "A Diagrammatic Axiomatisation of Fermionic Quantum Circuits.", "abstract": "We introduce the fermionic ZW calculus, a string-diagrammatic language for fermionic quantum computing (FQC). After defining a fermionic circuit model, we present the basic components of the calculus, together with their interpretation, and show how the main physical gates of interest in FQC can be represented in our language. We then list our axioms, and derive some additional equations. We prove that the axioms provide a complete equational axiomatisation of the monoidal category whose objects are systems of finitely many local fermionic modes (LFMs), with maps that preserve or reverse the parity of states, and the tensor product as monoidal product. We achieve this through a procedure that rewrites any diagram in a normal form. As an example, we show how the statistics of a fermionic Mach-Zehnder interferometer can be calculated in the diagrammatic language. We conclude by giving a diagrammatic treatment of the dual-rail encoding, a standard method in optical quantum computing used to perform universal quantum computation." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1602.03539
quant-ph/0108010
III. MAIN RESULT: EFFICIENT PI-MO SIMULATION OF MATCHGATE CIRCUITS
More specifically, we show how to extend the result of #REFR to allow efficient classical simulation of matchgate circuits with arbitrary product inputs and measurements of arbitrary subsets of the output in the computational basis (that is, a PI-MO simulation).
[ "The comparisons between fermionic and bosonic linear optics at the ends of Section II C and Section II D seem to suggest that efficient PI-SO and CI-MO simulations of matchgates are possible for fundamentally different reasons-the former is a consequence of fermionic probabilities being described by determinants, whereas the latter seems to be a consequence of the linear rela-tion satisfied by free particles [i.e. Eq.", "(7)], and in fact only the latter seems possible for free bosons.", "In this section, we argue that this apparent difference is not fundamental." ]
[ "We begin by stating the following theorem:", "Theorem 3.", "Let {M n } be a uniform family of (possibly adaptive) quantum circuits composed of poly(n) nearest-neighbour matchgates acting on n qubits, and let the input be an arbitrary n-qubit product state ψ⟩ = ψ 1 ⟩ ψ 2 ⟩ . . . ψ n ⟩.", "Then, there are polynomial-time classical algorithms to simulate the corresponding outcomes in the weak, strong and adaptive sense.", "The first step to prove Theorem 3 is to replace the arbitrary product state ψ⟩ = ψ 1 ⟩ ψ 2 ⟩ . . ." ]
[ "matchgate circuits", "efficient classical simulation" ]
background
{ "title": "Efficient classical simulation of matchgate circuits with generalized inputs and measurements", "abstract": "Matchgates are a restricted set of two-qubit gates known to be classically simulable under particular conditions. Specifically, if a circuit consists only of nearest-neighbour matchgates, an efficient classical simulation is possible if either (i) the input is a computational basis state and the simulation requires computing probabilities of multi-qubit outcomes (including also adaptive measurements), or (ii) if the input is an arbitrary product state, but the output of the circuit consists of a single qubit. In this paper we extend these results to show that matchgates are classically simulable even in the most general combination of these settings, namely, if the inputs are arbitrary product states, if the measurements are over arbitrarily many output qubits, and if adaptive measurements are allowed. This remains true even for arbitrary single-qubit measurements, albeit only in a weaker notion of classical simulation. These results make for an interesting contrast with other restricted models of computation, such as Clifford circuits or (bosonic) linear optics, where the complexity of simulation varies greatly under similar modifications." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
quant-ph/0403031
quant-ph/0108010
Discussion
It is likely that the extension to "fermion-parity preserving" quadratic Hamiltonians which was treated in Ref. #REFR , can be analyzed similarly using Slater determinants.
[ "We have presented an alternative description of the fermionic linear optics computation." ]
[ "We want to close with a few words of caution about the applicability of our results.", "We have indicated that the two-mode measurement that enables quantum computation is \"nondestructive\" and uses projective measurement 'elements'. What happens if we relax these conditions?", "If the measurement is destructive, it means that the modes | κ and | λ are no longer available for further processing.", "The \"tracing out\" of these two modes that this throwing away implies is implemented in second quantization in the following way: the density matrix of the system, after the application of the two-mode projectors discussed above, is changed by the application of two trace-preserving completely positive maps, T κ and T λ . The trace-over-ζ map T ζ is given by 4", "This map leaves the one-mode measurements unchanged; but the two-mode measurements are changed in a very important way." ]
[ "fermion-parity preserving" ]
method
{ "title": "Fermionic Linear Optics Revisited", "abstract": "We provide an alternative view of the efficient classical simulatibility of fermionic linear optics in terms of Slater determinants. We investigate the generic effects of two-mode measurements on the Slater number of fermionic states. We argue that most such measurements are not capable (in conjunction with fermion linear optics) of an efficient exact implementation of universal quantum computation. Our arguments do not apply to the two-mode parity measurement, for which exact quantum computation becomes possible, see [1] ." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1004.3791
quant-ph/0108010
Why Majorana fermion codes?
The spectrum and properties of Kitaev's model, as for any other quadratic fermion Hamiltonian in the theory of topological insulators, are efficiently computable, and quantum circuits which employ only non-interacting fermion Hamiltonians and simple fermionic measurements are efficiently simulatable classically, see #REFR .
[ "The logical Pauli operators for a qubit encoded into |0⟩ and |1⟩ can be chosen as X̄ = c_1, Ȳ = −c_{2L}, and Z̄ = i c_1 c_{2L}.", "Note that two of the logical operators, c_1 and c_{2L}, are of odd weight and hence would require the coherent creation/annihilation of a single fermion, which is prohibited by superselection.", "An essential part of the model is that the only even-weight logical operator c_1 c_{2L} is very non-local.", "It is natural to assume that elementary perturbations to the Hamiltonian and errors can be represented by local even-weight Majorana fermion operators.", "Hence in a perturbative analysis such as the Schrieffer-Wolff perturbation theory, the first contributions that split the energy degeneracy between |0⟩ and |1⟩ are expected to occur in O(L)th order, implying that the splitting in degeneracy between |0⟩ and |1⟩ is exponentially small in L." ]
[ "Hence, if we are serious about using fermionic systems to robustly store and manipulate quantum information (see e.g.", "#OTHEREFR ), we will need some source of interaction to obtain quantum universality (see e.g. #OTHEREFR ). Let us mention that generalizations of the Hamiltonian Eq.", "(1) to interacting fermions have been recently considered by Fidkowski and Kitaev #OTHEREFR to study the effect of interactions on the classification of 1D topological insulators.", "The toy model, Eq.", "(1), demonstrates that fermionic parity conservation provides an alternative protection mechanism for the encoded qubit unrelated to topological quantum order." ]
[ "non-interacting fermion Hamiltonians" ]
background
{ "title": "Majorana Fermion Codes", "abstract": "We initiate the study of Majorana fermion codes. These codes can be viewed as extensions of Kitaev's 1D model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of Majorana fermion codes (MFCs) is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed which combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1508.04099
quant-ph/0108010
Comparison with formal fermionic model
Analogue models preserving the number of fermions were studied in #REFR . The same d-dimensional unitary group Eq.
[ "After further application to each multiplierÛâ † j kÛ −1 and expansion as sums using Eq.", "(27) The permanent complexity is more essential in expressions for \"transition\" amplitudes (and probabilities) between two Fock states", "Basic example with consequent indexes in k and j corresponds to Eq. (9).", "For calculation of probabilities p kj a phase |θÛ | = 1 from Eq. (38) should be omitted.", "An alternative consideration with commutative polynomials resembling an abstract boson model discussed earlier in Sec. 2 may be found in #OTHEREFR ." ]
[ "(31) again can handle the evolution, but determinants are used instead of permanents and expressions for fermionic amplitudes may be efficiently evaluated #OTHEREFR . The analogue of Eq. (39) in fermionic case directly coincides with determinant.", "So, for fermions multi-mode measurement amplitudes also can be efficiently evaluated due to absence of problems with permanent calculation discussed earlier for bosons." ]
[ "fermions" ]
background
{ "title": "Permanents, Bosons and Linear Optics", "abstract": "Abstract. Expressions with permanents in quantum processes with bosons deserved recently certain attention. A difference between a pair of relevant models is discussed in presented work. The second model has certain resemblance with matchgate circuits also known as \"fermionic linear optics\" and effectively simulated on classical computer. The possibility of effective classical computations of average particle numbers in singlemode measurement for bosonic linear optical networks is treated using the analogy." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1005.1143
quant-ph/0108010
Introduction
Employing mappings between spin-1/2 systems and fermions, matchgate circuits further describe the dynamics of all noninteracting fermionic systems #REFR .
[ "Of particular interest, also in recent work, is the class of quantum processes generated by matchgates #OTHEREFR Cai & Choudhary 2006a,b; #OTHEREFR .", "The latter is a class of unitary two-qubit operations that are defined by certain algebraic constraints.", "The theory of matchgates is an instance of a research area that displays strong connections to both physics and computer science #OTHEREFR *maarten.vandennest@mpq.mpg.de #OTHEREFR Cai & Choudhary 2006a,b; #OTHEREFR .", "In the study of strongly correlated systems, for example, the dynamics of an important class of one-dimensional quantum systems such as the XY model are modelled by matchgate circuits, i.e.", "for such Hamiltonians H one can construct a poly-size matchgate circuit C t , such that C t = e itH , for any time t (e.g. #OTHEREFR ." ]
[ "In the theory of quantum computation, matchgates are of particular interest as they provide a key example of the class of non-trivial quantum circuits that cannot offer any speed-up over classical computers (for example, in spite of the complex entangled states such circuits may generate ; #OTHEREFR .", "In addition, matchgate computations were recently found to be equivalent to space-bounded universal quantum computation #OTHEREFR .", "In classical computer science, finally, matchgates occur in various studies related to, for example the theory of holographic algorithms #OTHEREFR Cai & Choudhary 2006a,b) .", "The aim of the present paper is to characterize the computational power of matchgate circuits.", "We will in particular study which Boolean functions 1 can be computed with such circuits." ]
[ "matchgate circuits", "noninteracting fermionic systems" ]
background
{ "title": "Quantum matchgate computations and linear threshold gates", "abstract": "The theory of matchgates is of interest in various areas in physics and computer science. Matchgates occur, for example, in the study of fermions and spin chains, in the theory of holographic algorithms and in several recent works in quantum computation. In this paper, we completely characterize the class of Boolean functions computable by unitary two-qubit matchgate circuits with some probability of success. We show that this class precisely coincides with that of the linear threshold gates. The latter is a fundamental family that appears in several fields, such as the study of neural networks. Using the above characterization, we further show that the power of matchgate circuits is surprisingly trivial in those cases where the computation is to succeed with high probability. In particular, the only functions that are matchgate-computable with success probability greater than 3/4 are functions depending on only a single bit of the input." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1308.1463
quant-ph/0108010
III. CLASSICAL SIMULATION OF MATCHGATES ON THE PATH AND CYCLE
These two-qubit Hamiltonians are precisely the generators of the group of nearest-neighbor matchgates #REFR .
[ "We begin by defining the Jordan-Wigner operators #OTHEREFR acting on n qubits:", "for j ∈ {1, . . .", ", n}, where X i , Y i , Z i denote the Pauli X, Y , and Z operators, respectively, acting on qubit i. Using this transformation, we can write", "for k ∈ {1, . . . , n} and", "for k ∈ {1, . . . , n − 1}." ]
[ "Suppose that the circuit being simulated has an initial product state input |ψ = |ψ 1 |ψ 2 . . .", "|ψ n , a sequence of nearest-neighbor matchgates, and a final measurement in the computational basis.", "To simulate the final measurement of qubit k, it suffices to calculate of the expectation value Z k = −i c 2k−1 c 2k = −i ψ| U † c 2k−1 c 2k U |ψ , where U is the unitary corresponding to the action of the matchgate circuit.", "To show that this can be calculated efficiently, we invoke the following (cf.", "#OTHEREFR , as stated in #OTHEREFR ): Theorem 2." ]
[ "two-qubit Hamiltonians" ]
background
{ "title": "The computational power of matchgates and the XY interaction on arbitrary graphs", "abstract": "Matchgates are a restricted set of two-qubit gates known to be classically simulable when acting on nearest-neighbor qubits on a path, but universal for quantum computation when the qubits are arranged on certain other graphs. Here we characterize the power of matchgates acting on arbitrary graphs. Specifically, we show that they are universal on any connected graph other than a path or a cycle, and that they are classically simulable on a cycle. We also prove the same dichotomy for the XY interaction, a proper subset of matchgates related to some implementations of quantum computing." }
{ "title": "Classical simulation of noninteracting-fermion quantum circuits", "abstract": "We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant [1] corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits [2] ." }
1309.1842
1309.1926
Chordless graphs
Most remarkably, since #REFR deals only with the case ∆ = 3, any cutset of two non-adjacent vertices actually determines a cutset of two non-adjacent edges.
[ "Both edge-colouring and total-colouring problems are NP-complete problems when restricted to unichord-free graphs, as proved by Machado, de Figueiredo and Vušković #OTHEREFR and by Machado and de Figueiredo #OTHEREFR ; hence, it is of interest to determine subclasses of unichordfree graphs for which edge-colouring and total-colouring are polynomial.", "Our main result is the following.", "Note that the edge-colouring of chordless graphs with maximum degree 3 was already established in #OTHEREFR .", "Theorem 1 relies on the decomposition theorem from #OTHEREFR .", "We emphasize, however, that there are differences between the proof of #OTHEREFR and ours." ]
[ "Such an edge-cutset is used to construct a natural induction on the decomposition blocks.", "Our proof uses a different strategy based on the existence of an extreme decomposition tree, in which one of the decomposition blocks is 2-sparse.", "This leads to our third and main motivation for this work: to understand how such kind of decomposition results, which are classically applied to the design of vertex colouring algorithms, can be useful in the development of algorithms for other colouring problems, in particular edge-colouring #OTHEREFR and total-colouring #OTHEREFR -the present work is successful in the sense that our chosen class of chordless graphs showed to be fruitful for the development of polynomial-time edge-colouring and total-colouring algorithms.", "Section 2 reviews the decomposition result for chordless graphs established in #OTHEREFR .", "Section 3 gives several results for a subclass of chordless graphs, the so-called 2-sparse graphs. Section 4 gives the proof of Theorem 1." ]
[ "two non-adjacent edges", "two non-adjacent vertices" ]
background
{ "title": "Edge-colouring and total-colouring chordless graphs", "abstract": "A graph G is chordless if no cycle in G has a chord. In the present work we investigate the chromatic index and total chromatic number of chordless graphs. We describe a known decomposition result for chordless graphs and use it to establish that every chordless graph of maximum degree ∆ ≥ 3 has chromatic index ∆ and total chromatic number ∆+1. The proofs are algorithmic in the sense that we actually output an optimal colouring of a graph instance in polynomial time." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1602.02916
1309.1926
Decomposition Theorem
As explained in the Introduction, the proof of Theorem 4.1 closely follows the proof of the decomposition theorem for ISK4-free graphs from #REFR , but the proof of our theorem is easier because we restrict ourselves to the wheel-free case.
[ "In this section, we state a decomposition theorem for {ISK4,wheel}-free trigraphs (see Theorem 4.1 below), and then we derive an \"extreme\" decomposition theorem for this class of graphs, which states (roughly) that every {ISK4,wheel}-free trigraph is either \"basic\" or admits a \"decomposition\" such that one of the \"blocks of decomposition\" is basic (see Theorem 4.8 and Corollary 4.9).", "Here, we state Theorem 4.1 without proof, but the interested reader can find a complete proof in #OTHEREFR ." ]
[ "Interestingly, the fact that we work with trigraphs rather than graphs does not make the proof significantly harder. Theorem 4.1 #OTHEREFR Let G be an {ISK4,wheel}-free trigraph. Then at least one of the following holds:" ]
[ "ISK4-free graphs" ]
background
{ "title": "Stable Sets in {ISK4,wheel}-Free Graphs", "abstract": "An ISK4 in a graph G is an induced subgraph of G that is isomorphic to a subdivision of K 4 (the complete graph on four vertices). A wheel is a graph that consists of a chordless cycle, together with a vertex that has at least three neighbors in the cycle. A graph is {ISK4,wheel}-free if it has no ISK4 and does not contain a wheel as an induced subgraph. We give an O(|V (G)| 7 )-time algorithm to compute the maximum weight of a stable set in an input weighted {ISK4,wheel}-free graph G with non-negative integer weights." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1602.02406
1309.1926
Introduction
We now wish to state the decomposition theorem for {ISK4,wheel}-free graphs from #REFR , but we first need a few definitions.
[ "An ISK4 in a graph G is an induced subgraph of G that is isomorphic to a subdivision of K 4 (the complete graph on four vertices).", "A wheel is a graph that consists of a chordless cycle, together with a vertex that has at least three neighbors in the cycle.", "Lévêque, Maffray, and Trotignon #OTHEREFR proved a decomposition theorem for ISK4-free graphs and then derived a decomposition theorem for {ISK4,wheel}-free graphs as a corollary.", "Here, we are interested in a class that generalizes the class of {ISK4,wheel}-free graphs, namely, the class of {ISK4,wheel}-free \"trigraphs.\" Trigraphs (originally introduced by Chudnovsky #OTHEREFR in the context of Berge graphs) are a generalization of graphs in which certain pairs of vertices may have \"undetermined\" adjacency (one can think of such pairs as \"optional edges\").", "Every graph can be thought of as a trigraph: a graph is simply a trigraph with no \"optional edges.\" Trigraphs and related concepts are formally defined in Section 3." ]
[ "A graph is series-parallel if it does not contain any subdivision of K 4 as a (not necessarily induced) subgraph.", "The line graph of a graph H, denoted by L(H), is the graph whose vertices are the edges of H, and in which two vertices (i.e., edges of H) are adjacent if they share an endpoint in H.", "A graph is chordless if all its cycles are induced.", "If H is an induced subgraph of a graph G and v ∈ V (G) V (H), then the attachment of v over H in G is the set of all neighbors of v in V (H).", "If S is either a set of vertices or an induced subgraph of G V (H), then the attachment of S over H in G is the set of all vertices of H that are adjacent to at least one vertex of S. A square is a cycle of length four." ]
[ "ISK4,wheel}-free" ]
background
{ "title": "A decomposition theorem for {ISK4,wheel}-free trigraphs", "abstract": "An ISK4 in a graph G is an induced subgraph of G that is isomorphic to a subdivision of K 4 (the complete graph on four vertices). A wheel is a graph that consists of a chordless cycle, together with a vertex that has at least three neighbors in the cycle. A graph is {ISK4,wheel}-free if it has no ISK4 and does not contain a wheel as an induced subgraph. A \"trigraph\" is a generalization of a graph in which some pairs of vertices have \"undetermined\" adjacency. We prove a decomposition theorem for {ISK4,wheel}-free trigraphs. Our proof closely follows the proof of a decomposition theorem for ISK4-free graphs due to Lévêque, Maffray, and Trotignon (On graphs with no induced subdivision of K 4 . J. Combin. Theory Ser. B, 102(4): 2012)." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1602.02406
1309.1926
Cyclically 3-connected graphs
In this section, we state a few lemmas proven in #REFR , but first, we need some definitions.
[]
[ "Given a graph G, a vertex u ∈ V (G), and a set X ⊆ V (G) {u}, we say that u is complete (respectively: anti-complete) to X in G provided that u is adjacent (respectively: non-adjacent) to every vertex of X in G.", "Given a graph G and disjoint sets X, Y ⊆ V (G), we say that X is complete (respectively: anti-complete) to Y in G provided that every vertex of X is complete (respectively: anti-complete) to Y in G.", "A graph H is cyclically 3-connected if it is 2-connected, is not a cycle, and admits no cyclic 2-separation.", "Note that a cyclic 2-separation of any graph is proper.", "A theta is any subdivision of the complete bipartite graph K 2,3 ." ]
[ "lemmas" ]
background
{ "title": "A decomposition theorem for {ISK4,wheel}-free trigraphs", "abstract": "An ISK4 in a graph G is an induced subgraph of G that is isomorphic to a subdivision of K 4 (the complete graph on four vertices). A wheel is a graph that consists of a chordless cycle, together with a vertex that has at least three neighbors in the cycle. A graph is {ISK4,wheel}-free if it has no ISK4 and does not contain a wheel as an induced subgraph. A \"trigraph\" is a generalization of a graph in which some pairs of vertices have \"undetermined\" adjacency. We prove a decomposition theorem for {ISK4,wheel}-free trigraphs. Our proof closely follows the proof of a decomposition theorem for ISK4-free graphs due to Lévêque, Maffray, and Trotignon (On graphs with no induced subdivision of K 4 . J. Combin. Theory Ser. B, 102(4): 2012)." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1602.02406
1309.1926
Cyclically 3-connected graphs
The five lemmas below are Lemmas 4.3, 4.5, 4.6, 4.7, and 4.8 from #REFR , respectively. Lemma 2.1.
[ "As usual, if H 1 and H 2 are graphs, we denote by H 1 ∪ H 2 the graph whose vertex set is V (H 1 ) ∪ V (H 2 ) and whose edge set is E(H 1 ) ∪ E(H 2 ).", "The length of a path is the number of edges that it contains.", "A branch vertex in a graph G is a vertex of degree at least three.", "A branch in a graph G is an induced path P of length at least one whose endpoints are branch vertices of G and all of whose interior vertices are of degree two in G.", "We now state the lemmas from #OTHEREFR that we need." ]
[ "#OTHEREFR Let H be a cyclically 3-connected graph, let a and b be two branch vertices of H, and let P 1 , P 2 , and P 3 be three induced paths of H whose ends are a and b. Then one of the following holds:", "• P 1 , P 2 , P 3 are branches of H of length at least two and H = P 1 ∪P 2 ∪P #OTHEREFR (so H is a theta);", "• there exist distinct indices i, j ∈ {1, 2, 3} and a path S of H with one end in the interior of P i and the other end in the interior of P j , such that no interior vertex of S belongs to V (P 1 ∪ P 2 ∪ P 3 ), and such that" ]
[ "five lemmas" ]
background
{ "title": "A decomposition theorem for {ISK4,wheel}-free trigraphs", "abstract": "An ISK4 in a graph G is an induced subgraph of G that is isomorphic to a subdivision of K 4 (the complete graph on four vertices). A wheel is a graph that consists of a chordless cycle, together with a vertex that has at least three neighbors in the cycle. A graph is {ISK4,wheel}-free if it has no ISK4 and does not contain a wheel as an induced subgraph. A \"trigraph\" is a generalization of a graph in which some pairs of vertices have \"undetermined\" adjacency. We prove a decomposition theorem for {ISK4,wheel}-free trigraphs. Our proof closely follows the proof of a decomposition theorem for ISK4-free graphs due to Lévêque, Maffray, and Trotignon (On graphs with no induced subdivision of K 4 . J. Combin. Theory Ser. B, 102(4): 2012)." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1901.04170
1309.1926
On the other hand, Scott proved that the class of graphs with no induced subdivision of K 4 has bounded chromatic number (see #REFR ).
[ "We say that a graph G contains an induced subdivision of H if G contains a subdivision of H as an induced subgraph.", "A class of graphs F is said to be χ-bounded if there is a function f such that for any graph G ∈ F , χ(G) f (ω(G)), where χ(G) and ω(G) stand for the chromatic number and the clique number of G, respectively.", "Scott #OTHEREFR conjectured that for any graph H, the class of graphs without induced subdivisions of H is χ-bounded, and proved it when H is a tree. But Scott's conjecture was disproved in #OTHEREFR .", "Finding which graphs H satisfy the assumption of Scott's conjecture remains a fascinating question.", "It was proved in #OTHEREFR that every graph H obtained from the complete graph K 4 by subdividing at least 4 of the 6 edges once (in such a way that the non-subdivided edges, if any, are non-incident), is a counterexample to Scott's conjecture." ]
[ "Le #OTHEREFR proved that every graph in this class has chromatic number at most 24. If triangles are also excluded, Chudnovsky et al.", "#OTHEREFR proved that the chromatic number is at most 3.", "In this paper, we extend the list of graphs known to satisfy Scott's conjecture. Let K + 4", "be the 5-vertex graph obtained from K 4 by subdividing one edge precisely once.", "Theorem 1." ]
[ "chromatic number" ]
background
{ "title": "Coloring graphs with no induced subdivision of $K_4^+$", "abstract": "Abstract. Let K + 4 be the 5-vertex graph obtained from K 4 , the complete graph on four vertices, by subdividing one edge precisely once (i.e. by replacing one edge by a path on three vertices). We prove that if the chromatic number of some graph G is much larger than its clique number, then G contains a subdivision of K + 4 as an induced subgraph." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
1611.04279
1309.1926
Preliminaries
We use in this paper some decomposition theorems from #REFR : Reducing a flat path P of length at least 2 means deleting its interior and add an edge between its two ends.
[ "C 4 ) in G with u 1 , u 2 , u 3 , u 4 in this order along the square.", "A link of S is a path P of G with ends p, p ′ such that either p = p ′ and", "and N S (p ′ ) = {u 2 , u 3 }, and no interior vertex of P has a neighbor in S.", "A rich square is a graph K that contains a square S as an induced subgraph such that K \\ S has at least two components and every component of K \\ S is a link of S.", "For example, K 2,2,2 is a rich square (it is the smallest one)." ]
[ "The following lemma shows that a graph remains ISK4-free after reducing a flat path: Lemma 2.3 (see Lemma 11.1 in #OTHEREFR ). Let G be an ISK4-free graph.", "Let P be a flat path of length at least 2 in G and G ′ be the graph obtained from G by reducing P . Then G ′ is ISK4-free.", "Proof.", "Let e be the edge of G ′ that results from the reduction of P . Suppose that G ′ contains an ISK4 H.", "Then H must contain e, for otherwise H is an ISK4 in G." ]
[ "edge", "decomposition" ]
method
{ "title": "Chromatic number of ISK4-free graphs", "abstract": "A graph G is said to be ISK4-free if it does not contain any subdivision of K 4 as an induced subgraph. In this paper, we propose new upper bounds for chromatic number of ISK4-free graphs and {ISK4, triangle}-free graphs." }
{ "title": "On graphs with no induced subdivision of $K_4$", "abstract": "We prove a decomposition theorem for graphs that do not contain a subdivision of K 4 as an induced subgraph where K 4 is the complete graph on four vertices. We obtain also a structure theorem for the class C of graphs that contain neither a subdivision of K 4 nor a wheel as an induced subgraph, where a wheel is a cycle on at least four vertices together with a vertex that has at least three neighbors on the cycle. Our structure theorem is used to prove that every graph in C is 3-colorable and entails a polynomial-time recognition algorithm for membership in C. As an intermediate result, we prove a structure theorem for the graphs whose cycles are all chordless." }
0901.2915
math/0503448
Introduction
However, these works differ from #REFR in that they are based on residuation theory and transfer series techniques.
[ "In this paper, we show that some of the main results of the geometric approach do carry over to the max-plus case.", "A first work in this direction was developed by Katz [Kat07] , who studied the (A, B)-invariant spaces of max-plus linear systems providing solutions to some control problems.", "The max-plus analogue of the disturbance decoupling problem has been studied by Lhommeau et al.", "#OTHEREFR making use of invariant sets in the spirit of the classical geometric approach.", "More precisely, principal ideal invariant sets were considered, which is an elegant solution to the algorithmic issues, leading to effective algorithms at the price of a restrictive assumption." ]
[ "The present paper is devoted to studying the max-plus analogues of conditioned and controlled invariance and the duality between them.", "In the classical linear system theory, conditioned invariant spaces are defined in terms of the kernel of the output matrix.", "In the semiring case, the usual definition of the kernel of a matrix is not pertinent because it is usually trivial.", "In their places, we consider a natural extension of kernels, the congruences, which are equivalence relations with a semimodule structure (see #OTHEREFR ).", "Instead of considering, for instance, situations in which the perturbed state x ′ of the system is the sum of the unperturbed state x and of a noise w, we require the states x and x ′ to belong to the same equivalence class modulo a relation (congruence) which represents the perturbation." ]
[ "residuation theory" ]
background
{ "title": "Duality between invariant spaces for max-plus linear discrete event systems", "abstract": "Abstract. We extend the notions of conditioned and controlled invariant spaces to linear dynamical systems over the max-plus or tropical semiring. We establish a duality theorem relating both notions, which we use to construct dynamic observers. These are useful in situations in which some of the system coefficients may vary within certain intervals. The results are illustrated by an application to a manufacturing system." }
{ "title": "Max-Plus $(A,B)$-Invariant Spaces and Control of Timed Discrete-Event Systems", "abstract": "The concept of ( )-invariant subspace (or controlled invariant) of a linear dynamical system is extended to linear systems over the max-plus semiring. Although this extension presents several difficulties, which are similar to those encountered in the same kind of extension to linear dynamical systems over rings, it appears capable of providing solutions to many control problems like in the cases of linear systems over fields or rings. Sufficient conditions are given for computing the maximal ( )-invariant subspace contained in a given space and the existence of linear state feedbacks is discussed. An application to the study of transportation networks which evolve according to a timetable is considered. Index Terms-Discrete-event systems (DESs), geometric control, invariant spaces, max-plus algebra." }
0901.2915
math/0503448
Lemma 2. The intersection of (C, A)-conditioned invariant congruences is a (C, A)-conditioned invariant congruence.
For this property to hold true, in Definition 2 the semimodule X ⊕ B must be replaced by X ⊖ B := {z ∈ R^n_max | ∃b ∈ B, z ⊕ b ∈ X } (see #REFR for details).
[ "where B := Im B and X ⊕ B :", "Remark 1.", "From a dynamical point of view, the interpretation of (A, B)-controlled invariance differs from the classical one. For linear dynamical systems over fields of the form", "where x(k) is the state, u(k) is the control, and A and B are matrices of suitable dimension, it can be shown (see #OTHEREFR ) that X is (A, B)-controlled invariant if, and only if, any trajectory of (6) starting in X can be kept inside X by a suitable choice of the control.", "However, due to the non-invertibility of addition, this is no longer true in the max-plus case." ]
[ "The proof of the following simple lemma, which is dual of Lemma 2, is left to the reader.", "Lemma 3.", "The (max-plus) sum of (A, B)-controlled invariant semimodules is (A, B)-controlled invariant.", "By Lemma 3 the set of all (A, B)-controlled invariant semimodules contained in a given semimodule K ⊂ R n max , which will be denoted by M (A, B, K), is an upper semilattice with respect to ⊂ and ⊕.", "In this case, M (A, B, K) admits a biggest element, the maximal (A, B)-controlled invariant semimodule contained in K, which will be denoted by K * (A, B)." ]
[ "semimodule" ]
background
{ "title": "Duality between invariant spaces for max-plus linear discrete event systems", "abstract": "Abstract. We extend the notions of conditioned and controlled invariant spaces to linear dynamical systems over the max-plus or tropical semiring. We establish a duality theorem relating both notions, which we use to construct dynamic observers. These are useful in situations in which some of the system coefficients may vary within certain intervals. The results are illustrated by an application to a manufacturing system." }
{ "title": "Max-Plus $(A,B)$-Invariant Spaces and Control of Timed Discrete-Event Systems", "abstract": "The concept of ( )-invariant subspace (or controlled invariant) of a linear dynamical system is extended to linear systems over the max-plus semiring. Although this extension presents several difficulties, which are similar to those encountered in the same kind of extension to linear dynamical systems over rings, it appears capable of providing solutions to many control problems like in the cases of linear systems over fields or rings. Sufficient conditions are given for computing the maximal ( )-invariant subspace contained in a given space and the existence of linear state feedbacks is discussed. An application to the study of transportation networks which evolve according to a timetable is considered. Index Terms-Discrete-event systems (DESs), geometric control, invariant spaces, max-plus algebra." }
1207.7040
1207.0892
Introduction
The running time of the construction of #REFR was not analyzed; we remark that a naive implementation requires quadratic time.
[ "Specifically, Elkin and Solomon #OTHEREFR showed that one can build in O(n·log n) time a (1+ǫ)-spanner with degree O(ρ), diameter O(log ρ n+α(ρ)) and lightness O(ρ · log ρ n), where ρ ≥ 2 is an integer parameter and α is the inverse Ackermann function. Due to lower bounds of Dinitz et al.", "#OTHEREFR , this tradeoff is tight (up to constant factors) in the entire range.", "Later, Chan et al.", "#OTHEREFR provided a simpler proof for Arya et al.'s conjecture. Moreover, they strengthened their construction to be fault-tolerant (FT).", "#OTHEREFR Specifically, they showed that there exists a k-FT (1 + ǫ)-spanner with degree O(k 2 ), diameter O(log n) and lightness O(k 3 · log n)." ]
[ "In this work we improve the results of Chan et al. #OTHEREFR , using a simpler proof.", "Specifically, we present a simple proof which shows that a k-FT (1 + ǫ)-spanner with degree O(k 2 ), diameter O(log n) and lightness O(k 2 · log n) can be built in O(n · (log n + k 2 )) time, for any integer 0 ≤ k ≤ n − 2.", "Similarly to the constructions of #OTHEREFR and #OTHEREFR , our construction applies to arbitrary doubling metrics.", "However, in contrast to the construction of Elkin and Solomon #OTHEREFR , our construction fails to provide a complete (and tight) tradeoff between the involved parameters. The construction of Chan et al. #OTHEREFR has this drawback too.", "For random point sets in R d , where d ≥ 2 is an integer constant, we \"shave\" a factor of log n from the lightness bound." ]
[ "running time", "naive implementation" ]
background
{ "title": "Fault-Tolerant Spanners for Doubling Metrics: Better and Simpler", "abstract": "In STOC'95 Arya et al." }
{ "title": "Incubators vs Zombies: Fault-Tolerant, Short, Thin and Lanky Spanners for Doubling Metrics", "abstract": "Recently Elkin and Solomon gave a construction of spanners for doubling metrics that has constant maximum degree, hop-diameter O(log n) and lightness O(log n) (i.e., weight O(log n)·w(MST)). This resolves a long standing conjecture proposed by Arya et al. in a seminal STOC 1995 paper. However, Elkin and Solomon's spanner construction is extremely complicated; we offer a simple alternative construction that is very intuitive and is based on the standard technique of net tree with cross edges. Indeed, our approach can be readily applied to our previous construction of k-fault tolerant spanners (ICALP 2012) to achieve k-fault tolerance, maximum degree O(k 2 ), hop-diameter O(log n) and lightness O(k 3 log n). A finite metric space (X, d) with n = |X| can be represented by a complete graph G = (X, E), where the edge weight w(e) on an edge e = {x, y} is d(x, y). A t-spanner of X, is a weighted subgraph H = (X, E ) of G that preserves all pairwise distance within a factor of t, i.e., d H (x, y) ≤ t · d(x, y) for all x, y ∈ X, where d H (x, y) denotes the shortest-path distance between x and y in H, and the factor t is called the stretch of H. A path between x and y in H with length at most t · d(x, y) is called a t-spanner path. Spanners have been studied extensively since the mid-eighties (see [2, 7, 1, 14, 12, 3, 5, 10] and the references therein; also refer to [13] for an excellent survey), and find applications in approximation algorithms, network topology design, distance oracles, distributed systems. Spanners are important structures, as they enable approximation of a metric space in a much more economical form. Depending on the application, there are parameters of the spanner other than stretch that can be optimized. The total weight of the edges should be at most some factor (known as lightness) times the weight of a minimum spanning tree (MST) of the metric space. It might also be desirable for the spanner to have small maximum degree (hence also having small number of edges), or small hop-diameter, i.e., every pair of points x and y should be connected by a t-spanner path with a small number of edges. Observe that for some metric spaces such as the uniform metric on n points, the only possible spanner with stretch 1.5 is the complete graph. Doubling metrics are special classes of metrics, but still have interesting properties. The doubling dimension of a metric space (X, d), denoted by dim(X) (or dim when the context is clear), is the smallest value ρ such that every ball in X can be covered by 2 ρ balls of half the radius [11] . A metric space is called doubling, if its doubling dimension is bounded by some constant. Doubling dimension is a generalization of Euclidean dimension to arbitrary metric spaces, as the space R T equipped with p -norm has doubling dimension Θ(T ) [11] . Spanners for doubling metrics have been studied in [12, 3, 5, 10, 14] . Sometimes we want our spanner to be robust against node failures, meaning that even when some of the nodes in the spanner fail, the remaining part is still a t-spanner. Formally, given 1 ≤ k ≤ n − 2, a spanner H of X is called a k-vertex-fault-tolerant t-spanner ((k, t)-VFTS or simply k-VFTS if the stretch t is clear from context), if for any subset S ⊆ X with |S| ≤ k, H \\ S is a t-spanner for X \\ S. Our Contributions. 
Our main theorem subsumes the results of two recent works on spanners for doubling metrics: (1) our previous paper [4] on fault-tolerant spanners with constant maximum degree or small hop-diameter, (2) Elkin and Solomon's spanner construction [9] with constant maximum degree, O(log n) hop-diameter and O(log n) lightness." }
1709.01560
1708.09352
A. Simulation Results
As an aside, we note the similarities between the resultant likelihood and the expected information density (EID) of the known shape measurement model (see #REFR for the EID derivation).
[ "Moreover, by comparing the posterior and the windowed sensor path, it is shown in Fig.", "1 that the sensor path is drawn to high likelihood densities.", "This ensures that the posterior estimate is verified and updated as new sensor information is acquired.", "Note that the sensor is unable to access the interior of the shape, resulting in high likelihood probabilities within the shape.", "The posterior likelihood of the square shape estimate shows high likelihood probabilities near the corners of the square." ]
[ "Specifically, high likelihood near the corners correspond to the similar large expectation of information for a square shape.", "If we define the measurement model of the known square as Υ(x, x) where x is search space and x is the robot's state space, then a measure of information is the Fisher Information Matrix #OTHEREFR defined by", "where Σ is the measurement covariance.", "Assuming the measurement model is of the same form as in equation (1) then the region with the largest information is at the corners where the slopes of the edges collide.", "Thus, large likelihood estimates should exist near corners if they have not been previously searched. The non-contact motion shown in Fig. 4 is beneficial for tactile-based exploration." ]
[ "known shape measurement", "expected information density" ]
background
{ "title": "Ergodic Exploration using Binary Sensing for Non-Parametric Shape Estimation", "abstract": "Abstract-Current methods to estimate object shape-using either vision or touch-generally depend on high-resolution sensing. Here, we exploit ergodic exploration to demonstrate successful shape estimation when using a low-resolution binary contact sensor. The measurement model is posed as a collisionbased tactile measurement, and classification methods are used to discriminate between shape boundary regions in the search space. Posterior likelihood estimates of the measurement model help the system actively seek out regions where the binary sensor is most likely to return informative measurements. Results show successful shape estimation of various objects as well as the ability to identify multiple objects in an environment. Interestingly, it is shown that ergodic exploration utilizes non-contact motion to gather significant information about shape. The algorithm is extended in three dimensions in simulation and we present two dimensional experimental results using the Rethink Baxter robot." }
{ "title": "Ergodic Exploration of Distributed Information", "abstract": "Abstract-This paper presents an active search trajectory synthesis technique for autonomous mobile robots with nonlinear measurements and dynamics. The presented approach uses the ergodicity of a planned trajectory with respect to an expected information density map to close the loop during search. The ergodic control algorithm does not rely on discretization of the search or action spaces, and is well posed for coverage with respect to the expected information density whether the information is diffuse or localized, thus trading off between exploration and exploitation in a single objective function. As a demonstration, we use a robotic electrolocation platform to estimate location and size parameters describing static targets in an underwater environment. Our results demonstrate that the ergodic exploration of distributed information (EEDI) algorithm outperforms commonly used information-oriented controllers, particularly when distractions are present." }
1902.03320
1708.09352
Problem Statement
This is an approximation because the true time-averaged statistics, as described in #REFR , is a collection of delta functions parameterized by time.
[ "KL-divergence and Area Coverage Given the assumptions of known approximate dynamics and the equilibrium policy, we can define active exploration for informative data acquisition as automating safe switching between µ(x) and some control authority µ (t) that generates actions that actively seek out informative data.", "This is accomplished by specifying the active data acquisition task using an area coverage objective where we minimize the KL-divergence between the time average statistics of the robot along a trajectory and a spatial distribution defining the current coverage requirement.", "We can then define an approximation to the spatial statistics of the robot as follows: Definition 1.", "Given a search domain X v ⊂ R n+m where v ≤ n + m, the Σ-approximated time-averaged statistics of the robot, i.e., the time the robot spends in regions of the search domain X v , is defined by", "where s ∈ X v ⊂ R n+m is a point in the search domain X v , x v (t) is the component of the robot's trajectory x(t) and actions µ(t) that intersects the search domain X v , Σ ∈ R v×v is a positive definite matrix parameter that specifies the width of the Gaussian, η is a normalization constant such that q(s) > 0 and X v q(s)ds = 1, t i is the i th sampling time, and T r = T + t r is sum of the time horizon T and amount of time t r the robot remembers x v (t) into the past." ]
[ "We approximate the delta function as a Gaussian distribution with covariance Σ, converging as Σ → 0.", "Using this approximation, we are able to relax the ergodic area-coverage objective in #OTHEREFR and use the following KL-divergence objective #OTHEREFR :", "where E is the expectation operator, q(s) = q(s | x(t), µ(t)), and p(s), p(s) > 0, X v p(s)ds = 1, is a distribution that describes where in the search domain an informative measurement is likely to be acquired.", "We can further approximate the KL-divergence via sampling where we approximate the expectation operator as", "where N is the number of samples in the search domain drawn from a uniform distribution." ]
[ "approximation", "delta functions" ]
background
{ "title": "Active Area Coverage from Equilibrium", "abstract": "Abstract. This paper develops a method for robots to integrate stability into actively seeking out informative measurements through coverage. We derive a controller using hybrid systems theory that allows us to consider safe equilibrium policies during active data collection. We show that our method is able to maintain Lyapunov attractiveness while still actively seeking out data. Using incremental sparse Gaussian processes, we define distributions which allow a robot to actively seek out informative measurements. We illustrate our methods for shape estimation using a cart double pendulum, dynamic model learning of a hovering quadrotor, and generating galloping gaits starting from stationary equilibrium by learning a dynamics model for the half-cheetah system from the Roboschool environment." }
{ "title": "Ergodic Exploration of Distributed Information", "abstract": "Abstract-This paper presents an active search trajectory synthesis technique for autonomous mobile robots with nonlinear measurements and dynamics. The presented approach uses the ergodicity of a planned trajectory with respect to an expected information density map to close the loop during search. The ergodic control algorithm does not rely on discretization of the search or action spaces, and is well posed for coverage with respect to the expected information density whether the information is diffuse or localized, thus trading off between exploration and exploitation in a single objective function. As a demonstration, we use a robotic electrolocation platform to estimate location and size parameters describing static targets in an underwater environment. Our results demonstrate that the ergodic exploration of distributed information (EEDI) algorithm outperforms commonly used information-oriented controllers, particularly when distractions are present." }
1812.04711
1701.05055
The joint optimization of computation task offloading scheduling and transmit power allocation for a single-user system was investigated in #REFR .
[ "Second, the close transmitterreceiver proximity allows small cell users to achieve high signal-to-noise ratio (SNR) even with low transmit power.", "This enables them to meet the low-latency requirements of many emerging applications.", "in small-cell based wireless HetNets can lead to significant benefits such as prolonging battery lifetime and providing high-speed and ultra-low latency communications services in future 5G wireless systems.", "Several MCC platforms have been proposed and developed in the literature such as MAUI #OTHEREFR , CloneCloud #OTHEREFR , ThinkAir #OTHEREFR and a good survey for them with the corresponding computation offloading designs can be found in #OTHEREFR .", "In particular, the tradeoff between transmission and computation energy was studied in #OTHEREFR , #OTHEREFR ." ]
[ "Moreover, the authors in #OTHEREFR studied the multi-user radio resource management problem for the HetNet-MCC system, which always offloads the entire computation task to the cloud.", "Dynamic computation offloading policies based on Lyapunov optimization were developed in #OTHEREFR , #OTHEREFR .", "These existing works, however, only consider the single-cell setting and many practical design aspects of the multi-cell MCC system such as dynamic computation offloading, joint multi-user resource allocation and computing resource assignment, and consideration of practical constraints on bandwidth, operating frequency and tolerable delay limits are not satisfactorily accounted for.", "Our current work aims to fill this gap in the literature.", "In this paper, we study the joint optimization problem for computation offloading and resource allocation where computation tasks are either processed locally at the mobile or offloaded and processed in the cloud." ]
[ "offloading scheduling" ]
background
{ "title": "Joint Computation Offloading and Resource Allocation in Cloud Based Wireless HetNets", "abstract": "Abstract-In this paper, we study the joint computation offloading and resource allocation problem in the two-tier wireless heterogeneous network (HetNet). Our design aims to optimize the computation offloading to the cloud jointly with the subchannel allocation to minimize the maximum (min-max) weighted energy consumption subject to practical constraints on bandwidth, computing resource and allowable latency for the multi-user multitask computation system. To tackle this non-convex mixed integer non-linear problem (MINLP), we employ the bisection search method to solve it where we propose a novel approach to transform and verify the feasibility of the underlying problem in each iteration. In addition, we propose a low-complexity algorithm, which can decrease the number of binary optimization variables and enable more scalable computation offloading optimization in the practical wireless HetNets. Numerical studies confirm that the proposed design achieves the energy saving gains about 55% in comparison with the local computation scheme under the strict required latency of 0.1s." }
{ "title": "Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems", "abstract": "Mobile-edge computing (MEC) has emerged as a prominent technique to provide mobile services with high computation requirement, by migrating the computation-intensive tasks from the mobile devices to the nearby MEC servers. To reduce the execution latency and device energy consumption, in this paper, we jointly optimize task offloading scheduling and transmit power allocation for MEC systems with multiple independent tasks. A low-complexity sub-optimal algorithm is proposed to minimize the weighted sum of the execution delay and device energy consumption based on alternating minimization. Specifically, given the transmit power allocation, the optimal task offloading scheduling, i.e., to determine the order of offloading, is obtained with the help of flow shop scheduling theory. Besides, the optimal transmit power allocation with a given task offloading scheduling decision will be determined using convex optimization techniques. Simulation results show that task offloading scheduling is more critical when the available radio and computational resources in MEC systems are relatively balanced. In addition, it is shown that the proposed algorithm achieves near-optimal execution delay along with a substantial device energy saving. Index Terms-Mobile-edge computing, task offloading scheduling, power control, flow shop scheduling, convex optimization." }
1909.00478
1701.05055
VI. SIMULATION RESULTS
All the channel gains are independently generated based on a Rayleigh fading model with an average gain factor of σ_h^2 = E[|h|^2] = 10^{-3} #REFR .
[ "In this section, we use Monte Carlo simulations to demonstrate the benefits of the proposed CCCP-based and low complexity IBCD algorithms for the RACO systems in terms of the end-to-end delay and system energy consumption.", "The simulations are run on a desktop computer with (Intel i7-920) CPU running at 2.66 GHz and 24 GB RAM, while the simulation parameters are set as follows unless specified otherwise." ]
[ "The radio bandwidth available for data transmission from user A to user B via the MERS is W = 40 MHz for the combination of the AF and DF schemes.", "The background noise at MERS and user B is −169 dBm/Hz #OTHEREFR .", "The maximum transmit power levels of user A and the MERS are set to P max A = 1 W and P max R = 5 W, respectively.", "The maximum computation speed of user A and the MERS are characterized by F max l = 800 MHz and F max r = 2.4 GHz, respectively, #OTHEREFR .", "For user A, the data size of the tasks before computation follows a uniform distribution over the interval [4 · 10 5 , 2 · 10 6 ] bits, the conversion ratio is fixed to ρ = 0.1, and the required number of CPU cycles per bit for both user A and the MERS is set to K K l = K r = 10 3 cycles/bit #OTHEREFR ." ]
[ "channel gains" ]
method
{ "title": "Efficient Resource Allocation for Relay-Assisted Computation Offloading in Mobile-Edge Computing", "abstract": "In this article, relay-assisted computation offloading (RACO) is investigated, where user A wishes to share the results of computational tasks with another user B with the assistance of a mobile-edge relay server (MERS). To enable this computation offloading, we propose a hybrid relaying (HR) approach employing a pair of orthogonal frequency bands, which are, respectively, used for the amplify-forward relaying of computational results and the decode-forward relaying of the unprocessed raw tasks. The motivation here is to adapt the allocation of computing and communication resources both to dynamic user requirements and to diverse computational tasks. Using this framework, we seek to minimize the weighted sum of the execution delays and the energy consumption in the RACO system by jointly optimizing the computation offloading ratio, the bandwidth allocation, the processor speeds, as well as the transmit power levels of both user A and the MERS, under some practical constraints. By adopting a series of transformations, we first recast this problem into a form amenable to optimization and then develop an efficient iterative algorithm for its solution based on the concave-convex procedure (CCCP). By virtue of the particular problem structure in our case, we propose furthermore a simplified algorithm based on the inexact block coordinate descent (IBCD) method, which leads us to much lower computational complexity. Finally, our numerical results demonstrate the advantages of the proposed algorithms over the state-of-the-art benchmark schemes. Index Terms-Concave-convex procedure (CCCP), computation offloading, hybrid relaying (HR), inexact block coordinate descent (IBCD), mobile-edge computing (MEC), resource allocation." }
{ "title": "Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems", "abstract": "Mobile-edge computing (MEC) has emerged as a prominent technique to provide mobile services with high computation requirement, by migrating the computation-intensive tasks from the mobile devices to the nearby MEC servers. To reduce the execution latency and device energy consumption, in this paper, we jointly optimize task offloading scheduling and transmit power allocation for MEC systems with multiple independent tasks. A low-complexity sub-optimal algorithm is proposed to minimize the weighted sum of the execution delay and device energy consumption based on alternating minimization. Specifically, given the transmit power allocation, the optimal task offloading scheduling, i.e., to determine the order of offloading, is obtained with the help of flow shop scheduling theory. Besides, the optimal transmit power allocation with a given task offloading scheduling decision will be determined using convex optimization techniques. Simulation results show that task offloading scheduling is more critical when the available radio and computational resources in MEC systems are relatively balanced. In addition, it is shown that the proposed algorithm achieves near-optimal execution delay along with a substantial device energy saving. Index Terms-Mobile-edge computing, task offloading scheduling, power control, flow shop scheduling, convex optimization." }
1912.07599
1701.05055
B. The best response strategy
In the best response strategy, each mobile user calculates the best response of the variables in s according to the information obtained from the BS; i.e., given the strategies of the other users, user n calculates the best response of its own variables based on #REFR .
[ "For the game which the existence of NE is guaranteed, the best-response dynamic always converges to a NE #OTHEREFR , so it is applied in this algorithm to reach the NE of game ′ ." ]
[]
[ "mobile user" ]
method
{ "title": "Game Theory based Joint Task Offloading and Resources Allocation Algorithm for Mobile Edge Computing", "abstract": "Mobile edge computing (MEC) has emerged for reducing energy consumption and latency by allowing mobile users to offload computationally intensive tasks to the MEC server. Due to the spectrum reuse in small cell network, the inter-cell interference has a great effect on MEC's performances. In this paper, for reducing the energy consumption and latency of MEC, we propose a game theory based approach to join task offloading decision and resources allocation together in the MEC system. In this algorithm, the offloading decision, the CPU capacity adjustment, the transmission power control, and the network interference management of mobile users are regarded as a game. In this game, based on the best response strategy, each mobile user makes their own utility maximum rather than the utility of the whole system. We prove that this game is an exact potential game and the Nash equilibrium (NE) of this game exists. For reaching the NE, the best response approach is applied. We calculate the best response of these three variables. Moreover, we investigate the properties of this algorithm, including the convergence, the computational complexity, and the Price of anarchy (PoA). The theoretical analysis shows that the inter-cell interference affects on the performances of MEC greatly. The NE of this game is Pareto efficiency. Finally, we evaluate the performances of this algorithm by simulation. The simulation results illustrate that this algorithm is effective in improving the performances of the multi-user MEC system." }
{ "title": "Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems", "abstract": "Mobile-edge computing (MEC) has emerged as a prominent technique to provide mobile services with high computation requirement, by migrating the computation-intensive tasks from the mobile devices to the nearby MEC servers. To reduce the execution latency and device energy consumption, in this paper, we jointly optimize task offloading scheduling and transmit power allocation for MEC systems with multiple independent tasks. A low-complexity sub-optimal algorithm is proposed to minimize the weighted sum of the execution delay and device energy consumption based on alternating minimization. Specifically, given the transmit power allocation, the optimal task offloading scheduling, i.e., to determine the order of offloading, is obtained with the help of flow shop scheduling theory. Besides, the optimal transmit power allocation with a given task offloading scheduling decision will be determined using convex optimization techniques. Simulation results show that task offloading scheduling is more critical when the available radio and computational resources in MEC systems are relatively balanced. In addition, it is shown that the proposed algorithm achieves near-optimal execution delay along with a substantial device energy saving. Index Terms-Mobile-edge computing, task offloading scheduling, power control, flow shop scheduling, convex optimization." }
1810.11445
1305.5268
Introduction
See also #REFR , where a spectral-Lagrangian Boltzmann solver for a multi-energy level gas was developed.
[ "Such schemes are designed such that they mimic the asymptotic transition from one scale to another at the discrete level, and also use specially designed explicit-implicit time discretizations so as to reduce the algebraic complexity when implicit discretizations are needed. See review articles #OTHEREFR .", "For single species particles, in order to overcome the stiffness of the collision operators, one could penalize the collision operators by simple ones that are easier to invert, see #OTHEREFR , or uses exponential Runge-Kutta methods #OTHEREFR , or via the micro-macro decomposition #OTHEREFR . See also #OTHEREFR .", "However, for binary interactions in multispecies models, one encounters extra difficulties due to the coupling of collision terms between different species.", "The Cauchy problem for the full non-linear homogeneous Boltzmann system describing multi-component monatomic gas mixtures has been studied recently in #OTHEREFR .", "For relatively simpler scalings which lead to hydrodynamic limits, multispecies AP schemes were developed in for examples #OTHEREFR ." ]
[ "However, none of the previous works dealt with the disparate mass systems under the long-time scale studied in this paper.", "The main challenges to develop efficient AP schemes for the problems under study include: 1) the strong coupling of the binary collision terms between different species; 2) the disparate mass scalings so different species evolve with different time scales thus different species needed to be treated differently and 3) the long-time scale.", "In fact, other than utilizing several existing AP techniques for single species problems, we also introduce two new ideas: a novel splitting of the system, guided by the asymptotic analysis introduced in #OTHEREFR , which is a natural formulation for the design of AP schemes, and identifying less stiff terms from the stiff ones, again taking advantage of the asymptotic behavior of the collision operators.", "We will handle both the Boltzmann and FPL collision terms, thanks to their bilinear structure, and in the end the algebraic complexity, judged by the kind of algebraic systems to be inverted, somehow similar to the single species counterparts as in #OTHEREFR and #OTHEREFR .", "Due to the complexity of the systems under study, we split our results in several papers." ]
[ "spectral-Lagrangian Boltzmann" ]
method
{ "title": "N A ] 1 D ec 2 01 8 Asymptotic-preserving schemes for two-species binary collisional kinetic system with disparate masses I : time discretization and asymptotic analysis ∗", "abstract": "We develop efficient asymptotic-preserving time discretization to solve the disparate mass kinetic system of a binary gas or plasma in the \"relaxation time scale\" relevant to the epochal relaxation phenomenon. Both the Boltzmann and Fokker-Planck-Landau (FPL) binary collision operators will be considered. Other than utilizing several AP strategies for single-species binary kinetic equations, we also introduce a novel splitting and a carefully designed explicit-implicit approximation, which are guided by the asymptotic analysis of the system. We also conduct asymptotic-preserving analysis for the time discretization, for both space homogenous and inhomogeneous systems." }
{ "title": "A Spectral-Lagrangian Boltzmann Solver for a Multi-Energy Level Gas", "abstract": "In this paper a spectral-Lagrangian method for the Boltzmann equation for a multi-energy level gas is proposed. Internal energy levels are treated as separate species and inelastic collisions (leading to internal energy excitation and relaxation) are accounted for. The formulation developed can also be used for the case of a mixture of monatomic gases without internal energy (where only elastic collisions occur). The advantage of the spectral-Lagrangian method lies in the generality of the algorithm in use for the evaluation of the elastic and inelastic collision operators. The computational procedure is based on the Fourier transform of the partial elastic and inelastic collision operators and exploits the fact that these can be written as weighted convolutions in Fourier space with no restriction on the crosssection model. The conservation of mass, momentum and energy during collisions is enforced through the solution of constrained optimization problems. Numerical solutions are obtained for both space homogeneous and space inhomogeneous problems. Computational results are compared with those obtained by means of the DSMC method in order to assess the accuracy of the proposed spectral-Lagrangian method." }
1611.04171
1305.5268
Introduction
It has also been extended to systems of elastic and inelastic hard potential problems modeling a multi-energy level gas #REFR .
[ "We do not use periodic representations for the distribution function and the only restriction of the current method is that it requires that the distribution function to be Fourier transformable at any time step.", "This is requirement is met by imposing L 2 -integrability to the initial datum.", "The required conservation properties of the distribution function are enforced through an optimization problem with the desired conservation quantities set as the constraints.", "The correction to the distribution function that makes the approximation conservative is very small but crucial for the evolution of the probability distribution function according to the Boltzmann equation.", "More recently, this conservative spectral Lagrangian method for the Boltzmann equation was applied to the calculation of the Boltzmann flow for anisotropic collisions, even in the Coulomb interaction regime #OTHEREFR , where the solution of the Boltzmann equation approximates solution for Landau equation [57; 58] ." ]
[ "In this case, the formulation of the numerical method accounts for both elastic and inelastic collisions.", "It was also be used for the particular case of a chemical mixture of monatomic gases without internal energy.", "The conservation of mass, momentum and energy during collisions is enforced through the solution of constrained optimization problem to keep the collision invariances associated to the mixtures.", "The implementation was done in the space inhomogeneous setting (see #OTHEREFR , section 4.3), where the advection along the free Hamiltonian dynamics is modeled by time splitting methods following the initial approach in #OTHEREFR .", "The effectiveness of the scheme applied to these mixtures has been compared with the results obtained by means of the DSMC method and excellent agreement has been observed." ]
[ "inelastic hard potential", "multi-energy level gas" ]
background
{ "title": "Convergence and error estimates for the Lagrangian based Conservative Spectral method for Boltzmann Equations", "abstract": "In this manuscript we develop error estimates for the semi-discrete approximation properties of the conservative spectral method for the elastic and inelastic Boltzmann problem introduced by the authors in [47] . The method is based on the Fourier transform of the collisional operator and a Lagrangian optimization correction used for conservation of mass, momentum and energy. We present an analysis on the accuracy and consistency of the method, for both elastic and inelastic collisions, and a discussion of the L 1 − L 2 theory for the scheme in the elastic case which includes the estimation of the negative mass created by the scheme. This analysis allows us to present Sobolev convergence, error estimates and convergence to equilibrium for the numerical approximation. The estimates are based on recent progress of convolution and gain of integrability estimates by some of the authors and a corresponding moment inequality for the discretized collision operator. The Lagrangian optimization correction algorithm is not only crucial for the error estimates and the convergence to the equilibrium Maxwellian, but also it is necessary for the moment conservation for systems of kinetic equations in mixtures and chemical reactions. The results of this work answer a long standing open problem posed by Cercignani et al. in [31, Chapter 12] about finding error estimates for a Boltzmann scheme as well as to show that the semi-discrete numerical solution converges to the equilibrium Maxwellian distribution." }
{ "title": "A Spectral-Lagrangian Boltzmann Solver for a Multi-Energy Level Gas", "abstract": "In this paper a spectral-Lagrangian method for the Boltzmann equation for a multi-energy level gas is proposed. Internal energy levels are treated as separate species and inelastic collisions (leading to internal energy excitation and relaxation) are accounted for. The formulation developed can also be used for the case of a mixture of monatomic gases without internal energy (where only elastic collisions occur). The advantage of the spectral-Lagrangian method lies in the generality of the algorithm in use for the evaluation of the elastic and inelastic collision operators. The computational procedure is based on the Fourier transform of the partial elastic and inelastic collision operators and exploits the fact that these can be written as weighted convolutions in Fourier space with no restriction on the crosssection model. The conservation of mass, momentum and energy during collisions is enforced through the solution of constrained optimization problems. Numerical solutions are obtained for both space homogeneous and space inhomogeneous problems. Computational results are compared with those obtained by means of the DSMC method in order to assess the accuracy of the proposed spectral-Lagrangian method." }
1806.06885
1312.1414
The Linear Combination of Unitaries method
Childs and Wiebe #REFR show how to implement a sum of two unitaries. We describe this simple case below.
[ "One of the disadvantages in using QPE is that achieving ǫ-precision requires O(1/ǫ) uses of the matrix oracle.", "The LCU method offers a way to overcome this disadvantage by exploiting results from approximation theory.", "The LCU method is a way to probabilistically implement an operator specified as a linear combination of unitary operators with known implementations.", "In essence, we construct a larger unitary matrix of which the the matrix f (A) is a sub-matrix or block." ]
[ "Suppose A = α 0 U 0 + α 1 U 1 .", "Without loss of generality α i > 0, since phase factors can be absorbed into the unitaries.", "Consider a state preparation unitary V α which has the action", "where α = α 0 + α 1 .", "When dealing with a linear combination of more than two unitaries, there is a lot of freedom in the choice of this V α , as we will see later." ]
[ "two unitaries" ]
background
{ "title": "Implementing smooth functions of a Hermitian matrix on a quantum computer", "abstract": "We review existing methods for implementing smooth functions f (A) of a sparse Hermitian matrix A on a quantum computer, and analyse a further combination of these techniques which has some advantages of simplicity and resource consumption in some cases. Our construction uses the linear combination of unitaries method with Chebyshev polynomial approximations. The query complexity we obtain is O(log C/ǫ) where ǫ is the approximation precision, and C > 0 is an upper bound on the magnitudes of the derivatives of the function f over the domain of interest. The success probability depends on the 1-norm of the Taylor series coefficients of f , the sparsity d of the matrix, and inversely on the smallest singular value of the target matrix f (A)." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1707.05391
1312.1414
Universality of the Standard-Form
This approach is considerably simpler than that of compressed fractional queries #REFR , and essentially works by using Thm.
[ "This is proven through the flexible quantum signal processing Thm. 4 using a particular choice of polynomial.", "It is important to note however the caveat that our equivalence limits Ĥ t = O(1), and also fails when time-evolution can be approximated with o(t) queries.", "Fortunately, the latter scenario can be disregarded with limited loss as 'no-fast-forwarding' theorems #OTHEREFR prove the necessity of Ω( Ĥ t) queries for generic computational problems and physical systems.", "One useful application of this reverse direction is an alternate technique Cor.", "1 for simulating time evolution by a sum of d Hermitian components d=1Ĥ j , given their controlled-exponentials e −iĤj tj ." ]
[ "9 to map each e −iĤj tj , where Ĥ j t j = O(1) to a standard-form encoding ofĤ j t j .", "Corollary 1 (Hamiltonian simulation with exponentials). Given standard-form-(", "α j /α|j a with α j ≥ 0, normalization α = controlled-queries, and O(Q log (d)) primitive quantum gates.", "Composite quantum gates #OTHEREFR Quantum signal processing #OTHEREFR Qubitization #OTHEREFR Standard-form (Sec." ]
[ "compressed fractional queries" ]
method
{ "title": "Hamiltonian Simulation by Uniform Spectral Amplification", "abstract": "The exponential speedups promised by Hamiltonian simulation on a quantum computer depends crucially on structure in both the HamiltonianĤ, and the quantum circuitÛ that encodes its description. In the quest to better approximate time-evolution e −iĤt with error , we motivate a systematic approach to understanding and exploiting structure, in a setting where Hamiltonians are encoded as measurement operators of unitary circuitsÛ for generalized measurement. This allows us to define a uniform spectral amplification problem on this framework for expanding the spectrum of encoded Hamiltonian with exponentially small distortion. We present general solutions to uniform spectral amplification in a hierarchy where factoringÛ into n = 1, 2, 3 unitary oracles represents increasing structural knowledge of the encoding. Combined with structural knowledge of the Hamiltonian, specializing these results allow us simulate time-evolution by d-sparse Hamiltonians using Up to logarithmic factors, this is a polynomial improvement upon prior In the process, we also prove a matching lower bound of Ω(t(d Ĥ max Ĥ 1) 1/2 ) queries, present a distortion-free generalization of spectral gap amplification, and an amplitude amplification algorithm that performs multiplication on unknown state amplitudes." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1707.05391
1312.1414
Lower Bound on Sparse Hamiltonian Simulation
Note that our construction is based on a modification of #REFR , in which the matrix Ĉ_1 is the zero matrix.
[ "In constrast, the lower bound of #OTHEREFR quotes Ω(sparsity × t) as they consider the case where one is given information only on the sparsity..", "We now present the extension to creating a Hamiltonian that solves PARITY • OR.", "Notably, this Hamiltonian allows one to vary sparsity and 1-norm independently.", "Proof of Thm. 6.", "The first step is construct a Hamiltonian that solves the OR function on m bits x 0 x 1 ...x m−1 , promised that at most 1 bit is non-zero. This Hamiltonian of dimension 2m, in the computational basis" ]
[ "Here,Ĉ 1 mimics the top-left component ofĤ NOT in that is performs a bit-flip on the output register if OR(x) = 0, andĈ 0 mimics the top-right component ofĤ NOT in that it performs a bit-flip on the output register if OR(x) = 1. These matrices are defined as follows:", "It is easy to verify that if at most one bit in x is non-zero,", "Note thatĤ OR has sparsity 2m, max-norm Θ(1), and 1-norm Θ(1).", "Given an nm-bit string x 0,0 x 0,1 ...x 0,m−1 x 1,0 ...x n−1,m−1 , the HamiltonianĤ PARITY•OR that computes the n-bit PARITY of a number n of m-bit OR functions is similar toĤ PARITY in Eq.", "45, except that instead of composing with NOT Hamiltonians defined by the bit x j for each j ∈ [n], we compose with OR Hamiltonians defined by the bits x j,0 x j,1 ...x j,m−1 for each j ∈ [n]." ]
[ "zero matrix" ]
background
{ "title": "Hamiltonian Simulation by Uniform Spectral Amplification", "abstract": "The exponential speedups promised by Hamiltonian simulation on a quantum computer depends crucially on structure in both the HamiltonianĤ, and the quantum circuitÛ that encodes its description. In the quest to better approximate time-evolution e −iĤt with error , we motivate a systematic approach to understanding and exploiting structure, in a setting where Hamiltonians are encoded as measurement operators of unitary circuitsÛ for generalized measurement. This allows us to define a uniform spectral amplification problem on this framework for expanding the spectrum of encoded Hamiltonian with exponentially small distortion. We present general solutions to uniform spectral amplification in a hierarchy where factoringÛ into n = 1, 2, 3 unitary oracles represents increasing structural knowledge of the encoding. Combined with structural knowledge of the Hamiltonian, specializing these results allow us simulate time-evolution by d-sparse Hamiltonians using Up to logarithmic factors, this is a polynomial improvement upon prior In the process, we also prove a matching lower bound of Ω(t(d Ĥ max Ĥ 1) 1/2 ) queries, present a distortion-free generalization of spectral gap amplification, and an amplitude amplification algorithm that performs multiplication on unknown state amplitudes." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1612.09512
1312.1414
Concentration bound and encoding scheme
Roughly speaking, this is qualitatively the same scaling that arises in Hamiltonian evolution simulation #REFR , hence the same so-called Hamming weight cut-off applies. Below is a more precise explanation of this.
[ "More precisely, we compute the amplitude associated with this I being performed.", "For each instance of W acting on |0 |µ |ψ , consider the state of indicator and purifier registers which control the multiplexed-U gates (i.e., the state multi-B|0 |µ ). The state |0 |0 corresponds to unitary I. The amplitude of |0 |0 is", "where s j and c j are defined in Eqns. (60), (61), and (62) (j ∈ {0, . . . , m}), and δ is defined in Eq. (71).", "If this indicator and purifier registers are measured in the computational basis then the probability that the outcome is not (0, 0) is", "Therefore, after the multi-B acting on |0 |µ |ψ , if the indicator and purifier registers are measured, then the probability that the outcome is not (0, 0) is at upper-bounded by 3 2r ." ]
[ "In the indicator and purifier registers, after applying multi-B, the computational basis states of the indicator and purifier registers are of the form |k 0 , l 0 . . . |k r−1 , l r−1 .", "Define the Hamming weight of such a state as the number of i ∈ {0, . . .", ", r − 1} such that (k i , l i ) = (0, 0).", "If the indicator and purifier registers are restricted to states that have Hamming weight at most h then the circuit can be restructured so that there are only h occurrences of the multiplexed-U gates.", "Consider the state of the indicator and purifier registers right before multiplexed-U gates are applied (i.e., the state (multi-B|0 |µ ) ⊗r )." ]
[ "Hamiltonian evolution simulation" ]
background
{ "title": "Efficient Quantum Algorithms for Simulating Lindblad Evolution", "abstract": "The Lindblad equation is a natural generalization to open systems of the Schrödinger equation. We give a quantum algorithm for simulating the evolution of an n-qubit system for time t within precision . If the Lindbladian consists of poly(n) operators that can each be expressed as a linear combination of poly(n) tensor products of Pauli operators then the gate cost of our algorithm is O(t polylog(t/ )poly(n)). We also show that this efficiency cannot be obtained via a trivial reduction of Lindblad evolution to Hamiltonian evolution in a larger system. Instead, the approach of our algorithm is to use a novel variation of the \"linear combinations of unitaries\" construction that pertains to channels." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1612.09512
1312.1414
Concentration bound and encoding scheme
We use a compression scheme similar to that in #REFR to reduce the number of qubits for the indicator and purifier registers.
[ "This is related to the probability distribution of the Hamming weight h if the state is measured in the computational basis.", "For large r, this is approximated by a Poisson distribution with λ =", "From this it can be calculated that the probability that the Hamming weight h is upper bounded by provided", "Therefore, the number of occurrences of the multiplexed-U gates can be reduced to O log(1/ ) log log(1/ ) with error .", "The number of qubits for indicator and purifier registers in a segment is still O(r log(mq))." ]
[ "The intuition is to only store and positions of components with non-zero Hamming weight, and we also need two other register to store the actual state in this position.", "The compression scheme works as follows.", "We consider the initial sate (|0 |µ ) ⊗r .", "After applying the multiplexed-B gates (before applying the multiplexed-U gates), the states becomes (multi-B|0 |µ ) ⊗r .", "It can be written as a linear combination of basis states in the form" ]
[ "qubits" ]
method
{ "title": "Efficient Quantum Algorithms for Simulating Lindblad Evolution", "abstract": "The Lindblad equation is a natural generalization to open systems of the Schrödinger equation. We give a quantum algorithm for simulating the evolution of an n-qubit system for time t within precision . If the Lindbladian consists of poly(n) operators that can each be expressed as a linear combination of poly(n) tensor products of Pauli operators then the gate cost of our algorithm is O(t polylog(t/ )poly(n)). We also show that this efficiency cannot be obtained via a trivial reduction of Lindblad evolution to Hamiltonian evolution in a larger system. Instead, the approach of our algorithm is to use a novel variation of the \"linear combinations of unitaries\" construction that pertains to channels." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1805.00582
1312.1414
IV. ALGORITHM OVERVIEW
We use a standard technique for implementing linear combinations of unitaries (referred to as the 'LCU technique'), which uses an ancillary register to encode the coefficients β_j #REFR .
[ ". . ≤ j k , and zero otherwise.", "The quantity σ is the number of distinct values of j, and k 1 , k 2 , . . .", ", k σ are the number of repetitions for each distinct value of j.", "That is, we have the indices j for the times sorted in ascending order, and we have multiplied by a factor of k!/(k 1 !k 2 ! . . .", "k σ !) to take account of the number of unordered sets of indices which give the same ordered set of indices. The multi-index set J is defined as" ]
[ "In the next section, we present two approaches to implementation of the (multi-qubit) ancilla state preparation", "as part of the LCU approach, where s := j∈J β j .", "For the LCU technique we introduce the operator Select(V ) := j |j j| a ⊗ V j acting as", "on the joint ancilla and system states.", "This operation implements a term from the decomposition ofŨ selected by the ancilla state |j a with weight β j . Following the method in Ref. #OTHEREFR , we also define" ]
[ "unitaries" ]
method
{ "title": "Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series", "abstract": "We provide a general method for efficiently simulating time-dependent Hamiltonian dynamics on a circuit-model based quantum computer. Our approach is based on approximating the truncated Dyson series of the evolution operator, extending the earlier proposal by Berry et al. [Phys. Rev. Lett. 114, 090502 (2015)] to evolution generated by explicitly time-dependent Hamiltonians. Two alternative strategies are proposed to implement time ordering while exploiting the superposition principle for sampling the Hamiltonian at different times. The resource cost of our simulation algorithm retains the optimal logarithmic dependence on the inverse of the desired precision." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1805.00582
1312.1414
VI. COMPLEXITY REQUIREMENTS
Each controlled-H_ℓ can be implemented with O(1) queries to the oracles that give the matrix entries of the Hamiltonian #REFR .
[ "In the case where the clock register is prepared with a quantum sort, the complexity is O (K log M log K).", "The preparation of the registers ℓ 1 , . . . , ℓ k requires only creating an equal superposition.", "It is easiest if L is a power of two, in which case it is just Hadamards.", "It can also be achieved efficiently for more general L, and in either case the complexity is O (K log L) elementary gates.", "Next, one needs to implement the controlled unitaries H ℓ ." ]
[ "Since the unitaries are controlled by O (log L) qubits and act on n qubits, each control-H ℓ requires O (log L + n) gates.", "Scaling with M does not appear here, because the qubits encoding the times are just used as input to the oracles, and no direct operations are performed.", "As there are K controlled operations in a segment, the complexity of this step for a segment is O (K) queries and O (K(log L + n)) gates.", "In order to perform the simulation over the entire time T , we need to perform all r segments, and since we take r = Θ(λT ) the overall complexity is multiplied by a factor of λT .", "For the number of queries to the oracle for the Hamiltonian, the complexity is" ]
[ "Hamiltonian" ]
background
{ "title": "Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series", "abstract": "We provide a general method for efficiently simulating time-dependent Hamiltonian dynamics on a circuit-model based quantum computer. Our approach is based on approximating the truncated Dyson series of the evolution operator, extending the earlier proposal by Berry et al. [Phys. Rev. Lett. 114, 090502 (2015)] to evolution generated by explicitly time-dependent Hamiltonians. Two alternative strategies are proposed to implement time ordering while exploiting the superposition principle for sampling the Hamiltonian at different times. The resource cost of our simulation algorithm retains the optimal logarithmic dependence on the inverse of the desired precision." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1609.03603
1312.1414
B. Fixed-point oblivious amplitude amplification
Oblivious amplitude amplification #REFR is a technique for implementing a desired unitary V, which we cannot construct directly, on a state |ψ⟩, using a larger unitary U that we can construct, at least when given some ancilla qubits.
[]
[ "The \"oblivious\" in the name comes from the fact that amplitude amplification still works despite not having the full ability to reflect about the initial state |ψ .", "Thus, this procedure is particularly useful when the initial state |ψ is not only unknown but also not renewable -we have only one copy and cannot or would prefer not to make another.", "For instance, this is the case in Hamiltonian simulation #OTHEREFR .", "For convenience, we consider |ψ to be an n-qubit state and the extension to Hilbert space to be m-qubits. Then, write the start state as #OTHEREFR", "The target state is |Φ = |0 ⊗m V |ψ ." ]
[ "ancilla qubits" ]
background
{ "title": "Fixed-Point Adiabatic Quantum Search", "abstract": "Fixed-point quantum search algorithms succeed at finding one of M target items among N total items even when the run time of the algorithm is longer than necessary. While the famous Grover's algorithm can search quadratically faster than a classical computer, it lacks the fixed-point property -the fraction of target items must be known precisely to know when to terminate the algorithm. Recently, Yoder et al. [1] gave an optimal gate-model search algorithm with the fixed-point property. Meanwhile, it is known [2] that an adiabatic quantum algorithm, operating by continuously varying a Hamiltonian, can reproduce the quadratic speedup of gate-model Grover search. We ask, can an adiabatic algorithm also reproduce the fixed-point property? We show that the answer depends on what interpolation schedule is used, so as in the gate model, there are both fixed-point and nonfixed-point versions of adiabatic search, only some of which attain the quadratic quantum speedup. Guided by geometric intuition on the Bloch sphere, we rigorously justify our claims with an explicit upper bound on the error in the adiabatic approximation. We also show that the fixed-point adiabatic search algorithm can be simulated in the gate model with neither loss of the quadratic Grover speedup nor of the fixed-point property. Finally, we discuss natural uses of fixed-point algorithms such as preparation of a relatively prime state and oblivious amplitude amplification." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1501.01715
1312.1414
Combining this result with the error-dependent lower bound of Ref. #REFR , we find a lower bound as follows.
[ "additional 2-qubit gates, where τ := d H max t.", "This result provides a strict improvement over the methods of #OTHEREFR , removing a factor of d in τ , and thus providing near-linear instead of superquadratic dependence on d.", "We also prove a lower bound showing that any algorithm must use Ω(τ ) queries.", "While a lower bound of Ω(t) was known previously #OTHEREFR , our new lower bound shows that the complexity must be at least linear in the product of the sparsity and the evolution time.", "Our proof is similar to a previous limitation on the ability of quantum computers to simulate non-sparse Hamiltonians #OTHEREFR : by replacing each edge in the graph of the Hamiltonian by a complete bipartite graph K d,d , we effectively boost the strength of the Hamiltonian by a factor of d at the cost of increasing the sparsity by a factor of d." ]
[ "Theorem 2.", "For any ǫ, t > 0, integer d ≥ 2, and fixed value of H max , there exists a d-sparse Hamiltonian H such that simulating H for time t with precision ǫ requires", "queries.", "Thus our result is near-optimal for the scaling in either τ or ǫ on its own.", "However, our upper bound (3) has a product, whereas the lower bound (5) has a sum." ]
[ "Ref", "result" ]
result
{ "title": "Hamiltonian simulation with nearly optimal dependence on all parameters", "abstract": "We present an algorithm for sparse Hamiltonian simulation that has optimal dependence on all parameters of interest (up to log factors). Previous algorithms had optimal or near-optimal scaling in some parameters at the cost of poor scaling in others. Hamiltonian simulation via a quantum walk has optimal dependence on the sparsity d at the expense of poor scaling in the allowed error ǫ. In contrast, an approach based on fractional-query simulation provides optimal scaling in ǫ at the expense of poor scaling in d. Here we combine the two approaches, achieving the best features of both. By implementing a linear combination of quantum walk steps with coefficients given by Bessel functions, our algorithm achieves near-linear scaling in τ := d H max t and sublogarithmic scaling in 1/ǫ. Our dependence on ǫ is optimal, and we prove a new lower bound showing that no algorithm can have sublinear dependence on τ ." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1501.01715
1312.1414
Overview of algorithms
Such an operation can be implemented using the techniques that were developed to simulate the fractional-query model #REFR .
[ "Our algorithm uses a Szegedy quantum walk as in Refs.", "#OTHEREFR , but with a linear combination of different numbers of steps." ]
[ "This allows us to introduce a desired phase more accurately than with the phase estimation approach of #OTHEREFR .", "As in #OTHEREFR , we first implement the approximated evolution for some time interval with some amplitude and then use oblivious amplitude amplification to make the implementation deterministic, facilitating simulations for longer times.", "In the rest of this section, we describe the approach in more detail.", "References #OTHEREFR define a quantum walk step U that depends on the Hamiltonian H to be simulated.", "This operation can be implemented using a state preparation procedure that only requires one call to the sparse Hamiltonian oracle, avoiding the need to decompose H into terms as in product-formula approaches." ]
[ "fractional-query model" ]
method
{ "title": "Hamiltonian simulation with nearly optimal dependence on all parameters", "abstract": "We present an algorithm for sparse Hamiltonian simulation that has optimal dependence on all parameters of interest (up to log factors). Previous algorithms had optimal or near-optimal scaling in some parameters at the cost of poor scaling in others. Hamiltonian simulation via a quantum walk has optimal dependence on the sparsity d at the expense of poor scaling in the allowed error ǫ. In contrast, an approach based on fractional-query simulation provides optimal scaling in ǫ at the expense of poor scaling in d. Here we combine the two approaches, achieving the best features of both. By implementing a linear combination of quantum walk steps with coefficients given by Bessel functions, our algorithm achieves near-linear scaling in τ := d H max t and sublogarithmic scaling in 1/ǫ. Our dependence on ǫ is optimal, and we prove a new lower bound showing that no algorithm can have sublinear dependence on τ ." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1501.01715
1312.1414
Lower bound
In the case where the second term is larger, we use Theorem 6.1 of #REFR .
[ "We choose one of two Hamiltonians depending on whether the first or second term in (5) For Theorem 2, we are also given a required value for H max .", "The Hamiltonian used in Lemma 9 has H max = Θ(1).", "By multiplying that Hamiltonian by a scaling factor, we obtain a Hamiltonian with the required value of H max .", "Dividing the time used in Lemma 9 by the same factor, the simulation requires time Ω(τ ) for constant precision.", "In Theorem 2 we require precision ǫ, which can only increase the complexity." ]
[ "There it is shown that performing a simulation of a 2-sparse Hamiltonian with precision ǫ and H max t = O(1) requires", "queries. Because d ≥ 2, this Hamiltonian is also d-sparse.", "As using larger values of H max t can only increase the complexity, we also have this lower bound in the more general case.", "Therefore, regardless of whether the first or second term in (5) is larger, this expression provides a lower bound on the complexity.", "It is also possible to combine our lower bound with the lower bound of #OTHEREFR to obtain a combined lower bound in terms of d, t, and ǫ, that is stronger than Theorem 2." ]
[ "Theorem" ]
method
{ "title": "Hamiltonian simulation with nearly optimal dependence on all parameters", "abstract": "We present an algorithm for sparse Hamiltonian simulation that has optimal dependence on all parameters of interest (up to log factors). Previous algorithms had optimal or near-optimal scaling in some parameters at the cost of poor scaling in others. Hamiltonian simulation via a quantum walk has optimal dependence on the sparsity d at the expense of poor scaling in the allowed error ǫ. In contrast, an approach based on fractional-query simulation provides optimal scaling in ǫ at the expense of poor scaling in d. Here we combine the two approaches, achieving the best features of both. By implementing a linear combination of quantum walk steps with coefficients given by Bessel functions, our algorithm achieves near-linear scaling in τ := d H max t and sublogarithmic scaling in 1/ǫ. Our dependence on ǫ is optimal, and we prove a new lower bound showing that no algorithm can have sublinear dependence on τ ." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1503.01755
1312.1414
Introduction
Recently the error complexity of the evolution has been reduced from power-law to logarithmic in the inverse error, using the strategy of discrete time simulation of multi-query problems #REFR .
[ "Formalisation of this advantage, in terms of computational complexity, has gradually improved over the years. Real physical systems are governed by local Hamiltonians, i.e.", "where each component interacts only with a limited number of its neighbours independent of the overall size of the system.", "Lloyd constructed a quantum evolution algorithm for such systems #OTHEREFR , based on the discrete time Lie-Trotter decomposition of the unitary evolution operator, and showed that it is efficient in the required time and space resources.", "Aharonov and TaShma rephrased the problem as quantum state generation, treating the terms in the Hamiltonian as black box oracles, and extended the result to sparse Hamiltonians in graph theoretical language #OTHEREFR .", "The time complexity of the algorithm was then improved #OTHEREFR , using Suzuki's higher order generalisations of the Lie-Trotter formula #OTHEREFR and clever decompositions of the Hamiltonian." ]
[ "This is a significant jump in computational complexity improvement that needs elaboration and understanding.", "In this article, we explicitly construct efficient evolution algorithms for Hamiltonians that are linear combinations of two projection operators, to expose the physical reasons behind the improvement. Our methods differ from the reductionist approach of Ref. #OTHEREFR .", "They clearly demonstrate how the improvement works in practice, as well as allow straightforward extension to arbitrary Hamiltonians [8] .", "Computational complexity of a problem is a measure of the resources needed to solve it.", "Conventionally, the computational complexity of a decision problem is specified in terms of the size of its input, noting that the size of its output is only one bit." ]
[ "discrete time simulation", "error complexity" ]
method
{ "title": "Optimisation of Quantum Hamiltonian Evolution: From Two Projection Operators to Local Hamiltonians", "abstract": "Given a quantum Hamiltonian and its evolution time, the corresponding unitary evolution operator can be constructed in many different ways, corresponding to different trajectories between the desired end-points. A choice among these trajectories can then be made to obtain the best computational complexity and control over errors. It is shown how a construction based on Grover's algorithm scales linearly in time and logarithmically in error bound, and is clearly superior to the scheme based on straightforward application of the Lie-Trotter formula. The strategy is then extended to simulation of any Hamiltonian that is a linear combination of two projection operators." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
1906.07115
1312.1414
Simulation algorithms with L^1-norm scaling
It was mentioned in #REFR that this approach can simulate time-dependent Hamiltonians with L^∞-norm scaling, but we find that its query complexity scales with the L^1 norm. We give this improved analysis in the next section.
[ "We say that an algorithm has L 1 -norm scaling if, for any continuously differentiable vector-valued function Λ(t) with Λ l (τ ) ≥ α l (τ ), the algorithm has query and gate complexity that scale with Λ ∞,1 = t 0 dτ max l Λ l (τ ) up to logarithmic factors.", "For better readability, we express the complexity of simulation algorithms in terms of the norm of the original Hamiltonian, such as H max,1 and α ∞,1 , instead of the upper bounds Λ max 1 and Λ ∞,1 .", "We also suppress logarithmic factors using the O notation when the complexity expression becomes too complicated.", "Table 1 compares the results of this paper with previous results on simulating time-dependent Hamiltonians.", "Our goal is to develop simulation algorithms that scale with the L 1 -norm with respect to the time variable τ , for both query complexity and gate complexity. We start by reexamining the fractional-query approach." ]
[]
[ "time-dependent Hamiltonians" ]
background
{ "title": "Time-dependent Hamiltonian simulation with $L^1$-norm scaling", "abstract": "The difficulty of simulating quantum dynamics depends on the norm of the Hamiltonian. When the Hamiltonian varies with time, the simulation complexity should only depend on this quantity instantaneously. We develop quantum simulation algorithms that exploit this intuition. For the case of sparse Hamiltonian simulation, the gate complexity scales with the L 1 norm t 0 dτ H(τ ) max , whereas the best previous results scale with t max τ ∈[0,t] H(τ ) max . We also show analogous results for Hamiltonians that are linear combinations of unitaries. Our approaches thus provide an improvement over previous simulation algorithms that can be substantial when the Hamiltonian varies significantly. We introduce two new techniques: a classical sampler of time-dependent Hamiltonians and a rescaling principle for the Schrödinger equation. The rescaled Dyson-series algorithm is nearly optimal with respect to all parameters of interest, whereas the sampling-based approach is easier to realize for near-term simulation. By leveraging the L 1 -norm information, we obtain polynomial speedups for semi-classical simulations of scattering processes in quantum chemistry." }
{ "title": "Exponential improvement in precision for simulating sparse Hamiltonians", "abstract": "We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a d-sparse Hamiltonian H acting on n qubits can be simulated for time t with precision ǫ using O τ log(τ /ǫ) log log(τ /ǫ) queries and O τ log log(τ /ǫ) n additional 2-qubit gates, where τ = d 2 H max t. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous-and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of \"oblivious amplitude amplification\" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error." }
2003.13159
2002.05406
Similarities
A recent paper #REFR describes an implementation of a neural guidance system for saturation-style automated theorem provers.
[ "An important part of CSR is reasoning by analogues and similarities.", "Similaritybased reasoning in FOL has been researched before, see e.g. #OTHEREFR and #OTHEREFR ." ]
[ "In the simplest cases we incorporate similarity-based reasoning by treating similar predicates/functions/constants as equivalent or equal with a measure of confidence approximated from the given similarity measure.", "For example, given that kings are similar to queens with a measure s we create an implicit axiom king(X) <=> queen(X), the confidence of which is calculated by a heuristic algoritm from s.", "This allows the reasoner to produce derivations carrying the knowledge associated with one kind of object over to similar kinds, but with diminished confidence.", "In case we additionally know that king(X) => male(X) and queen(X) => female(X) with confidence 1, we apply algorithms similar to the default reasoning described in the previous section. We will skip the details." ]
[ "provers", "saturation-style automated theorem" ]
background
{ "title": "Extending Automated Deduction for Commonsense Reasoning", "abstract": "Commonsense reasoning has long been considered as one of the holy grails of artificial intelligence. Most of the recent progress in the field has been achieved by novel machine learning algorithms for natural language processing. However, without incorporating logical reasoning, these algorithms remain arguably shallow. With some notable exceptions, developers of practical automated logic-based reasoners have mostly avoided focusing on the problem. The paper argues that the methods and algorithms used by existing automated reasoners for classical first-order logic can be extended towards commonsense reasoning. Instead of devising new specialized logics we propose a framework of extensions to the mainstream resolution-based search methods to make these capable of performing search tasks for practical commonsense reasoning with reasonable efficiency. The proposed extensions mostly rely on operating on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies, default rules, taxonomies, topics, relevance, confidence and similarity measures. We claim that machine learning is best suited for the construction of commonsense knowledge bases while the extended logic-based methods would be well-suited for actually answering queries from these knowledge bases." }
{ "title": "ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description)", "abstract": "We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers that does not depend on consistent symbol names across problems. For the gradientboosting guidance, we manually create abstracted features by considering arity-based encodings of formulas. For the neural guidance, we use symbol-independent graph neural networks and their embedding of the terms and clauses. The two methods are efficiently implemented in the E prover and its ENIGMA learning-guided framework and evaluated on the MPTP large-theory benchmark. Both methods are shown to achieve comparable real-time performance to state-of-the-art symbol-based methods. In this work, we develop two symbol-independent (anonymous) inference guiding methods for saturation-style automated theorem provers (ATPs) such as E [25] and Vampire [20] . Both methods are based on learning clause classifiers from previous proofs within the ENIGMA framework [13, 14, 5] implemented in E. By symbol-independence we mean that no information about the symbol names is used by the learned guidance. In particular, if all symbols in a particular ATP problem are consistently renamed to new symbols, the learned guidance will result in the same proof search and the same proof modulo the renaming. Symbol-independent guidance is an important challenge for learning-guided ATP, addressed already in Schulz's early work on learning guidance in E [23]. With ATPs being increasingly used and trained on large ITP libraries [3, 2, 16, 18, 6, 8] ," }
1804.09089
1605.05850
Boundaries and Procedures
In case they need to be updated, DevOps strategies like those proposed in #REFR could be used.
[ "• Performance and fault data, as specified in the Monitored Info attribute (see Fig. 3 ).", "This includes periodical resource-related performance metrics [6] #OTHEREFR , and/or asynchronous alarms (performance metric-based threshold crossing, and VNF indicator value changes) #OTHEREFR .", "• Runtime information of the NS instance and each constituent VNF instance, accessible from the NS Info and each VNF Info.", "• The entire set of NS-ILs and VNF-ILs available for use in the NSD and VNFD(s).", "These levels, built by NSD/VNFD developers at design time, cannot be changed at operation time." ]
[ "• Resource capacity information from each accessible VIM.", "This information can be found in the data repositories (see Fig. 2 ).", "The DRPA applies the Auto Scaling Rules to the incoming performance/fault data. If they are not satisfied, NS scaling is required.", "In that case, the DRPA determines the NS-ILs that are candidate to satisfy the performance/fault criteria specified in the Auto Scaling Rules.", "Over these candidates, the DRPA applies the pertinent optimization criteria (e.g., minimize resource costs, energy consumption) and a set of constraints (e.g., available resource capacity, placement constraints) to output:" ]
[ "DevOps strategies" ]
method
{ "title": "Automated Network Service Scaling in NFV: Concepts, Mechanisms and Scaling Workflow", "abstract": "Abstract -Next-generation systems are anticipated to be digital platforms supporting innovative services with rapidly changing traffic patterns. To cope with this dynamicity in a cost-efficient manner, operators need advanced service management capabilities such as those provided by NFV. NFV enables operators to scale network services with higher granularity and agility than today. For this end, automation is key. In search of this automation, the European Telecommunications Standards Institute (ETSI) has defined a reference NFV framework that make use of model-driven templates called Network Service Descriptors (NSDs) to operate network services through their lifecycle. For the scaling operation, an NSD defines a discrete set of instantiation levels among which a network service instance can be resized throughout its lifecycle. Thus, the design of these levels is key for ensuring an effective scaling. In this article, we provide an overview of the automation of the network service scaling operation in NFV, addressing the options and boundaries introduced by ETSI normative specifications. We start by providing a description of the NSD structure, focusing on how instantiation levels are constructed. For illustrative purposes, we propose an NSD for a representative NS. This NSD includes different instantiation levels that enable different ways to automatically scale this NS. Then, we show the different scaling procedures the NFV framework has available, and how it may automate their triggering. Finally, we propose an ETSI-compliant workflow to describe in detail a representative scaling procedure. This workflow clarifies the interactions and information exchanges between the functional blocks in the NFV framework when performing the scaling operation." }
{ "title": "SONATA: Service Programming and Orchestration for Virtualized Software Networks", "abstract": "Abstract-In conventional large-scale networks, creation and management of network services are costly and complex tasks that often consume a lot of resources, including time and manpower. Network softwarization and network function virtualization have been introduced to tackle these problems. They replace the hardware-based network service components and network control mechanisms with software components running on general-purpose hardware, aiming at decreasing costs and complexity of implementing new services, maintaining the implemented services, and managing available resources in service provisioning platforms and underlying infrastructures. To experience the full potential of these approaches, innovative development support tools and service provisioning environments are needed. To answer these needs, we introduce the SONATA architecture, a service programming, orchestration, and management framework. We present a development toolchain for virtualized network services, fully integrated with a service platform and orchestration system. We motivate the modular and flexible architecture of our system and discuss its main components and features, such as function-and service-specific managers that allow fine-grained service management, slicing support to facilitate multi-tenancy, recursiveness for improved scalability, and full-featured DevOps support." }
2002.11059
1605.05850
End-to-end service provisioning
SONATA #REFR brings the DevOps concept into NFV by providing a service development toolchain integrated with a service platform and orchestration system.
[ "The management system performs MANO operations across different service domains. GNF #OTHEREFR brings NFV to the network edge.", "It exposes a graphical user interface to specify service intent and display system events, and uses a manager to perform MANO operations.", "An agent is embedded into each edge device to manage the containerized VNFs.", "Considering the resource constraint of edge devices, it runs VNFs in lightweight Linux containers instead of VMs.", "NetFATE [20] also aims at deploying network service on the edge and its architecture is similar to GNF." ]
[ "The toolchain comprises of a service programming abstract with support tools to allow developers to implement, monitor, and optimize VNFs or SFCs.", "The service platform encompasses a customizable MANO framework to deploy and manage network services. It also supports platform recursion and network slicing.", "Eden #OTHEREFR is another platform purposed for provisioning network functions at end-hosts in a single administrative domain.", "It is composed of a controller, stages, and enclaves at the end-hosts.", "The controller provides centralized VNF coordination based on its global network view." ]
[ "service platform", "service development toolchain" ]
method
{ "title": "Design of NFV Platforms: A Survey.", "abstract": "Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing the purpose-built, expensive, proprietary network equipments with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm with much lower capital expenditure (CapEx) and operational expenditure (OpEx). Nonetheless, just like any complex system, NFV platforms commonly consist of abounding software and hardware components, and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of convoluted alternatives makes it extremely arduous for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focused on NFV platforms or attempted to explore the design space. In this paper, we present a comprehensive survey on NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on principal purpose of design. Then we thoroughly explore the design space and elaborate the implementation choices each platform opts for. We believe that our study gives a detailed guideline for network operators or service providers to choose the most appropriate NFV platform based on their respective requirements. switch router FW IDS proxy NAT NIC COTS hardware COTS hardware COTS hardware Virtualization layer NFV Manager Figure 1: Traditional vs. NFV paradigm Worse still, service maintenance usually involves constant repetition of the same process. Furthermore, because of the inherent inflexibility, it is not trivial for hardware middleboxes to elastically scale in and out based on the shifting demand or other system dynamics. Consequently, network operators usually resort to peak-load provisioning, which in turn leads to ineffective resource utilization and extravagant energy consumption. To improve service provisioning and get rid of network ossification, telecommunication operators began to pursue new solutions that can guarantee both cost-effectiveness and flexibility. The advent of Software Defined Networking (SDN) [4] and Network Function Virtualization (NFV) [5] provides alternative approaches for network management and service provisioning. SDN decouples the control plane from data plane and leverages a logically centralized controller to configure the programmable switches based on a global view, while NFV replaces specialized middleboxes with software Virtual Network Functions (VNFs) consolidated on Commodity Off-The-Shelf (COTS) hardware. The key to their success lies in separating the evolution timeline of software network functions and specialized hardware, completely unleashing the potential of the former. An illustrative example contrasting NFV paradigm with traditional network is shown in Fig." }
{ "title": "SONATA: Service Programming and Orchestration for Virtualized Software Networks", "abstract": "Abstract-In conventional large-scale networks, creation and management of network services are costly and complex tasks that often consume a lot of resources, including time and manpower. Network softwarization and network function virtualization have been introduced to tackle these problems. They replace the hardware-based network service components and network control mechanisms with software components running on general-purpose hardware, aiming at decreasing costs and complexity of implementing new services, maintaining the implemented services, and managing available resources in service provisioning platforms and underlying infrastructures. To experience the full potential of these approaches, innovative development support tools and service provisioning environments are needed. To answer these needs, we introduce the SONATA architecture, a service programming, orchestration, and management framework. We present a development toolchain for virtualized network services, fully integrated with a service platform and orchestration system. We motivate the modular and flexible architecture of our system and discuss its main components and features, such as function-and service-specific managers that allow fine-grained service management, slicing support to facilitate multi-tenancy, recursiveness for improved scalability, and full-featured DevOps support." }
1803.06596
1605.05850
Security and Resiliency
Softwarized networks modify the way services are deployed, replacing hardware-based network service components with software-based solutions #REFR .
[]
[ "Through technologies such as SDN and NFV, such network can provide automation, programmability, and flexibility.", "Generally, it depends on centralized control, which leads to risks to security and resiliency [7] .", "Thus, new protection capabilities need to be put in place, including advanced management capabilities such as authentication, access control, and fault management.", "Security and resiliency must be considered both in design and operation stages of network services.", "Typically, the services are deployed first, prior to any efforts regarding security development." ]
[ "hardware-based network service" ]
background
{ "title": "Network Service Orchestration: A Survey", "abstract": "Business models of network service providers are undergoing an evolving transformation fueled by vertical customer demands and technological advances such as 5G, Software Defined Networking (SDN), and Network Function Virtualization (NFV). Emerging scenarios call for agile network services consuming network, storage, and compute resources across heterogeneous infrastructures and administrative domains. Coordinating resource control and service creation across interconnected domains and diverse technologies becomes a grand challenge. Research and development efforts are being devoted to enabling orchestration processes to automate, coordinate, and manage the deployment and operation of network services. In this survey, we delve into the topic of Network Service Orchestration (NSO) by reviewing the historical background, relevant research projects, enabling technologies, and standardization activities. We define key concepts and propose a taxonomy of NSO approaches and solutions to pave the way to a common understanding of the various ongoing efforts towards the realization of diverse NSO application scenarios. Based on the analysis of the state of affairs, we present a series of open challenges and research opportunities, altogether contributing to a timely and comprehensive survey on the vibrant and strategic topic of network service orchestration." }
{ "title": "SONATA: Service Programming and Orchestration for Virtualized Software Networks", "abstract": "Abstract-In conventional large-scale networks, creation and management of network services are costly and complex tasks that often consume a lot of resources, including time and manpower. Network softwarization and network function virtualization have been introduced to tackle these problems. They replace the hardware-based network service components and network control mechanisms with software components running on general-purpose hardware, aiming at decreasing costs and complexity of implementing new services, maintaining the implemented services, and managing available resources in service provisioning platforms and underlying infrastructures. To experience the full potential of these approaches, innovative development support tools and service provisioning environments are needed. To answer these needs, we introduce the SONATA architecture, a service programming, orchestration, and management framework. We present a development toolchain for virtualized network services, fully integrated with a service platform and orchestration system. We motivate the modular and flexible architecture of our system and discuss its main components and features, such as function-and service-specific managers that allow fine-grained service management, slicing support to facilitate multi-tenancy, recursiveness for improved scalability, and full-featured DevOps support." }
1807.05587
1606.06475
Application to the Unwinding Series
A similar phenomenon was already observed to occur for functions whose power series expansion has exponentially decaying coefficients in #REFR .
[ "Then, G is a random polynomial whose roots are also distributed according to µ as n → ∞.", "More precisely, let p n be a random polynomial created in the way described at above for some radial probability measure µ that is compactly supported outside a neighborhood of the unit disk.", "Then Theorem 1 implies that the roots of p n (z) − p n (0) are again distributed according to the measure µ as n → ∞.", "The proof of Theorem 1 also implies that with high probability all solutions of p n (z) = p n (0) except the trivial root in the origin are outside the unit disk since they are exponentially close to the n roots with high likelihood.", "We observe that in this case, when n is sufficiently large, the Blascke unwinding series reduces to a simple power series expansion." ]
[ "It seems likely that polynomials with roots outside the unit disk exhibit exponentially decaying coefficients at least in the generic case -simple power series expansion then naturally leads to exponentially convergence in the unit disk." ]
[ "whose power series", "functions" ]
result
{ "title": "On Zeroes of Random Polynomials and Applications to Unwinding", "abstract": "Abstract. Let µ be a probability measure in C with a continuous and compactly supported distribution function, let z 1 , . . . , zn be independent random variables, z i ∼ µ, and consider the random polynomial We determine the asymptotic distribution of {z ∈ C : pn(z) = pn(0)}. In particular, if µ is radial around the origin, then those solutions are also distributed according to µ as n → ∞. Generally, the distribution of the solutions will reproduce parts of µ and condense another part on curves. We use these insights to study the behavior of the Blaschke unwinding series on random data." }
{ "title": "Carrier frequencies, holomorphy and unwinding", "abstract": "Abstract. We prove that functions of intrinsic-mode type (a classical models for signals) behave essentially like holomorphic functions: adding a pure carrier frequency e int ensures that the anti-holomorphic part is much smaller than the holomorphic part This enables us to use techniques from complex analysis, in particular the unwinding series. We study its stability and convergence properties and show that the unwinding series can stabilize and show that the unwinding series can provide a high resolution time-frequency representation, which is robust to noise." }
1905.06487
1507.04113
Spectra of the Non-backtracking Operators
In #REFR a spectral algorithm was proposed for solving community detection problems on sparse random hypergraphs, and it uses the eigenvectors of the non-backtracking operator defined above.
[ "Following the definition in #OTHEREFR , for a hypergraph H = (V, E), its non-backtracking operator B is a square matrix indexed by oriented hyperedges E = {(i, e) : i ∈ V, e ∈ E, i ∈ e} with entries given by B (i,e),(j,f ) = 1 if j ∈ e \\ {i}, f = e, 0 otherwise, for any oriented hyperedges (i, e), (j, f ).", "This is a generalization of the graph non-backtracking operators to hypergraphs." ]
[ "To obtain theoretical guarantees for this spectral algorithm, we need to prove a spectral gap for the non-backtracking operator.", "To the best of our knowledge, this operator has not been rigorously analyzed for any random hypergraph models.", "In the first step, we study the spectrum of the non-backtracking operator for the random regular hypergraphs.", "From the bijection in Lemma 4.2, it is important to find its connection to the non-backtracking operator of the corresponding bipartite biregular graph.", "Consider a bipartite graph G = (V (G), E(G)) with V (G) = V 1 (G)∪V 2 (G)." ]
[ "sparse random hypergraph" ]
method
{ "title": "Spectra of random regular hypergraphs", "abstract": "Abstract. In this paper we study the spectra of regular hypergraphs following the definitions from [15] . Our main result is an analog of Alon's conjecture [1] for the spectral gap of the random regular hypergraphs. We then relate the second eigenvalues to both its expansion property and the mixing rate of the non-backtracking random walk on regular hypergraphs. We also prove spectral gap for the non-backtracking operator associated to a random regular hypergraph introduced in [3] . Finally we prove the convergence of the empirical spectral distribution (ESD) for random regular hypergraphs in different regimes. Under certain conditions, we can show a local law for the ESD." }
{ "title": "Spectral Detection on Sparse Hypergraphs", "abstract": "Abstract-We consider the problem of the assignment of nodes into communities from a set of hyperedges, where every hyperedge is a noisy observation of the community assignment of the adjacent nodes. We focus in particular on the sparse regime where the number of edges is of the same order as the number of vertices. We propose a spectral method based on a generalization of the non-backtracking Hashimoto matrix into hypergraphs. We analyze its performance on a planted generative model and compare it with other spectral methods and with Bayesian belief propagation (which was conjectured to be asymptotically optimal for this model). We conclude that the proposed spectral method detects communities whenever belief propagation does, while having the important advantages to be simpler, entirely nonparametric, and to be able to learn the rule according to which the hyperedges were generated without prior information." }
1902.11280
1811.10561
Questions
A minimum of 20 000 and a maximum of 4 000 000 questions have been created. For more details, please refer to #REFR .
[ "In the data set many variants of questions are included for each question type, depending on the kind of relations the question implies.", "The number of possible answers is also reported in the last column.", "Each possible answer is modelled by one output node in the neural network.", "Note that for absolute and relative positions, the same nodes are used with different meanings: in the first case we enumerate all sounds, in the second case, only the sounds played by a specific instrument.", "questions during the generation phase." ]
[]
[ "questions" ]
method
{ "title": "From Visual to Acoustic Question Answering", "abstract": "We introduce the new task of Acoustic Question Answering (AQA) to promote research in acoustic reasoning. The AQA task consists of analyzing an acoustic scene composed by a combination of elementary sounds and answering questions that relate the position and properties of these sounds. The kind of relational questions asked, require that the models perform non-trivial reasoning in order to answer correctly. Although similar problems have been extensively studied in the domain of visual reasoning, we are not aware of any previous studies addressing the problem in the acoustic domain. We propose a method for generating the acoustic scenes from elementary sounds and a number of relevant questions for each scene using templates. We also present preliminary results obtained with two models (FiLM and MAC) that have been shown to work for visual reasoning." }
{ "title": "CLEAR: A Dataset for Compositional Language and Elementary Acoustic Reasoning", "abstract": "We introduce the task of acoustic question answering (AQA) in the area of acoustic reasoning. In this task an agent learns to answer questions on the basis of acoustic context. In order to promote research in this area, we propose a data generation paradigm adapted from CLEVR [11] . We generate acoustic scenes by leveraging a bank of elementary sounds. We also provide a number of functional programs that can be used to compose questions and answers that exploit the relationships between the attributes of the elementary sounds in each scene. We provide AQA datasets of various sizes as well as the data generation code. As a preliminary experiment to validate our data, we report the accuracy of current state of the art visual question answering models when they are applied to the AQA task without modifications. Although there is a plethora of question answering tasks based on text, image or video data, to our knowledge, we are the first to propose answering questions directly on audio streams. We hope this contribution will facilitate the development of research in the area." }
1203.0129
1111.1475
Remark 2.3 (Equivalence With Other Models):
In #REFR it is shown that the controllability of (3), expressed by a Lie algebra rank condition, is equivalent to the controllability of system (1).
[ "Formally, for quantum mechanical systems which are closed (i.e., not interacting with the environment) and finite dimensional, one considers the Schrödinger equation where is the quantum state and the Hamiltonian matrix is Hermitian and depends on a control .", "Continuous time quantum walks are quantum systems whose dynamics is defined on a graph .", "Specifically, the Hamiltonian has the form where is the adjacency matrix or the Laplacian of .", "The resulting dynamics for quantum walks on a grid (or lattice) graph with being the grid Laplacian is", "The controllability problem consists of achieving a desired (assigned) probability distribution by controlling only few nodes." ]
[ "Our analysis is strictly related to the line pursued in #OTHEREFR of finding more easily verifiable graph theoretic tests." ]
[ "controllability" ]
background
{ "title": "Controllability and Observability of Grid Graphs via Reduction and Symmetries", "abstract": "Abstract-In this paper, we investigate the controllability and observability properties of a family of linear dynamical systems, whose structure is induced by the Laplacian of a grid graph. This analysis is motivated by several applications in network control and estimation, quantum computation and discretization of partial differential equations. Specifically, we characterize the structure of the grid eigenvectors by means of suitable decompositions of the graph. For each eigenvalue, based on its multiplicity and on suitable symmetries of the corresponding eigenvectors, we provide necessary and sufficient conditions to characterize all and only the nodes from which the induced dynamical system is controllable (observable). We discuss the proposed criteria and show, through suitable examples, how such criteria reduce the complexity of the controllability (respectively, observability) analysis of the grid. Index Terms-Complex networks, controllability and observability, cooperative control, lattice, linear systems, network analysis and control." }
{ "title": "Zero forcing, linear and quantum controllability for systems evolving on networks", "abstract": "We study the dynamics of systems on networks from a linear algebraic perspective. The control theoretic concept of controllability describes the set of states that can be reached for these systems. Under appropriate conditions, there is a connection between the quantum (Lie theoretic) property of controllability and the linear systems (Kalman) controllability condition. We investigate how the graph theoretic concept of a zero forcing set impacts the controllability property. In particular, we prove that if a set of vertices is a zero forcing set, the associated dynamical system is controllable. The results open up the possibility of further exploiting the analogy between networks, linear control systems theory, and quantum systems Lie algebraic theory. This study is motivated by several quantum systems currently under study, including continuous quantum walks modeling transport phenomena. Additionally, it proposes zero forcing as a new notion in the analysis of complex networks." }
1708.06448
1501.02627
Fast numeric max-convolution
Moreover, it is possible to either directly compute or approximate (depending on the p desired) a continuum between sum-product inference (equivalent to p = 1) and max-product inference (equivalent to p = ∞) #REFR , which we denote here as numeric p-convolution.
[ "Likewise, another method for computing max-convolution of two vectors based on sorting the vector arguments and visiting them in descending order #OTHEREFR has a runtime that depends somewhat cryptically on the input data, and thus is also not reliably ∈ o(n 2 ); however, that sorting-based approach has been used quite successfully in practice for work calculating the most intense isotope peaks in mass spectrometry [?] , which is quite interesting given the additive nature of isotope problems (which will be exploited here with max-convolution rather than sorting) and the fact that Lącki et al. are not explicitly using max-convolution.", "This suggests the possibility of more unified approaches to additive problems, which would connect max-convolution on one hand and a priority queue of the top values in the cartesian product.", "The lack of inverse operations in max-product space can approached by using rings that behave similar to semirings.", "Specifically, the L p ring space defines x ⊕ y = (x p + y p ) 1/p , and when p 1, z = x ⊕ y ≈ max(x, y), but with the option of an inverse operation: given x and z, it is possible to solve for y (this would not be possible in a genuine semiring).", "By using L p ring spaces, it is possible to numerically approximate max-convolution." ]
[ "This continuum is useful in its own right, and p can be thought of as a hyperparameter.", "p = 1 is democratic and places a high value on popularity, p = ∞ is more like a dictatorship where only the strongest solution is weighed, and finite p > 1 resembles a republic, where the results reflect a compromise between popularity and quality of the solutions.", "This numeric p-convolution approach generalizes to convolution on tensors, whereas the approach in Bremner et al. is as of now only applicable to 1D vectors.", "Underflow concerns sometimes limit the choice of p for which p-convolution can be stably computed, particularly when many values in the input arrays are close to zero; therefore, a collection of a small or constant number of L p ring spaces can be used.", "Rather than using a single L p space (e.g., the one corresponding to the largest p that is numerically stable for a result of interest), the shape of the collection of L p spaces can be used to more accurately approximate the result #OTHEREFR ." ]
[ "max-product inference" ]
background
{ "title": "The p-convolution forest: a method for solving graphical models with additive probabilistic equations", "abstract": "Convolution trees, loopy belief propagation, and fast numerical p-convolution are combined for the first time to efficiently solve networks with several additive constraints between random variables. An implementation of this \"convolution forest\" approach is constructed from scratch, including an improved trimmed convolution tree algorithm and engineering details that permit fast inference in practice, and improve the ability of scientists to prototype models with additive relationships between discrete variables. The utility of this approach is demonstrated using several examples: these include illustrations on special cases of some classic NP-complete problems (subset sum and knapsack), identification of GC-rich genomic regions with a large hidden Markov model, inference of molecular composition from summary statistics of the intact molecule, and estimation of elemental abundance in the presence of overlapping isotope peaks." }
{ "title": "A fast numerical method for max-convolution and the application to efficient max-product inference in Bayesian networks", "abstract": "Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as \"infimal convolution\", \"min-convolution\", or \"convolution on the tropical semiring\"), for which no O(k log(k)) method is currently known. Here I present a O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical maxconvolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk) log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical maxconvolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk 2 to nk log(k), and has potential application to the all-pairs shortest paths problem." }
2002.04648
1908.01119
I. INTRODUCTION
In #REFR , a metric called "Value of Information" (VoI) is introduced to facilitate packet scheduling for low-error Kalman filter based estimation.
[ "It refers to the duration since the destination became desynchronized with the source.", "In the same spirit, another metric called \"Age of Incorrect Information\" (AoII) is proposed in #OTHEREFR .", "AoII takes both the time that the monitor is unaware of the correct status of the system and the difference between the current estimate at the monitor and the actual state of system into the definition. With particular penalty functions, AoII reduces to AoS.", "Reference #OTHEREFR proposes to use the mutual information between the real-time source value and the delivered samples at the receiver to quantify the freshness of the information contained in the delivered samples.", "It shows that for a time-homogeneous Markov chain, the mutual information can be expressed as a non-negative and non-increasing function of the age." ]
[ "VoI depends on the age of the packet, as well as the mutual information between the packet content and the system status, which is equivalent to the variance of the noise associated with the measurement.", "Generally speaking, how to define a universal information freshness metric that accounts for dynamically evolving system states remains open.", "In this paper, we propose an information theoretic measure of information freshness by taking the dynamics of the monitored system into account, where we introduce a two-dimensional discrete-time Markov chain to model the underlying state changes. Our main contributions are threefold:", "First, the introduced information theoretic measure generalizes the definition of AoI.", "It takes the age of updates and the dynamics of the monitored system into the definition, and suggests a unified approach to define proper age penalty functions #OTHEREFR for various dynamic systems." ]
[ "low-error Kalman filter" ]
method
{ "title": "Information Freshness for Timely Detection of Status Changes", "abstract": "In this paper, we aim to establish the connection between Age of Information (AoI) in network theory, information uncertainty in information theory, and detection delay in time series analysis. We consider a dynamic system whose state changes at discrete time points, and a state change won't be detected until an update generated after the change point is delivered to the destination for the first time. We introduce an information theoretic metric to measure the information freshness at the destination, and name it as generalized Age of Information (GAoI). We show that under any state-independent online updating policy, if the underlying state of the system evolves according to a stationary Markov chain, the GAoI is proportional to the AoI. Besides, the accumulative GAoI and AoI are proportional to the expected accumulative detection delay of all changes points over a period of time. Thus, any (G)AoI-optimal state-independent updating policy equivalently minimizes the corresponding expected change point detection delay, which validates the fundamental role of (G)AoI in real-time status monitoring. Besides, we also investigate a Bayesian change point detection scenario where the underlying state evolution is not stationary. Although AoI is no longer related to detection delay explicitly, we show that the accumulative GAoI is still an affine function of the expected detection delay, which indicates the versatility of GAoI in capturing information freshness in dynamic systems. Index Terms-Age of Information, information freshness, change point detection." }
{ "title": "Optimal Information Updating based on Value of Information", "abstract": "We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator using a constant \"Kalman\" gain to track the state of a Gauss Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. Additionally, they also contain the information about the \"quality\" of the sensor/source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both while prioritizing packet transmissions. It is shown that a simple index rule that calculates the value of information (VoI) of each packet, and then schedules the packet with the largest current value of VoI, is optimal. The VoI of a packet decreases with its age, and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize the estimator performance. R. Singh is with the" }
1901.10911
1712.07867
Introduction
This paper is a sequel of #REFR where we have proved that the smallest number of vertices of a snark (a connected cubic graph whose edges cannot be properly coloured with three colours) which has cyclic connectivity 4 and oddness at least 4 is 44.
[]
[ "The purpose of the present paper is to show that there are precisely 31 such snarks, all of them having oddness exactly 4, resistance 3, and girth 5.", "Together with #OTHEREFR , this paper provides a partial answer to the following question posed in #OTHEREFR Problem 2] , leaving open the existence of cyclically 5-edge-connected snarks of oddness at least 4 on fewer than 44 vertices:", "Problem #OTHEREFR .", "Which is the smallest snark (with cyclic connectivity 4 and girth 5) of oddness strictly greater than 2?", "The oddness of a bridgeless cubic graph G is the smallest number of odd circuits in a 2-factor of G, and the resistance of G is the smallest number of vertices (or edges) of G whose removal yields a 3-edge-colourable graph." ]
[ "cubic graph", "snark -a connected" ]
background
{ "title": "The smallest nontrivial snarks of oddness 4", "abstract": "The oddness of a cubic graph is the smallest number of odd circuits in a 2-factor of the graph. This invariant is widely considered to be one of the most important measures of uncolourability of cubic graphs and as such has been repeatedly reoccurring in numerous investigations of problems and conjectures surrounding snarks (connected cubic graphs admitting no proper 3-edge-colouring). In [Ars Math. Contemp. 16 (2019), 277-298] we have proved that the smallest number of vertices of a snark with cyclic connectivity 4 and oddness 4 is 44. We now show that there * Supported by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO). † Partially supported by VEGA 1/0876/16, VEGA 1/0813/18, and by APVV-15-0220." }
{ "title": "Smallest snarks with oddness 4 and cyclic connectivity 4 have order 44", "abstract": "The family of snarks -connected bridgeless cubic graphs that cannot be 3edge-coloured -is well-known as a potential source of counterexamples to several important and long-standing conjectures in graph theory. These include the cycle double cover conjecture, Tutte's 5-flow conjecture, Fulkerson's conjecture, and several others. One way of approaching these conjectures is through the study of structural properties of snarks and construction of small examples with given properties. In this paper we deal with the problem of determining the smallest order of a nontrivial snark (that is, one which is cyclically 4-edge-connected and has girth at least 5) of oddness at least 4. Using a combination of structural analysis with extensive computations we prove that the smallest order of a snark with oddness at least 4 and cyclic connectivity 4 is 44. Formerly it was known that such a snark must have at least 38 vertices [J. Combin. Theory Ser. B 103 (2013), 468-488] and one such snark on 44 vertices was constructed by Lukot'ka et al. [Electron. J. Combin. 22 (2015), #P1.51]. The proof requires determining all cyclically 4-edge-connected snarks on 36 vertices, which extends the previously compiled list of all such snarks up to 34 vertices [J. Combin. Theory Ser. B, loc. cit.]. As a by-product, we use this new list to test the validity of several conjectures where snarks can be smallest counterexamples." }
1901.10911
1712.07867
Completeness of M
This result is a consequence of the following stronger and more detailed result from #REFR , which will be needed for the proof of the main result of this paper.
[ "In this section we prove that the set M, constructed and analysed in Section 3, is the complete set of pairwise nonisomorphic snarks with cyclic connectivity 4, oddness at least 4, and minimum order.", "Our point of departure is the following theorem proved in #OTHEREFR ." ]
[ "Theorem 5.", "Let G be a snark with oddness at least 4, cyclic connectivity 4, and minimum number of vertices.", "Let S be a cycle-separating 4-edge-cut in G whose removal leaves components G 1 and G 2 .", "Then, up to permutation of the index set {1, 2}, exactly one of the following occurs.", "(i) Both G 1 and G 2 are uncolourable, in which case each of them can be extended to a cyclically 4-edge-connected snark by adding two vertices." ]
[ "main result", "proof" ]
background
{ "title": "The smallest nontrivial snarks of oddness 4", "abstract": "The oddness of a cubic graph is the smallest number of odd circuits in a 2-factor of the graph. This invariant is widely considered to be one of the most important measures of uncolourability of cubic graphs and as such has been repeatedly reoccurring in numerous investigations of problems and conjectures surrounding snarks (connected cubic graphs admitting no proper 3-edge-colouring). In [Ars Math. Contemp. 16 (2019), 277-298] we have proved that the smallest number of vertices of a snark with cyclic connectivity 4 and oddness 4 is 44. We now show that there * Supported by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO). † Partially supported by VEGA 1/0876/16, VEGA 1/0813/18, and by APVV-15-0220." }
{ "title": "Smallest snarks with oddness 4 and cyclic connectivity 4 have order 44", "abstract": "The family of snarks -connected bridgeless cubic graphs that cannot be 3edge-coloured -is well-known as a potential source of counterexamples to several important and long-standing conjectures in graph theory. These include the cycle double cover conjecture, Tutte's 5-flow conjecture, Fulkerson's conjecture, and several others. One way of approaching these conjectures is through the study of structural properties of snarks and construction of small examples with given properties. In this paper we deal with the problem of determining the smallest order of a nontrivial snark (that is, one which is cyclically 4-edge-connected and has girth at least 5) of oddness at least 4. Using a combination of structural analysis with extensive computations we prove that the smallest order of a snark with oddness at least 4 and cyclic connectivity 4 is 44. Formerly it was known that such a snark must have at least 38 vertices [J. Combin. Theory Ser. B 103 (2013), 468-488] and one such snark on 44 vertices was constructed by Lukot'ka et al. [Electron. J. Combin. 22 (2015), #P1.51]. The proof requires determining all cyclically 4-edge-connected snarks on 36 vertices, which extends the previously compiled list of all such snarks up to 34 vertices [J. Combin. Theory Ser. B, loc. cit.]. As a by-product, we use this new list to test the validity of several conjectures where snarks can be smallest counterexamples." }
1808.05750
1804.00073
C. Testing properties defined incorrectly
In the second method, we simply applied random tests #REFR to N_k until a counterexample was generated or a resource limit was exceeded.
[ "In the first method, we used testing driven by a coverage metric.", "Namely, we generated a test set T aimed at setting the output 11 of every gate G of N k both to 0 and 1.", "Then we applied T to N k to disprove N k ≡ 0.", "Note that a single test sets the output of every gate of N k to 0 or 1.", "To make T stronger, when processing a gate G of N k we tried to find a new test setting the output of G to b ∈ {0, 1}, even if this goal was \"inadvertently\" achieved earlier." ]
[ "In the third method, we applied GenPCT to circuit N k to generate a CTS aa T .", "Then we used T to break N k ≡ 0.", "A sample of 17 benchmarks is shown in Table III .", "When compiling this sample we dropped the easy examples solved by all three methods.", "The first column of Table III lists names of benchmarks." ]
[ "random tests" ]
method
{ "title": "Complete Test Sets And Their Approximations", "abstract": "We use testing to check if a combinational circuit N always evaluates to 0 (written as N ≡ 0). We call a set of tests proving N ≡ 0 a complete test set (CTS). The conventional point of view is that to prove N ≡ 0 one has to generate a trivial CTS. It consists of all 2 |X| input assignments where X is the set of input variables of N . We use the notion of a Stable Set of Assignments (SSA) to show that one can build a non-trivial CTS consisting of less than 2 |X| tests. Given an unsatisfiable CNF formula H(W ), an SSA of H is a set of assignments to W that proves unsatisfiability of H. A trivial SSA is the set of all 2 |W | assignments to W . Importantly, real-life formulas can have non-trivial SSAs that are much smaller than 2 |W | . In general, construction of even non-trivial CTSs is inefficient. We describe a much more efficient approach where tests are extracted from an SSA built for a projection of N on a subset of its variables. These tests can be viewed as an approximation of a CTS for N . We describe potential applications of our approach. We show experimentally that it can be used to facilitate hitting corner cases and expose bugs in sequential circuits overlooked due to checking \"misdefined\" properties." }
{ "title": "Generation of complete test sets", "abstract": "Abstract. We use testing to check if a combinational circuit N always evaluates to 0 (written as N ≡ 0). The usual point of view is that to prove N ≡ 0 one has to check the value of N for all 2 |X| input assignments where X is the set of input variables of N . We use the notion of a Stable Set of Assignments (SSA) to show that one can build a complete test set (i.e. a test set proving N ≡ 0) that consists of less than 2 |X| tests. Given an unsatisfiable CNF formula H(W ), an SSA of H is a set of assignments to W proving unsatisfiability of H. A trivial SSA is the set of all 2 |W | assignments to W . Importantly, real-life formulas can have SSAs that are much smaller than 2 |W | . Generating a complete test set for N using only the machinery of SSAs is inefficient. We describe a much faster algorithm that combines computation of SSAs with resolution derivation and produces a complete test set for a \"projection\" of N on a subset of variables of N . We give experimental results and describe potential applications of this algorithm." }
1905.12413
1802.09568
Motivation
It is worth mentioning that the diagonal approximation methods are often preferred in practice because of the super-linear memory consumption of the other methods #REFR .
[ "Common well-known optimization preconditioning methods include the Newton's method, which employs the exact Hessian matrix, and the quasi-Newton methods, which do not require the knowledge of the exact Hessian matrix, as described in #OTHEREFR .", "The quasi-Newton methods are generally preferred over the Newton's method when the exact Hessian matrix is too expensive to compute or is unknown.", "Introduced to answer specifically some of the challenges facing ML and DL, AdaGrad #OTHEREFR uses the co-variance matrix of the accumulated gradients as a preconditioner.", "Because of the dimensions of the ML problems, the full-matrix preconditioning methods are generally not the first choice for the optimizers.", "Specialized variants have been proposed to replace the full preconditioning methods by diagonal approximation methods such as Adam in #OTHEREFR , by a sketched version #OTHEREFR or by other schemes such as Nesterov Accelerated Gradient (NAG) #OTHEREFR or SAGA #OTHEREFR ." ]
[ "In this work, we take an alternative approach to preconditioning and we describe how to exploit Newton's convergence using a diagonal approximation of the Hessian matrix.", "Our approach is motivated by the efficient and accurate resolution of complex tensor decomposition for which most of the ML and DL state-of-the-art optimizers fail.", "Our algorithm, called VecHGrad for Vector Hessian Gradient, returns the tensor structure of the gradient and uses a separate preconditioner vector.", "Although our algorithm is motivated by the resolution of complex tensor decomposition, its range of application is wide and it could be used in ML and DL.", "Our analysis targets non-trivial high-order tensor decomposition and relies on the extensions of vector analysis to the tensor world." ]
[ "diagonal approximation methods" ]
method
{ "title": "VecHGrad for Solving Accurately Complex Tensor Decomposition", "abstract": "Abstract. Tensor decomposition, a collection of factorization techniques for multidimensional arrays, are among the most general and powerful tools for scientific analysis. However, because of their increasing size, today's data sets require more complex tensor decomposition involving factorization with multiple matrices and diagonal tensors such as DEDI-COM or PARATUCK2. Traditional tensor resolution algorithms such as Stochastic Gradient Descent (SGD), Non-linear Conjugate Gradient descent (NCG) or Alternating Least Square (ALS), cannot be easily applied to complex tensor decomposition or often lead to poor accuracy at convergence. We propose a new resolution algorithm, called VecHGrad, for accurate and efficient stochastic resolution over all existing tensor decomposition, specifically designed for complex decomposition. VecHGrad relies on gradient, Hessian-vector product and adaptive line search to ensure the convergence during optimization. Our experiments on five real-world data sets with the state-of-the-art deep learning gradient optimization models show that VecHGrad is capable of converging considerably faster because of its superior theoretical convergence rate per step. Therefore, VecHGrad targets as well deep learning optimizer algorithms. The experiments are performed for various tensor decomposition including CP, DEDICOM and PARATUCK2. Although it involves a slightly more complex update rule, VecHGrad's runtime is similar in practice to that of gradient methods such as SGD, Adam or RMSProp." }
{ "title": "Shampoo: Preconditioned Stochastic Tensor Optimization", "abstract": "Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitively large matrices. We describe and analyze a new structure-aware preconditioning algorithm, called Shampoo, for stochastic optimization over tensor spaces. Shampoo maintains a set of preconditioning matrices, each of which operates on a single dimension, contracting over the remaining dimensions. We establish convergence guarantees in the stochastic convex setting, the proof of which builds upon matrix trace inequalities. Our experiments with state-ofthe-art deep learning models show that Shampoo is capable of converging considerably faster than commonly used optimizers. Although it involves a more complex update rule, Shampoo's runtime per step is comparable to that of simple gradient methods such as SGD, AdaGrad, and Adam." }
1803.07950
1611.07675
Baseline methods
LSTM-TSA #REFR presents a transfer unit to control and fuse the attribute, motion, and visual features for the video representations.
[ "Attention fusion #OTHEREFR develops a modality-dependent attention mechanism together with temporal attention to combines the cues of multiple modalities, which can attend not only time but also the modalities.", "BA encoder #OTHEREFR presents a new boundary-aware LSTM cell to detect the discontinuity of consecutive frames.", "Then the cell is used to build a hierarchical encoder and makes its structure adapt to the inputs.", "SCN #OTHEREFR detects semantic concepts from videos and proposes a tag-dependent LSTM whose weights matrix depends on the semantic concepts.", "TDDF #OTHEREFR combines motion feature and appearence feature, and automatically determines which feature should be focused according to the word." ]
[ "MVRM #OTHEREFR learns a multirate video representation which can adaptively fit the motion speeds in videos.", "On the MSR-VTT dataset, we include four methods in the comparison: TDDF #OTHEREFR , v2t navigator #OTHEREFR , Aalto #OTHEREFR , and Attention fusion #OTHEREFR .", "V2t navigator #OTHEREFR represents the videos by their visual, aural, speech, and category cues, while we only employ the raw video frames in our approach.", "Aalto #OTHEREFR trains an evaluator network to drive the captioning model towards semantically interesting sentences.", "During the experiments, REINFORCE (RFC) denotes the vailla self-critical REINFORCE algorithm extended to video captioning, and REINFORCE+ (RFC+) represents our REINFORCE algorithm with multi-sampling trajectories." ]
[ "video representations" ]
method
{ "title": "End-to-End Video Captioning With Multitask Reinforcement Learning", "abstract": "Although end-to-end (E2E) learning has led to impressive progress on a variety of visual understanding tasks, it is often impeded by hardware constraints (e.g., GPU memory) and is prone to overfitting. When it comes to video captioning, one of the most challenging benchmark tasks in computer vision, those limitations of E2E learning are especially amplified by the fact that both the input videos and output captions are lengthy sequences. Indeed, state-ofthe-art methods for video captioning process video frames by convolutional neural networks and generate captions by unrolling recurrent neural networks. If we connect them in an E2E manner, the resulting model is both memoryconsuming and data-hungry, making it extremely hard to train. In this paper, we propose a multitask reinforcement learning approach to training an E2E video captioning model. The main idea is to mine and construct as many effective tasks (e.g., attributes, rewards, and the captions) as possible from the human captioned videos such that they can jointly regulate the search space of the E2E neural network, from which an E2E video captioning model can be found and generalized to the testing phase. To the best of our knowledge, this is the first video captioning model that is trained end-to-end from the raw video input to the caption output. Experimental results show that such a model outperforms existing ones to a large margin on two benchmark video captioning datasets." }
{ "title": "Video Captioning with Transferred Semantic Attributes", "abstract": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)-a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods." }
1910.09937
1108.4146
Optimal sensor placement based on Information gain
The optimal sensor locations are identified using Bayesian experimental design #REFR so that the information obtained from the collected measurements is maximized.
[ "In the present work a swimmer is equipped with sensors that are used to identify the size and location of a nearby school." ]
[ "We define the information gain as the distance between the prior belief on the quantities of interest and the posterior belief after obtaining the measurements.", "Here, we choose as measure of the distance the Kullback-Leibler divergence between the prior and the posterior distribution." ]
[ "Bayesian experimental design" ]
method
{ "title": "Optimal sensing for fish school identification", "abstract": "Fish schooling implies an awareness of the swimmers for their companions. In flow mediated environments, in addition to visual cues, pressure and shear sensors on the fish body are critical for providing quantitative information that assists the quantification of proximity to other swimmers. Here we examine the distribution of sensors on the surface of an artificial swimmer so that it can optimally identify a leading group of swimmers. We employ Bayesian experimental design coupled with two-dimensional Navier Stokes equations for multiple self-propelled swimmers. The follower tracks the school using information from its own surface pressure and shear stress. We demonstrate that the optimal sensor distribution of the follower is qualitatively similar to the distribution of neuromasts on fish. Our results show that it is possible to identify accurately the center of mass and even the number of the leading swimmers using surface only information." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1610.02558
1108.4146
Introduction
In this work, we aim to demonstrate significant potential for application of an optimal Bayesian experimental design framework #REFR to guide the selection of test conditions (experimental scenarios) in intermixing calibration experiments for reactive multilayers.
[ "#OTHEREFR further generalized the methodology developed in #OTHEREFR by adapting a Bayesian framework for inferring the Arrhenius parameters.", "The calibration step was accelerated by constructing functional representations of experimental observables, judiciously selected so that they exhibit smooth dependence on the uncertain model parameters.", "In particular, polynomial surrogates were constructed for reaction temperature in ignition experiments, and for reaction velocity in self-propagating front experiments.", "The overall method was applied to calibrate a composite Arrhenius diffusivity law for Zr-Al multilayers.", "Experience from the study in #OTHEREFR also highlighted the potential impact of data noise and scarcity on residual uncertainties in inferred quantities." ]
[ "Consequently, the experimental data is expected to significantly enhance the information gain in mixing rates and hence offer a robust strategy for model calibration.", "Specifically, we focus our attention on low-temperature homogeneous ignition experiments and self-propagating front experiments.", "Description of the reaction models used for both setups can be found in Section 2.", "Section 3 introduces the optimal Bayesian experimental design framework, where an objective function (expected utility) is formulated to quantify and reflect the average information that can be gained from an experiment, on the uncertain model parameters.", "Furthermore, this quantity is estimated using Monte Carlo sampling, and requires a very high number of reaction model evaluations, making it impractical." ]
[ "optimal Bayesian experimental" ]
method
{ "title": "Design Analysis for Optimal Calibration of Diffusivity in Reactive Multilayers", "abstract": "Calibration of the uncertain Arrhenius diffusion parameters for quantifying mixing rates in Zr-Al nanolaminate foils was performed in a Bayesian setting [1] . The parameters were inferred in a low temperature regime characterized by homogeneous ignition and a high temperature regime characterized by self-propagating reactions in the multilayers. In this work, we extend the analysis to find optimal experimental designs that would provide the best data for inference. We employ a rigorous framework that quantifies the expected information gain in an experiment, and find the optimal design conditions using numerical techniques of Monte Carlo, sparse quadrature, and polynomial chaos surrogates. For the low temperature regime, we find the optimal foil heating rate and pulse duration, and confirm through simulation that the optimal design indeed leads to sharper posterior distributions of the diffusion parameters. For the high temperature regime, we demonstrate potential for increase in the expected information gain of the posteriors by increasing sample size and reducing uncertainty in measurements. Moreover, posterior marginals are also produced to verify favorable experimental scenarios for this regime. † Corresponding Author" }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1506.00053
1108.4146
The model and experimental scenarios.
Both cases of this example have been studied in #REFR using the direct Monte Carlo estimate of U(d).
[ "For prior we choose initially κ ∼ U (0, 1) and later on we discuss a few more cases.", "Suppose we have control over d ∈ D where D = [0, 1] is the design space and we are interested in inferring κ.", "We explore two cases, first the case where inference is carried out using a single observation of y and second the case where two observations of y can be obtained, corresponding to different values of d.", "One can think of the design parameter d as the location where y is observed.", "Before observing y, we would like to know the value of d that would make our observations the most informative ones." ]
[ "The first case was also studied in #OTHEREFR using the Laplace approximation of the posterior distribution of κ and then performing the integration with sparse quadratures." ]
[ "direct Monte Carlo" ]
method
{ "title": "Efficient Bayesian experimentation using an expected information gain lower bound", "abstract": "Abstract. Experimental design is crucial for inference where limitations in the data collection procedure are present due to cost or other restrictions. Optimal experimental designs determine parameters that in some appropriate sense make the data the most informative possible. In a Bayesian setting this is translated to updating to the best possible posterior. Information theoretic arguments have led to the formation of the expected information gain as a design criterion. This can be evaluated mainly by Monte Carlo sampling and maximized by using stochastic approximation methods, both known for being computationally expensive tasks. We propose a framework where a lower bound of the expected information gain is used as an alternative design criterion. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The problem of permeability inference in a large contaminated area is used to demonstrate the validity of our approach where we employ the massively parallel version of the multiphase multicomponent simulator TOUGH2 to simulate contaminant transport and a Polynomial Chaos approximation of the forward model that further accelerates the objective function evaluations. The proposed methodology is demonstrated to a setting where field measurements are available." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1506.00053
1108.4146
Results.
For comparison, we also reproduce the results of #REFR using the expected information gain estimate, which from now on we call the double-loop Monte Carlo (dlMC) estimate.
[ "We are using the expected information gain lower bound estimate as given in (2.7) as our criterion to determine the optimal design d for inferring κ." ]
[ "The exact expression of the dlMC estimate is given in A.", "Our computations are performed using an ensemble of samples with N = M = 10 4 for all different priors that we consider below, while keeping the error variance fixed.", "We demonstrate that the two estimate share the same maxima and their graphs have good quantitative agreement with the lower bound providing slightly less noisy values. Fig.", "3 shows the values of estimates of U * L (d) for the design of one experiment after using a 101-point uniform partition on the design space D = [0, 1] for N = M = 10 3 and for N = M = 10 4 as well as the estimates of dlMC.", "All cases show the existence of two local maxima at d = 0.2 and d = 1." ]
[ "expected information gain" ]
result
{ "title": "Efficient Bayesian experimentation using an expected information gain lower bound", "abstract": "Abstract. Experimental design is crucial for inference where limitations in the data collection procedure are present due to cost or other restrictions. Optimal experimental designs determine parameters that in some appropriate sense make the data the most informative possible. In a Bayesian setting this is translated to updating to the best possible posterior. Information theoretic arguments have led to the formation of the expected information gain as a design criterion. This can be evaluated mainly by Monte Carlo sampling and maximized by using stochastic approximation methods, both known for being computationally expensive tasks. We propose a framework where a lower bound of the expected information gain is used as an alternative design criterion. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The problem of permeability inference in a large contaminated area is used to demonstrate the validity of our approach where we employ the massively parallel version of the multiphase multicomponent simulator TOUGH2 to simulate contaminant transport and a Polynomial Chaos approximation of the forward model that further accelerates the objective function evaluations. The proposed methodology is demonstrated to a setting where field measurements are available." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1509.04613
1108.4146
Numerical implementation
Recall that in #REFR , the entropy term H[p(y|D)] is computed using a nested MC.
[ "where H(·) is the notation for entropy and Z is a constant independent of D.", "The detailed derivation of Eq. (9) is given in Appendix B. Note that Eq.", "(9) typically has no closed-form expression and has to be evaluated with MC simulations.", "Draw m pairs of samples {(y 1 , z 1 ), . . .", ", (y m , z m )} from p(y, z|D), and the MC estimator of E[ln |C|] iŝ" ]
[ "For efficiency's sake, we use the resubstitution method developed in #OTHEREFR to estimate the entropy.", "The basic idea of the method is rather straightforward: given a set of samples {y 1 , . . .", ", y m } of p(y|D), one first computes an estimator of the density p(y|D), sayp(y), with certain density estimation approach, and then estimates the entropy with,", "Theoretical properties of the resubstitution method are analyzed in #OTHEREFR and other entropy estimation methods can be found in #OTHEREFR .", "In the original work #OTHEREFR , the distributionp is obtained with kernel density estimation, which can become very costly when the dimensionality of y gets high, and to address the issue, we use Gaussian mixture based density estimation method #OTHEREFR ." ]
[ "entropy term" ]
method
{ "title": "Gaussian process surrogates for failure detection: a Bayesian experimental design approach", "abstract": "An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1404.1263
1108.4146
3:
Figure 10: Visualization of the diagonals of the posterior covariance matrix in Equation #REFR .
[ "The setup of the pumping tests is provided in Figure 8 ; the black squares indicate the locations at which water is pumped and the head response is computed at the receiver locations (indicated by red asterisks).", "For the reconstruction, we use a covariance kernel, that is part of the Matèrn family, and is given by Figure 9 : (left) True field which is a realization of a Gaussian process with zero mean and covariance kernel #OTHEREFR , and (right) reconstruction using the geostatistical approach on a grid with 401 × 401 points.", "The reconstruction error in relative L 2 sense was 0.29.", "The MAP estimate is computed by solving the optimization problem (5) using the Gauss-Newton algorithm that was described in algorithm 5.", "At each iteration the system of equations is solved using restarted GMRES (50) using the same preconditioner described in #OTHEREFR ." ]
[ "The computation was calculated with the Jacobian evaluated at the MAP estimate.", ".", "is computed in the same fashion described in Section 4 with 30 terms in the low-rank approximation.", "The low-rank decomposition is computed only once; however, the preconditioner is re-built every Gauss-Newton iteration.", "The iterative solver took about 16 iterations on average to converge to a relative tolerance of 10 −7 ." ]
[ "posterior covariance matrix" ]
background
{ "title": "Fast computation of uncertainty quantification measures in the geostatistical approach to solve inverse problems", "abstract": "We consider the computational challenges associated with uncertainty quantification involved in parameter estimation arising from the geostatistical approach to parameter estimation such as seismic slowness and hydraulic transmissivity fields in inverse problems. The quantification of uncertainty involves computing the posterior covariance matrix which is prohibitively expensive to fully compute and store. We consider an efficient representation of the posterior covariance matrix at the maximum a posteriori (MAP) point as the sum of the prior covariance matrix and a low-rank update that contains information from the dominant generalized eigenmodes of the data misfit part of the Hessian and the inverse covariance matrix. The rank of the low-rank update is typically independent of the dimension of the unknown parameter. We provide an efficient randomized algorithm for computing the dominant eigenmodes of the generalized eigenvalue problem (and as a result, the low-rank decomposition) that avoids forming square-roots of the covariance matrix or its inverse. The method that scales almost linearly with the dimension of unknown parameter space and the data. Furthermore, we show how to efficiently compute some measures of uncertainty that are based on scalar functions of the posterior covariance matrix. The resulting uncertainty measures can be used in the context of optimal experimental design. The performance of our algorithms is demonstrated by application to model problems in synthetic travel-time tomography and steady-state hydraulic tomography." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1710.03500
1108.4146
Expected information gain
The resulting expected information gain curve is in agreement with the one reported previously; see #REFR .
[ "In Figure 11 , we present the estimation of the expected information gain using MCLA and DLMCIS for the experiment setups ξ ∈ [0, 1].", "In Figure 11a , MCLA is applied for TOL = 3 × 10 −2 , and DLMCIS for TOL = 10 −3 .", "The confidence bars of the MCLA curve show the 97.5% confidence intervals.", "Different tolerances are specified for each of the two methods due to the Laplace bias constraint.", "However, in Figure 11b , we omit the bias constraint by enforcing κ * = 1, and see that the MCLA curve matches well with the DLMCIS curve for TOL = 10 −3 ." ]
[]
[ "resulting expected information" ]
result
{ "title": "Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain", "abstract": "In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites." }
{ "title": "Simulation-based optimal Bayesian experimental design for nonlinear systems", "abstract": "The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics." }
1611.02568
1512.01655
RELATED WORK
More recently, an approximate but user-steerable t-SNE, which provides interactions with which a user can control the degree of approximation in user-specified areas, has also been studied #REFR .
[ "For example, a neural network has been integrated with t-SNE to learn the parametric representation of 2D embedding #OTHEREFR .", "Rather than the Euclidean distance or its derived similarity information, other information types such as non-metric similarities #OTHEREFR and relative ordering information about pairwise distances in the form of similarity triplets #OTHEREFR have been considered as the target information to preserve.", "Additionally, various other optimization criteria and their optimization approaches, such as elastic embedding #OTHEREFR and NeRV #OTHEREFR , have been proposed.", "e computational e ciency and scalability of 2D embedding approaches has also been widely studied.", "An accelerated t-SNE based on the approximation using the Barnes-Hut tree algorithm has been proposed #OTHEREFR . Gisbrecht et al. proposed a linear approximation of t-SNE #OTHEREFR ." ]
[ "In addition, a scalable 2D embedding technique called LargeVis #OTHEREFR signi cantly reduced the computing times with a linear time complexity in terms of the number of data items.", "Even with such a plethora of 2D embedding approaches, to the best of our knowledge, none of the previous studies have directly exploited the limited precision of our screen space and human perception for developing highly e cient 2D embedding algorithms, and our novel framework of controlling the precision in return for algorithm e ciency and the proposed PixelSNE, which signi cantly improves the e ciency of BH-SNE, can be one such example." ]
[ "user-steerable t-SNE" ]
background
{ "title": "PixelSNE: Visualizing Fast with Just Enough Precision via Pixel-Aligned Stochastic Neighbor Embedding", "abstract": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are mapped into coarse-grained pixel coordinates in a limited screen space. Motivated by this observation and directly considering it in an embedding algorithm, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel alternative called PixelSNE, a highly-e cient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the signi cantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the comparable embedding quality. Finally, the source code of our method is publicly available at h ps://github.com/awesome-davian/pixelsne." }
{ "title": "Approximated and User Steerable tSNE for Progressive Visual Analytics", "abstract": "Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a well-suited technique for the visualization of high-dimensional data. tSNE can create meaningful intermediate results but suffers from a slow initialization that constrains its application in Progressive Visual Analytics. We introduce a controllable tSNE approximation (A-tSNE), which trades off speed and accuracy, to enable interactive data exploration. We offer real-time visualization techniques, including a density-based solution and a Magic Lens to inspect the degree of approximation. With this feedback, the user can decide on local refinements and steer the approximation level during the analysis. We demonstrate our technique with several datasets, in a real-world research scenario and for the real-time analysis of high-dimensional streams to illustrate its effectiveness for interactive data analysis." }
0908.3184
0906.0798
INTRODUCTION
How the B-matrix approach works together with a specified proximity matrix for the neurons was recently shown by Kak #REFR .
[ "One of the approaches taken for the construction of an artificial neural network is the feedback network with indexed memory retrieval.", "This particular method, developed by Kak, is called the B-matrix Approach #OTHEREFR , #OTHEREFR .", "The B-matrix approach is a generator model for the neural network memory retrieval.", "By this, we mean that the activity starts from one neuron and then spreads to the adjacent neurons to increase the fragment length by one.", "The obtained fragment is then fed back to the network recursively until the entire memory is generated." ]
[ "In this paper, we perform experiments to see the relationship of single neuron memories to their location.", "The manner in which the location of memories scales up and the capacity for this storage have been estimated by performing experiments on a large number of networks with random memories." ]
[ "B-matrix approach" ]
background
{ "title": "Location of Single Neuron Memories in a Hebbian Network", "abstract": "Abstract-This paper reports the results of an experiment on the use of Kak's B-Matrix approach to spreading activity in a Hebbian neural network. Specifically, it concentrates on the memory retrieval from single neurons and compares the performance of the B-Matrix approach to that of the traditional approach." }
{ "title": "Single Neuron Memories and the Network's Proximity Matrix", "abstract": "Abstract: This paper extends the treatment of single-neuron memories obtained by the use of the B-matrix approach. The spreading of activity within the network is determined by the network's proximity matrix which represents the separations amongst the neurons through the neural pathways." }
1603.09707
1405.1156
APPENDIX A A GENERAL CAPACITY THEOREM
Observe that for fixed e^n, p(u^n) induces a distribution p(x^n), and since the objective does not depend on U^n, this yields #REFR .
[ "Proof of Theorem 1:", "1) Generic Charger: When the charger does not observe any side information, we consider the model in Case I and apply Theorem 4 with feedback signal Z t = 0.", "Since the strategy v n is essentially a function over an empty set, it can be replaced with a fixed sequence e n , hence (58) reduces to:", "Since e n is deterministic, we have:", "where the second line is due to the Markov chain U n − X n − Y n ." ]
[ "2) Receiver Charges Transmitter: Here the charger observes Y t−1 .", "This scenario is realized by considering Case I, where the charger does not observe the message, and setting the feedback signal to be Y t , i.e.", "Z t = f (X t , Y t ) = Y t . Theorem 4 gives:", "where the deterministic charger strategies are v t : Y t−1 → E, and the encoder strategies are U t : E t → X , for t = 1, . . . , n. For fixed v n and p(u n ):", "induces a causally conditioned distribution p(x n e n ), and since the objective does not depend on U n , we can optimize over p(x n e n ) to yield (13)." ]
[ "fixed e", "distribution p(x" ]
background
{ "title": "Capacity of Remotely Powered Communication", "abstract": "Abstract-Motivated by recent developments in wireless power transfer, we study communication with a remotely powered transmitter. We propose an information-theoretic model where a charger can dynamically decide on how much power to transfer to the transmitter based on its side information regarding the communication, while the transmitter needs to dynamically adapt its coding strategy to its instantaneous energy state, which in turn depends on the actions previously taken by the charger. We characterize the capacity as an n-letter mutual information rate under various levels of side information available at the charger. When the charger is finely tunable to different energy levels, referred to as a \"precision charger\", we show that these expressions reduce to single-letter form and there is a simple and intuitive joint charging and coding scheme achieving capacity. The precision charger scenario is motivated by the observation that in practice the transferred energy can be controlled by simply changing the amplitude of the beamformed signal. When the charger does not have sufficient precision, for example when it is restricted to use a few discrete energy levels, we show that the computation of the n-letter capacity can be cast as a Markov decision process if the channel is noiseless. This allows us to numerically compute the capacity for specific cases and obtain insights on the corresponding optimal policy, or even to obtain closed form analytical solutions by solving the corresponding Bellman equations, as we demonstrate through examples. Our findings provide some surprising insights on how side information at the charger can be used to increase the overall capacity of the system." }
{ "title": "Near Optimal Energy Control and Approximate Capacity of Energy Harvesting Communication", "abstract": "We consider an energy-harvesting communication system where a transmitter powered by an exogenous energy arrival process and equipped with a finite battery of size B max communicates over a discrete-time AWGN channel. We first concentrate on a simple Bernoulli energy arrival process where at each time step, either an energy packet of size E is harvested with probability p, or no energy is harvested at all, independent of the other time steps. We provide a near optimal energy control policy and a simple approximation to the information-theoretic capacity of this channel. Our approximations for both problems are universal in all the system parameters involved (p, E and B max ), i.e., we bound the approximation gaps by a constant independent of the parameter values. Our results suggest that a battery size B max ≥ E is (approximately) sufficient to extract the infinite battery capacity of this channel. We then extend our results to general i.i.d. energy arrival processes. Our approximate capacity characterizations provide important insights for the optimal design of energy harvesting communication systems in the regime where both the battery size and the average energy arrival rate are large. Index Terms-Energy harvesting channel, information-theoretic capacity, online power control, constant gap approximation, receiver side information." }
1408.6385
1405.1156
I. INTRODUCTION
Recently, some progress has been reported in #REFR on approximating the per-slot throughput (or long-term throughput) to within a universal constant, for an AWGN channel.
[ "Finding optimal power/energy transmission policies to maximize the long-term throughput in an energy harvesting (EH) communication system is a challenging problem and has remained open in full generality.", "Structural results are known for the optimal solution #OTHEREFR , however, explicit solutions are only known for a sub-class of problems, for example, binary transmission power #OTHEREFR , discrete transmission power #OTHEREFR , etc." ]
[ "In this paper, we approximate the per-slot throughput of the EH system with fading by a universal constant for a class of energy arrival distributions.", "The fading channel problem is more challenging than the AWGN case, since the energy/power transmitted per-slot depends on the realization of the channel unlike the AWGN problem.", "Thus, finding an upper bound on the long term throughput is hard.", "We take recourse in Cauchy-Schwarz inequality for this purpose, and then surprisingly using a channel independent power transmission policy proposed in #OTHEREFR , show that the upper and lower bound on the per-slot throughput differ at most by a constant.", "Using the techniques of #OTHEREFR , we also show that our universal bound also provides an approximation of the Shannon capacity of the energy harvesting channel with fading upto a constant." ]
[ "long term throughput" ]
background
{ "title": "Long term throughput and approximate capacity of transmitter-receiver energy harvesting channel with fading", "abstract": "We first consider an energy harvesting channel with fading, where only the transmitter harvests energy from natural sources. We bound the optimal long term throughput by a constant for a class of energy arrival distributions. The proposed method also gives a constant approximation to the capacity of the energy harvesting channel with fading. Next, we consider a more general system where both the transmitter and the receiver employ energy harvesting to power themselves. In this case, we show that finding an approximation to the optimal long term throughput is far more difficult, and identify a special case of unit battery capacity at both the transmitter and the receiver for which we obtain a universal bound on the ratio of the upper and lower bound on the long term throughput." }
{ "title": "Near Optimal Energy Control and Approximate Capacity of Energy Harvesting Communication", "abstract": "We consider an energy-harvesting communication system where a transmitter powered by an exogenous energy arrival process and equipped with a finite battery of size B max communicates over a discrete-time AWGN channel. We first concentrate on a simple Bernoulli energy arrival process where at each time step, either an energy packet of size E is harvested with probability p, or no energy is harvested at all, independent of the other time steps. We provide a near optimal energy control policy and a simple approximation to the information-theoretic capacity of this channel. Our approximations for both problems are universal in all the system parameters involved (p, E and B max ), i.e., we bound the approximation gaps by a constant independent of the parameter values. Our results suggest that a battery size B max ≥ E is (approximately) sufficient to extract the infinite battery capacity of this channel. We then extend our results to general i.i.d. energy arrival processes. Our approximate capacity characterizations provide important insights for the optimal design of energy harvesting communication systems in the regime where both the battery size and the average energy arrival rate are large. Index Terms-Energy harvesting channel, information-theoretic capacity, online power control, constant gap approximation, receiver side information." }
1408.6385
1405.1156
A. Bernoulli Energy Arrival
Using this universal bound, we can obtain bounds on the capacity of this channel similar to those in Theorem 9 of #REFR .
[ "Thus, we know that with probability p = 0.5, X t > δ.", "We now propose to use CFP as if the energy arrival process were i.i.d. Bernoulli with fixed size δ and p = 0.5.", "Thus, the actual energy stored in the battery is 0 if X t ≤ δ, and δ if X t > δ.", "Theorem 2: The per slot throughput achieved by CFP T lb satisfies the following,", "(13) for uniform energy arrivals between 0 and B max in Theorem 2." ]
[ "Theorem 3: The capacity C of the fading channel with EH is bounded by", "where c is a constant that depends on the distribution of X." ]
[ "capacity" ]
background
{ "title": "Long term throughput and approximate capacity of transmitter-receiver energy harvesting channel with fading", "abstract": "We first consider an energy harvesting channel with fading, where only the transmitter harvests energy from natural sources. We bound the optimal long term throughput by a constant for a class of energy arrival distributions. The proposed method also gives a constant approximation to the capacity of the energy harvesting channel with fading. Next, we consider a more general system where both the transmitter and the receiver employ energy harvesting to power themselves. In this case, we show that finding an approximation to the optimal long term throughput is far more difficult, and identify a special case of unit battery capacity at both the transmitter and the receiver for which we obtain a universal bound on the ratio of the upper and lower bound on the long term throughput." }
{ "title": "Near Optimal Energy Control and Approximate Capacity of Energy Harvesting Communication", "abstract": "We consider an energy-harvesting communication system where a transmitter powered by an exogenous energy arrival process and equipped with a finite battery of size B max communicates over a discrete-time AWGN channel. We first concentrate on a simple Bernoulli energy arrival process where at each time step, either an energy packet of size E is harvested with probability p, or no energy is harvested at all, independent of the other time steps. We provide a near optimal energy control policy and a simple approximation to the information-theoretic capacity of this channel. Our approximations for both problems are universal in all the system parameters involved (p, E and B max ), i.e., we bound the approximation gaps by a constant independent of the parameter values. Our results suggest that a battery size B max ≥ E is (approximately) sufficient to extract the infinite battery capacity of this channel. We then extend our results to general i.i.d. energy arrival processes. Our approximate capacity characterizations provide important insights for the optimal design of energy harvesting communication systems in the regime where both the battery size and the average energy arrival rate are large. Index Terms-Energy harvesting channel, information-theoretic capacity, online power control, constant gap approximation, receiver side information." }
1801.10484
1709.06951
I. INTRODUCTION
In fact, the performance gains due to caching and NOMA add up as they exploit different resources #REFR .
[ "For this reason, NOMA has mainly been exploited to improve user fairness #OTHEREFR - #OTHEREFR , #OTHEREFR - #OTHEREFR .", "Second, the performance gains of NOMA over conventional OMA are fundamentally limited by the users' channel conditions #OTHEREFR .", "For example, it is shown in #OTHEREFR that fixed-power NOMA can achieve a significant performance gain only when the channel gains of the UEs are substantially different.", "So far, wireless caching and NOMA have been either investigated separately or combined in a relatively straightforward manner #OTHEREFR .", "In the latter case, NOMA is shown to improve the performance of both caching and delivery." ]
[ "In this paper, however, the joint design of caching and NOMA, which we refer to as cache-aided NOMA, is advocated to maximize the performance gains introduced by caching at the UEs.", "We show that cache-aided NOMA can significantly outperform the straightforward combination of caching and NOMA with respect to both the achievable rate region and the achievable sum rate.", "Thereby, we consider a simple distributed caching scheme for video file delivery.", "By splitting the video files into several subfiles, superposition transmission, rather than coded multicast as in #OTHEREFR - #OTHEREFR , #OTHEREFR - #OTHEREFR , of the requested uncached subfiles is enabled during delivery.", "If the cached content is a hit, i.e., requested by the caching UE, cache-aided NOMA enables the conventional offloading of the video files." ]
[ "NOMA" ]
background
{ "title": "Cache-Aided Non-Orthogonal Multiple Access: The Two-User Case", "abstract": "In this paper, we propose a cache-aided non-orthogonal multiple access (NOMA) scheme for spectrally efficient downlink transmission in the fifth-generation (5G) cellular networks. The proposed scheme not only reaps the benefits associated with caching and NOMA, but also exploits the data cached at the users for interference cancellation. As a consequence, caching can help to reduce the residual interference power, making multiple decoding orders at the users feasible. The resulting flexibility in decoding can be exploited for realizing additional performance gains. We characterize the achievable rate region of cache-aided NOMA and derive the Pareto optimal rate tuples forming the boundary of the rate region. Moreover, we optimize cache-aided NOMA for minimization of the time required for video file delivery. The optimal decoding order and the optimal transmit power and rate allocation are derived as functions of the cache status, the file sizes, and the channel conditions. Our simulation results confirm that compared to several baseline schemes, the proposed cache-aided NOMA scheme significantly expands the achievable rate region and increases the sum rate for downlink transmission, which translates into substantially reduced file delivery times." }
{ "title": "NOMA Assisted Wireless Caching: Strategies and Performance Analysis", "abstract": "Conventional wireless caching assumes that content can be pushed to local caching infrastructure during off-peak hours in an error-free manner; however, this assumption is not applicable if local caches need to be frequently updated via wireless transmission. This paper investigates a new approach to wireless caching for situations in which the cache content has to be updated during on-peak hours. Two non-orthogonal multiple access (NOMA)-assisted caching strategies are developed, namely, the push-then-deliver strategy and the push-and-deliver strategy. In the push-then-deliver strategy, the NOMA principle is applied to push more content files to the content servers during a short time interval reserved for content pushing during on-peak hours and to provide more connectivity for content delivery, compared with the conventional orthogonal multiple access (OMA) strategy. The push-and-deliver strategy is motivated by the fact that some users' requests cannot be accommodated locally and the base station has to serve them directly. These events during the content delivery phase are exploited as opportunities for content pushing, which further facilitates the frequent update of the files cached at the content servers. It is also shown that this strategy can be straightforwardly extended to device-to-device caching, and various analytical results are developed to illustrate the superiority of the proposed caching strategies compared with OMA based schemes." }
1906.06025
1709.06951
A. Related Work and Motivation
Additionally, the framework in #REFR does not account for the detrimental impairments caused by fading.
[ "An optimum power allocation for the considered network is investigated, aiming to maximize the probability of successful decoding of files at each user.", "However, the simplistic Rayleigh fading conditions were assumed, which is not practically realistic in vehicular networks.", "Moreover, the model in #OTHEREFR only considers the full file caching case; that is, the authors assume that in the caching phase, the files are cached as a whole, which is largely restrictive.", "On the contrary, only a split file caching framework was considered in #OTHEREFR .", "The optimum power allocation and the performance of the proposed system are characterized by the achievable rate region." ]
[]
[ "encountered fading effects" ]
background
{ "title": "Cache-Aided Non-Orthogonal Multiple Access for 5G-Enabled Vehicular Networks", "abstract": "The increasing demand for rich multimedia services and the emergence of the Internet of Things (IoT) pose challenging requirements for the next-generation vehicular networks. Such challenges are largely related to high spectral efficiency and low latency requirements in the context of massive content delivery and increased connectivity. In this respect, caching and non-orthogonal multiple access (NOMA) paradigms have been recently proposed as potential solutions to effectively address some of these key challenges. In this paper, we introduce cache-aided NOMA as an enabling technology for vehicular networks. In this context, we first consider the full file caching case, where each vehicle caches and requests entire files using the NOMA principle. Without loss of generality, we consider a two-user vehicular network communication scenario under double Nakagami-m fading conditions and propose an optimum power allocation policy. To this end, an optimization problem that maximizes the overall probability of successful decoding of files at each vehicle is formulated and solved. Furthermore, we consider the case of split file caching, where each file is divided into two parts. A joint power allocation optimization problem is formulated, where power allocation across vehicles and cached split files is investigated. The offered analytic results are corroborated by extensive results from computer simulations and interesting insights are developed. Indicatively, it is shown that the proposed caching-aided NOMA outperforms the conventional NOMA technique. Index Terms-Caching, double Nakagami−m fading channels, non-orthogonal multiple access, vehicular communications." }
{ "title": "NOMA Assisted Wireless Caching: Strategies and Performance Analysis", "abstract": "Conventional wireless caching assumes that content can be pushed to local caching infrastructure during off-peak hours in an error-free manner; however, this assumption is not applicable if local caches need to be frequently updated via wireless transmission. This paper investigates a new approach to wireless caching for situations in which the cache content has to be updated during on-peak hours. Two non-orthogonal multiple access (NOMA)-assisted caching strategies are developed, namely, the push-then-deliver strategy and the push-and-deliver strategy. In the push-then-deliver strategy, the NOMA principle is applied to push more content files to the content servers during a short time interval reserved for content pushing during on-peak hours and to provide more connectivity for content delivery, compared with the conventional orthogonal multiple access (OMA) strategy. The push-and-deliver strategy is motivated by the fact that some users' requests cannot be accommodated locally and the base station has to serve them directly. These events during the content delivery phase are exploited as opportunities for content pushing, which further facilitates the frequent update of the files cached at the content servers. It is also shown that this strategy can be straightforwardly extended to device-to-device caching, and various analytical results are developed to illustrate the superiority of the proposed caching strategies compared with OMA based schemes." }
1909.11074
1709.06951
I. INTRODUCTION
Regarding the combination of these techniques, #REFR jointly considered the advantages of caching and NOMA.
[ "When enhancing the system performance with the involvement of caching, the situation will be different and more complicated.", "The reason is that users now are not only affected by the channel conditions, but also the cache placement at the time of generating requests.", "Because the cached content can be used to eliminate (part of) 0090-6778 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.", "See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.", "the interference in the superposed signal." ]
[ "This work designed a power allocation method to ensure that the most popular files could be obtained by a predefined number of content servers.", "In the recent work #OTHEREFR , the authors narrowed their analysis to a specific case when the user with weaker channel cached information of the user with stronger channel.", "In #OTHEREFR , the authors focused on minimizing the power consumption in the system.", "From another point of view, designing a power allocation policy to maximize the users' QoS as well as to guarantee fairness among users is necessary.", "Moreover, exploiting users' cached content for interference cancellation can improve users achievable rates and should be paid sufficient attention." ]
[ "NOMA" ]
background
{ "title": "Power Allocation in Cache-Aided NOMA Systems: Optimization and Deep Reinforcement Learning Approaches", "abstract": "This work exploits the advantages of two prominent techniques in future communication networks, namely caching and non-orthogonal multiple access (NOMA). Particularly, a system with Rayleigh fading channels and cache-enabled users is analyzed. It is shown that the caching-NOMA combination provides a new opportunity of cache hit which enhances the cache utility as well as the effectiveness of NOMA. Importantly, this comes without requiring users' collaboration, and thus, avoids many complicated issues such as users' privacy and security, selfishness, etc. In order to optimize users' quality of service and, concurrently, ensure the fairness among users, the probability that all users can decode the desired signals is maximized. In NOMA, a combination of multiple messages are sent to users, and the defined objective is approached by finding an appropriate power allocation for message signals. To address the power allocation problem, two novel methods are proposed. The first one is a divide-and-conquer-based method for which closedform expressions for the optimal resource allocation policy are derived making this method simple and flexible to the system context. The second one is based on deep reinforcement learning method that allows all users to share the full bandwidth. Finally, simulation results are provided to demonstrate the effectiveness of the proposed methods and to compare their performance. Index Terms-Caching, non-orthogonal multiple access (NOMA), deep reinforcement learning, deep learning, power allocation, interference cancellation." }
{ "title": "NOMA Assisted Wireless Caching: Strategies and Performance Analysis", "abstract": "Conventional wireless caching assumes that content can be pushed to local caching infrastructure during off-peak hours in an error-free manner; however, this assumption is not applicable if local caches need to be frequently updated via wireless transmission. This paper investigates a new approach to wireless caching for situations in which the cache content has to be updated during on-peak hours. Two non-orthogonal multiple access (NOMA)-assisted caching strategies are developed, namely, the push-then-deliver strategy and the push-and-deliver strategy. In the push-then-deliver strategy, the NOMA principle is applied to push more content files to the content servers during a short time interval reserved for content pushing during on-peak hours and to provide more connectivity for content delivery, compared with the conventional orthogonal multiple access (OMA) strategy. The push-and-deliver strategy is motivated by the fact that some users' requests cannot be accommodated locally and the base station has to serve them directly. These events during the content delivery phase are exploited as opportunities for content pushing, which further facilitates the frequent update of the files cached at the content servers. It is also shown that this strategy can be straightforwardly extended to device-to-device caching, and various analytical results are developed to illustrate the superiority of the proposed caching strategies compared with OMA based schemes." }
1712.09557
1709.06951
I. INTRODUCTION
So far, wireless caching and NOMA were either investigated separately or combined in a straightforward manner #REFR .
[ "On the other hand, non-orthogonal multiple access (NOMA) is an efficient approach for wireless multiuser transmission that alleviates the adverse effects of fading #OTHEREFR , #OTHEREFR .", "Different from multicast and coded multicast, NOMA pairs multiple simultaneous downlink transmissions on the same time-frequency resource via power domain or code domain multiplexing #OTHEREFR .", "Strong users with favorable channel conditions can cancel the interference caused by weak users with poor channel conditions that are paired on the same timefrequency resource, and hence, achieve a high data rate at low transmit powers.", "Therefore, high transmit powers can be allocated to weak users to achieve communication fairness #OTHEREFR .", "NOMA has also been extended to multicarrier and multiantenna systems; see #OTHEREFR , #OTHEREFR and references therein." ]
[ "For the latter case, NOMA was shown to improve the performance of both caching and delivery in #OTHEREFR .", "In this paper, however, the joint design of caching and NOMA is advocated to maximize the performance gains introduced by caching at UEs.", "We show that the joint design of caching and NOMA can significantly outperform the straightforward combination of caching and NOMA.", "To this end, we consider a simple distributed caching scheme for video file delivery.", "By splitting the video files into several subfiles, superposition transmission of the requested uncached subfiles is enabled during delivery." ]
[ "NOMA" ]
background
{ "title": "Cache-Aided Non-Orthogonal Multiple Access", "abstract": "In this paper, we propose a novel joint caching and non-orthogonal multiple access (NOMA) scheme to facilitate advanced downlink transmission for next generation cellular networks. In addition to reaping the conventional advantages of caching and NOMA transmission, the proposed cache-aided NOMA scheme also exploits cached data for interference cancellation which is not possible with separate caching and NOMA transmission designs. Furthermore, as caching can help to reduce the residual interference power, several decoding orders are feasible at the receivers, and these decoding orders can be flexibly selected for performance optimization. We characterize the achievable rate region of cache-aided NOMA and investigate its benefits for minimizing the time required to complete video file delivery. Our simulation results reveal that, compared to several baseline schemes, the proposed cache-aided NOMA scheme significantly expands the achievable rate region for downlink transmission, which translates into substantially reduced file delivery times." }
{ "title": "NOMA Assisted Wireless Caching: Strategies and Performance Analysis", "abstract": "Conventional wireless caching assumes that content can be pushed to local caching infrastructure during off-peak hours in an error-free manner; however, this assumption is not applicable if local caches need to be frequently updated via wireless transmission. This paper investigates a new approach to wireless caching for situations in which the cache content has to be updated during on-peak hours. Two non-orthogonal multiple access (NOMA)-assisted caching strategies are developed, namely, the push-then-deliver strategy and the push-and-deliver strategy. In the push-then-deliver strategy, the NOMA principle is applied to push more content files to the content servers during a short time interval reserved for content pushing during on-peak hours and to provide more connectivity for content delivery, compared with the conventional orthogonal multiple access (OMA) strategy. The push-and-deliver strategy is motivated by the fact that some users' requests cannot be accommodated locally and the base station has to serve them directly. These events during the content delivery phase are exploited as opportunities for content pushing, which further facilitates the frequent update of the files cached at the content servers. It is also shown that this strategy can be straightforwardly extended to device-to-device caching, and various analytical results are developed to illustrate the superiority of the proposed caching strategies compared with OMA based schemes." }
1702.01309
1504.01274
Introduction
This paper continues the work of #REFR to determine the weight hierarchy of a family of cyclic codes with arbitrary number of nonzeroes.
[ "An [n, k] linear code C over finite field F q is a k-dimensional subspace of the linear space F The concept of GHWs was first introduced by Helleseth, Kløve, Mykkeltveit #OTHEREFR and was used in the computation of weight distributions.", "It was rediscovered by Wei #OTHEREFR to fully characterize the performance of linear codes when used in a wire-tap channel of type II or as a t-resilient function.", "Indeed, the GHWs provide detailed structural information of linear codes, which can also be used to compute the state and branch complexity profiles of linear codes #OTHEREFR , to determine the erasure list-decodability of linear codes #OTHEREFR and so on.", "In general, the determination of weight hierarchy is very difficult and there are only a few classes of linear codes whose weight hierarchies are known (see #OTHEREFR for a comprehensive enumeration of related references)." ]
[ "Our result can be regarded as an extension of the results in #OTHEREFR , where the weight hierarchy of the semiprimitive codes was computed.", "We achieve this by generalizing a number-theoretic approach introduced in #OTHEREFR .", "The rest of this paper is organized as follows.", "In Section 2, we introduce the concerned family of cyclic codes and state the main result.", "In Section 3, we present a number-theoretic approach to the computation of GHWs. In Section 4, we prove the main result. Section 5 concludes the paper." ]
[ "cyclic codes" ]
background
{ "title": "The Weight Hierarchy of a Family of Cyclic Codes with Arbitrary Number of Nonzeroes", "abstract": "The generalized Hamming weights (GHWs) are fundamental parameters of linear codes. GHWs are of great interest in many applications since they convey detailed information of linear codes. In this paper, we continue the work of [10] to study the GHWs of a family of cyclic codes with arbitrary number of nonzeroes. The weight hierarchy is determined by employing a number-theoretic approach." }
{ "title": "The Weight Hierarchy of Some Reducible Cyclic Codes", "abstract": "The generalized Hamming weights (GHWs) of linear codes are fundamental parameters, the knowledge of which is of great interest in many applications. However, to determine the GHWs of linear codes is difficult in general. In this paper, we study the GHWs for a family of reducible cyclic codes and obtain the complete weight hierarchy in several cases. This is achieved by extending the idea of Yang et al. into higher dimension and by employing some interesting combinatorial arguments. It shall be noted that these cyclic codes may have arbitrary number of nonzeros. Index Terms-Cyclic code, exponential sum, generalized Hamming weight, weight hierarchy." }
2003.11467
1911.11523
VI. HIGH-ACCURACY USER POSITIONING
In #REFR , the measured channels of the ultra-dense RadioWeaves dataset were used to train a Machine Learning model to estimate the position of the users.
[ "The large amount of information provided by the antennas in the proposed systems can be used to provide extra services to the users.", "One service in particular is of high interest: User localisation.", "Since the antennas of a RadioWeaves system are used to focus wireless power in the spatial domain, the system has information about the position of the users." ]
[ "The Machine Learning model is based on Convolutional Neural Networks (CNNs), which have been proven to be very effective to extract complex features of large amounts of data. In this case, the Fig. 5 .", "Weaving the antennas in the environment is expected to result in locally-confined communication, which means that it is possible to target a user precisely without harming other nearby users.", "We study the spatial confinement in our measured channel database by iteratively targeting each location in the dataset, and logging also the RSS measured in the most interfered location, which we call the victim user.", "Clearly, a RadioWeaves weaving methodology results in the most spatial confinement, represented by the smallest overlap between the target and victim users signal strength histograms.", "CNN used the CSI as input and was trained to estimate the exact location of the user." ]
[ "measured channels" ]
method
{ "title": "Weave and Conquer: A Measurement-based Analysis of Dense Antenna Deployments", "abstract": "Massive MIMO is bringing significant performance improvements in the context of outdoor macrocells, such as favourable propagation conditions, spatially confined communication, high antenna gains to overcome pathloss, and good angular localisation. In this paper we explore how these benefits scale to indoor scattering-rich deployments based on a dense indoor measured Massive MIMO dataset. First, we design and implement three different and relevant topologies to position our 64 antennas in the environment: Massive MIMO, RadioStripes and RadioWeaves topologies. Second, we measure 252004 indoor channels for a 3x3m 2 area for each topology, using an automated userpositioning and measurement system. Using this dense dataset, we provide a unique analysis of system level properties such as pathloss, favourable propagation, spatial focusing and localisation performance. Our measurement-based analyses verify and quantify that distributing the antennas throughout the environment results in an improved propagation fairness, better favourable propagation conditions, higher spatial confinement and finally a high localisation performance. The dataset is publicly available and can serve as a reference database for benchmarking of future indoor communication systems and communication models. We outline the implementation challenges we observed, and also list diverse R&D challenges that can benefit from using this dataset as a benchmark." }
{ "title": "CSI-based Positioning in Massive MIMO systems using Convolutional Neural Networks", "abstract": "This paper studies the performance of a user positioning system using Channel State Information (CSI) of a Massive MIMO (MaMIMO) system. To infer the position of the user from the CSI, a Convolutional Neural Network is designed and evaluated through a novel dataset. This dataset contains indoor MaMIMO CSI measurements using three different antenna topologies, covering a 2.5 m by 2.5 m indoor area. We show that we can train a Convolutional Neural Network (CNN) model to estimate the position of a user inside this area with a mean error of less than half a wavelength. Moreover, once the model is trained on a given scenario and antenna topology, Transfer Learning is used to repurpose the acquired knowledge towards another scenario with significantly different antenna topology and configuration. Our results show that it is possible to further train the CNN using only a small amount of extra labelled samples for the new topology. This transfer learning approach is able to reach accurate results, paving the road to a practical CSI-based positioning system powered by CNNs." }
2003.04581
1911.11523
I. INTRODUCTION
To further explore the limits of the accuracy of the proposed MaMIMO CSI-based localisation systems using CNNs, we generated three dense, spatially labelled, open indoor datasets #REFR .
[ "generated a spatially labelled dataset of a large corridor in their office building.", "They showed a fair localisation accuracy of the CNN based positioning solution.", "#OTHEREFR But, this accuracy was lower than the accuracy achieved by Vieira et al. #OTHEREFR .", "Furthermore, the authors presented some experiments to test the robustness and reproducibility over time of the proposed system.", "They found that the systems accuracy was affected by moving pedestrians and a changing propagation environment (closing and opening of windows and doors)." ]
[ "Here, each dataset represented the same room but the MIMO array topology was changed.", "All of these measurements were done in a static Line-of-Sight (LoS) propagation environment.", "In this earlier work, we showed a positioning accuracy of 55.35 mm (0.48 λ).", "This was achieved through the high density and size of the datasets, consisting of 252004 CSI-samples each covering an area of around 3 by 3 metres.", "Furthermore, we showed that transfer learning can lower the need for such a large dataset when transferring knowledge from a model trained on one array topology to a model that has to be trained for another topology." ]
[ "CNNs" ]
method
{ "title": "MaMIMO CSI-based positioning using CNNs: Peeking inside the black box", "abstract": "Massive MIMO (MaMIMO) Channel State Information (CSI) based user positioning systems using Convolutional Neural Networks (CNNs) show great potential, reaching a very high accuracy without introducing any overhead in the MaMIMO communication system. In this study, we show that both these systems can position indoor users in both Line-of-Sight and in non-Line-of-Sight conditions with an accuracy of around 20 mm. However, to further develop these positioning systems, more insight in how the CNN infers the position is needed. The used CNNs are a black box and we can only guess how they position the users. Therefore, the second focus of this paper is on opening the black box using several experiments. We explore the current limitations and promises using the open dataset gathered on a real-life 64-antenna MaMIMO testbed. In this way, extra insight in the system is gathered, guiding research on MaMIMO CSIbased positioning systems using CNNs in the right direction." }
{ "title": "CSI-based Positioning in Massive MIMO systems using Convolutional Neural Networks", "abstract": "This paper studies the performance of a user positioning system using Channel State Information (CSI) of a Massive MIMO (MaMIMO) system. To infer the position of the user from the CSI, a Convolutional Neural Network is designed and evaluated through a novel dataset. This dataset contains indoor MaMIMO CSI measurements using three different antenna topologies, covering a 2.5 m by 2.5 m indoor area. We show that we can train a Convolutional Neural Network (CNN) model to estimate the position of a user inside this area with a mean error of less than half a wavelength. Moreover, once the model is trained on a given scenario and antenna topology, Transfer Learning is used to repurpose the acquired knowledge towards another scenario with significantly different antenna topology and configuration. Our results show that it is possible to further train the CNN using only a small amount of extra labelled samples for the new topology. This transfer learning approach is able to reach accurate results, paving the road to a practical CSI-based positioning system powered by CNNs." }
2003.04581
1911.11523
B. CSI-based Positioning Performance
For the boardroom dataset, this new model more than doubles the accuracy compared to the previously reported mean estimation error of 55.35 mm (0.48 λ) #REFR .
[ "The training set consisted of 85% of the available samples.", "To fine-tune the training, a validation set with a size of 5% of the available data was used.", "Finally, the performance was tested using a test set containing the remaining 10% of the dataset.", "The positioning accuracy of the proposed CNN on these three datasets can be seen in Table I.", "As can be seen, the model reaches an accuracy of 17.16 -17.30 mm (0.150 -0.151 λ) on the LoS datasets and 20.26 mm (0.176 λ) on the nLoS dataset." ]
[ "This is due to the architecture of the proposed model.", "This model has many more Convolutional layers in comparison to the model proposed in #OTHEREFR and is therefore a deeper model.", "In general, a deeper model can learn more complex features than a more shallow model.", "However, this comes at the cost of slower training and the vanishing gradient problem.", "During the design of the new proposed model, these two problems were taken into account, resulting in an architecture that is able to train the higher layers effectivly 2 https://homes.esat.kuleuven.be/ ∼ sdebast in an efficient way." ]
[ "accuracy" ]
result
{ "title": "MaMIMO CSI-based positioning using CNNs: Peeking inside the black box", "abstract": "Massive MIMO (MaMIMO) Channel State Information (CSI) based user positioning systems using Convolutional Neural Networks (CNNs) show great potential, reaching a very high accuracy without introducing any overhead in the MaMIMO communication system. In this study, we show that both these systems can position indoor users in both Line-of-Sight and in non-Line-of-Sight conditions with an accuracy of around 20 mm. However, to further develop these positioning systems, more insight in how the CNN infers the position is needed. The used CNNs are a black box and we can only guess how they position the users. Therefore, the second focus of this paper is on opening the black box using several experiments. We explore the current limitations and promises using the open dataset gathered on a real-life 64-antenna MaMIMO testbed. In this way, extra insight in the system is gathered, guiding research on MaMIMO CSIbased positioning systems using CNNs in the right direction." }
{ "title": "CSI-based Positioning in Massive MIMO systems using Convolutional Neural Networks", "abstract": "This paper studies the performance of a user positioning system using Channel State Information (CSI) of a Massive MIMO (MaMIMO) system. To infer the position of the user from the CSI, a Convolutional Neural Network is designed and evaluated through a novel dataset. This dataset contains indoor MaMIMO CSI measurements using three different antenna topologies, covering a 2.5 m by 2.5 m indoor area. We show that we can train a Convolutional Neural Network (CNN) model to estimate the position of a user inside this area with a mean error of less than half a wavelength. Moreover, once the model is trained on a given scenario and antenna topology, Transfer Learning is used to repurpose the acquired knowledge towards another scenario with significantly different antenna topology and configuration. Our results show that it is possible to further train the CNN using only a small amount of extra labelled samples for the new topology. This transfer learning approach is able to reach accurate results, paving the road to a practical CSI-based positioning system powered by CNNs." }
1611.06092
1603.09711
Supplementary Note 3 HCM and HCM*
Now we analyze HCM in more detail, to analytically derive the size of its largest component as in #REFR .
[ "Select two inter-community edges uniformly at random, {u, v} and {w, x}.", "Now delete these edges and replace them by {u, x}, {w, v} if this results in a simple graph.", "Otherwise keep the original edges {u, v} and {w, x}.", "This randomizes the inter-community edges uniformly if this procedure is repeated at least 100E times, where E is the number of inter-community edges #OTHEREFR . This creates HCM.", "To create HCM*, the edges within the communities are also randomized after rewiring the inter-community edges, again using the switching algorithm. This is repeated for all communities." ]
[ "Let s i be the size of community i, and k i the number of half-edges from community i to other communities.", "We call k i the inter-community degree of community i.", "We define the joint distribution p k,s to be the fraction of communities of size s with inter-community degree k.", "We define two distributions and their probability generating functions to calculate the size of the largest component. The excess inter-community degree distribution", "can be interpreted as the probability to arrive in a community with inter-community degree k and size s when traversing a random inter-community edge, excluding the traversed edge." ]
[ "HCM", "size" ]
background
{ "title": "Epidemic spreading on complex networks with community structures", "abstract": "Many real-world networks display a community structure. We study two random graph models that create a network with similar community structure as a given network. One model preserves the exact community structure of the original network, while the other model only preserves the set of communities and the vertex degrees. These models show that community structure is an important determinant of the behavior of percolation processes on networks, such as information diffusion or virus spreading: the community structure can both enforce as well as inhibit diffusion processes. Our models further show that it is the mesoscopic set of communities that matters. The exact internal structures of communities barely influence the behavior of percolation processes across networks. This insensitivity is likely due to the relative denseness of the communities." }
{ "title": "Power-law relations in random networks with communities", "abstract": "Most random graph models are locally tree-like -do not contain short cycles-rendering them unfit for modeling networks with a community structure. We introduce the hierarchical configuration model (HCM), a generalization of the configuration model that includes community structures, while properties such as the size of the giant component, and the size of the giant percolating cluster under bond percolation can still be derived analytically. Viewing real-world networks as realizations of HCM, we observe two previously undiscovered power-law relations: between the number of edges inside a community and the community sizes, and between the number of edges going out of a community and the community sizes. We also relate the power-law exponent τ of the degree distribution with the power-law exponent of the community size distribution γ. In the case of extremely dense communities (e.g., complete graphs), this relation takes the simple form τ = γ − 1." }
1512.08397
1603.09711
Conclusions and discussion
We have further investigated power-law relations in several real-world networks, and compared them to the power-law relations in our hierarchical configuration model in a companion paper #REFR .
[ "For example, the condition for a giant component to emerge in the hierarchical configuration model is completely determined by properties of the macroscopic configuration model.", "However, the size of the giant component also depends on the community sizes.", "In contrast, the asymptotic clustering coefficient is entirely defined by the clustering inside the communities.", "For bond percolation on the hierarchical configuration model, the critical percolation value depends on both the inter-community degree distribution, and the shape of the communities.", "Furthermore, we have shown that if communities are dense with a power-law degree distribution, then the edges between communities follow a power law with an exponent that is one higher than the exponent of the degree distribution." ]
[ "These real-world networks do not display this power-law shift, which implies that most communities in real-world networks do not satisfy the intuitive picture of dense communities.", "In fact, we find a power-law relation between the denseness of the communities and their sizes, so that the large communities are less dense than the smaller communities.", "Finally, we have shown that several existing models incorporating a community structure can be interpreted as a special case of the hierarchical configuration model, which underlines its generality.", "Worthwhile extensions of the hierarchical configuration model for future research include directed or weighted counterparts and a version that allows for overlapping communities.", "The analysis of percolation on the hierarchical configuration model has shown that the size of the largest percolating cluster and the critical percolation value do not necessarily increase or decrease when adding clustering." ]
[ "several real-world networks" ]
background
{ "title": "Hierarchical Configuration Model", "abstract": "We introduce a class of random graphs with a community structure, which we call the hierarchical configuration model. On the inter-community level, the graph is a configuration model, and on the intra-community level, every vertex in the configuration model is replaced by a community: i.e., a small graph. These communities may have any shape, as long as they are connected. For these hierarchical graphs, we find the size of the largest component, the degree distribution and the clustering coefficient. Furthermore, we determine the conditions under which a giant percolation cluster exists, and find its size." }
{ "title": "Power-law relations in random networks with communities", "abstract": "Most random graph models are locally tree-like -do not contain short cycles-rendering them unfit for modeling networks with a community structure. We introduce the hierarchical configuration model (HCM), a generalization of the configuration model that includes community structures, while properties such as the size of the giant component, and the size of the giant percolating cluster under bond percolation can still be derived analytically. Viewing real-world networks as realizations of HCM, we observe two previously undiscovered power-law relations: between the number of edges inside a community and the community sizes, and between the number of edges going out of a community and the community sizes. We also relate the power-law exponent τ of the degree distribution with the power-law exponent of the community size distribution γ. In the case of extremely dense communities (e.g., complete graphs), this relation takes the simple form τ = γ − 1." }
1804.05560
1503.05897
Amazon Mechanical Turk Study
This can be interpreted #REFR as a success in eliciting effort from the crowd and discouraging low quality/heuristic reporting.
[ "In both settings, we had 3 workers giving answers for each paragraph, giving us a total 3 × 480 HITs from 480 paragraphs.", "We thus collected a dataset of 1440 worker responses on these HITs, 720 in each setting. In total, 129 workers participated in the experiment. We judge the mechanism on two most important criteria.", "First, the ability to discourage workers from heuristic reporting and second, the ability to get more accurate answers from crowd.", "Figure 3 compares the time workers spent on solving the tasks in the two settings.", "The fraction of HITs that were given very little time has significantly decreased with our mechanism and the fraction of HITS that were given more time has significantly increased (the green distribution with dots is more skewed towards the right side as compared to the red distribution with slashes, which is more skewed towards the left)." ]
[ "We used a browser based JavaScript solution to measure the actual time spent on solving tasks to get tight estimates of time spent in the DB Trust setting, without workers being aware of it.", "Amazon uses the difference between time of accepting and submitting a HIT as estimates of time spent, which (even after filtering very large values) tend to be highly inflated.", "As one can see, even with such tight estimates in the DB Trust setting, the time spent by workers is better.", "2.", "The average accuracy of workers was found to increase from 70.86% in the unspecified setting to 79.17% in the DB Trust setting." ]
[ "eliciting effort" ]
background
{ "title": "Deep Bayesian Trust : A Dominant Strategy and Fair Reward Mechanism for Crowdsourcing", "abstract": "Abstract A common mechanism to assess trust in crowdworkers is to have them answer gold tasks. However, assigning gold tasks to all workers reduces the efficiency of the platform. We propose a mechanism that exploits transitivity so that a worker can be certified as trusted by other trusted workers who solve common tasks. Thus, trust can be derived from a smaller number of gold tasks assignment through multiple layers of peer relationship among the workers, a model we call deep trust. We use the derived trust to incentivize workers for high quality work and show that the resulting mechanism is dominant strategy incentive compatible. We also show that the mechanism satisfies a notion of fairness in that the trust assessment (and thus the reward) of a worker in the limit is independent of the quality of other workers." }
{ "title": "Incentivizing high quality crowdwork", "abstract": "We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. In the full version of this paper, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets." }
1804.03178
1503.05897
Optimal Common Pricing
This assumption is motivated by empirical studies which reveal that, depending on the type of the task, the induced profile distribution takes a quite specific form #REFR .
[ "In this section, we now study the optimal common pricing problem (OCP) in #OTHEREFR , that has much less freedom in choosing the price, yet used in many practical crowdsourcing systems.", "Despite the dimensional reduction of OCP, compared to OPP, we still find that solving OCP has a fundamental hardness due to the non-convex utility function and non-convex constraint even for an additive utility function.", "In order to handle such challenge in finding a global optimum in OCP, we introduce an assumption of one-to-one correspondence of worker's quality and cost.", "In detail, we assume that a quality of any worker follows a monotone increasing mapping f : [0, 1] → R, where r i = f (c i ).", "Moreover, we assume that f is twice-differentiable on its domain." ]
[ "Following the notion of task types used in #OTHEREFR , we introduce the following regimes of workers' profile: Definition 4.1 We divide the class of function f into the three regimes:", "Technically, the size of f ′ (x) and", "decides whether the bang-per-buck value,", "is increasing with respect to c or not, since ∂", "In this context, the provided regimes can be interpreted using the behavior of bang-per-buck of workers." ]
[ "tasks" ]
background
{ "title": "On the Posted Pricing in Crowdsourcing: Power of Bonus", "abstract": "In practical crowdsourcing systems such as Amazon Mechanical Turk, posted pricing is widely used due to its simplicity, where a task requester publishes a pricing rule a priori, on which workers decide whether to accept and perform the task or not, and are often paid according to the quality of their effort. One of the key ingredients of a good posted pricing lies in how to recruit more high-quality workers with less budget, for which the following two schemes are considered: (i) personalized pricing by profiling users in terms of their quality and cost, and (ii) additional bonus payment offered for more qualified task completion. Despite their potential benefits in crowdsourced pricing, it has been under-explored how much gain each or both of personalization and bonus payment actually provides to the requester. In this paper, we study four possible combinations of posted pricing made by pricing with/without personalization and bonus. We aim at analytically quantifying when and how much such two ideas contribute to the requester's utility. To this end, we first derive the optimal personalized and common pricing schemes and analyze their computational tractability. Next, we quantify the gap in the utility between with and without bonus payment in both pricing schemes. We analytically prove that the impact of bonus is negligible in personalized pricing, whereas crucial in common pricing. Finally, we study the notion of Price of Agnosticity (PoA) that quantifies the utility gap between personalized and common pricing policies, where we show that PoA is not significant under many practical conditions. This implies that a complex personalized pricing with more privacy concerns can be replaced by a simple common pricing with bonus, if designed well. We validate our analytical findings through extensive simulations and real experiments done in Amazon Mechanical Turk, and provide additional implications that are useful in designing a pricing policy in crowdsourcing." }
{ "title": "Incentivizing high quality crowdwork", "abstract": "We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. In the full version of this paper, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets." }
2003.11014
1703.05884
State-of-the-art Comparison
NFS #REFR : The Need for Speed dataset consists of 100 challenging videos captured using a high frame rate (240 FPS) camera.
[ "The baseline approach DiMP-50 already achieves the best results with an AUC of 74.0.", "Our approach achieves a similar performance to the baseline, showing that it generalizes well to such real world videos.", "OTB-100 #OTHEREFR : Figure 4b shows the success plots over all the 100 videos.", "Discriminative correlation filter based UPDT #OTHEREFR tracker achieves the best results with an AUC score of 70.4.", "Our approach obtains results comparable with the state-of-the-art, while outperforming the baseline DiMP-50 by over 1% in AUC." ]
[ "We evaluate our approach on the downsampled 30 FPS version of this dataset.", "The success plots over all the 100 videos are shown in Fig. 4c .", "Among previous methods, our appearance model DiMP-50 obtains the best results.", "Our approach significantly outperforms DiMP-50 with a relative gain of 2.6%, achieving 63.5% AUC score." ]
[ "100 challenging videos", "speed dataset" ]
method
{ "title": "Know Your Surroundings: Exploiting Scene Information for Object Tracking", "abstract": "Current state-of-the-art trackers only rely on a target appearance model in order to localize the object in each frame. Such approaches are however prone to fail in case of e.g.fast appearance changes or presence of distractor objects, where a target appearance model alone is insufficient for robust tracking. Having the knowledge about the presence and locations of other objects in the surrounding scene can be highly beneficial in such cases. This scene information can be propagated through the sequence and used to, for instance, explicitly avoid distractor objects and eliminate target candidate regions. In this work, we propose a novel tracking architecture which can utilize scene information for tracking. Our tracker represents such information as dense localized state vectors, which can encode, for example, if the local region is target, background, or distractor. These state vectors are propagated through the sequence and combined with the appearance model output to localize the target. Our network is learned to effectively utilize the scene information by directly maximizing tracking performance on video segments. The proposed approach sets a new stateof-the-art on 3 tracking benchmarks, achieving an AO score of 63.6% on the recent GOT-10k dataset." }
{ "title": "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking", "abstract": "In this paper, we propose the first higher frame rate video dataset (called Need for Speed -NfS)" }
2003.12565
1703.05884
B. Target Center Regression Module
In #REFR , the per-sample loss L_CE is the grid approximation (9) of the original KL-divergence objective.
[ "We also add an L 2 regularization term to benefit generalization to unseen frames. The loss is thus formulated as follows,", "The non-negative scalars λ and γ j control the impact of the regularization term and sample z j respectively. We also make the following definitions for convenience,", "Note that we use superscript k ∈ {1, . . .", ", K} to denote spatial grid location y tc,(k) ∈ R 2 .", "In the following, the quantities in (15b)-(15d) are either seen as vectors in R K or 2Dmaps R H×W (with K = HW ), as made clear from the context." ]
[ "Without loss of generality, we may assume A = 1, obtaining", "Here, 1 T = [1, . . . , 1] denotes a vector of ones.", "Note that the grid approximation thus corresponds to the SoftMax-Cross Entropy loss, commonly employed for classification.", "To derive the optimization module, we adopt the steepest descent formulation #OTHEREFR , but employ the Newton approximation discussed above. This results in the following optimization strategy,", "." ]
[ "original KL-divergence objective" ]
background
{ "title": "Probabilistic Regression for Visual Tracking", "abstract": "Visual tracking is fundamentally the problem of regressing the state of the target in each video frame. While significant progress has been achieved, trackers are still prone to failures and inaccuracies. It is therefore crucial to represent the uncertainty in the target estimation. Although current prominent paradigms rely on estimating a state-dependent confidence score, this value lacks a clear probabilistic interpretation, complicating its use. In this work, we therefore propose a probabilistic regression formulation and apply it to tracking. Our network predicts the conditional probability density of the target state given an input image. Crucially, our formulation is capable of modeling label noise stemming from inaccurate annotations and ambiguities in the task. The regression network is trained by minimizing the Kullback-Leibler divergence. When applied for tracking, our formulation not only allows a probabilistic representation of the output, but also substantially improves the performance. Our tracker sets a new state-of-the-art on six datasets, achieving 59.8% AUC on LaSOT and 75.8% Success on TrackingNet. The code and models are available at" }
{ "title": "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking", "abstract": "In this paper, we propose the first higher frame rate video dataset (called Need for Speed -NfS)" }
1808.02134
1703.05884
II. MOTIVATION AND RELATED WORK
In 2017, Need for Speed (NFS) was introduced as a benchmark dataset of very high quality videos, dividing object trackers into deep trackers and correlation filter (CF) trackers #REFR . Several of the best performing algorithms are used in the benchmarks.
[ "The well-known region based tracking algorithm detects a human and extracts it from the background #OTHEREFR .", "Because it only detects the background and foreground based on Gaussian modeling, the region based tracking is not suitable for surveillance application at hand.", "Another method is Feature Based Tracking, where a classifier looks for features that are well-describing the object of interest such as lines, point that separates the object from the background.", "The feature based methods suffer from occlusion problem as the they need at least some sub-features to remain visible and even then the accuracy of the classification drops #OTHEREFR .", "Active Contour Based Tracking represents object's outlines as bounding contours #OTHEREFR ." ]
[ "The fastest algorithm is the Multi-Dimensional Network (MDNet) that has more than 50 FPS tested using the benchmark #OTHEREFR .", "In contrast, the CF trackers like Multiple Instance Learning (MIL) #OTHEREFR and Boosting algorithm #OTHEREFR are slow.", "The MOSSE filter #OTHEREFR is very fast but not accurate.", "The KCF #OTHEREFR is based on MOSSE too but it achieved better accuracy with supports from the HOG features.", "Because of the boundary issues in frequency domain learning #OTHEREFR , some researchers use boundary learning methods to reach good performances." ]
[ "deep trackers" ]
method
{ "title": "Kerman: A Hybrid Lightweight Tracking Algorithm to Enable Smart Surveillance as an Edge Service", "abstract": "Edge computing pushes the cloud computing boundaries beyond uncertain network resource by leveraging computational processes close to the source and target of data. Time-sensitive and data-intensive video surveillance applications benefit from on-site or near-site data mining. In recent years, many smart video surveillance approaches are proposed for object detection and tracking by using Artificial Intelligence (AI) and Machine Learning (ML) algorithms. However, it is still hard to migrate those computing and data-intensive tasks from Cloud to Edge due to the high computational requirement. In this paper, we envision to achieve intelligent surveillance as an edge service by proposing a hybrid lightweight tracking algorithm named Kerman (Kernelized Kalman filter). Kerman is a decision tree based hybrid Kernelized Correlation Filter (KCF) algorithm proposed for human object tracking, which is coupled with a lightweight Convolutional Neural Network (L-CNN) for high performance. The proposed Kerman algorithm has been implemented on a couple of single board computers (SBC) as edge devices and validated using real-world surveillance video streams. The experimental results are promising that the Kerman algorithm is able to track the object of interest with a decent accuracy at a resource consumption affordable by edge devices." }
{ "title": "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking", "abstract": "In this paper, we propose the first higher frame rate video dataset (called Need for Speed -NfS)" }
1908.07904
1703.05884
Tracking benchmarks
For example, the NfS #REFR benchmark consists of 100 high frame rate videos and analyzes the influence of appearance variation on deep and correlation filter-based trackers, respectively.
[ "In recent years, numerous tracking benchmarks have been proposed for general performance evaluation or specific issues #OTHEREFR 25, #OTHEREFR .", "The OTB #OTHEREFR , ALOV++ #OTHEREFR , VOT #OTHEREFR 25] , TrackingNet #OTHEREFR , LaSOT #OTHEREFR , and GOT-10K #OTHEREFR benchmarks provide unified platforms to compare state-of-the-art trackers. More recent ones, e.g.", "TrackingNet, LaSOT and GOT-10K, contain a large scale of videos and cover a wide range of classes, which will make training a high performance deep learning based trackers available. Other benchmarks focus on specific applications or problems." ]
[ "Among these benchmarks, the OTB-2013 #OTHEREFR , OTB-2015 #OTHEREFR , TC-128 #OTHEREFR , and LaSOT #OTHEREFR datasets contain motion blur subsets that can be used to evaluate the ability of trackers to handle the motion blur.", "Nevertheless, the evaluation results are incomplete, since other interference that also affects the tracking accuracy is not excluded.", "A better solution is to compare trackers on the videos that are captured at the same scene but have different levels of motion blur to see if the tracker can obtain the same performance.", "In this paper, we construct a dataset for motion blur evaluation by averaging the frames on high frame rate videos with different ranges, thus generate testing videos having the same content with different levels of motion blur.", "By doing this, we are able to score the robustness of trackers and help study the effects of motion blur." ]
[ "correlation filter-based trackers" ]
background
{ "title": "Effects of Blur and Deblurring to Visual Object Tracking", "abstract": "Intuitively, motion blur may hurt the performance of visual object tracking. However, we lack quantitative evaluation of a tracker's robustness to different levels of motion blur. Meanwhile, while image-deblurring methods can produce visually clearer videos for pleasing human eyes, it is unknown whether visual object tracking can benefit from image deblurring or not. In this paper, we address these two problems by constructing a Blurred Video Tracking benchmark, which contains a variety of videos with different levels of motion blurs, as well as ground-truth tracking results for evaluating trackers. We extensively evaluate 23 trackers on this benchmark and observe several new interesting results. Specifically, we find that light blur may improve the performance of many trackers, but heavy blur always hurts the tracking performance. We also find that image deblurring may help to improve tracking performance on heavilyblurred videos but hurt the performance on lightly-blurred videos. According to these observations, we propose a new GAN-based scheme to improve the tracker's robustness to motion blurs. In this scheme, a fine-tuned discriminator is used as an adaptive assessor to selectively deblur frames during tracking process. We use this scheme to successfully improve the accuracy and robustness of 6 trackers." }
{ "title": "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking", "abstract": "In this paper, we propose the first higher frame rate video dataset (called Need for Speed -NfS)" }
1906.01551
1703.05884
Displacement Consistency
Note that for ω_d = ϕ = 1, the updated u*_{k+1}, v*_{k+1} of equation (13) remain unaltered from the optimal solution of equation #REFR . In the following Sec.
[ "Motivated by the displacement consistency techniques, as proposed in #OTHEREFR , we enhance the degree of smoothness imposed on the movement variables, such as speed and angular displacement.", "We update the sub-grid location, u * k+1 , v * k+1", "obtained from equation (12) by,", "where", "is restricted by reducing the contribution of d 1 and ϕ 1 slightly to 0.9." ]
[ "6, we briefly describe our experimental setup, and critically analyze the results." ]
[ "following Sec" ]
background
{ "title": "Learning Rotation Adaptive Correlation Filters in Robust Visual Object Tracking", "abstract": "Abstract. Visual object tracking is one of the major challenges in the field of computer vision. Correlation Filter (CF) trackers are one of the most widely used categories in tracking. Though numerous tracking algorithms based on CFs are available today, most of them fail to efficiently detect the object in an unconstrained environment with dynamically changing object appearance. In order to tackle such challenges, the existing strategies often rely on a particular set of algorithms. Here, we propose a robust framework that offers the provision to incorporate illumination and rotation invariance in the standard Discriminative Correlation Filter (DCF) formulation. We also supervise the detection stage of DCF trackers by eliminating false positives in the convolution response map. Further, we demonstrate the impact of displacement consistency on CF trackers. The generality and efficiency of the proposed framework is illustrated by integrating our contributions into two state-of-the-art CF trackers: SRDCF and ECO. As per the comprehensive experiments on the VOT2016 dataset, our top trackers show substantial improvement of 14.7% and 6.41% in robustness, 11.4% and 1.71% in Average Expected Overlap (AEO) over the baseline SRDCF and ECO, respectively." }
{ "title": "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking", "abstract": "In this paper, we propose the first higher frame rate video dataset (called Need for Speed -NfS)" }
1908.06886
1804.05093
II. RELATED WORK
Most recently, heterogeneous GOP structures, where each layer can have neurons with differing operations, have received research attention #REFR .
[ "The similar approach is taken by the automatically evolving CNN (AE-CNN) #OTHEREFR , which runs a genetic algorithm on the population of networks composed of customized ResNet and DenseNet blocks, achieving competitive results.", "While deep networks can be difficult for the neuroevolution to handle, a viable alternative can be found in expanding the operation set of the shallow networks, allowing for more powerful representations.", "Generalized Operational Perceptron (GOP) model substitutes the standard neuron by offering a wider choice of nodal and pooling operations instead of the standard multiplication and addition.", "The choice of operations can be optimized simultaneously with the network architecture by a greedy incremental procedure #OTHEREFR .", "Operational Neural Networks (ONNs), composed of such units, have been shown to achieve superior performance to CNNs on some practical problems #OTHEREFR ." ]
[ "While flexibility of operators allows ONNs to stay relatively shallow, it also results in a vast unstructured design space which is computationally costly to traverse.", "Many recent works in architecture optimization utilize various techniques to reduce the computation needed, primarily by simplifying the evaluation procedure.", "SMASH #OTHEREFR learns a hypernetwork that can predict weights for all the connections of an arbitrary deep network (given a specific representation), which reduces the need for training and makes random search a viable solution for discovering architectures.", "Progressive Neural Architecture Search (PNAS) #OTHEREFR uses a separate recurrent network to approximately rank the candidate models without training them, allowing the search to focus only on more promising options.", "NASH #OTHEREFR and LEMONADE #OTHEREFR take advantage of network morphisms-operations that modify the structure of a trained network without affecting its outputto navigate the search space without training the models from scratch." ]
[ "neurons" ]
background
{ "title": "Architecture Search by Estimation of Network Structure Distributions", "abstract": "Abstract-The influence of deep learning is continuously expanding across different domains, and its new applications are ubiquitous. The question of neural network design thus increases in importance, as traditional empirical approaches are reaching their limits. Manual design of network architectures from scratch relies heavily on trial and error, while using existing pretrained models can introduce redundancies or vulnerabilities. Automated neural architecture design is able to overcome these problems, but the most successful algorithms operate on significantly constrained design spaces, assuming the target network to consist of identical repeating blocks. We propose a probabilistic representation of a neural network structure under the assumption of independence between layer types. The probability matrix (prototype) can describe general feedforward architectures and is equivalent to the population of models, while being simple to interpret and analyze. We construct an architecture search algorithm, inspired by the estimation of distribution algorithms, to take advantage of this representation. The probability matrix is tuned towards generating high-performance models by repeatedly sampling the architectures and evaluating the corresponding networks. Our algorithm is shown to discover models which are competitive with those produced by existing architecture search methods, both in accuracy and computational costs, despite the conceptual simplicity and the comparatively limited scope of achievable designs." }
{ "title": "Heterogeneous Multilayer Generalized Operational Perceptron", "abstract": "The traditional multilayer perceptron (MLP) using a McCulloch-Pitts neuron model is inherently limited to a set of neuronal activities, i.e., linear weighted sum followed by nonlinear thresholding step. Previously, generalized operational perceptron (GOP) was proposed to extend the conventional perceptron model by defining a diverse set of neuronal activities to imitate a generalized model of biological neurons. Together with GOP, a progressive operational perceptron (POP) algorithm was proposed to optimize a predefined template of multiple homogeneous layers in a layerwise manner. In this paper, we propose an efficient algorithm to learn a compact, fully heterogeneous multilayer network that allows each individual neuron, regardless of the layer, to have distinct characteristics. Based on the complexity of the problem, the proposed algorithm operates in a progressive manner on a neuronal level, searching for a compact topology, not only in terms of depth but also width, i.e., the number of neurons in each layer. The proposed algorithm is shown to outperform other related learning methods in extensive experiments on several classification problems. Index Terms-Architecture learning, feedforward network, generalized operational perceptron (GOP), progressive learning. University, Doha, Qatar. He has published two books, three book chapters, more than 40 journal papers in ten different IEEE transactions and other high impact journals, and around 80 papers in international conferences. He made significant contributions on biosignal analysis, particularly EEG and ECG analysis and processing, classification, and segmentation, computer vision with applications to recognition, classification, multimedia retrieval, evolving systems and evolutionary machine learning, swarm intelligence, and stochastic optimization." }