Dataset fields (one record per group of lines below):
- citing_id: string (arXiv identifier of the citing paper)
- cited_id: string (arXiv identifier of the cited paper)
- section_title: string
- citation: string (the citing sentence; #REFR marks the reference to cited_id, #OTHEREFR marks other references)
- text_before_citation: sequence of strings
- text_after_citation: sequence of strings
- keywords: sequence of strings
- citation_intent: one of 3 classes (method, background, result)
- citing_paper_content: dict (title, abstract)
- cited_paper_content: dict (title, abstract)
1904.05767
1512.03012
ObMan dataset
In order to find a variety of high-quality meshes of frequently manipulated everyday objects, we selected models from the ShapeNet #REFR dataset.
[ "To overcome the lack of adequate training data for our models, we generate a large-scale synthetic image dataset of hands grasping objects which we call the ObMan dataset.", "Here, we describe how we scale automatic generation of hand-object images. Objects." ]
[ "We selected 8 object categories of everyday objects (bottles, bowls, cans, jars, knifes, cellphones, cameras and remote controls).", "This results in a total of 2772 meshes which are split among the training, validation and test sets. Grasps.", "In order to generate plausible grasps, we use the GraspIt software #OTHEREFR following the methods used to collect the Grasp Database #OTHEREFR .", "In the robotics community, this dataset has remained valuable over many years #OTHEREFR and is still a reference for the fast synthesis of grasps given known object models #OTHEREFR .", "We favor simplicity and robustness of the grasp generation over the accuracy of the underlying model." ]
[ "ShapeNet dataset" ]
method
{ "title": "Learning Joint Reconstruction of Hands and Manipulated Objects", "abstract": "Estimating hand-object manipulations is essential for in- terpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challeng- ing task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact re- stricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regu- larize the joint reconstruction of hands and objects with ma- nipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors phys- ically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transfer- ability of ObMan-trained models to real data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.08593
1512.03012
Experiments
In this section, we perform a thorough comparison of our proposed Curriculum DeepSDF to DeepSDF along with comprehensive ablation studies for the shape reconstruction task on the ShapeNet dataset #REFR .
[]
[ "We use the missing part recovery task as an application to demonstrate the usage of our method.", "Following #OTHEREFR , we report the standard distance metrics of mesh reconstruction including the mean and the median of Chamfer distance (CD), mean Earth Mover's distance (EMD) #OTHEREFR , and mean mesh accuracy #OTHEREFR .", "For evaluating CD, we sample 30,000 points from mesh surfaces.", "For evaluating EMD, we follow #OTHEREFR by sampling 500 points from mesh surfaces due to a high computation cost.", "For evaluating mesh accuracy, following #OTHEREFR , we sample 1,000 points from mesh surfaces and compute the minimum distance d such that 90% of the points lie within d of the ground truth surface." ]
[ "shape reconstruction task", "ShapeNet" ]
method
{ "title": "Curriculum DeepSDF", "abstract": "When learning to sketch, beginners start with simple and flexible shapes, and then gradually strive for more complex and accurate ones in the subsequent training sessions. In this paper, we design a \"shape curriculum\" for learning continuous Signed Distance Function (SDF) on shapes, namely Curriculum DeepSDF. Inspired by how humans learn, Curriculum DeepSDF organizes the learning task in ascending order of difficulty according to the following two criteria: surface accuracy and sample difficulty. The former considers stringency in supervising with ground truth, while the latter regards the weights of hard training samples near complex geometry and fine structure. More specifically, Curriculum DeepSDF learns to reconstruct coarse shapes at first, and then gradually increases the accuracy and focuses more on complex local details. Experimental results show that a carefully-designed curriculum leads to significantly better shape reconstructions with the same training data, training epochs and network architecture as DeepSDF. We believe that the application of shape curricula can benefit the training process of a wide variety of 3D shape representation learning methods." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1910.02686
1512.03012
Experiment set-up
ShapeNetCore #REFR contains 51,300 unique 3-D objects under 55 different categories, such as motorbike. [Table 1: Main quantitative results.]
[ "We have implemented our models and all extensions with python library TensorFlow #OTHEREFR . We used 2 different datasets in our experiments.", "For the point cloud auto-encoder, we used the ShapeNetCore dataset #OTHEREFR as it is a large-scale dataset contains various 3-D models with annotations, and it is available in the public domain." ]
[ "Random gaussian stands for a random gaussian with the same mean and variance compared to the training set.", "The M-EMD for row \"Optimal (GT)\" was obtained via different point-cloud samples from the same ground-truth polygon mesh, hence it states a lower-bound of error that we could ever achieve.", "as polygon meshes.", "We sampled points from mesh surfaces by CloudCompare [71] .", "We use the official training, validation and testing splits of #OTHEREFR , as a result, there are 35,708 samples in the training set for training, 5,158 samples in the validation set for hyper-parameter tuning and 10,261 samples in the testing set for evaluation." ]
[ "51,300 unique 3-D" ]
background
{ "title": "Irregular Convolutional Auto-Encoder on Point Clouds", "abstract": "We proposed a novel graph convolutional neural network that could construct a coarse, sparse latent point cloud from a dense, raw point cloud. With a novel non-isotropic convolution operation defined on irregular geometries, the model then can reconstruct the original point cloud from this latent cloud with fine details. Furthermore, we proposed that it is even possible to perform particle simulation using the latent cloud encoded from some simulated particle cloud (e.g. fluids), to accelerate the particle simulation process. Our model has been tested on ShapeNetCore dataset for Auto-Encoding with a limited latent dimension and tested on a synthesis dataset for fluids simulation. We also compare the model with other state-of-the-art models, and several visualizations were done to intuitively understand the model. Recently, deep auto-encoders have been proven to have a strong capability in finding latent representations of data in an unsupervised manner, such as images [2, 3, 4] , 3D volumes [5, 6] , polygon meshes [7, 8], videos [9, 10], texts [11] , as well as point clouds [12, 13] . In this work, we aim to learn light-weight and information-rich representations of point arXiv:1910.02686v1 [cs.LG] 7 Oct 2019 * For consistency, we still write f (0) as vector form even it is real-valued and the norm operator · is unnecessary." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1910.02686
1512.03012
Results and quantitative metrics
We picked some complex objects from the ShapeNetCore #REFR dataset, such as shelf and piano, in order to better illustrate the improvements.
[ "We summarized our results as shown in table 1, as well as in figure 4 .", "As stated in table 1, our approach is significantly better than prior works, such as FoldingNet #OTHEREFR or AE-EMD as described in #OTHEREFR .", "The improvement of our model could be easily observed from figure 4." ]
[ "We also picked some uncommon objects, such as headphone and vase, as well as common objects such as airplane and bench.", "Latent vector-based models such as FoldingNet #OTHEREFR works better in common objects, as they could reconstruct the over-all shape of those objects, even some detailed parts.", "For example, they could reconstruct the engines under the wings of a airplane, or the supporting legs of a bench.", "As those objects appear many times among the dataset, the network is more likely to learn the shape of those objects.", "And for uncommon object categories, like piano and vase, baseline models could only reconstruct a rough and inaccurate over-all shape of those objects." ]
[ "complex objects" ]
method
{ "title": "Irregular Convolutional Auto-Encoder on Point Clouds", "abstract": "We proposed a novel graph convolutional neural network that could construct a coarse, sparse latent point cloud from a dense, raw point cloud. With a novel non-isotropic convolution operation defined on irregular geometries, the model then can reconstruct the original point cloud from this latent cloud with fine details. Furthermore, we proposed that it is even possible to perform particle simulation using the latent cloud encoded from some simulated particle cloud (e.g. fluids), to accelerate the particle simulation process. Our model has been tested on ShapeNetCore dataset for Auto-Encoding with a limited latent dimension and tested on a synthesis dataset for fluids simulation. We also compare the model with other state-of-the-art models, and several visualizations were done to intuitively understand the model. Recently, deep auto-encoders have been proven to have a strong capability in finding latent representations of data in an unsupervised manner, such as images [2, 3, 4] , 3D volumes [5, 6] , polygon meshes [7, 8], videos [9, 10], texts [11] , as well as point clouds [12, 13] . In this work, we aim to learn light-weight and information-rich representations of point arXiv:1910.02686v1 [cs.LG] 7 Oct 2019 * For consistency, we still write f (0) as vector form even it is real-valued and the norm operator · is unnecessary." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1910.02686
1512.03012
Results and quantitative metrics
To allow a fair comparison, we selected some complex and uncommon objects from the ShapeNetCore #REFR testing set.
[ "As those objects appear many times among the dataset, the network is more likely to learn the shape of those objects.", "And for uncommon object categories, like piano and vase, baseline models could only reconstruct a rough and inaccurate over-all shape of those objects.", "They work even worse for the object headphone, where baseline models failed to reconstruct nearly anything senseful.", "However, our model works well in either of those objects as shown in figure 4 .", "Figure 4 : Reconstruction results of our models with cloud and vector as latent code." ]
[ "Baseline models are FoldingNet #OTHEREFR and AE-EMD as in #OTHEREFR .", "Note how our latent cloud model could reconstruct complex objects such as shelf, lamp and piano, and how our latent vector model could still produce good results by preserving a hollow structure for object post, preserving the over-all, sharp structure as well as all 3 legs for object piano, and a clean reconstruction for table, while other baseline models struggled on those objects.", "Even stylized objects under common categories are relatively hard to reconstruct, as the row chair shows in figure 4 , baseline models failed to reconstruct that stylized chair object, and they reconstructed the special chair to a common chair.", "In the same row, we could observe that our latent cloud-based model is significantly better than other models, where it reconstructs mostly of the parts in that uncommon object, including those thin armrests.", "Meanwhile, the object shelf is particularly difficult to reconstruct as it has many hollow holes inside it." ]
[ "uncommon objects" ]
method
{ "title": "Irregular Convolutional Auto-Encoder on Point Clouds", "abstract": "We proposed a novel graph convolutional neural network that could construct a coarse, sparse latent point cloud from a dense, raw point cloud. With a novel non-isotropic convolution operation defined on irregular geometries, the model then can reconstruct the original point cloud from this latent cloud with fine details. Furthermore, we proposed that it is even possible to perform particle simulation using the latent cloud encoded from some simulated particle cloud (e.g. fluids), to accelerate the particle simulation process. Our model has been tested on ShapeNetCore dataset for Auto-Encoding with a limited latent dimension and tested on a synthesis dataset for fluids simulation. We also compare the model with other state-of-the-art models, and several visualizations were done to intuitively understand the model. Recently, deep auto-encoders have been proven to have a strong capability in finding latent representations of data in an unsupervised manner, such as images [2, 3, 4] , 3D volumes [5, 6] , polygon meshes [7, 8], videos [9, 10], texts [11] , as well as point clouds [12, 13] . In this work, we aim to learn light-weight and information-rich representations of point arXiv:1910.02686v1 [cs.LG] 7 Oct 2019 * For consistency, we still write f (0) as vector form even it is real-valued and the norm operator · is unnecessary." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1604.06079
1512.03012
Experimental Results
In this paper, we propose to generate the ground-truth data by rendering ShapeNet models #REFR .
[ "We consider four popular object categories where the underlying reflectional symmetry is salient: chair, car, table, and sofa.", "An important challenge is to obtain ground-truth data to train each individual network.", "Standard dataset creation approaches such as human labeling or scanning are inappropriate for us due to the limitations in cost and in collecting diverse physical objects." ]
[ "We employ an open-source physically-based rendering software, Mitsuba, to generate realistic renderings.", "We use 700−2500 models for each category to generate training data.", "For each selected object, we choose 36 random views, each of which provides an image with ground-truth geometric information.", "For each training dataset, we leave out 20% of the data for validation. Figure 2 shows some example renderings." ]
[ "ShapeNet models" ]
method
{ "title": "Symmetry-aware Depth Estimation using Deep Neural Networks", "abstract": "Abstract. Due to the abundance of 2D product images from the internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications. Recent works have addressed the single-view depth estimation problem by utilizing convolutional neural networks. In this paper, we show that exploring symmetry information, which is ubiquitous in man made objects, can significantly boost the quality of such depth predictions. Specifically, we propose a new convolutional neural network architecture to first estimate dense symmetric correspondences in a product image and then propose an optimization which utilizes this information explicitly to significantly improve the quality of single-view depth estimations. We have evaluated our approach extensively, and experimental results show that this approach outperforms state-of-the-art depth estimation techniques." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1906.01618
1512.03012
Experiments
We consider the "chair" and "car" classes of Shapenet v.2 #REFR with 4.5k and 2.5k model instances respectively.
[ "On the training objects, SRNs achieve almost pixel-perfect results with a PSNR of 30.41 dB.", "The dGQN fails to learn object shape and multi-view geometry on this limited dataset, achieving 20.85 dB. See Fig. 2 for a qualitative comparison. In a two-shot setting (see Fig.", "7 for reference views), we succeed in reconstructing any part of the object that has been observed, achieving 24.36 dB, while the dGQN achieves 18.56 dB.", "In a one-shot setting, SRNs reconstruct an object consistent with the observed view.", "As expected, due to the current non-probabilistic implementation, both the dGQN and SRNs reconstruct an object resembling the mean of the hundreds of feasible objects that may have generated the observation, achieving 17.51 dB and 18.11 dB respectively. Shapenet v2." ]
[ "We disable transparencies and specularities, and train on 50 observations of each instance at a resolution of 128 × 128 pixels.", "Camera poses are randomly generated on a sphere with the object at the origin.", "We evaluate performance on (1) novel-view synthesis of objects in the training set and (2) novel-view synthesis on objects in the held-out, official Shapenet v2 test sets, reconstructed from one or two observations, as discussed in Sec. 3.4. Fig. 7 shows the sampled poses for the few-shot case.", "In all settings, we assemble ground-truth novel views by sampling 250 views in an Archimedean spiral around each object instance. We compare SRNs to three baselines from recent literature. Table 1 and Fig. 6 report quantitative and qualitative results respectively.", "In all settings, we outperform all baselines by a wide margin." ]
[ "Shapenet" ]
method
{ "title": "Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations", "abstract": "The advent of deep learning has given rise to neural scene representations -learned mathematical models of a 3D environment. However, many of these representations do not explicitly reason about geometry and thus do not account for the underlying 3D structure of the scene. In contrast, geometric deep learning has explored 3D-structure-aware representations of scene geometry, but requires explicit 3D supervision. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D observations, without access to depth or geometry. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1908.06989
1512.03012
Introduction
With the simultaneous availability of synthetic CAD model datasets #REFR , we have an opportunity to drive forward both 3D scene understanding and geometric reconstruction.
[ "The capture and reconstruction of real-world 3D scenes has seen significant progress in recent years, driven by increasing availability of commodity RGB-D sensors such as the Microsoft Kinect or Intel RealSense.", "State-of-the-art 3D reconstruction approaches can achieve impressive reconstruction fidelity with robust tracking #OTHEREFR .", "Such 3D reconstructions have now begun to drive forward 3D scene understanding with the recent availability of annotated reconstruction datasets #OTHEREFR ." ]
[ "3D models of scanned real-world objects as well as synthetic CAD models of shapes contain significant information about understanding environments, often in a complementary fashion.", "Where CAD models often comprise relatively simple, clean, compact geometry, real-world objects are often more complex, and scanned real-world object geometry is then more complex, as well as noisy and incomplete.", "It is thus very informative to establish mappings between the two domains -for instance, to visually transform scans to CAD representations, or transfer learned semantic knowledge from CAD models to a real-world scan.", "Such a semantic mapping is difficult to obtain due to the lack of exact matches between synthetic models and real-world objects and these strong, low-level geometric differences.", "Current approaches towards retrieving CAD models representative of scanned objects thus focus on the task of retrieving a CAD model of the correct object class category #OTHEREFR 13, #OTHEREFR , without considering within-class similarities or rankings." ]
[ "3D scene understanding" ]
background
{ "title": "Joint Embedding of 3D Scan and CAD Objects", "abstract": ": We learn a joint embedding space of scan and CAD object geometry, visualized here by t-SNE. Semantically similar objects lie close together, despite very different lower-level geometric characteristics (clutter, noise, partialness, etc). 3D scan geometry and CAD models often contain complementary information towards understanding environments, which could be leveraged through establishing a mapping between the two domains. However, this is a challenging task due to strong, lower-level differences between scan and CAD geometry. We propose a novel approach to learn a joint embedding space between scan and CAD geometry, where semantically similar objects from both domains lie close together. To achieve this, we introduce a new 3D CNN-based approach to learn a joint embedding space representing object similarities across these domains. To learn a shared space where scan objects and CAD models can interlace, we propose a stacked hourglass approach to separate foreground and background from a scan object, and transform it to a complete, CAD-like representation to produce a shared embedding space. This embedding space can then be used for CAD model retrieval; to further enable this task, we introduce a new dataset of ranked scan-CAD similarity annotations, enabling new, fine-grained evaluation of CAD model retrieval to cluttered, noisy, partial scans. Our learned joint embedding outperforms current state of the art for CAD model retrieval by 12% in instance retrieval accuracy." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1912.08265
1512.03012
Consistency-Constrained Semi-Supervised Learning (CC-SSL)
For the nth iteration, model f^(n) is learned using the loss L^(n) defined in Equation #REFR .
[]
[ "The loss function is defined to be the Mean Square Error on heatmaps of both the source data and target data and γ is used to balance the loss between source and target datasets.", "To this end, we present our Consistency-Constrained Semi-Supervised Learning (CC-SSL) approach as following: we start with training a model only using synthetic data and obtain a initial weak model f (0) = f s . Then we iterate the following procedure.", "For the nth iteration, we first use Algorithm 1 to generate labelsŶ (n) t .", "with the generated labels, we simply train the model using (X s , Y s ) and (X t ,Ŷ (n) t ) jointly." ]
[ "model f" ]
method
{ "title": "Learning from Synthetic Animals", "abstract": "Despite great success in human parsing, progress for parsing other deformable articulated objects, like animals, is still limited by the lack of labeled data. In this paper, we use synthetic images and ground truth generated from CAD animal models to address this challenge. To bridge the gap between real and synthetic images, we propose a novel consistency-constrained semi-supervised learning method (CC-SSL). Our method leverages both spatial and temporal consistencies, to bootstrap weak models trained on synthetic data with unlabeled real images. We demonstrate the effectiveness of our method on highly deformable animals, such as horses and tigers. Without using any real image label, our method allows for accurate keypoints prediction on real images. Moreover, we quantitatively show that models using synthetic data achieve better generalization performance than models trained on real images across different domains in the Visual Domain Adaptation Challenge dataset. Our synthetic dataset contains 10+ animals with diverse poses and rich ground truth, which enables us to use the multi-task learning strategy to further boost models' performance." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1901.05239
1808.06583
A. Coded Multicasting
Minimization is carried out over the parameters (r_1 , r_2) of the concatenated code. See also #REFR for further details.
[ "Coded multicasting leverages computational redundancy, that is, a multiplicity larger than one, by scheduling a sequence of one-to-many multicasting transmissions that are simultaneously useful to more devices.", "For each group of IVs of multiplicity j, from s max to j = s q − 1, devices transmit in turn by serving j other users simultaneously via coded multicasting, whereby the j IVs are XORed and decoding leverages the available IVs as side information #OTHEREFR .", "Proposition 1: For a number q ∈ [q min : K] of nonstraggling devices, the Shuffle phase delay (6) of coded multicasting is given as", "where the minimization is subject to constraints #OTHEREFR .", "Proof : The proof follows immediately by noting that the first sum is the normalized delay (6) for the transmission of B j IVs given the coded multicasting gain of j, while the second term corresponds to the transmission of the remaining IVs." ]
[ "Remark 1: When the degree of function is d = 1, the Shuffle phase delay (11) coincides with the communication load derived in #OTHEREFR Proposition 2] normalized by N ." ]
[ "concatenated code" ]
method
{ "title": "Coded Federated Computing in Wireless Networks with Straggling Devices and Imperfect CSI", "abstract": "Distributed computing platforms typically assume the availability of reliable and dedicated connections among the processors. This work considers an alternative scenario, relevant for wireless data centers and federated learning, in which the distributed processors, operating on generally distinct coded data, are connected via shared wireless channels accessed via full-duplex transmission. The study accounts for both wireless and computing impairments, including interference, imperfect Channel State Information, and straggling processors, and it assumes a Map-Shuffle-Reduce coded computing paradigm. The total latency of the system, obtained as the sum of computing and communication delays, is studied for different shuffling strategies revealing the interplay between distributed computing, coding, and cooperative or coordinated transmission." }
{ "title": "Improved Latency-communication Trade-off for Map-shuffle-reduce Systems with Stragglers", "abstract": "In a distributed computing system operating according to the map-shuffle-reduce framework, coding data prior to storage can be useful both to reduce the latency caused by straggling servers and to decrease the inter-server communication load in the shuffle phase. In prior work, a concatenated coding scheme was proposed for a matrix multiplication task. In this scheme, the outer Maximum Distance Separable (MDS) code is leveraged to correct erasures caused by stragglers, while the inner repetition code is used to improve the communication efficiency in the shuffle phase by means of coded multicasting. In this work, it is demonstrated that it is possible to leverage the redundancy created by repetition coding in order to increase the rate of the outer MDS code and hence to increase the multicasting opportunities in the shuffle phase. As a result, the proposed approach is shown to improve over the best known latency-communication overhead trade-off." }
2002.07007
1811.12823
Synthesizability of unoptimized generated molecules
Here, we evaluate methods implemented in the MOSES #REFR benchmarking set, which cover diverse approaches to the molecular generation problem: a SMILES long short-term memory (LSTM) model, a variational auto-encoder (VAE), and an adversarial auto-encoder (AAE) (see Methods).
[ "As alluded to above, distribution learning methods are capable of generating \"unoptimized\" molecules that share properties (in aggregate) with the database used for training." ]
[ "There are more deep learning approaches for molecular generation and optimization than can be compared here, 11 so we focus on these top-performing classes of approaches.", "In this task, we can use post hoc filtering or training set biasing by separately training distribution learning models on ChEMBL (less synthesizable) and MOSES (more synthesizable).", "Figure 2b shows the fraction of synthesizable molecules from 300 generated by each distribution learning method trained on the ChEMBL and MOSES.", "We observe that the fraction of synthesizable molecules are comparable to that of the training set, while no method improves synthesizability relative to its training set.", "The stark difference between results using MOSES and ChEMBL suggests that a priori biasing by training on a \"more synthesizable\" data set is a viable approach for distribution learning algorithms. There is no one method particularly superior than others." ]
[ "molecular generation problem" ]
method
{ "title": "The Synthesizability of Molecules Proposed by Generative Models", "abstract": "The discovery of functional molecules is an expensive and time-consuming process, exemplified by the rising costs of small molecule therapeutic discovery. One class of techniques of growing interest for early-stage drug discovery is de novo molecular generation and optimization, catalyzed by the development of new deep learning approaches. 1 These techniques can suggest novel molecular structures intended to maximize a multi-objective function, e.g., suitability as a therapeutic against a particular target, 2 without relying on brute-force exploration of a chemical space. 3 However, the utility of these approaches is stymied by ignorance of synthesizability. To highlight the severity of this issue, we use a data-driven computer-aided synthesis planning program 4 to quantify how often molecules proposed by state-of-the-art generative models cannot be readily synthesized. Our analysis demonstrates that there are several tasks for which these models generate unrealistic molecular structures despite performing well on popular quantitative benchmarks. Synthetic complexity heuristics can successfully bias generation toward synthetically-tractable chemical space, although doing so 1 arXiv:2002.07007v1 [q-bio.QM] 17 Feb 2020 necessarily detracts from the primary objective. This analysis suggests that to improve the utility of these models in real discovery workflows, new algorithm development is warranted." }
{ "title": "Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models", "abstract": "Deep generative models such as generative adversarial networks, variational autoencoders, and autoregressive models are rapidly growing in popularity for the discovery of new molecules and materials. In this work, we introduce MOlecular SEtS (MOSES), a benchmarking platform to support research on machine learning for drug discovery. MOSES implements several popular molecular generation models and includes a set of metrics that evaluate the diversity and quality of generated molecules. MOSES is meant to standardize the research on molecular generation and facilitate the sharing and comparison of new models. Additionally, we provide a large-scale comparison of existing state of the art models and elaborate on current challenges for generative models that might prove fertile ground for new research. Our platform and source code are freely available at https://github.com/molecularsets/moses." }
1612.07335
1304.3568
IV. NUMERICAL RESULTS
In this section we present some numerical results comparing Algorithm 1 with the (distributed) ATC algorithm #REFR .
[]
[ "For Algorithm 1 we simulated two instances, namely: i) one based on the surrogates (13) and (5), which we will refer to as \"Plain D 2 L\"; and ii) one using the surrogates (13) and (15), which will be termed \"Linearized D 2 L\".", "Setting and tuning: We consider denosing a 512 × 512 pixels corrupted boat image in a distributed setting.", "The data set S is composed of the stacked 8 × 8 sliding patches of the image.", "The size of the dictionary and the sparse representation matrices X i are 64 × 64 and 64 × 255, 150, respectively (overall, the number of variables is around 16 million), and the parameters in (P2) are set to 2 μ = λ = 1/8 and α = 1.", "We simulated a time-invariant undirected connected network composed of 150 agents." ]
[ "(distributed) ATC algorithm" ]
method
{ "title": "Distributed dictionary learning", "abstract": "Abstract-The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multiagent network with time-varying (nonsymmetric) connectivity. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different spatial locations and it is unfeasible to aggregate and/or process all data in a fusion center, due to resource limitations, communication overhead or privacy considerations. We develop a general distributed algorithmic framework for the (nonconvex) DL problem and establish its asymptotic convergence. The new method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a gradient tracking mechanism instrumental to locally estimate the missing global information; and ii) a consensus step, as a mechanism to distribute the computations among the agents. To the best of our knowledge, this is the first distributed algorithm with provable convergence for the DL problem and, more in general, bi-convex optimization problems over (time-varying) directed graphs." }
{ "title": "Distributed dictionary learning over a sensor network", "abstract": "We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed block coordinate descent (alternate optimization). Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems and generalized to various settings. This article presents our approach and illustrates its efficiency on some numerical examples." }
1509.08628
1410.5186
Corollary 4. (D, A, C)-DB is NP-complete for each combination of a voting rule
The bribery costs for letting them win the election can be calculated in polynomial time by the algorithm introduced by Dorn and Krüger #REFR [Theorem 10] for the constructive case.
[ "Each of these steps can be done in polynomial time in n and m, implying an overall polynomial running time.", "The remaining case of OK eff -with k being a power of 2 and a global order over the issues for all voters given -coincide with OP when the least important log k issues are removed.", "This is due to the fact that a bribery of any subset of those least log k issues for any voter only permutes the set of candidates this voter votes for.", "We can solve this again by identifying the potential winners and calculating the costs for the required bribery.", "Since with OP every voter votes just for one candidate, there are only up to n such potential winning candidates." ]
[ "The non-negative case can be solved by a very similar algorithm. Proof.", "Corollary 6 can be shown by small adjustments to the proof of Theorem 5, where the analogous negative cases are covered.", "For voting rule OK eff and the case that k is polynomial in n and m, the briber is now able to bribe the voters in V h freely, therefore we have to build a second similar bribery-costs-sorted list of voters in V h .", "Summing up the cheapest of those costs to make a specific candidate a winner of the election is a bit more complicated, since a bribery of a voter in V h could change this voter to not vote for h any longer.", "But this can be handeled in time poly(n, m) easily." ]
[ "bribery costs" ]
method
{ "title": "Often harder than in the Constructive Case: Destructive Bribery in CP-nets", "abstract": "We study the complexity of the destructive bribery problem-an external agent tries to prevent a disliked candidate from winning by bribery actionsin voting over combinatorial domains, where the set of candidates is the Cartesian product of several issues. This problem is related to the concept of the margin of victory of an election which constitutes a measure of robustness of the election outcome and plays an important role in the context of electronic voting. In our setting, voters have conditional preferences over assignments to these issues, modelled by CP-nets. We settle the complexity of all combinations of this problem based on distinctions of four voting rules, five cost schemes, three bribery actions, weighted and unweighted voters, as well as the negative and the non-negative scenario. We show that almost all of these cases are N P-complete or N P-hard for weighted votes while approximately half of the cases can be solved in polynomial time for unweighted votes." }
{ "title": "On the hardness of bribery variants in voting with CP-nets", "abstract": "We continue previous work by Mattei et al. (Ann. Math. Artif. Intell. 1042 68(1-3), 135-160 2013) in which they study the computational complexity of bribery schemes when voters have conditional preferences modeled as CP-nets. For most of the cases they considered, they showed that the bribery problem is solvable in polynomial time. Some cases remained open-we solve several of them and extend the previous results to the case that voters are weighted. Additionally, we consider negative (weighted) bribery in CP-nets, when the briber is not allowed to pay voters to vote for his preferred candidate. Mathematics Subject Classifications (2010) 91B14 · 91B10 · 68Q25 · 91B12 · 68Q17" }
1712.08209
1508.03959
B. PEB Observer
The PEBO design proposed in #REFR , although related to the KKLO, aims at formulating the state reconstruction problem as a parameter estimation problem.
[]
[ "Towards this end, we are looking for an injection B(h(x), u) and a (left invertible) mapping φ(x) that transforms the system (1) into 2 φ(x) = B(h(x), u).", "In this way, selecting (part of) the observer dynamics aṡ", "we establish, via simple integration, the key relationship", "where θ is a constant vector defined as θ := φ(x(0)) − ξ(0).", "It is clear that, if θ is known, we have that" ]
[ "parameter estimation problem", "state reconstruction problem" ]
background
{ "title": "On State Observers for Nonlinear Systems: A New Design and a Unifying Framework", "abstract": "In this paper we propose a new observer design technique for nonlinear systems. It combines the well-known Kazantzis-Kravaris-Luenberger observer and the recently introduced parameter estimation-based observer, which become special cases of it-extending the realm of applicability of both methods. A second contribution of the paper is the proof that these designs can be recast as particular cases of immersion and invariance observers-providing in this way a unified framework for their analysis and design. Simulation results of a physical system that illustrates the superior performance of the proposed observer compared to other existing observers are presented." }
{ "title": "A Parameter Estimation Approach to State Observation of Nonlinear Systems", "abstract": "A novel approach to the problem of partial state estimation of nonlinear systems is proposed. The main idea is to translate the state estimation problem into one of estimation of constant, unknown parameters related to the systems initial conditions. The class of systems for which the method is applicable is identified via two assumptions related to the transformability of the system into a suitable cascaded form and our ability to estimate the unknown parameters. The first condition involves the solvability of a partial differential equation while the second one requires some persistency of excitation-like conditions. The proposed observer is shown to be applicable to position estimation of a class of electromechanical systems, for the reconstruction of the state of power converters and for speed observation of a class of mechanical systems." }
1204.2240
1104.4887
Phase diagrams for the unbiased case
More details on the solutions found (including unstable ones) and dependence on the different parameters can be found in #REFR .
[ "The Newton−Raphson algorithm has been used to numerically solve the equations of state for the unbiased case (h s = h t = 0), fixed J s = 1 in some given units (which is equivalent to measuring the rest of couplings and fields in terms of J s ) and different values of the rest of the parameters.", "Python libraries developed for the computation and analysis of these are available at https://github.com/anafrio/.", "Results are summarised through figures of the phase diagram's cross-sections for both models for specific values of the rest of parameters.", "These have been chosen for the sections to be representative of all the possible phenomenology." ]
[ "Unbiased situations refer to what has been loosely referred to as trends or traditions, i.e., social conventions which will give no particular advantage or disadvantage to the isolated individual and choice.", "For the group interdependence model, this would be the case, for example, to study the use of a particular outfit or accessory in two different socioeconomic groups, or the adoption of specific vocabulary (argot, technical term) by two such groups.", "Unbiased individual interdependence could be of use to tackle problems such as how the use of an accessory in a group will be affected by the use of another when it is considered to be fashionable or unfashionable to wear them together.", "The study of the unbiased case is also useful to get some insight on what will happen in a more general situation of biased homogeneous groups.", "The introduction of nonzero constant opinion fields, will in general determine the sign of the average magnetisation and break the equilibria degeneracy, and the unpolarised state will no longer be stable." ]
[ "dependence" ]
background
{ "title": "Interdependent binary choices under social influence: phase diagram for homogeneous unbiased populations", "abstract": "Coupled Ising models are studied in a discrete choice theory framework, where they can be understood to represent interdependent choice making processes for homogeneous populations under social influence. Two different coupling schemes are considered. The nonlocal or group interdependence model is used to study two interrelated groups making the same binary choice. The local or individual interdependence model represents a single group where agents make two binary choices which depend on each other. For both models, phase diagrams, and their implications in socioeconomic contexts, are described and compared in the absence of private deterministic utilities (zero opinion fields)." }
{ "title": "Coupled Ising models and interdependent discrete choices under social influence in homogeneous populations", "abstract": "The use of statistical physics to study problems of social sciences is motivated and its current state of the art briefly reviewed, in particular for the case of discrete choice making. The coupling of two binary choices is studied in some detail, using an Ising model for each of the decision variables (the opinion or choice moments or spins, socioeconomic equivalents to the magnetic moments or spins). Toy models for two different types of coupling are studied analytically and numerically in the mean field (infinite range) approximation. This is equivalent to considering a social influence effect proportional to the fraction of adopters or average magnetisation. In the nonlocal case, the two spin variables are coupled through a Weiss mean field type term. In a socioeconomic context, this can be useful when studying individuals of two different groups, making the same decision under social influence of their own group, when their outcome is affected by the fraction of adopters of the other group. In the local case, the two spin variables are coupled only through each individual. This accounts to considering individuals of a single group each making two different choices which affect each other. In both cases, only constant (intra- and inter-) couplings and external fields are considered, i.e., only completely homogeneous populations. Most of the results presented are for the zero field case, i.e. no externalities or private utilities. Phase diagrams and their interpretation in a socioeconomic context are discussed and compared to the uncoupled case. The two systems share many common features including the existence of both first and second order phase transitions, metastability and hysteresis. To conclude, some general remarks, pointing out the limitations of these models and suggesting further improvements are given." }
1601.05388
1310.1068
Discussion
These results stand in contrast to those of #REFR , who found that the most likely mode of evolution for both loci under a constant demographic history is one of overdominance. There are several reasons for this discrepancy.
[ "This is because for weak selection, the trajectory is extremely stochastic and it is difficult to disentangle the effects of drift and selection #OTHEREFR .", "We then applied our method to a classic dataset from horses.", "We found that our inference of both the strength and mode of natural selection depended strongly on whether or not we incorporated demography.", "For the MC1R locus, a constantsize demographic model results in an inference of positive selection, while the more complicated demographic model inferred by [DS + 15] causes the inference to tilt toward overdominance, as well as a much younger allele age.", "In contrast, the ASIP locus is inferred to be overdominant under a constant-size demography, but the complicated demographic history results in an inference of positive selection, and a much older allele age." ]
[ "First, we computed the diffusion time units differently, using N 0 = 16000 and a generation time of 8 years, as inferred by [DS + 15], while #OTHEREFR used N 0 = 2500 (consistent with the bottleneck size found by [DS + 15] ) and a generation time of 5 years.", "Hence, our constant-size model has far less genetic drift than the constant-size model assumed by #OTHEREFR .", "This emphasizes the importance of inferring appropriate demographic scaling parameters, even when a constant population size is assumed.", "Secondly, we use MCMC to integrate over the distribution of allele ages, which can have a very long tail going into the past, while [SBS14] assume a fixed allele age.", "One key limitation of this method is that it assumes that the aDNA samples all come from the same, continuous population." ]
[ "evolution" ]
result
{ "title": "Bayesian inference of natural selection from allele frequency time series", "abstract": "Abstract. The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We developed a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in non-equilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package." }
{ "title": "A novel spectral method for inferring general diploid selection from time series genetic data", "abstract": "Recently there has been growing interest in using time series genetic variation data, either from experimental evolution studies or ancient DNA samples, to make inference about evolutionary processes. While such temporal data can facilitate identifying genomic regions under selective pressure and estimating associated fitness parameters, it is a challenging problem to compute the likelihood of the underlying selection model given DNA samples obtained at several time points. Here, we develop an efficient algorithm to tackle this challenge. The key methodological advance in our work is the development of a novel spectral method to analytically and efficiently integrate over all trajectories of the population allele frequency between consecutive time points. This advance circumvents the limitations of existing methods which require fine-tuning the discretization of the allele frequency space to approximate certain integrals using numerical schemes. Furthermore, our method is flexible enough to handle general diploid models of selection where the heterozygote and homozygote fitness parameters can take any values, while previous methods focused on only a few restricted models of selection. We demonstrate the utility of our method on simulated data and apply the method to analyze time series ancient DNA data from genetic loci (ASIP and MC1R) associated with coat coloration in horses. In contrast to the conclusions of previous studies which considered only a few special selection schemes, our exploration of the full fitness parameter space reveals that balancing selection (in the form of heterozygote advantage) may have been acting on these loci." }
1601.05388
1310.1068
Discussion
First, we computed the diffusion time units differently, using N_0 = 16000 and a generation time of 8 years, as inferred by [DS+15], while #REFR used N_0 = 2500 (consistent with the bottleneck size found by [DS+15]) and a generation time of 5 years.
[ "We then applied our method to a classic dataset from horses.", "We found that our inference of both the strength and mode of natural selection depended strongly on whether or not we incorporated demography.", "For the MC1R locus, a constantsize demographic model results in an inference of positive selection, while the more complicated demographic model inferred by [DS + 15] causes the inference to tilt toward overdominance, as well as a much younger allele age.", "In contrast, the ASIP locus is inferred to be overdominant under a constant-size demography, but the complicated demographic history results in an inference of positive selection, and a much older allele age.", "These results stand in contrast to those of #OTHEREFR , who found that the most likely mode of evolution for both loci under a constant demographic history is one of overdominance. There are a several reasons for this discrepancy." ]
[ "Hence, our constant-size model has far less genetic drift than the constant-size model assumed by #OTHEREFR .", "This emphasizes the importance of inferring appropriate demographic scaling parameters, even when a constant population size is assumed.", "Secondly, we use MCMC to integrate over the distribution of allele ages, which can have a very long tail going into the past, while [SBS14] assume a fixed allele age.", "One key limitation of this method is that it assumes that the aDNA samples all come from the same, continuous population.", "If there is in fact a discontinuity in the populations from which alleles have been sampled, this could cause rapid allele frequency change and create spurious signals of natural selection." ]
[ "generation time" ]
method
{ "title": "Bayesian inference of natural selection from allele frequency time series", "abstract": "Abstract. The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We developed a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in non-equilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package." }
{ "title": "A novel spectral method for inferring general diploid selection from time series genetic data", "abstract": "Recently there has been growing interest in using time series genetic variation data, either from experimental evolution studies or ancient DNA samples, to make inference about evolutionary processes. While such temporal data can facilitate identifying genomic regions under selective pressure and estimating associated fitness parameters, it is a challenging problem to compute the likelihood of the underlying selection model given DNA samples obtained at several time points. Here, we develop an efficient algorithm to tackle this challenge. The key methodological advance in our work is the development of a novel spectral method to analytically and efficiently integrate over all trajectories of the population allele frequency between consecutive time points. This advance circumvents the limitations of existing methods which require fine-tuning the discretization of the allele frequency space to approximate certain integrals using numerical schemes. Furthermore, our method is flexible enough to handle general diploid models of selection where the heterozygote and homozygote fitness parameters can take any values, while previous methods focused on only a few restricted models of selection. We demonstrate the utility of our method on simulated data and apply the method to analyze time series ancient DNA data from genetic loci (ASIP and MC1R) associated with coat coloration in horses. In contrast to the conclusions of previous studies which considered only a few special selection schemes, our exploration of the full fitness parameter space reveals that balancing selection (in the form of heterozygote advantage) may have been acting on these loci." }
1601.05388
1310.1068
Discussion
Hence, our constant-size model has far less genetic drift than the constant-size model assumed by #REFR .
[ "We found that our inference of both the strength and mode of natural selection depended strongly on whether or not we incorporated demography.", "For the MC1R locus, a constantsize demographic model results in an inference of positive selection, while the more complicated demographic model inferred by [DS + 15] causes the inference to tilt toward overdominance, as well as a much younger allele age.", "In contrast, the ASIP locus is inferred to be overdominant under a constant-size demography, but the complicated demographic history results in an inference of positive selection, and a much older allele age.", "These results stand in contrast to those of #OTHEREFR , who found that the most likely mode of evolution for both loci under a constant demographic history is one of overdominance. There are a several reasons for this discrepancy.", "First, we computed the diffusion time units differently, using N 0 = 16000 and a generation time of 8 years, as inferred by [DS + 15], while #OTHEREFR used N 0 = 2500 (consistent with the bottleneck size found by [DS + 15] ) and a generation time of 5 years." ]
[ "This emphasizes the importance of inferring appropriate demographic scaling parameters, even when a constant population size is assumed.", "Secondly, we use MCMC to integrate over the distribution of allele ages, which can have a very long tail going into the past, while [SBS14] assume a fixed allele age.", "One key limitation of this method is that it assumes that the aDNA samples all come from the same, continuous population.", "If there is in fact a discontinuity in the populations from which alleles have been sampled, this could cause rapid allele frequency change and create spurious signals of natural selection.", "Several methods have been devised to test this hypothesis #OTHEREFR , and one possibility would be to apply these methods to putatively neutral loci sampled from the same individuals, thus determining which samples form a continuous population." ]
[ "far less genetic" ]
background
{ "title": "Bayesian inference of natural selection from allele frequency time series", "abstract": "Abstract. The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We developed a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in non-equilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package." }
{ "title": "A novel spectral method for inferring general diploid selection from time series genetic data", "abstract": "Recently there has been growing interest in using time series genetic variation data, either from experimental evolution studies or ancient DNA samples, to make inference about evolutionary processes. While such temporal data can facilitate identifying genomic regions under selective pressure and estimating associated fitness parameters, it is a challenging problem to compute the likelihood of the underlying selection model given DNA samples obtained at several time points. Here, we develop an efficient algorithm to tackle this challenge. The key methodological advance in our work is the development of a novel spectral method to analytically and efficiently integrate over all trajectories of the population allele frequency between consecutive time points. This advance circumvents the limitations of existing methods which require fine-tuning the discretization of the allele frequency space to approximate certain integrals using numerical schemes. Furthermore, our method is flexible enough to handle general diploid models of selection where the heterozygote and homozygote fitness parameters can take any values, while previous methods focused on only a few restricted models of selection. We demonstrate the utility of our method on simulated data and apply the method to analyze time series ancient DNA data from genetic loci (ASIP and MC1R) associated with coat coloration in horses. In contrast to the conclusions of previous studies which considered only a few special selection schemes, our exploration of the full fitness parameter space reveals that balancing selection (in the form of heterozygote advantage) may have been acting on these loci." }
1711.10776
1301.0859
APPENDIX C PROOF OF THEOREM 2
To obtain the threshold T Upp , we consider a special solution that satisfies all the constraints of problem #REFR except the maximal transmission time constraint (19e).
[ ".", "Using (21), the corresponding power p * l strictly deceases to p ′ l , l ∈ J n .", "According to Theorem 1, the energy E nl decreases with the transmission time 0 ≤ t n ≤ T * nl , ∀l ∈ J n .", "As a result, with new power-time pair (p The last half part of Theorem 2 indicates that transmitting with maximal transmission time is not optimal when T is larger than a threshold T Upp . This can be proved by using the contradiction method.", "Specifically, assuming that total transmission time of the optimal solution is the maximal transmission time T , we can find a special solution with total transmission time less than T , which strictly outperforms the optimal solution." ]
[ "Since E ij is convex w.r.t.", "t i according to Theorem 1, the energy E i = Ji j=Ji−1+1 E ij consumed by all MTCDs in J i served by MTCG i is also convex w.r.t. t i .", "Based on the proof of Theorem 1, we directly obtain the following lemma." ]
[ "maximal transmission time" ]
background
{ "title": "Energy Efficient Resource Allocation in Machine-to-Machine Communications with Multiple Access and Energy Harvesting for IoT", "abstract": "Abstract-This paper studies energy efficient resource allocation for a machine-to-machine (M2M) enabled cellular network with non-linear energy harvesting, especially focusing on two different multiple access strategies, namely non-orthogonal multiple access (NOMA) and time division multiple access (TDMA). Our goal is to minimize the total energy consumption of the network via joint power control and time allocation while taking into account circuit power consumption. For both NOMA and TDMA strategies, we show that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput, and the energy consumption of each MTCD is a convex function with respect to the allocated transmission time. Based on the derived optimal conditions for the transmission power of MTCDs, we transform the original optimization problem for NOMA to an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm. Through an appropriate variable transformation, we also transform the original optimization problem for TDMA to an equivalent tractable problem, which can be iteratively solved. Numerical results verify the theoretical findings and demonstrate that NOMA consumes less total energy than TDMA at low circuit power regime of MTCDs, while at high circuit power regime of MTCDs TDMA achieves better network energy efficiency than NOMA. Index Terms-Internet of Things (IoT), machine-to-machine (M2M), non-orthogonal multiple access (NOMA), energy harvesting, resource allocation." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1711.10776
1301.0859
Lemma 5:
From (C.7), the objective value (19a) can be decreased with solution (p, q, t), which contradicts that (p*, q*, t*) is the optimal solution to problem #REFR .
[ "If T ≥ T Upp , we show that optimal solution (p p p * , q q q * , t t t * ) to problem (19) must satisfy constraint (26), i.e., (19e) is inactive, by contradiction. Assume that", "With (p p p * , q q q * , t t t * ), we denote E * i as the energy consumed by all MTCDs in J i , i ∈ N , E * N +k as the system energy consumption during the (N + k)-th phase, and E * Tot as the total energy of the whole system. Thus, we have", "where inequality (a) follows from the fact that E i achieves the minimum when t i = T * i according to Lemma 5, equality (b)", "holds from (5) and (9), and inequality (c) follows from (5), for all i ∈ N , j ∈ J i .", "Considering that p * j ≥ 0 in the left hand side of (C.9) and u(x) is a increasing function as well as q * i ≤ Q i in the right hand side of (C.9), we have According to (C.2) and (C.4), solution (p p p,q q q,t t t) is a feasible solution to problem #OTHEREFR ." ]
[ "Hence, the optimal solution to problem (19) must satisfy constraint #OTHEREFR ." ]
[ "optimal solution" ]
background
{ "title": "Energy Efficient Resource Allocation in Machine-to-Machine Communications with Multiple Access and Energy Harvesting for IoT", "abstract": "Abstract-This paper studies energy efficient resource allocation for a machine-to-machine (M2M) enabled cellular network with non-linear energy harvesting, especially focusing on two different multiple access strategies, namely non-orthogonal multiple access (NOMA) and time division multiple access (TDMA). Our goal is to minimize the total energy consumption of the network via joint power control and time allocation while taking into account circuit power consumption. For both NOMA and TDMA strategies, we show that it is optimal for each machine type communication device (MTCD) to transmit with the minimum throughput, and the energy consumption of each MTCD is a convex function with respect to the allocated transmission time. Based on the derived optimal conditions for the transmission power of MTCDs, we transform the original optimization problem for NOMA to an equivalent problem which can be solved suboptimally via an iterative power control and time allocation algorithm. Through an appropriate variable transformation, we also transform the original optimization problem for TDMA to an equivalent tractable problem, which can be iteratively solved. Numerical results verify the theoretical findings and demonstrate that NOMA consumes less total energy than TDMA at low circuit power regime of MTCDs, while at high circuit power regime of MTCDs TDMA achieves better network energy efficiency than NOMA. Index Terms-Internet of Things (IoT), machine-to-machine (M2M), non-orthogonal multiple access (NOMA), energy harvesting, resource allocation." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1610.01723
1301.0859
II. SYSTEM MODEL
As in #REFR , we assume that there is no coordination in choosing a random code and thus, the chosen codes are not necessarily orthogonal or unique.
[ "Consider the uplink of a wireless IoT system consisting of one base station (BS) located at the center of a geographical area in which N MTDs are deployed.", "For the communication between an MTD and the BS, we consider a time slotted system with time slot duration of τ using a code division multiple access (CDMA) scheme with code length l.", "CDMA is chosen here since it has been recently shown to be promising for supporting a high IoT message arrive rate in #OTHEREFR .", "We focus on uplink transmissions during which the MTDs transmit their data to the BS using one out of a fixed number C = (2 l − 1) of binary spreading codes." ]
[ "If more than one MTD use a given code, it would be impossible for the BS to distinguish the messages and thus, all transmissions using the given code will fail.", "A transmission is considered to be successful if it is the only MTD using a given code.", "Furthermore, the messages transmitted to the BS are assumed to be short such that, when successful, the transmission can be completed in one period τ .", "The MTDs are said to be active if they have a message to transmit, otherwise, they will be considered inactive and will not transmit in that given slot.", "We let S be the random variable capturing the number of MTDs that have a successful transmission in a given slot." ]
[ "random code" ]
background
{ "title": "Learning with finite memory for machine type communication", "abstract": "Machine-type devices (MTDs) will lie at the heart of the Internet of things (IoT) system. A key challenge in such a system is sharing network resources between small MTDs, which have limited memory and computational capabilities. In this paper, a novel learning with finite memory framework is proposed to enable MTDs to effectively learn about each others message state, so as to properly adapt their transmission parameters. In particular, an IoT system in which MTDs can transmit both delay tolerant, periodic messages and critical alarm messages is studied. For this model, the characterization of the exponentially growing delay for critical alarm messages and the convergence of the proposed learning framework in an IoT are analyzed. Simulation results show that the delay of critical alarm messages is significantly reduced up to 94% with very minimal memory requirements. The results also show that the proposed learning with finite memory framework is very effective in mitigating the limiting factors of learning that prevent proper learning procedures." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1611.05548
1301.0859
III. MULTIPLE ACCESS TECHNIQUES FOR M2M
The channel from a device located at distance r from the base station is modelled by g = (r/R)^{−γ}, where γ denotes the path loss exponent; we ignore shadowing and small scale fading #REFR .
[ "For the analysis in this paper, we consider a single cell centered by base station and devices uniformly distributed around it in a circular region with radius R.", "The uplink load seen by the base station is modeled by a Poisson point process with mean λ arrivals per second.", "We further assume a time slotted system with a slot duration of τs.", "We perform our analysis on a typical radio resource with slot duration τs and bandwidth W .", "Each device packet is assumed to have a payload of L bits." ]
[ "The received signal-to-noise ratio (SNR) for a device transmitting with power P t over bandwidth W t is then given by #OTHEREFR :", "where Pmax is the maximum transmit power and µ is the reference SNR, defined as the average received SNR from a device transmitting at maximum power Pmax over bandwidth W located at the cell edge. Without loss of generality, we assume ordered channel gain", "K is the number of devices." ]
[ "base station" ]
background
{ "title": "Multiple Access Technologies for cellular M2M Communications: An Overview", "abstract": "Abstract-This paper reviews the multiple access techniques for machine-to-machine (M2M) communications in future wireless cellular networks. M2M communications aims at providing the communication infrastructure for the emerging Internet of Things (IoT), which will revolutionize the way we interact with our surrounding physical environment. We provide an overview of the multiple access strategies and explain their limitations when used for M2M communications. We show the throughput efficiency of different multiple access techniques when used in coordinated and uncoordinated scenarios. Non-orthogonal multiple access is also shown to support a larger number of devices compared to orthogonal multiple access techniques, especially in uncoordinated scenarios. We also detail the issues and challenges of different multiple access techniques to be used for M2M applications in cellular networks. Index Terms-Internet of Things, massive access, M2M communications, multiple access, ." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1705.10471
1301.0859
t(k)
The received SNR can then be expressed as follows #REFR : As information symbols might be transmitted over a smaller bandwidth W_s, the effective noise power will be reduced by a factor W/W_s.
[ "time slot duration of a sub-band containing k devices Pc collision probability W s = W/N s , where W is the total available bandwidth.", "Following #OTHEREFR , the channel between each MTC device and the BS is modeled by path loss, shadowing and small scale fading.", "The received power at the BS from an MTC device located at distance r with transmit power P t is given by:", "where α is the path loss exponent, χ is the large scale shadowing gain, h is the small scale fading gain, and G is the antenna gain.", "Similar to #OTHEREFR , we introduce the term reference signal-to-noise ratio (SNR), µ ref , which is defined as the average received SNR from a device transmitting at maximum power P max over the whole bandwidth W located at the cell edge, i.e. at distance R o ." ]
[ "Therefore, the received SNR from an MTC device located at distance r from the BS and transmitting over bandwidth W s can be expressed as follows:", "We assume that the channel gain χh(r/R o ) −α varies very slowly in time and is known at the MTC device.", "This is particularly advantageous for many fixed-location MTC applications as the device location is usually fixed and the MTC device can obtain accurate channel information in a timely manner.", "Moreover, the devices can perform the channel estimation by using regular pilot signals transmitted by the BS.", "This assumption will significantly reduce the complexity at the BS as it does not need to estimate the channel to a very large number of MTC devices." ]
[ "effective noise power" ]
background
{ "title": "On the Fundamental Limits of Random Non-orthogonal Multiple Access in Cellular Massive IoT", "abstract": "Machine-to-machine (M2M) constitutes the communication paradigm at the basis of Internet of Things (IoT) vision. M2M solutions allow billions of multi-role devices to communicate with each other or with the underlying data transport infrastructure without, or with minimal, human intervention. Current solutions for wireless transmissions originally designed for human-based applications thus require a substantial shift to cope with the capacity issues in managing a huge amount of M2M devices. In this paper, we consider the multiple access techniques as promising solutions to support a large number of devices in cellular systems with limited radio resources. We focus on non-orthogonal multiple access (NOMA) where, with the aim to increase the channel efficiency, the devices share the same radio resources for their data transmission. This has been shown to provide optimal throughput from an information theoretic point of view. We consider a realistic system model and characterize the system performance in terms of throughput and energy efficiency in a NOMA scenario with a random packet arrival model, where we also derive the stability condition for the system to guarantee the performance. Internet of Things, Machine-to-machine, Machine-type communication, non-orthogonal multiple access, NOMA. M. Shirvanimoghaddam is with the School" }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1307.0585
1301.0859
IV. UNCOORDINATED MULTIPLE ACCESS
Another popular random access strategy is code division multiple access (CDMA), which was recently shown to perform better than random access FDMA for transmit power and energy minimization when the channel gains are known at the transmitters #REFR .
[ "Unlike the coordinated transmission discussed in the previous section, it is important to consider retransmissions here.", "Therefore, before discussing the main problem formulation, we first incorporate retransmissions in the system model with special focus on characterizing the effective arrival rate and deriving the effective failure probability after retransmissions.", "This forms the first main technical contribution of this section.", "We propose a novel multiuser detection strategy and establish its optimality for uncoordinated transmission, which forms the second main technical contribution of this section.", "For fair comparison with the coordinated strategies studied in the previous section, we also consider uncoordinated FDMA with equal bandwidth allocation in which the total bandwidth is partitioned into subbands of equal bandwidth and a transmitter chooses to transmit on a randomly selected subband using the slotted aloha protocol." ]
[ "However, for throughput maximization under no CSI at the transmitter, random access FDMA is known to perform better than random access CDMA #OTHEREFR . Therefore we do not consider CDMA in this study.", "We now introduce the formal setup along with the details of the slot structure and discuss retransmissions in detail in the following subsection." ]
[ "random access FDMA" ]
background
{ "title": "Fundamentals of Throughput Maximization With Random Arrivals for M2M Communications", "abstract": "Abstract-For wireless systems in which randomly arriving devices attempt to transmit a fixed payload to a central receiver, we develop a framework to characterize the system throughput as a function of arrival rate and per-device data rate. The framework considers both coordinated transmission (where devices are scheduled) and uncoordinated transmission (where devices communicate on a random access channel and a provision is made for retransmissions). Our main contribution is a novel characterization of the optimal throughput for the case of uncoordinated transmission and a strategy for achieving this throughput that relies on overlapping transmissions and joint decoding. Simulations for a noise-limited cellular network show that the optimal strategy provides a factor of four improvement in throughput compared with slotted ALOHA. We apply our framework to evaluate more general system-level designs that account for overhead signaling. We demonstrate that, for small payload sizes relevant for machine-to-machine (M2M) communications (200 bits or less), a one-stage strategy, where identity and data are transmitted optimally over the random access channel, can support at least twice the number of devices compared with a conventional strategy, where identity is established over an initial random-access stage and data transmission is scheduled." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1608.06830
1301.0859
B. Cluster-Head (Re)Selection for FED Maximization
The derived R in (15) can be employed subsequently in #REFR in order to derive an approximation of d_{υ,k}.
[ "Now, we need to estimate R for a given density of nodes and cluster size.", "Define R seg as a random variable to represent the length of the segment from a randomly selected point inside a circle to the center of the circle, where the circle is located at (0, 0), and has a radius of R circ . The expected value of R seg is derived as:", "where (x, y) shows the position of the selected point with regard to the origin.", "Recall from (7), where we have derived the average distance between a CM and its initial CH, which is located at the cluster center as d m = √ z/4σ , in which z and σ show the cluster size and density of nodes, respectively.", "Then, if one estimates the shape of constructed clusters inside a cell with circle, the average radius of the constructed clusters can be estimated by combining (7) and (14), as follows:" ]
[ "In light of the above derivations, one can find the index of the desired CH as:", "From (12) and (16), one sees that the choice of the CH is dependent upon: (i) the remaining energy of devices, and hence, it is time-dependent; (ii) the distance between machine devices; (iii) the distance between each device and the BS; and (iv) the average length of the queued data at each device.", "If adjacent triggers for CH reselection are too closely placed, then it may result in energy wasting as no change in the CH selection is needed in multiple consecutive periods.", "If adjacent triggers are too far apart, then negative impact on the network lifetime is possible as a previously selected CH might be nonoptimal in some periods.", "where K is the smallest non-negative integer that satisfies the following condition for any j ∈ :" ]
[ "approximation", "k" ]
method
{ "title": "$E^{2}$ -MAC: Energy Efficient Medium Access for Massive M2M Communications", "abstract": "Abstract-In this paper, we investigate energy-efficient clustering and medium access control for cellular-based machineto-machine (M2M) networks to minimize device energy consumption and prolong network battery lifetime. First, we present an accurate energy consumption model that considers both static and dynamic energy consumptions, and utilize this model to derive the network lifetime. Second, we find the cluster size to maximize the network lifetime and develop an energy-efficient cluster-head selection scheme. Furthermore, we find feasible regions where clustering is beneficial in enhancing network lifetime. We further investigate communications protocols for both intra-and inter-cluster communications. While inter-cluster communications use conventional cellular access schemes, we develop an energy-efficient and load-adaptive multiple access scheme, called n-phase carrier sense multiple access with collision avoidance (CSMA/CA), which provides a tunable tradeoff between energy efficiency, delay, and spectral efficiency of the network. The simulation results show that the proposed clustering, clusterhead selection, and communications protocol design outperform the others in energy saving and significantly prolong the lifetimes of both individual nodes and the whole M2M network. Index Terms-Machine to machine communications, Internet of Things, MAC, energy efficiency, lifetime, delay." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1711.02056
1301.0859
A. Related Work and Motivation
In #REFR , an optimal transmitter and receiver strategy for maximizing the number of devices transmitting data with a fixed payload size is derived.
[ "A number of recent references describe different strategies for accommodating massive uplink access including algorithms for collision resolution #OTHEREFR , spatial diversity with multiple base antennas #OTHEREFR , load control via pricing algorithms #OTHEREFR , and interference cancellation #OTHEREFR .", "Because of possibly limited energy resources of devices in these use cases, another body of work considers both throughput and energy efficiency as metrics #OTHEREFR - #OTHEREFR .", "In terms of more fundamental results, reference #OTHEREFR provides a degrees-of-freedom characterization of throughput when considering both device identification and communication." ]
[ "The optimal receiver jointly decodes a subset of the devices using an interference canceller, where the subset is determined randomly based on the target outage rate.", "Because ideal interference cancellation is not realizable in practice, especially for a large number of devices, reference #OTHEREFR characterizes the throughput of a suboptimal but more practical random access system where both the time and frequency domains are slotted.", "The receiver uses conventional singleuser detection which demodulates a desired user's data stream by treating other users' interfering signals as noise, and as a simplifying assumption, the analysis uses Shannon capacity to approximate the SINR threshold for a failed transmission.", "This approximation leads to an optimistic bound on the throughput, and it becomes exact in the limit of infinite coding block lengths.", "The current paper reviews and extends the results in #OTHEREFR by incorporating recent characterizations of capacity under finite block length transmissions #OTHEREFR - #OTHEREFR ." ]
[ "optimal transmitter" ]
background
{ "title": "Throughput Maximization for Delay-Sensitive Random Access Communication", "abstract": "Future 5G cellular networks supporting delaysensitive, low-latency communications could employ random access communication to reduce the overhead compared to scheduled access techniques used in 4G networks. We consider a wireless communication system where multiple devices transmit payloads of a given fixed size in a random access fashion over shared radio resources to a common receiver. We allow retransmissions and assume Chase combining at the receiver. The radio resources are partitioned in the time and frequency dimensions, and we determine the optimal partition granularity to maximize throughput, subject to given constraints on latency and outage. In the regime of high and low signal-to-noise ratio (SNR), we derive explicit expressions for the granularity and throughput, first using a Shannon capacity approximation and then using finite block length analysis. Numerical results show that the throughput scaling results are applicable over a range of SNRs. The proposed analytical framework can provide insights for resource allocation strategies in reliable and delaysensitive random access systems and in specific 5G use cases for massive, short packet uplink access." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1711.02056
1301.0859
IV. OPTIMAL DESIGNS FOR HIGH AND LOW SNR
The maximum number of aggregate arrivals that can be supported in TFS m ∈ M, denoted by k*_m := k*_m(K_{m−1}), is determined using #REFR .
[ "At another extreme, however, the combined SINR will be very low no matter how the resources are split, and the resources have to be shared in order not to violate the constraints of (1).", "At constant SNR, the Chase combiner output SINR as a result of m ∈ M transmissions, i.e.", "ζ m as a function of K m , is derived from #OTHEREFR .", "Combining the capacity constraint in (1) with the Chase combiner output #OTHEREFR , and by incorporating (4), we have", "which implies that ζ m ≥ Γ = 2 L n −1. This yields the following relation," ]
[ "The constraint for the probability of outage in (1) as a function of K m can be rewritten as", "Hence, a typical device fails when the number of aggregate arrivals at each retransmission attempt, i.e.", "k m , m ∈ M given K m−1 , exceeds some threshold.", "The average rate of aggregate arrivals λ M will be approximately", "where B ≤ N nM and n satisfies #OTHEREFR . Therefore, we have the following relationship:" ]
[ "aggregate arrivals", "K m−1" ]
method
{ "title": "Throughput Maximization for Delay-Sensitive Random Access Communication", "abstract": "Future 5G cellular networks supporting delaysensitive, low-latency communications could employ random access communication to reduce the overhead compared to scheduled access techniques used in 4G networks. We consider a wireless communication system where multiple devices transmit payloads of a given fixed size in a random access fashion over shared radio resources to a common receiver. We allow retransmissions and assume Chase combining at the receiver. The radio resources are partitioned in the time and frequency dimensions, and we determine the optimal partition granularity to maximize throughput, subject to given constraints on latency and outage. In the regime of high and low signal-to-noise ratio (SNR), we derive explicit expressions for the granularity and throughput, first using a Shannon capacity approximation and then using finite block length analysis. Numerical results show that the throughput scaling results are applicable over a range of SNRs. The proposed analytical framework can provide insights for resource allocation strategies in reliable and delaysensitive random access systems and in specific 5G use cases for massive, short packet uplink access." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1711.02056
1301.0859
V. FINITE BLOCK LENGTHS
In the IBL regime, for a given block length n, for any k_m value less than or equal to k*_m(K_{m−1}) given in #REFR , m ∈ M, the capacity constraint in (5) is satisfied.
[ "The throughput optimization problem in the FBL regime can also be written as #OTHEREFR .", "We denote the probability of outage up to and including the M th (re)transmission attempt for this regime by P Fail,FBL (λ, L, B, M ), as will be detailed in Prop. 7.", "While multiple transmissions (M > 1) cannot be independently decoded without error, i.e., the block error rate of an individual transmission can be larger than the target SINR outage rate δ, Chase combining of M transmissions may help meet the target outage rate.", "If M = 1, the block error rate ε has to satisfy ε ≤ δ.", "Proposition 7: Given B, M , and N , the probability of outage in the FBL regime is given by Note the relationship between the probability of outage for the FBL regime in (29), and for the IBL regime, as given in #OTHEREFR ." ]
[ "However, in the FBL regime, the capacity constraint is stricter than the IBL capacity constraint.", "Given a block length n, although FBL n, L, ζ m can be made arbitrarily small for small k m at high SNR, FBL n, L, ζ m > 0 whenever ζ m is finite, even when k m = 1.", "Although (25) and (26) are valid for the additive white Gaussian noise (AWGN) channel #OTHEREFR , the block error probability changes for different retransmissions.", "Similar to #OTHEREFR , we assume that decoding errors are independent for different retransmissions.", "However, the block error probabilities are no longer independent because the Chase combiner output SINRs across m ∈ M retransmissions, i.e." ]
[ "given block length", "K m−1" ]
background
{ "title": "Throughput Maximization for Delay-Sensitive Random Access Communication", "abstract": "Future 5G cellular networks supporting delaysensitive, low-latency communications could employ random access communication to reduce the overhead compared to scheduled access techniques used in 4G networks. We consider a wireless communication system where multiple devices transmit payloads of a given fixed size in a random access fashion over shared radio resources to a common receiver. We allow retransmissions and assume Chase combining at the receiver. The radio resources are partitioned in the time and frequency dimensions, and we determine the optimal partition granularity to maximize throughput, subject to given constraints on latency and outage. In the regime of high and low signal-to-noise ratio (SNR), we derive explicit expressions for the granularity and throughput, first using a Shannon capacity approximation and then using finite block length analysis. Numerical results show that the throughput scaling results are applicable over a range of SNRs. The proposed analytical framework can provide insights for resource allocation strategies in reliable and delaysensitive random access systems and in specific 5G use cases for massive, short packet uplink access." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1802.06759
1301.0859
A. Literature Study 1) MTC over Cellular Networks:
Power-optimized resource allocation for time, frequency, and code division multiple access (TDMA, FDMA, CDMA) systems has been investigated in #REFR .
[ "A thorough survey on LTE scheduling algorithms for M2M traffic is presented in #OTHEREFR .", "This survey indicates that existing scheduling algorithms could be categorized into 4 main categories with regard to the scheduling metric as follows #OTHEREFR : (i) channel-based schedulers, in which UEs with the highest signal to noise ratio (SNR) have priority in resource allocation in order to minimize the bit error rate and maximize the system throughput #OTHEREFR ; (ii) delay-based schedulers, in which the delay budget prioritize devices for resource allocation #OTHEREFR ; (iii) fairness-based schedulers, which are designed to guarantee a fair distribution of radio resources among UEs #OTHEREFR ; and (iv) hybrid schedulers, which consider a combination of the aforementioned metrics as well as other metrics like power consumption #OTHEREFR , buffer status, and data arrival rates #OTHEREFR .", "3) Energy-Efficient MTC Scheduling: While providing scalable yet energy efficient communications is considered as the key requirement for successful deployment of MTC over existing cellular networks #OTHEREFR , a limited number of research works has been focused on energy efficient uplink MTC scheduling.", "Energy efficiency of M2M communications over LTE networks is investigated in #OTHEREFR , and it is shown that LTE physical layer is not optimized for small data communications.", "Power-efficient uplink scheduling for delay-sensitive traffic over LTE systems is investigated in #OTHEREFR , where the considered traffic and delay models are not consistent with the MTC characteristics #OTHEREFR , and hence, the derived results cannot be used here." ]
[ "Uplink scheduling for LTE networks with M2M traffic is investigated in #OTHEREFR , where the ratio between the sum data rates and the power consumptions of all users is maximized.", "In #OTHEREFR , the authors have considered a simple model for energy consumption considering only the transmit power for reliable data transmission and neglected the other energy consumptions by the operation of electronic circuits which are comparable or more dominant than the energy consumption for reliable data transmission #OTHEREFR .", "In #OTHEREFR , a clean slate solution for dense machine deployment scenarios is proposed in which, each communications frame is divided into two subframes.", "The first subframe is dedicated to the contention of machine nodes for access reservation, and the later is dedicated to scheduled data transmission of successful nodes using TDMA scheme.", "To the best of our knowledge, accurate modeling of energy consumption in machine-type communications, individual and network battery lifetime models, and corresponding scheduling algorithms are absent in literature." ]
[ "Power-optimized resource allocation" ]
background
{ "title": "Network Lifetime Maximization for Cellular-Based M2M Networks", "abstract": "Abstract-High energy efficiency is critical for enabling massive machine-type communications (MTC) over cellular networks. This work is devoted to energy consumption modeling, battery lifetime analysis, lifetime-aware scheduling and transmit power control for massive MTC over cellular networks. We consider a realistic energy consumption model for MTC and model network battery-lifetime. Analytic expressions are derived to demonstrate the impact of scheduling on both the individual and network battery lifetimes. The derived expressions are subsequently employed in the uplink scheduling and transmit power control for mixed-priority MTC traffic in order to maximize the network lifetime. Besides the main solutions, low-complexity solutions with limited feedback requirement are investigated, and the results are extended to existing LTE networks. Also, the energy efficiency, spectral efficiency, and network lifetime tradeoffs in resource provisioning and scheduling for MTC over cellular networks are investigated. The simulation results show that the proposed solutions can provide substantial network lifetime improvement and network maintenance cost reduction in comparison with the existing scheduling schemes." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1504.03242
1301.0859
WIde-AreA m2m communIcAtIon chAllenges
For small payloads, the control overhead for scheduled transmission may not be justified, and thus the traditional connection-oriented approach of establishing radio bearers prior to data transmission will be inefficient for M2M #REFR .
[ "These differences affect the assumptions and performance metrics of the system design, and potentially motivate novel designs at the PHY and MAC layer.", "We highlight some key attributes of M2M networks and describe their impact on the system design.", "Small payloads.", "In conventional broadband streaming or high data rate applications, it makes sense to invest in control overhead to establish bearers for scheduled transmission, as is done in LTE networks.", "In many IoT applications such as meter reading or actuation, the payload could be relatively small (~1000 bits), consisting of an encrypted device ID and a measurement or actuation command #OTHEREFR ." ]
[ "Different IoT applications could have different latency and reliability requirements, which will impact the optimal design.", "For example, a meter reading for water consumption would have a longer latency requirement than a sensor for detecting a basement flood condition.", "Large number of devices.", "The number of IoT devices per cell could be significantly larger than the number of mobile devices per cell if multiple devices are associated with each person, car and building, and if additional devices are deployed throughout the environment #OTHEREFR .", "For a given set of radio resources, more devices require improved efficiency of both the control plane and the data plane." ]
[ "M2M" ]
background
{ "title": "Wide-area Wireless Communication Challenges for the Internet of Things", "abstract": "The deployment of Internet of Things (IoT) devices and services is accelerating, aided by ubiquitous wireless connectivity, declining communication costs, and the emergence of cloud platforms. Most major mobile network operators view machine-to-machine (M2M) communication networks for supporting IoT as a significant source of new revenue. In this article, we discuss the need for wide-area M2M wireless networks, especially for short data packet communication to support a very large number of IoT devices. We first present a brief overview of current and emerging technologies for supporting wide area M2M, and then using communication theory principles, discuss the fundamental challenges and potential solutions for these networks, highlighting tradeoffs and strategies for random and scheduled access. We conclude with recommendations for how future 5G networks should be designed for efficient wide-area M2M communications." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1504.03242
1301.0859
Scheduled Transmission
The transmit power needed to communicate L bits in the given resource slice using this strategy was derived in #REFR . Suboptimal Scheduled Strategies.
[ "Optimal scheduled strategy.", "For given K devices, the optimal strategy is the one where all the devices transmit simultaneously over all the resources and the receiver uses a weakest-last successive interference cancellation (SIC) strategy #OTHEREFR .", "Under this strategy, the receiver first decodes the device with the highest channel gain, assuming interference from the K-1 other devices.", "Using the decoded bits, the received signal for this device is reconstructed and subtracted from the received signal.", "Devices are decoded and cancelled successively in order from highest to FDMA, equal bandwidth FDMA, optimal allocation Optimal lowest channel gains." ]
[ "The optimal strategy discussed above is sensitive to channel estimation errors.", "As was the case with the RACH transmission above, we consider more practical strategies using FDMA with either optimal or equal bandwidth allocation strategies #OTHEREFR .", "Under optimal FDMA bandwidth allocation, W Hz bandwidth is allocated among the K devices to minimize the sum power.", "Under the equal bandwidth allocation, each device is allocated bandwidth W/K Hz.", "Figure 2 shows the peak (95th percentile) power for the FDMA and the optimal SIC strategies, using the same system assumptions as the RACH simulations (L = 500 bits, W = 100 KHz, T = 1 second, cell radius 2 km, pathloss exponent 3.7)." ]
[ "transmit power" ]
method
{ "title": "Wide-area Wireless Communication Challenges for the Internet of Things", "abstract": "The deployment of Internet of Things (IoT) devices and services is accelerating, aided by ubiquitous wireless connectivity, declining communication costs, and the emergence of cloud platforms. Most major mobile network operators view machine-to-machine (M2M) communication networks for supporting IoT as a significant source of new revenue. In this article, we discuss the need for wide-area M2M wireless networks, especially for short data packet communication to support a very large number of IoT devices. We first present a brief overview of current and emerging technologies for supporting wide area M2M, and then using communication theory principles, discuss the fundamental challenges and potential solutions for these networks, highlighting tradeoffs and strategies for random and scheduled access. We conclude with recommendations for how future 5G networks should be designed for efficient wide-area M2M communications." }
{ "title": "Power-Efficient System Design for Cellular-Based Machine-to-Machine Communications", "abstract": "Abstract-The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is in general not significant in the regime of interest for M2M." }
1701.01290
1210.4901
I. INTRODUCTION
Risk-averse dual DP is introduced in #REFR for MDPs with hybrid continuous-discrete state space.
[ "Specifically, the recent work #OTHEREFR proposes a simulation-based ADP algorithm for risk-aware MDPs.", "However, it has limited use, since it only considers a subclass of timeconsistent Markov risk measures called dynamic-quantile-based risk measures.", "A cutting plane algorithm for time-consistent multistage linear stochastic programming problems is given in #OTHEREFR , but restricted to finite decision horizons.", "In #OTHEREFR , an actor-critic-style sampling-based algorithm for the Markov risk is developed.", "Although the sensitivity of an approximation error is analyzed, the algorithm can only search for a locally optimal policy." ]
[ "Even though the method yields an output that converges to the optimal solution, the significant weaknesses are that it requires the linearity of state and action spaces and the convergence criterion is not well defined.", "Our goal in this technical note is to consider the whole class of time-consistent Markov risk measures, propose a new simulation-based ADP approach, and develop improved convergence results and error bounds under mild technical conditions.", "Our first contribution is a new family of computationally tractable and simulation-based algorithms for risk-aware MDPs with infinite state space.", "We show how to develop risk-aware analogs of several major simulation-based algorithms for classical MDPs (e.g., #OTHEREFR , #OTHEREFR ), which cannot optimize time-consistent Markov risk measures.", "In particular, the main novelty of our proposed algorithms is twofold." ]
[ "Risk-averse dual DP" ]
background
{ "title": "Approximate Value Iteration for Risk-Aware Markov Decision Processes", "abstract": "Abstract-We consider large-scale Markov decision processes (MDPs) with a time-consistent risk measure of variability in cost under the risk-aware MDP paradigm. Previous studies showed that risk-aware MDPs, based on a minimax approach to handling risk, can be solved using dynamic programming for small-to mediumsized problems. However, due to the \"curse of dimensionality,\" MDPs that model real-life problems are typically prohibitively large for such approaches. In this technical note, we employ an approximate dynamic programming approach and develop a family of simulation-based algorithms to approximately solve large-scale risk-aware MDPs with time-consistent risk measures. In parallel, we develop a unified convergence analysis technique to derive sample complexity bounds for this new family of algorithms." }
{ "title": "An Approximate Solution Method for Large Risk-Averse Markov Decision Processes", "abstract": "Stochastic domains often involve risk-averse decision makers. While recent work has focused on how to model risk in Markov decision processes using risk measures, it has not addressed the problem of solving large risk-averse formulations. In this paper, we propose and analyze a new method for solving large risk-averse MDPs with hybrid continuous-discrete state spaces and continuous action spaces. The proposed method iteratively improves a bound on the value function using a linearity structure of the MDP. We demonstrate the utility and properties of the method on a portfolio optimization problem." }
1708.00883
1407.5663
Introduction
In #REFR , the authors characterized the set of graphs whose separability is invariant under graph isomorphisms.
[ "For instance, new algorithm based on graph states #OTHEREFR was given and showed improvement in comparison with exploiting the physics of optically active multi-level nano structures #OTHEREFR .", "Graph theoretic methods have also been developed to analyze maximally entangled pure states distributed between a number of different parties #OTHEREFR .", "Recently, theoretical principle of representing the quantum state and local unitary graph was established in #OTHEREFR .", "Conditions for separability of generalized Laplacian matrices of weighted graphs with unit trace were given in #OTHEREFR .", "Further results on the multipartite separability of Laplacian matrices of graphs were provided in #OTHEREFR ." ]
[ "Two classes of generalized graph product states were also constructed in #OTHEREFR .", "These have provided an alternative interesting graph theoretic approach to separability and several well-known criteria have been formulated in the new method.", "For instance, it was proved that the degree criterion is equivalent to the PPT-criterion #OTHEREFR .", "And a degree condition to test separability of density matrices of graphs was described in #OTHEREFR .", "It was further shown that the well-known matrix realignment criterion can be used to test separability for a class of quantum states (cf. #OTHEREFR )." ]
[ "graph isomorphisms" ]
background
{ "title": "Multipartite separability of density matrices of graphs", "abstract": "Abstract A new layers method is presented for multipartite separability of density matrices from simple graphs. Full separability of tripartite states is studied for graphs on degree symmetric premise. The models are generalized to multipartite systems by presenting a class of fully separable states arising from partially symmetric graphs." }
{ "title": "Graphs whose normalized Laplacian matrices are separable as density matrices in quantum mechanics", "abstract": "Recently normalized Laplacian matrices of graphs are studied as density matrices in quantum mechanics. Separability and entanglement of density matrices are important properties as they determine the nonclassical behavior in quantum systems. In this note we look at the graphs whose normalized Laplacian matrices are separable or entangled. In particular, we show that the number of such graphs is related to the number of 0-1 matrices that are line sum symmetric and to the number of graphs with at least one vertex of degree 1." }
1509.04075
1406.5886
In #REFR , another compute-and-forward based scheme was proposed for the multi-way relay channel, which achieves a weak secrecy rate. Bi-directional
[ "lattice structure of the code to achieve a higher secrecy rate.", "A compute-and-forward #OTHEREFR based scheme was introduced in #OTHEREFR for a symmetric two-hop channel, in which both of the transmitting and the jamming messages are encoded with lattice codes.", "The relay decodes a linear combination of these messages and then sends it to the destination.", "Although the achievable secrecy rate of #OTHEREFR is lower than #OTHEREFR , this compute-and-forward based scheme can be used in a line network since it does not suffer from noise accumulation.", "In #OTHEREFR , a similar compute-and-forward based scheme was introduced which achieves strong secrecy with the same secrecy rate as #OTHEREFR ." ]
[ "See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.", "transmission on this channel is studied in #OTHEREFR , in which a higher level of secrecy, namely perfect secrecy, is achieved by another compute-and-forward based scheme.", "In this paper, we propose two novel secure and reliable transmission schemes based on a modified version of computeand-forward #OTHEREFR , which we call scaled compute-and-forward. The main contributions of this paper are the following:", "1) This paper is the first to apply scaled compute-andforward to a number of secrecy problems, including the two-hop channel with an untrusted relay and the twohop channel with an external eavesdropper.", "Also, a new proof technique supported by the scaled compute-andforward is used, which allows a significant improvement in the achievable secrecy rate." ]
[ "multi-way relay channel" ]
method
{ "title": "Secure Transmission on the Two-Hop Relay Channel With Scaled Compute-and-Forward", "abstract": "In this paper, we consider communication on a twohop channel in which a source wants to send information reliably and securely to the destination via a relay. We consider both the untrusted relay case and the external eavesdropper case. In the untrusted relay case, the relay behaves as an eavesdropper, and there is a cooperative node, which sends a jamming signal to confuse the relay when it is receiving from the source. In the external eavesdropper case, the relay is trusted, and there is an external node eavesdropping the communication. We propose two secure transmission schemes using the scaled compute-andforward technique. One of the schemes is based on a random binning code, and the other one is based on a lattice chain code. It is proved that in the high signal-to-noise-ratio (SNR) scenario and/or the limited relay power scenario, if the destination is used as the jammer, both schemes outperform all existing schemes and achieve the upper bound. In particular, if the SNR is large and the source, the relay, and the cooperative jammer have identical power and channels, both schemes achieve the upper bound for secrecy rate, which is merely 1/2 bit per channel use lower than the channel capacity without secrecy constraints. We also prove that one of our schemes achieves a positive secrecy rate in the external eavesdropper case in which the relay is trusted and there exists an external eavesdropper. Index Terms-Compute-and-forward, lattice codes, two-hop channel, untrusted relay, information theoretic security." }
{ "title": "Weak Secrecy in the Multiway Untrusted Relay Channel With Compute-and-Forward", "abstract": "We investigate the problem of secure communications in a Gaussian multiway relay channel applying the compute-and-forward scheme under usage of nested lattice codes. All nodes employ half-duplex operation and can exchange confidential messages only via an untrusted relay. The relay is assumed to be honest but curious, i.e., an eavesdropper that conforms to the system rules and applies the intended relaying scheme. We start with the general case of the single-input multiple-output L-user multiway relay channel and provide an achievable secrecy rate region under a weak secrecy criterion. We show that the securely achievable sum rate is equivalent to the difference between the computation rate and the multiple access channel (MAC) capacity. In particular, we show that all nodes must encode their messages such that the common computation rate tuple falls outside the MAC capacity region of the relay. We provide results for the single-input single-output and the multiple-input single-input L-user multiway relay channel as well as the two-way relay channel. We discuss these results and show the dependence between channel realization and achievable secrecy rate. We further compare our result to available results in the literature for different schemes and show that the proposed scheme operates close to the compute-and-forward rate without secrecy. Index Terms-Physical layer secrecy, multi-way relay channel, network coding, compute-and-forward, lattice codes." }
1509.04075
1406.5886
B. Lattice Chain Based Scheme
The lattice chain based scheme (LC scheme) is inspired by the lattice chain code used in an older version of #REFR .
[]
[ "Here, we propose an (a, β) SCF lattice chain code, which is an (a, β) SCF code with the transmitted lattice vector splitting into two parts, a message vector and a random vector. Now, we describe this code in detail.", "Since it is modified over an (a, β) SCF code, we only focus on the parts that are modified.", "All the notations and terms have the same meanings as in Section III without further explanation.", "1) Coding Scheme: The codebook of an (a, β) SCF lattice chain code is also constructed with the lattices and i C (a, β) of an (a, β) SCF code under the condition of (30).", "Besides, a mid-layer lattice A E (a, β) for which A C (a, β) ⊆ A E (a, β) ⊆ is introduced for the codebook construction." ]
[ "lattice chain code" ]
method
{ "title": "Secure Transmission on the Two-Hop Relay Channel With Scaled Compute-and-Forward", "abstract": "In this paper, we consider communication on a twohop channel in which a source wants to send information reliably and securely to the destination via a relay. We consider both the untrusted relay case and the external eavesdropper case. In the untrusted relay case, the relay behaves as an eavesdropper, and there is a cooperative node, which sends a jamming signal to confuse the relay when it is receiving from the source. In the external eavesdropper case, the relay is trusted, and there is an external node eavesdropping the communication. We propose two secure transmission schemes using the scaled compute-andforward technique. One of the schemes is based on a random binning code, and the other one is based on a lattice chain code. It is proved that in the high signal-to-noise-ratio (SNR) scenario and/or the limited relay power scenario, if the destination is used as the jammer, both schemes outperform all existing schemes and achieve the upper bound. In particular, if the SNR is large and the source, the relay, and the cooperative jammer have identical power and channels, both schemes achieve the upper bound for secrecy rate, which is merely 1/2 bit per channel use lower than the channel capacity without secrecy constraints. We also prove that one of our schemes achieves a positive secrecy rate in the external eavesdropper case in which the relay is trusted and there exists an external eavesdropper. Index Terms-Compute-and-forward, lattice codes, two-hop channel, untrusted relay, information theoretic security." }
{ "title": "Weak Secrecy in the Multiway Untrusted Relay Channel With Compute-and-Forward", "abstract": "We investigate the problem of secure communications in a Gaussian multiway relay channel applying the compute-and-forward scheme under usage of nested lattice codes. All nodes employ half-duplex operation and can exchange confidential messages only via an untrusted relay. The relay is assumed to be honest but curious, i.e., an eavesdropper that conforms to the system rules and applies the intended relaying scheme. We start with the general case of the single-input multiple-output L-user multiway relay channel and provide an achievable secrecy rate region under a weak secrecy criterion. We show that the securely achievable sum rate is equivalent to the difference between the computation rate and the multiple access channel (MAC) capacity. In particular, we show that all nodes must encode their messages such that the common computation rate tuple falls outside the MAC capacity region of the relay. We provide results for the single-input single-output and the multiple-input single-input L-user multiway relay channel as well as the two-way relay channel. We discuss these results and show the dependence between channel realization and achievable secrecy rate. We further compare our result to available results in the literature for different schemes and show that the proposed scheme operates close to the compute-and-forward rate without secrecy. Index Terms-Physical layer secrecy, multi-way relay channel, network coding, compute-and-forward, lattice codes." }
1904.03844
1706.07529
10:
In fact, this is a general AS of type two (GAST) according to #REFR , but we abbreviate the notation here for simplicity.
[ "OD Code 2 has block length = 4240 bits and rate ≈ 0.90.", "OD Code 1 and OD Codes 2 are the underlying codes of our MD codes.", "OD Code 3 is an SC code that is designed exactly as OD Code 1, except for that OD Code 3 has coupling length = 21 instead of 7 (three times as long as OD Code 1).", "Thus, OD Code 3 has block length = 15162 bits and rate ≈ 0.83.", "From our simulations, the error profile in the error floor region of OD Code 1 when simulated over the NLM channel is dominated by the (4, 2) non-binary AS." ]
[ "Moreover, the error profile in the error floor region of OD Code 2 when simulated over the AWGN channel is dominated by the (4, 4) and the (6, 2) UASs.", "The overwhelming majority of the (6, 2) UAS instances found in the error profile of OD Code 2 simulated over the AWGN channel have the same configuration, which has the (4, 4) UAS as a substructure.", "Note that for a binary code, e.g., OD Code 2, a UAS is an AS.", "As for the MD codes, MD Code 1 is designed for practical Flash channels, while MD Code 2 is designed for AWGN channels. According to the analysis above, MD Code 1, with" ]
[ "notation", "(GAST" ]
background
{ "title": "Minimizing the Number of Detrimental Objects in Multi-Dimensional Graph-Based Codes", "abstract": "Abstract-In order to meet the demands of data-hungry applications, data storage devices are required to be denser. Various sources of error appear with this increase in density. Multidimensional (MD) graph-based codes are capable of mitigating error sources like interference and channel non-uniformity in dense storage devices. Recently, a technique was proposed to enhance the performance of MD spatially-coupled codes that are based on circulants. The technique adopts informed relocations of circulants to minimize the number of short cycles. However, cycles become more detrimental when they combine together to form more advanced objects, e.g., absorbing sets including lowweight codewords. In this paper, we show how MD relocations can be exploited to minimize the number of detrimental objects in the graph of an MD code. Moreover, we demonstrate the savings in the number of relocation arrangements earned by focusing on objects rather than cycles. Our technique has less restrictions on the one-dimensional (OD) code. Simulation results reveal significant lifetime gains in practical Flash systems achieved by MD codes designed using our technique compared with OD codes having similar parameters." }
{ "title": "A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage", "abstract": "Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers upon deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In a recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology that is suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels, MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide an in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, which are the WCMs, to remove a GAST. Then, we enumerate WCMs and demonstrate the significance of the savings achieved by the WCM framework in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. In addition, we propose a new set of problematic objects, namely oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of the WCM framework applications by demonstrating its benefits in optimizing column weight 5 codes, codes used over Flash channels with additional soft information, and spatially coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over interesting channels." }
1205.6256
math/0201131
Introduction
As an application of these conditions, we present in this paper a lattice in L(CFG)\L(ASM) that is smaller than the one shown in #REFR .
[ "These objects are meet-irreducibles, simple CFGs, firing vertices of a CFG, and systems of linear inequalities.", "In particular, we establish a one-to-one correspondence between the firing vertices of a simple CFG and the meet-irreducibles of the lattice generated by this CFG.", "Using this correspondence we achieve a necessary and sufficient condition for L(CFG).", "By generalizing this correspondence to CFGs that are not necessarily simple, we also obtain a necessary and sufficient condition for L(ASM).", "Both conditions provide polynomial-time algorithms that address the above computational problems." ]
[ "In #OTHEREFR , to prove D L(ASM) the author studied simple CFGs on directed acyclic graphs (DAGs) and showed that such a CFG is equivalent to a CFG on an undirected graph.", "It is natural to study CFGs on DAGs which are not necessarily simple.", "Again our method is applicable to this model and we show that any CFG on a DAG is equivalent to a simple CFG on a DAG.", "As a corollary, the class of lattices generated by CFGs on DAGs is strictly included in L(ASM).", "We also give a necessary and sufficient condition for the class of lattices generated by this model." ]
[ "lattice" ]
method
{ "title": "Lattices generated by Chip Firing Game models: criteria and recognition algorithm", "abstract": "It is well-known that the class of lattices generated by Chip Firing games (CFGs) is strictly included in the class of upper locally distributive lattices (ULD). However a necessary and sufficient criterion for this class is still an open question. In this paper we settle this problem by giving such a criterion. This criterion provides a polynomial-time algorithm for constructing a CFG which generates a given lattice if such a CFG exists. Going further we solve the same problem on two other classes of lattices which are generated by CFGs on the classes of undirected graphs and directed acyclic graphs." }
{ "title": "Classes of lattices induced by chip firing (and sandpile) dynamics", "abstract": "In this paper we study three classes of models widely used in physics, computer science and social science: the Chip Firing Game, the Abelian Sandpile Model and the Chip Firing Game on a mutating graph. We study the set of configurations reachable from a given initial configuration, called the configuration space of a model, and try to determine the main properties of such sets. We study the order induced over the configurations by the evolution rule. This makes it possible to compare the power of expression of these models. It is known that the configuration spaces we obtain are lattices, a special kind of partially ordered set. Although the Chip Firing Game on a mutating graph is a generalization of the usual Chip Firing Game, we prove that these models generate exactly the same configuration spaces. We also prove that the class of lattices induced by the Abelian Sandpile Model is strictly included in the class of lattices induced by the Chip Firing Game, but contains the class of distributive lattices, a very well known class. Corollary 3.4 Let C be a simple CFG with support graph G = (V, E) such that G has no cycle. Then C is equivalent to an ASM." }
1811.12063
1701.07738
I. INTRODUCTION
More recent works such as #REFR use more advanced deep ANNs for decoding structured polar codes.
[ "The first attempts for using Artificial Neural Networks (ANNs) for decoding turbo codes were presented in #OTHEREFR where the author proposes an ANN based on Multi Layer Perceptrons (MLPs).", "With the advent of training techniques such as layer-by-layer unsupervised pre-training followed by gradient descent fine-tuning and back propagation, the interest for using ANNs for channel coding is renewed.", "Different ideas around the use of ANNs for decoding emerged in the 1990s with works such as #OTHEREFR - #OTHEREFR for decoding block and hamming codes.", "Subsequently, ANNs were used for decoding convolutional codes in #OTHEREFR , #OTHEREFR .", "In #OTHEREFR , the author used MLPs to generate Low-Density Parity-Check (LDPC) codes." ]
[ "In this work, we investigate DL architectures to design and analyse an ANN for turbo coding and decoding operations that are typically performed at the PHY.", "Specifically, we use the turbo encoder and decoder variant specified for LTE #OTHEREFR .", "To this end, we frame the encoding and decoding operations as a supervised learning problem and use a RNN architecture to autoencode-decode the data and compare its performance in terms of Bit Error Rate (BER) to the legacy LTE turbo encoding/decoding blocks. Our motivation is two fold i.", "Traditional signal processing is done by logically separated blocks that are independently optimized to recover the data signal from imperfect channels.", "Although this approach is perfected over many years, it may not achieve the optimal end-to-end performance. A well ii." ]
[ "advanced deep ANNs", "structured polar codes" ]
method
{ "title": "Performance Analysis of Deep Learning based on Recurrent Neural Networks for Channel Coding", "abstract": "Abstract-Channel Coding has been one of the central disciplines driving the success stories of current generation LTE systems and beyond. In particular, turbo codes are mostly used for cellular and other applications where a reliable data transfer is required for latency-constrained communication in the presence of data-corrupting noise. However, the decoding algorithm for turbo codes is computationally intensive and thereby limiting its applicability in hand-held devices. In this paper, we study the feasibility of using Deep Learning (DL) architectures based on Recurrent Neural Networks (RNNs) for encoding and decoding of turbo codes. In this regard, we simulate and use data from various stages of the transmission chain (turbo encoder output, Additive White Gaussian Noise (AWGN) channel output, demodulator output) to train our proposed RNN architecture and compare its performance to the conventional turbo encoder/decoder algorithms. Simulation results show, that the proposed RNN model outperforms the decoding performance of a conventional turbo decoder at low Signal to Noise Ratio (SNR) regions." }
{ "title": "On deep learning-based channel decoding", "abstract": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity." }
2003.00081
1701.07738
I. INTRODUCTION
Recently, it was shown that a decoding algorithm could be learned for structured codes #REFR ; however, this design still requires a dataset with at least 90% of the codebook, which limits its practicality to small block lengths.
[ "Employing a neural network for error correction codes dates back to late eighties.", "More precisely, #OTHEREFR shows how to decode linear block codes.", "Similarly, the Viterbi decoder was implemented with a neural network for convolutional codes in the late nineties #OTHEREFR , #OTHEREFR .", "A simple classifier is learned in these studies instead of a decoding algorithm.", "This leads to a training dataset that must include all codewords, which makes them infeasible for most codes due to the exponential complexity." ]
[ "To learn decoding for large block lengths, #OTHEREFR trained a recurrent neural network for small block lengths that can generalize well for large block lengths.", "Although there are many papers that propose a deep learning-based decoding algorithm, there are only a few papers that aim to learn an encoder #OTHEREFR , #OTHEREFR .", "In this paper, we design an error correction code, i.e., learn an encoder-decoder pair for a severe nonlinear channel model: a one-bit quantized AWGN channel.", "For this purpose, we train an autoencoder, and then incorporate an LDPC code to this autoencoder.", "In the case of QPSK or BPSK modulation one-bit quantization corresponds to hard decision decoding and only leads to a few dB signal-to-noise-ratio (SNR) loss." ]
[ "decoding algorithm" ]
background
{ "title": "High Rate Communication over One-Bit Quantized Channels via Deep Learning and LDPC Codes", "abstract": "This paper proposes a method for designing error correction codes by combining a known coding scheme with an autoencoder. Specifically, we integrate an LDPC code with a trained autoencoder to develop an error correction code for intractable nonlinear channels. The LDPC encoder shrinks the input space of the autoencoder, which enables the autoencoder to learn more easily. The proposed error correction code shows promising results for one-bit quantization, a challenging case of a nonlinear channel. Specifically, our design gives a waterfall slope bit error rate even with high order modulation formats such as 16-QAM and 64-QAM despite one-bit quantization. This gain is theoretically grounded by proving that the trained autoencoder provides approximately Gaussian distributed data to the LDPC decoder even though the received signal has non-Gaussian statistics due to the one-bit quantization." }
{ "title": "On deep learning-based channel decoding", "abstract": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity." }
1901.03664
1701.07738
III. A PRIMER ON DEEP LEARNING AND DATASETS
Intuitively, the optimal training signal-to-noise ratio (SNR) is a trade-off between high noise power, i.e., learning robustness to noisy data, and noiseless samples, i.e., learning the underlying (deterministic) channel transfer function #REFR .
[ "This is called a regression task and we can use well-established algorithms to fit the weights to our datasets such that they minimize a certain loss metric.", "A single complex-valued number is split into two consecutive real-valued numbers and used as input for the NN and, vice versa, at the output.", "We start training with small mini-batches containing only 16 samples and increase the batchsize during the process stepwise up to 512 to obtain more fine-grained weight updates.", "During training, we also add additive white Gaussian noise (AWGN)", "as regularization to the training samples to prevent overfitting." ]
[]
[ "underlying (deterministic) channel" ]
background
{ "title": "Enabling FDD Massive MIMO through Deep Learning-based Channel Prediction", "abstract": "A major obstacle for widespread deployment of frequency division duplex (FDD)-based Massive multiple-input multipleoutput (MIMO) communications is the large signaling overhead for reporting full downlink (DL) channel state information (CSI) back to the basestation (BS), in order to enable closed-loop precoding. We completely remove this overhead by a deep-learning based channel extrapolation (or \"prediction\") approach and demonstrate that a neural network (NN) at the BS can infer the DL CSI centered around a frequency fDL by solely observing uplink (UL) CSI on a different, yet adjacent frequency band around fUL; no more pilot/reporting overhead is needed than with a genuine time division duplex (TDD)-based system. The rationale is that scatterers and the large-scale propagation environment are sufficiently similar to allow a NN to learn about the physical connections and constraints between two neighboring frequency bands, and thus provide a well-operating system even when classic extrapolation methods, like the Wiener filter (used as a baseline for comparison throughout) fails. We study its performance for various state-of-the-art Massive MIMO channel models, and, even more so, evaluate the scheme using actual Massive MIMO channel measurements, rendering it to be practically feasible at negligible loss in spectral efficiency when compared to a genuine TDD-based system." }
{ "title": "On deep learning-based channel decoding", "abstract": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity." }
1809.01859
1701.07738
C. Training method
In order to keep the training set small, we follow the training method in #REFR , where the DNN was extended with additional layers of modulation, noise addition, and detection that introduce no additional trainable parameters.
[]
[ "Therefore, it is sufficient to work only with the sets of all possible noiseless codewords v ∈ F |v| 2 , F 2 ∈ {0, 1}, i.e., training epoches, as input to the DNNs.", "For the additional layer of detection, we calculate the log-likelihood ratio (LLR) of each received bit and forward it to the DNN.", "We use the mean squared error (MSE) as the loss function, which is defined as:", "Both the MLP networks and CNNs employ three hidden layers. The detailed parameters are discussed in the next section.", "We aim at training a network that is able to generalize, i.e., we train at a particular signal-to-noise ratio (SNR), and test it within a wide range of SNRs." ]
[ "DNN" ]
method
{ "title": "Deep Learning-Based Decoding for Constrained Sequence Codes", "abstract": "Constrained sequence codes have been widely used in modern communication and data storage systems. Sequences encoded with constrained sequence codes satisfy constraints imposed by the physical channel, hence enabling efficient and reliable transmission of coded symbols. Traditional encoding and decoding of constrained sequence codes rely on table look-up, which is prone to errors that occur during transmission. In this paper, we introduce constrained sequence decoding based on deep learning. With multiple layer perception (MLP) networks and convolutional neural networks (CNNs), we are able to achieve low bit error rates that are close to maximum a posteriori probability (MAP) decoding as well as improve the system throughput. Moreover, implementation of capacity-achieving fixed-length codes, where the complexity is prohibitively high with table look-up decoding, becomes practical with deep learningbased decoding." }
{ "title": "On deep learning-based channel decoding", "abstract": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity." }
1909.00935
1904.06591
Introduction
Voice assistants have enabled enormous connectivity among VCDs and are opening up new avenues of research #REFR .
[ "The growing trend of personalization, realization of smart homes, and the desire for easy control of home devices are driving factors for the tremendous [19] ." ]
[ "Particularly, the addition of microphones arrays and speakers enable these devices to engage in two-way communication, allowing them to play audio and accept voice commands from other IoT devices.", "The most recognizable feature of VCDs has been the capability to connect all household IoT devices together with voice commands.", "Voice assistants are now Most VCDs are equipped with array microphones which means they have more than one microphone.", "The Amazon Echo Dot 3 uses an array of 4 microphones.", "This array of microphones allows the VCD to determine the location of the human speaker, selection of the best microphone and use the other microphones to reject background noise." ]
[ "voice assistants" ]
background
{ "title": "Voice Spoofing Detection Corpus for Single and Multi-order Audio Replays", "abstract": "The evolution of modern voice-controlled devices (VCDs) has revolutionized the Internet of Things (IoT), and resulted in increased realization of smart homes, personalization and home automation through voice commands. These VCDs can be exploited in IoT driven environment to generate various spoofing attacks including the chain of replay attacks (multi-order replay attacks). Existing datasets like ASVspoof and ReMASC contain only the first-order replay recordings, therefore, they cannot offer evaluation of the anti-spoofing algorithms capable of detecting the multi-order replay attacks. Additionally, these datasets do not capture the characteristics of microphone arrays, which is an important characteristic of modern VCDs. Therefore, there exists an urgent need to have a diverse replay spoofing detection corpus that consists of multiorder replay recordings against the bonafide voice samples. This paper presents a novel voice spoofing detection corpus (VSDC) to evaluate the performance of multi-order replay anti-spoofing methods. The proposed VSDC consists of first-order-and second-order-replay samples against the bonafide audio recordings. We ensured to create a diverse replay spoofing detection corpus in terms of second-order-replays to generate a total of 11,772 samples belonging to fifteen human speakers. Additionally, the proposed VSDC can also be used to evaluate the performance of speaker verification systems. To the best of our knowledge, this is the first publicly available replay spoofing detection corpus comprising of first-order-and second-order-replay samples." }
{ "title": "Towards Vulnerability Analysis of Voice-Driven Interfaces and Countermeasures for Replay Attacks", "abstract": "Fake audio detection is expected to become an important research area in the field of smart speakers such as Google Home, Amazon Echo and chatbots developed for these platforms. This paper presents replay attack vulnerability of voice-driven interfaces and proposes a countermeasure to detect replay attack on these platforms. This paper presents a novel framework to model replay attack distortion, and then use a non-learning-based method for replay attack detection on smart speakers. The reply attack distortion is modeled as a higher-order nonlinearity in the replay attack audio. Higher-order spectral analysis (HOSA) is used to capture characteristics distortions in the replay audio. Effectiveness of the proposed countermeasure scheme is evaluated on original speech as well as corresponding replayed recordings. The replay attack recordings are successfully injected into the Google Home device via Amazon Alexa using the drop-in conferencing feature." }
1909.00935
1904.06591
Introduction
However, we have demonstrated through experimentation in our earlier work #REFR that VCDs are highly vulnerable even to second-order replay attacks and are unable to reliably distinguish between original and spoofed samples in multi-hop scenarios.
[ "This fact enables the VCD to be more susceptible to replay attacks.", "Audio-specific spoofing attacks can be categorized into replay [12] , speechsynthesis (SS) #OTHEREFR , voice conversion (VC) #OTHEREFR and impersonation [17] .", "Among all audio spoofing attacks, replay attacks could be more prevalent in the future, as less tech savvy intruders can generate them and disrupt the automatic speaker verification system of a VCD based system #OTHEREFR .", "Existing spoofing datasets #OTHEREFR 16] are designed for evaluation of testbeds that consider replay spoofing as a two-class classification problem.", "The application focus of these datasets is mainly evaluating voice driven banking systems and they only address the scenario of a one-time replay." ]
[ "This vulnerability of VCDs can easily be exposed by an intruder to cause severe financial loss and data theft. Additionally, existing datasets i.e.", "ASVspoof do not contains the audio samples recorded from devices having array of microphones.", "Therefore, there exists a need to create a replay spoofing dataset to evaluate applications and testbeds that may involve multi-hop voice propagation scenarios and samples recorded with devices having microphone arrays.", "For this purpose, we designed a novel voice spoofing detection corpus (VSDC) for multi-hop replay scenarios that consist of bonafide, first-order-and second-order-replay audio samples.", "Additionally, we tried to ensure that our replay dataset should be diverse in terms of recording environment, background noise, recording and playback devices, microphones, speakers, replay scenarios, etc." ]
[ "spoof samples" ]
background
{ "title": "Voice Spoofing Detection Corpus for Single and Multi-order Audio Replays", "abstract": "The evolution of modern voice-controlled devices (VCDs) has revolutionized the Internet of Things (IoT), and resulted in increased realization of smart homes, personalization and home automation through voice commands. These VCDs can be exploited in IoT driven environment to generate various spoofing attacks including the chain of replay attacks (multi-order replay attacks). Existing datasets like ASVspoof and ReMASC contain only the first-order replay recordings, therefore, they cannot offer evaluation of the anti-spoofing algorithms capable of detecting the multi-order replay attacks. Additionally, these datasets do not capture the characteristics of microphone arrays, which is an important characteristic of modern VCDs. Therefore, there exists an urgent need to have a diverse replay spoofing detection corpus that consists of multiorder replay recordings against the bonafide voice samples. This paper presents a novel voice spoofing detection corpus (VSDC) to evaluate the performance of multi-order replay anti-spoofing methods. The proposed VSDC consists of first-order-and second-order-replay samples against the bonafide audio recordings. We ensured to create a diverse replay spoofing detection corpus in terms of second-order-replays to generate a total of 11,772 samples belonging to fifteen human speakers. Additionally, the proposed VSDC can also be used to evaluate the performance of speaker verification systems. To the best of our knowledge, this is the first publicly available replay spoofing detection corpus comprising of first-order-and second-order-replay samples." }
{ "title": "Towards Vulnerability Analysis of Voice-Driven Interfaces and Countermeasures for Replay Attacks", "abstract": "Fake audio detection is expected to become an important research area in the field of smart speakers such as Google Home, Amazon Echo and chatbots developed for these platforms. This paper presents replay attack vulnerability of voice-driven interfaces and proposes a countermeasure to detect replay attack on these platforms. This paper presents a novel framework to model replay attack distortion, and then use a non-learning-based method for replay attack detection on smart speakers. The reply attack distortion is modeled as a higher-order nonlinearity in the replay attack audio. Higher-order spectral analysis (HOSA) is used to capture characteristics distortions in the replay audio. Effectiveness of the proposed countermeasure scheme is evaluated on original speech as well as corresponding replayed recordings. The replay attack recordings are successfully injected into the Google Home device via Amazon Alexa using the drop-in conferencing feature." }
1909.00935
1904.06591
Experiment-3: Training on proposed VSDC for multi-order replay attacks
We have already demonstrated through experiments in our previous work #REFR that ASV systems like Google Home and Amazon Alexa are vulnerable even in multi-hop scenarios (i.e., second-order replay attacks).
[ "Our VSDC is unique to existing spoofing datasets in the way that we further categorize the spoofing samples into first-order and second-order replay attacks." ]
[ "Therefore, we argue that anti-spoofing systems must have the capability to accurately detect the second-order replay attacks as well besides the first-order replays.", "To test the effectiveness of our dataset in this perspective, we performed an experiment in two stages, one using the bonafide and first-order replay samples, and second using the bonafide and second-order replay samples.", "In the first stage of this experiment, we evaluated the performance of ASV baseline method on our dataset using only the bonafide and first-order replay samples.", "For this purpose, we used 60% samples to train the ASV baseline method where half of the samples belong to the bonafide and rest to first-order replays.", "ASV baseline method provided an average EER of 20.54% on our dataset of bonafide and first-order replay samples as shown in Figure 10 ." ]
[ "Amazon Alexa" ]
background
{ "title": "Voice Spoofing Detection Corpus for Single and Multi-order Audio Replays", "abstract": "The evolution of modern voice-controlled devices (VCDs) has revolutionized the Internet of Things (IoT), and resulted in increased realization of smart homes, personalization and home automation through voice commands. These VCDs can be exploited in IoT driven environment to generate various spoofing attacks including the chain of replay attacks (multi-order replay attacks). Existing datasets like ASVspoof and ReMASC contain only the first-order replay recordings, therefore, they cannot offer evaluation of the anti-spoofing algorithms capable of detecting the multi-order replay attacks. Additionally, these datasets do not capture the characteristics of microphone arrays, which is an important characteristic of modern VCDs. Therefore, there exists an urgent need to have a diverse replay spoofing detection corpus that consists of multiorder replay recordings against the bonafide voice samples. This paper presents a novel voice spoofing detection corpus (VSDC) to evaluate the performance of multi-order replay anti-spoofing methods. The proposed VSDC consists of first-order-and second-order-replay samples against the bonafide audio recordings. We ensured to create a diverse replay spoofing detection corpus in terms of second-order-replays to generate a total of 11,772 samples belonging to fifteen human speakers. Additionally, the proposed VSDC can also be used to evaluate the performance of speaker verification systems. To the best of our knowledge, this is the first publicly available replay spoofing detection corpus comprising of first-order-and second-order-replay samples." }
{ "title": "Towards Vulnerability Analysis of Voice-Driven Interfaces and Countermeasures for Replay Attacks", "abstract": "Fake audio detection is expected to become an important research area in the field of smart speakers such as Google Home, Amazon Echo and chatbots developed for these platforms. This paper presents replay attack vulnerability of voice-driven interfaces and proposes a countermeasure to detect replay attack on these platforms. This paper presents a novel framework to model replay attack distortion, and then use a non-learning-based method for replay attack detection on smart speakers. The reply attack distortion is modeled as a higher-order nonlinearity in the replay attack audio. Higher-order spectral analysis (HOSA) is used to capture characteristics distortions in the replay audio. Effectiveness of the proposed countermeasure scheme is evaluated on original speech as well as corresponding replayed recordings. The replay attack recordings are successfully injected into the Google Home device via Amazon Alexa using the drop-in conferencing feature." }
1909.08749
1703.05449
Introduction
These analyses have led to more practically applicable algorithms that provide, for instance, horizon-independent regret bounds for certain episodic MDPs [ZB19; JA18], thereby improving upon worst-case bounds #REFR .
[ "While TD algorithms for policy evaluation have been analyzed by many previous papers, their focus is typically either on (i) how function approximation affects the algorithm #OTHEREFR , (ii) asymptotic convergence guarantees #OTHEREFR or (iii) establishing convergence rates in metrics of the 2 -type [Tad04; LS18; SY19].", "Since 2 -type metrics can be associated with an inner product, many specialized analyses can be ported over from the literature on stochastic optimization (e.g., #OTHEREFR ).", "1 On the other hand, our focus is on providing non-asymptotic guarantees in the ∞ -error metric.", "Also, given that we are interested in fine-grained, instance-dependent guarantees, we first study the problem without function approximation.", "As briefly alluded to before, there has also been some recent focus on obtaining instancedependent guarantees in online reinforcement learning settings #OTHEREFR ." ]
[ "Recent work has also established some instance-dependent bounds for the problem of state-action value function estimation in Markov decision processes, for both ordinary Q-learning #OTHEREFR and a variance-reduced improvement #OTHEREFR .", "However, we currently lack the localized lower bounds that would allow us to understand the fundamental limits of the problem in a more local sense.", "We hope that our analysis of the simpler policy evaluation problem will be useful in establishing these guarantees.", "Portions of our analysis exploit a decoupling that is induced by a leave-one-out technique.", "We note that leave-one-out techniques are frequently used in probabilistic analysis (e.g., #OTHEREFR )." ]
[ "horizon-independent regret" ]
background
{ "title": "Value function estimation in Markov reward processes: Instance-dependent 𝓁 ∞ -bounds for policy evaluation.", "abstract": "Markov reward processes (MRPs) are used to model stochastic phenomena arising in operations research, control engineering, robotics, artificial intelligence, as well as communication and transportation networks. In many of these cases, such as in the policy evaluation problem encountered in reinforcement learning, the goal is to estimate the long-term value function of such a process without access to the underlying population transition and reward functions. Working with samples generated under the synchronous model, we study the problem of estimating the value function of an infinite-horizon, discounted MRP in the ∞ -norm. We analyze both the standard plug-in approach to this problem and a more robust variant, and establish nonasymptotic bounds that depend on the (unknown) problem instance, as well as data-dependent bounds that can be evaluated based on the observed data. We show that these approaches are minimax-optimal up to constant factors over natural sub-classes of MRPs. Our analysis makes use of a leave-one-out decoupling argument tailored to the policy evaluation problem, one which may be of independent interest." }
{ "title": "Minimax Regret Bounds for Reinforcement Learning", "abstract": "We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of O( where H is the time horizon, S the number of states, A the number of actions and T the number of time-steps. This result improves over the best previous known bound O(HS √ AT ) achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new results is that when T ≥ H 3 S 3 A and SA ≥ H, it leads to a regret of O( √ HSAT ) that matches the established lower bound of Ω( √ HSAT ) up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transitions probabilities (to improve scaling in S), and we define Bernstein-based \"exploration bonuses\" that use the empirical variance of the estimated values at the next states (to improve scaling in H)." }
1706.10295
1703.05449
Deep Reinforcement Learning
Parameters of the value function are found to match on-policy returns: #REFR where $\hat{Q}_i$ is the return obtained by executing policy π starting in state $x_{t+i}$: $\hat{Q}_i = \sum_{j=i}^{k} \gamma^{j-i} r_{t+j} + \gamma^{k-i} V(x_{t+k}; \theta)$.
[ "A3C's network directly learns a policy π and a value function V of its policy.", "The gradient of the loss on the A3C policy at step t for the roll-out (x t+i , a t+i ∼ π(·|x t+i ; θ), r t+i ) k i=0 is:", "∇ θ H(π(·|x t+i ; θ)) .", "(4) H[π(·|x t ; θ)] denotes the entropy of the policy π and β is a hyperparameter that trades off between optimising the advantage function and the entropy of the policy.", "The advantage function A(x t+i , a t+i ; θ) is the difference between observed returns and estimates of the return produced by A3C's value network: A(x t+i , a t+i ; θ) = k j=i γ j−i r t+j + γ k−i V (x t+k ; θ) − V (x t+i ; θ), r t+j being the reward at step t + j and V (x; θ) being the agent's estimate of value function of state x." ]
[ "The overall A3C loss is then L(θ) = L π (θ)+λL V (θ) where λ balances optimising the policy loss relative to the baseline value function loss." ]
[ "policy π", "value function" ]
background
{ "title": "Noisy Networks for Exploration", "abstract": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and -greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance. * Equal contribution." }
{ "title": "Minimax Regret Bounds for Reinforcement Learning", "abstract": "We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of O( where H is the time horizon, S the number of states, A the number of actions and T the number of time-steps. This result improves over the best previous known bound O(HS √ AT ) achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new results is that when T ≥ H 3 S 3 A and SA ≥ H, it leads to a regret of O( √ HSAT ) that matches the established lower bound of Ω( √ HSAT ) up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transitions probabilities (to improve scaling in S), and we define Bernstein-based \"exploration bonuses\" that use the empirical variance of the estimated values at the next states (to improve scaling in H)." }
1906.11245
1703.05449
VII. EXPERIMENTS
The reward was scaled to the range [0, 1] to meet the requirements of #REFR .
[ "We have performed our experiments on one dimensional and two dimensional state spaces.", "For one-dimensional case we implemented the algorithm on a simple problem with A = 2 with [0, 5π] being the state space S.", "In this problem, the reward functions for the actions a 1 , a 2 are sin(x) and − sin(x) respectively.", "So the optimal policy is, for a state x in interval [nπ, (n + 1)π], the one which takes action a 2 if n is odd and a 1 if n is even." ]
[ "From the current state x, after taking an action a i such that i ∈ 1, 2, the next state is sampled uniformly from the state space S.", "We can see from Figure 6 that the empirical regret is converging faster for larger number of intervals, for the same horizon length The convergence can be better understood by comparing the the points where the empirical regret approaches to 0.", "The very first point, in every plot, where the empirical regret is zero is indicated by a dashed vertical line showing the episode number.", "The results match the intuition that for the same problem having more number of states improves the performance.", "By keeping n constant and varying H it can be seen from the plots that regret approaches zero slower as H is increases." ]
[ "reward" ]
method
{ "title": "A Tractable Algorithm for Finite-Horizon Continuous Reinforcement Learning", "abstract": "We consider the finite horizon continuous reinforcement learning problem. Our contribution is three-fold. First,we give a tractable algorithm based on optimistic value iteration for the problem. Next,we give a lower bound on regret of order Ω(T 2/3 ) for any algorithm discretizes the state space, improving the previous regret bound of Ω(T 1/2 ) of Ortner and Ryabko [1] for the same problem. Next,under the assumption that the rewards and transitions are Hölder Continuous we show that the upper bound on the discretization error is const.Ln −α T . Finally,we give some simple experiments to validate our propositions." }
{ "title": "Minimax Regret Bounds for Reinforcement Learning", "abstract": "We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of O( where H is the time horizon, S the number of states, A the number of actions and T the number of time-steps. This result improves over the best previous known bound O(HS √ AT ) achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new results is that when T ≥ H 3 S 3 A and SA ≥ H, it leads to a regret of O( √ HSAT ) that matches the established lower bound of Ω( √ HSAT ) up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transitions probabilities (to improve scaling in S), and we define Bernstein-based \"exploration bonuses\" that use the empirical variance of the estimated values at the next states (to improve scaling in H)." }
1807.03765
1703.05449
Preliminary
Since the state and action spaces, and the horizon, are all finite, there always exists (see, e.g., #REFR ) an optimal policy $\pi^\star$ which gives the optimal value $V^\star_h(x) = \sup_\pi V^\pi_h(x)$ for all x ∈ S and h ∈ [H].
[ "#OTHEREFR In each episode of this MDP, an initial state x 1 is picked arbitrarily by an adversary.", "Then, at each step h ∈ [H], the agent observes state x h ∈ S, picks an action a h ∈ A, receives reward r h (x h , a h ), and then transitions to a next state, x h+1 , that is drawn from the distribution P h (·|x h , a h ). The episode ends when x H+1 is reached.", "A policy π of an agent is a collection of H functions π h : S → A h∈ [H] .", "We use V π h : S → R to denote the value function at step h under policy π, so that V π h (x) gives the expected sum of remaining rewards received under policy π, starting from x h = x, until the end of the episode. In symbols:", "Accordingly, we also define Q π h : S × A → R to denote Q-value function at step h so that Q π h (x, a) gives the expected sum of remaining rewards received under policy π, starting from x h = x, a h = a, till the end of the episode. In symbols:" ]
[ "For simplicity, we denote [P h V h+1 ](x, a) := E x ∼P(·|x,a) V h+1 (x ).", "Recall the Bellman equation and Algorithm 1 Q-learning with UCB-Hoeffding", "receive x 1 ." ]
[ "optimal policy π" ]
background
{ "title": "Is Q-learning Provably Efficient?", "abstract": "Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [7, 22] . The theoretical question of \"whether model-free algorithms can be made sample efficient\" is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret O( √ H 3 SAT ), where S and A are the numbers of states and actions, H is the number of steps per episode, and T is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single √ H factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes √ T regret without requiring access to a \"simulator.\"" }
{ "title": "Minimax Regret Bounds for Reinforcement Learning", "abstract": "We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of O( where H is the time horizon, S the number of states, A the number of actions and T the number of time-steps. This result improves over the best previous known bound O(HS √ AT ) achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new results is that when T ≥ H 3 S 3 A and SA ≥ H, it leads to a regret of O( √ HSAT ) that matches the established lower bound of Ω( √ HSAT ) up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transitions probabilities (to improve scaling in S), and we define Bernstein-based \"exploration bonuses\" that use the empirical variance of the estimated values at the next states (to improve scaling in H)." }
1912.04136
1703.05449
Related work
Via Fact 2, our results apply to this setting, and indeed our algorithm can be viewed as a generalization of an existing tabular algorithm #REFR to the function approximation setting.
[ "The majority of the theoretical results for reinforcement learning focus on the tabular setting where the state space is finite and sample complexities scaling polynomially with |S| are tolerable #OTHEREFR .", "Indeed, by now there are a number of algorithms that achieve strong guarantees in these settings #OTHEREFR ." ]
[ "1 Turning to the function approximation setting, several other results concern function approximation in setings where exploration is not an issue, including the infinite-data regime #OTHEREFR and \"batch RL\" settings where the agent does not control the data-collection process #OTHEREFR .", "While the settings differ, all of these results require that the function class satisfy some form of (approximate) closure with respect to the Bellman operator.", "These results therefore provide motivation for our optimistic closure assumption.", "A recent line of work studies function approximation in settings where the agent must explore the environment #OTHEREFR .", "The algorithms developed here can accommodate function classes beyond generalized linear models, but they are still relatively impractical and the more practical ones require strong dynamics assumptions #OTHEREFR ." ]
[ "generalization", "function approximation setting" ]
background
{ "title": "Optimism in Reinforcement Learning with Generalized Linear Function Approximation", "abstract": "We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call \"optimistic closure,\" which is strictly weaker than assumptions from prior analyses for the linear setting. With optimistic closure, we prove that our algorithm enjoys a regret bound ofÕ( where d is the dimensionality of the state-action features and T is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions." }
{ "title": "Minimax Regret Bounds for Reinforcement Learning", "abstract": "We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of O( where H is the time horizon, S the number of states, A the number of actions and T the number of time-steps. This result improves over the best previous known bound O(HS √ AT ) achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new results is that when T ≥ H 3 S 3 A and SA ≥ H, it leads to a regret of O( √ HSAT ) that matches the established lower bound of Ω( √ HSAT ) up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transitions probabilities (to improve scaling in S), and we define Bernstein-based \"exploration bonuses\" that use the empirical variance of the estimated values at the next states (to improve scaling in H)." }
1110.0189
1101.4332
Introduction
This paper is motivated by the notion of Eulerian pairs introduced by Sagan and Savage #REFR in their study of Mahonian pairs.
[]
[ "Let P be the set of positive integers and let P * be the set of words on P.", "For two finite subsets S, T ⊂ P * , the pair (S, T ) is called a Mahonian pair if the distribution of the major index over S is the same as the distribution of the inversion number over T .", "Similarly, (S, T ) is said to be an Eulerian pair if the distribution of the descent number over S is the same as the distribution of the excedance number over T .", "The well-known theorem of MacMahon #OTHEREFR can be rephrased as the fact that (S n , S n ) is a Mahonian pair, where S n is the set of permutations on [n] = {1, 2, . . . , n}.", "Foata #OTHEREFR found a combinatorial proof of this fact by establishing a correspondence which has been called the second fundamental transformation, denoted Φ 2 ." ]
[ "Eulerian pairs", "Mahonian pairs" ]
background
{ "title": "Eulerian pairs on Fibonacci words", "abstract": "Recently, Sagan and Savage introduced the notion of Eulerian pairs. In this note, we find Eulerian pairs on Fibonacci words based on Foata's first transformation or Han's bijection and a map in the spirit of a bijection of Steingrímsson." }
{ "title": "Mahonian Pairs", "abstract": "We introduce the notion of a Mahonian pair. Consider the set, P * , of all words having the positive integers as alphabet. Given finite subsets S, T ⊂ P * , we say that (S, T ) is a Mahonian pair if the distribution of the major index, maj, over S is the same as the distribution of the inversion number, inv, over T . So the well-known fact that maj and inv are equidistributed over the symmetric group, S n , can be expressed by saying that (S n , S n ) is a Mahonian pair. We investigate various Mahonian pairs (S, T ) with S = T . Our principal tool is Foata's fundamental bijection φ : P * → P * since it has the property that maj w = inv φ(w) for any word w. We consider various families of words associated with Catalan and Fibonacci numbers. We show that, when restricted to words in {1, 2} * , φ transforms familiar statistics on words into natural statistics on integer partitions such as the size of the Durfee square. The Rogers-Ramanujan identities, the Catalan triangle, and various q-analogues also make an appearance. We generalize the definition of Mahonian pairs to infinite sets and use this as a tool to connect a partition bijection of Corteel-Savage-Venkatraman with the Greene-Kleitman decomposition of a Boolean algebra into symmetric chains. We close with comments about future work and open problems." }
1708.04903
1404.3248
A General Problem and Primal-Dual Approach
Makarychev and Sviridenko #REFR considered an offline variant of the problem in which the resource cost functions are convex.
[ "Primal-Dual Approach.", "We consider an approach based on linear programming for the problem.", "The first crucial step for any LP-based approach is to derive a LP formulation with reasonable integrality gap, which is defined as the ratio between the optimal integer solution of the formulation and the optimal solution without the integer condition.", "As the cost functions are non-linear, it is not surprising that the natural relaxation suffers from large integrality gap.", "This issue has been observed and resolved by Makarychev and Sviridenko #OTHEREFR ." ]
[ "They systematically strengthen the natural formulations by introducing an exponential number of new variables and new constraints connecting new variables to original ones.", "Consequently, the new formulation, in form of a configuration LP, significantly reduces the integrality gap.", "Although there are exponentially number of variables, Makarychev and Sviridenko showed that a fractional (1 + ǫ)-approximatly optimal solution of the configuration LP can be computed in polynomial time.", "Then, by rounding the fractional solution, the authors derived an B α -approximation algorithm for the resource cost minimization problem in which all cost functions are polynomial of degree at most α.", "Here B α denotes the Bell number and asymptotically B α = Θ (α/ log α) α ." ]
[ "offline variant", "resource cost functions" ]
background
{ "title": "Online Primal-Dual Algorithms with Configuration Linear Programs", "abstract": "In this paper, we present primal-dual approaches based on configuration linear programs to design competitive online algorithms for problems with arbitrarily-grown objective. Non-linear, especially convex, objective functions have been extensively studied in recent years in which approaches relies crucially on the convexity property of cost functions. Besides, configuration linear programs have been considered typically in offline setting and the main approaches are rounding schemes. In our framework, we consider configuration linear programs coupled with a primal-dual approach. This approach is particularly appropriate for non-linear (non-convex) objectives in online setting. By the approach, we first present a simple greedy algorithm for a general cost-minimization problem. The competitive ratio of the algorithm is characterized by the mean of a notion, called smoothness, which is inspired by a similar concept in the context of algorithmic game theory. The algorithm gives optimal (up to a constant factor) competitive ratios while applying to different contexts such as network routing, vector scheduling, energyefficient scheduling and non-convex facility location. Next, we consider the online 0 − 1 covering problems with non-convex objective. Building upon the resilient ideas from the primal-dual framework with configuration LPs, we derive a competitive algorithm for these problems. Our result generalizes the online primal-dual algorithm developed recently by Azar et al. [8] for convex objectives with monotone gradients to non-convex objectives. The competitive ratio is now characterized by a new concept, called local smoothness -a notion inspired by the smoothness. Our algorithm yields tight competitive ratio for the objectives such as the sum of ℓ k -norms and gives competitive solutions for online problems of submodular minimization and some natural non-convex minimization under covering constraints." }
{ "title": "Solving Optimization Problems with Diseconomies of Scale via Decoupling", "abstract": "We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x q , q ≥ 1, with the amount x of resources used. We define a novel linear programming relaxation for such problems and then show that the integrality gap of the relaxation is A q , where A q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that where X i are independent nonnegative random variables, Y i are possibly dependent nonnegative random variables, and each Y i has the same distribution as X i . The inequality was proved by de la Peña in 1990. De la Peña, Ibragimov, and Sharakhmetov showed that C q ≤ 2 for q ∈ (1, 2) and C q ≤ A 1/q q for q ≥ 2. We show that the optimal constant is C q = A 1/q q for any q ≥ 1. We then prove a more general inequality: For every convex function φ, and, for every concave function ψ , where P is a Poisson random variable with parameter 1 independent of the random variables Y i . In this article, we study combinatorial optimization problems with a diseconomy of scale. We consider problems in which we need to minimize the cost of resources used to accomplish a certain task. Often, the cost grows linearly with the amount of resources used. In some applications, the cost is sublinear (e.g., if we can get a discount when we buy resources in bulk). Such phenomenon is known as \"economy of scale.\" However, in many applications, the cost is superlinear. In such cases, we say that the cost function exhibits a \"diseconomy of scale.\" A good example of a diseconomy of scale is the cost of energy used for computing. Modern hardware can run at different processing speeds. As we increase the speed, the energy consumption grows superlinearly. It can be modeled as a function P (s) = cs q of the processing speed s, where c and q are parameters that depend on the specific hardware. Typically, q ∈ (1, 3] (see, e.g., [2, 21, 41] ). As a running example, consider the Minimum Power Routing problem studied by Andrews, Fernández Anta, Zhang, and Zhao [3] . We are given a graph G = (V , E) and a set of demands D = {(d i , s i , t i )}. Our goal is to route d i (d i ∈ N) units of demand i from the source s i ∈ V to the destination t i ∈ V such that every demand i is routed along a single path p i (i.e., we need to find an unsplittable multi-commodity flow). We want to minimize the energy cost. Every link (edge) e ∈ E uses f e (x e ) = c e x" }
1708.04903
1404.3248
Related work
Makarychev and Sviridenko #REFR propose a scheme that consists of solving the new LPs (with an exponential number of variables) and rounding the fractional solutions to integer ones using decoupling inequalities.
[ "In this section we summarize related work to our approach.", "Each problem, together with its related work, in the applications of the main theorems is formally given in the corresponding section.", "In this paper, we systematically strengthen natural LPs by the construction of the configuration LPs presented in #OTHEREFR ." ]
[ "By this method, they derive approximation algorithms for several (offline) optimization problems which can formulated by linear constraints and objective function as a power of some constant α.", "Specifically, the approximation ratio is proved to be the Bell number B α for several problems.", "In our approach, a crucial element to characterize the performance of an algorithm is the smoothness property of functions.", "The smooth argument is introduced by Roughgarden #OTHEREFR in the context of algorithmic game theory and it has successfully characterized the performance of equilibria (price of anarchy) in many classes of games such as congestion games, etc #OTHEREFR .", "This notion inspires the definition of smoothness in our paper." ]
[ "decoupling inequalities" ]
method
{ "title": "Online Primal-Dual Algorithms with Configuration Linear Programs", "abstract": "In this paper, we present primal-dual approaches based on configuration linear programs to design competitive online algorithms for problems with arbitrarily-grown objective. Non-linear, especially convex, objective functions have been extensively studied in recent years in which approaches relies crucially on the convexity property of cost functions. Besides, configuration linear programs have been considered typically in offline setting and the main approaches are rounding schemes. In our framework, we consider configuration linear programs coupled with a primal-dual approach. This approach is particularly appropriate for non-linear (non-convex) objectives in online setting. By the approach, we first present a simple greedy algorithm for a general cost-minimization problem. The competitive ratio of the algorithm is characterized by the mean of a notion, called smoothness, which is inspired by a similar concept in the context of algorithmic game theory. The algorithm gives optimal (up to a constant factor) competitive ratios while applying to different contexts such as network routing, vector scheduling, energyefficient scheduling and non-convex facility location. Next, we consider the online 0 − 1 covering problems with non-convex objective. Building upon the resilient ideas from the primal-dual framework with configuration LPs, we derive a competitive algorithm for these problems. Our result generalizes the online primal-dual algorithm developed recently by Azar et al. [8] for convex objectives with monotone gradients to non-convex objectives. The competitive ratio is now characterized by a new concept, called local smoothness -a notion inspired by the smoothness. Our algorithm yields tight competitive ratio for the objectives such as the sum of ℓ k -norms and gives competitive solutions for online problems of submodular minimization and some natural non-convex minimization under covering constraints." }
{ "title": "Solving Optimization Problems with Diseconomies of Scale via Decoupling", "abstract": "We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x q , q ≥ 1, with the amount x of resources used. We define a novel linear programming relaxation for such problems and then show that the integrality gap of the relaxation is A q , where A q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that where X i are independent nonnegative random variables, Y i are possibly dependent nonnegative random variables, and each Y i has the same distribution as X i . The inequality was proved by de la Peña in 1990. De la Peña, Ibragimov, and Sharakhmetov showed that C q ≤ 2 for q ∈ (1, 2) and C q ≤ A 1/q q for q ≥ 2. We show that the optimal constant is C q = A 1/q q for any q ≥ 1. We then prove a more general inequality: For every convex function φ, and, for every concave function ψ , where P is a Poisson random variable with parameter 1 independent of the random variables Y i . In this article, we study combinatorial optimization problems with a diseconomy of scale. We consider problems in which we need to minimize the cost of resources used to accomplish a certain task. Often, the cost grows linearly with the amount of resources used. In some applications, the cost is sublinear (e.g., if we can get a discount when we buy resources in bulk). Such phenomenon is known as \"economy of scale.\" However, in many applications, the cost is superlinear. In such cases, we say that the cost function exhibits a \"diseconomy of scale.\" A good example of a diseconomy of scale is the cost of energy used for computing. Modern hardware can run at different processing speeds. As we increase the speed, the energy consumption grows superlinearly. It can be modeled as a function P (s) = cs q of the processing speed s, where c and q are parameters that depend on the specific hardware. Typically, q ∈ (1, 3] (see, e.g., [2, 21, 41] ). As a running example, consider the Minimum Power Routing problem studied by Andrews, Fernández Anta, Zhang, and Zhao [3] . We are given a graph G = (V , E) and a set of demands D = {(d i , s i , t i )}. Our goal is to route d i (d i ∈ N) units of demand i from the source s i ∈ V to the destination t i ∈ V such that every demand i is routed along a single path p i (i.e., we need to find an unsplittable multi-commodity flow). We want to minimize the energy cost. Every link (edge) e ∈ E uses f e (x e ) = c e x" }
1708.04903
1404.3248
A Applications of Theorem 1 A.1 Minimum Power Survival Network Routing
For the Load Balancing problem, the currently best-known approximation ratio is $B_\alpha$, due to #REFR via their rounding technique based on decoupling inequalities.
[ "The objective is to minimize the total power e f e (ℓ e ).", "Typically f e (ℓ e ) = c e ℓ αe e where c e and α e are parameters depending on e.", "This problems generalizes the Minimum Power Routing problem -a variant in which k i = 1 and p i,e = 1 ∀i, e -and the Load Balancing problem -a variant in which k i = 1, all the sources (sinks) are the same s i = s i ′ ∀i, i ′ (t i = t i ′ ∀i, i ′ ) and every s i − t i path has length 2.", "For the Minimum Power Routing in offline setting, Andrews et al. #OTHEREFR gave a polynomial-time poly-log-approximation algorithm.", "The result has been improved by Makarychev and Sviridenko #OTHEREFR who gave an B α -approximation algorithm. In online setting, Gupta et al. #OTHEREFR presented an α α -competitive online algorithm." ]
[ "In online setting, it has been shown that the optimal competitive ratio for the Load Balancing problem is Θ(α α ) #OTHEREFR .", "Contribution.", "In the problem, the set of strategy S i for each request i is a solution consists of k i edge-disjoint paths connecting s i and t i .", "Applying the general framework, we deduce the following greedy algorithm.", "Let ℓ e be the load of edge e." ]
[ "Load Balancing problem" ]
method
{ "title": "Online Primal-Dual Algorithms with Configuration Linear Programs", "abstract": "In this paper, we present primal-dual approaches based on configuration linear programs to design competitive online algorithms for problems with arbitrarily-grown objective. Non-linear, especially convex, objective functions have been extensively studied in recent years in which approaches relies crucially on the convexity property of cost functions. Besides, configuration linear programs have been considered typically in offline setting and the main approaches are rounding schemes. In our framework, we consider configuration linear programs coupled with a primal-dual approach. This approach is particularly appropriate for non-linear (non-convex) objectives in online setting. By the approach, we first present a simple greedy algorithm for a general cost-minimization problem. The competitive ratio of the algorithm is characterized by the mean of a notion, called smoothness, which is inspired by a similar concept in the context of algorithmic game theory. The algorithm gives optimal (up to a constant factor) competitive ratios while applying to different contexts such as network routing, vector scheduling, energyefficient scheduling and non-convex facility location. Next, we consider the online 0 − 1 covering problems with non-convex objective. Building upon the resilient ideas from the primal-dual framework with configuration LPs, we derive a competitive algorithm for these problems. Our result generalizes the online primal-dual algorithm developed recently by Azar et al. [8] for convex objectives with monotone gradients to non-convex objectives. The competitive ratio is now characterized by a new concept, called local smoothness -a notion inspired by the smoothness. Our algorithm yields tight competitive ratio for the objectives such as the sum of ℓ k -norms and gives competitive solutions for online problems of submodular minimization and some natural non-convex minimization under covering constraints." }
{ "title": "Solving Optimization Problems with Diseconomies of Scale via Decoupling", "abstract": "We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x q , q ≥ 1, with the amount x of resources used. We define a novel linear programming relaxation for such problems and then show that the integrality gap of the relaxation is A q , where A q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that where X i are independent nonnegative random variables, Y i are possibly dependent nonnegative random variables, and each Y i has the same distribution as X i . The inequality was proved by de la Peña in 1990. De la Peña, Ibragimov, and Sharakhmetov showed that C q ≤ 2 for q ∈ (1, 2) and C q ≤ A 1/q q for q ≥ 2. We show that the optimal constant is C q = A 1/q q for any q ≥ 1. We then prove a more general inequality: For every convex function φ, and, for every concave function ψ , where P is a Poisson random variable with parameter 1 independent of the random variables Y i . In this article, we study combinatorial optimization problems with a diseconomy of scale. We consider problems in which we need to minimize the cost of resources used to accomplish a certain task. Often, the cost grows linearly with the amount of resources used. In some applications, the cost is sublinear (e.g., if we can get a discount when we buy resources in bulk). Such phenomenon is known as \"economy of scale.\" However, in many applications, the cost is superlinear. In such cases, we say that the cost function exhibits a \"diseconomy of scale.\" A good example of a diseconomy of scale is the cost of energy used for computing. Modern hardware can run at different processing speeds. As we increase the speed, the energy consumption grows superlinearly. It can be modeled as a function P (s) = cs q of the processing speed s, where c and q are parameters that depend on the specific hardware. Typically, q ∈ (1, 3] (see, e.g., [2, 21, 41] ). As a running example, consider the Minimum Power Routing problem studied by Andrews, Fernández Anta, Zhang, and Zhao [3] . We are given a graph G = (V , E) and a set of demands D = {(d i , s i , t i )}. Our goal is to route d i (d i ∈ N) units of demand i from the source s i ∈ V to the destination t i ∈ V such that every demand i is routed along a single path p i (i.e., we need to find an unsplittable multi-commodity flow). We want to minimize the energy cost. Every link (edge) e ∈ E uses f e (x e ) = c e x" }
1901.05620
1901.05621
Time change
Instead, only the records were generated, using the importance-sampling scheme described and analyzed in #REFR . Figure 3 .
[ "It is natural to wonder about the appearance of the record-setting frontier (even in dimension 2) when many observations, or (equivalently) many records, have been generated.", "Figure 3 displays the record-setting frontier for one trial after 10,000 bivariate records had been generated, at which point results such as those in Section 1 suggest themselves.", "According to Theorem 4.1(b) [or Proposition 5.1(a2)], had this been done naively, by generating observations X (i) and waiting for new records to be set, it would have taken roughly 10 61 observations to obtain 10,000 records." ]
[ "Record frontier F 10,000 after 10, 000 records generated using the importance-sampling algorithm described in #OTHEREFR .", "The record-setting region process (RS n ), and therefore also the frontier process (F n ) we have studied in earlier sections, is adapted to the natural filtration for the process C = (C n ) n≥0 , where C n = (C The keys to doing so are (i) monotonicity of the sample paths of various processes of interest (such as F + and F − ) and (ii) the switching relation", "The switching relation enables us to obtain information about the recordcreation times T m from the records-counts Theorems 4.1(b) and 4.2(a).", "The following proposition is not the most elaborate result which can be obtained in such fashion, but it will suffice for our purposes. (a) Typical behavior as m → ∞:", "(b) Almost sure behavior as m → ∞:" ]
[ "records" ]
method
{ "title": "The Pareto Record Frontier", "abstract": "(1) , X (2) , . . . with independent Exponential(1) coordinates, consider the boundary (relative to the closed positive orthant), or \"frontier\", Fn of the closed Pareto record-setting (RS) region x ∈ Fn} and F + n := max{x+ : x ∈ Fn}, and define the width of Fn as We describe typical and almost sure behavior of the processes F + , F − , and W . In particular, we show that F + n ∼ ln n ∼ F − n almost surely and that Wn/ ln ln n converges in probability to d − 1; and for d ≥ 2 we show that, almost surely, the set of limit points of the sequence Wn/ ln ln n is We also obtain modifications of our results that are important in connection with efficient simulation of Pareto records. Let Tm denote the time that the mth record is set. We show that F almost surely and that WT m / ln m converges in probability to 1 − d −1 ; and for d ≥ 2 we show that, almost surely, the sequence WT m / ln m has lim inf equal to 1 − d −1 and lim sup equal to 1." }
{ "title": "Generating Pareto records", "abstract": "Abstract. We present, (partially) analyze, and apply an efficient algorithm for the simulation of multivariate Pareto records. A key role is played by minima of the record-setting region (we call these generators) each time a new record is generated, and two highlights of our work are (i) efficient dynamic maintenance of the set of generators and (ii) asymptotic analysis of the expected number of generators at each time." }
1901.08232
1901.05621
Introduction and main result
Although our attention in this paper will be focused on dimension d = 2 (see #REFR Conj.
[ "This paper proves an interesting phenomenon concerning the breaking of bivariate records first observed empirically by Daniel Q.", "Naiman, whom we thank for an introduction to the problem considered.", "We begin with some relevant definitions, taken (with trivial changes) from [4; 3] ." ]
[ "2.2] for general d), and the approach we utilize seems to be limited to the bivariate case, we begin by giving definitions that apply for general dimension d.", "Let 1(E) = 1 or 0 according as E is true or false.", "We write ln or L for natural logarithm, lg for binary logarithm, and log when the base doesn't matter. For d-dimensional vectors x = (x 1 , . . .", ", x d ) and y = (y 1 , . . .", ", y d ), write x ≺ y to mean that x j < y j for j = 1, . . . , d. The notation x ≻ y means y ≺ x." ]
[ "dimension" ]
background
{ "title": "Breaking Bivariate Records", "abstract": "Abstract. We establish a fundamental property of bivariate Pareto records for independent observations uniformly distributed in the unit square. We prove that the asymptotic conditional distribution of the number of records broken by an observation given that the observation sets a record is Geometric with parameter 1/2." }
{ "title": "Generating Pareto records", "abstract": "Abstract. We present, (partially) analyze, and apply an efficient algorithm for the simulation of multivariate Pareto records. A key role is played by minima of the record-setting region (we call these generators) each time a new record is generated, and two highlights of our work are (i) efficient dynamic maintenance of the set of generators and (ii) asymptotic analysis of the expected number of generators at each time." }
2001.08516
1606.08766
IV. PARALLEL STRING SORTING BASED ON ATOMIC PARALLEL QUICKSORT
This algorithm, hQuick, is a rather straightforward adaptation of an atomic sorting algorithm based on a Quicksort variant introduced in #REFR .
[ "This section serves two purposes.", "We describe a simple parallel string sorting algorithm whose analysis can serve as a basis for comparing it with the more sophisticated algorithms below.", "We also use this algorithm as a subroutine in the others." ]
[ "We therefore only outline it, focusing on the changes needed for string sorting. Let d = ⌊log p⌋.", "The algorithm employs only 2 d ≥ p/2 PEs which it logically arranges as a d-dimensional hypercube.", "The algorithm starts by moving each input string to a random hypercube node. hQuick proceeds in d iterations. In iteration i = d, . . .", ", 1, the remaining task is to sort the data within i-dimensional subcubes of this hypercube.", "To establish the loop invariant for the next iteration, a pivot string s is determined as a good approximation of the median of the strings within each subcube." ]
[ "atomic sorting algorithm" ]
method
{ "title": "Communication-Efficient String Sorting", "abstract": "There has been surprisingly little work on algorithms for sorting strings on distributed-memory parallel machines. We develop efficient algorithms for this problem based on the multi-way merging principle. These algorithms inspect only characters that are needed to determine the sorting order. Moreover, communication volume is reduced by also communicating (roughly) only those characters and by communicating repetitions of the same prefixes only once. Experiments on up to 1280 cores reveal that these algorithm are often more than five times faster than previous algorithms." }
{ "title": "Robust Massively Parallel Sorting", "abstract": "Abstract-We investigate distributed memory parallel sorting algorithms that scale to the largest available machines and are robust with respect to input size and distribution of the input elements. The main outcome is that three sorting algorithms cover the entire range of possible input sizes. For all three algorithms we devise new low overhead mechanisms to make them robust with respect to duplicate keys. The one for medium sized inputs is a new variant of quicksort with fast high-quality pivot selection. Asymptotic analysis at the same time provides performance guarantees and guides the selection and configuration of the algorithms. We validate these hypotheses using extensive experiments on 7 algorithms, 10 input distributions, up to 262 144 cores, and varying the input sizes over 9 orders of magnitude. For \"difficult\" input distributions, our algorithms are the only ones working at all. For all but the largest input sizes, we are the first to perform experiments on such large machines at all and our algorithms significantly outperform the ones on would conventionally have considered." }
1612.02534
1510.08973
Visual Analogy Results
We do not compare with the approach proposed in #REFR as their approach requires training on a large number of labeled quadruples of images. However, our approach does not need any labels.
[ "In total, we can build 428 types of instantiation of analogy questions with the shared properties (attributes) across categories.", "For each type of instantiation, we generate 10 questions so 4280 analogy questions are generated altogether.", "For each question, there are 5 cor-rect answers in the answer image set and other 185 images are served as distractor images.", "Note that when answering these questions, our approach only looks at the given images without being exposed to any category or property labels.", "Baseline: We compare with the baseline approach used in #OTHEREFR ." ]
[ "The baseline approach uses the subtraction (difference) between image features to capture the mapping between images.", "Then they compare the mapping between I 1 and I 2 to the mapping between I 3 to a potential answer image to evaluate the possibility of the answer image being correct.", "Concretely, given a visual analogy question I 1 : I 2 :: I 3 :?, they rank all the potential answer images {I k } according to the following score:", "where T (I i , I j ) is defined as", "where x i and x j denote the image feature of image I i and I j respectively." ]
[ "images" ]
method
{ "title": "Contextual Visual Similarity", "abstract": "Measuring visual similarity is critical for image understanding. But what makes two images similar? Most existing work on visual similarity assumes that images are similar because they contain the same object instance or category. However, the reason why images are similar is much more complex. For example, from the perspective of category, a black dog image is similar to a white dog image. However, in terms of color, a black dog image is more similar to a black horse image than the white dog image. This example serves to illustrate that visual similarity is ambiguous but can be made precise when given an explicit contextual perspective. Based on this observation, we propose the concept of contextual visual similarity. To be concrete, we examine the concept of contextual visual similarity in the application domain of image search. Instead of providing only a single image for image similarity search (e.g., Google image search), we require three images. Given a query image, a second positive image and a third negative image, dissimilar to the first two images, we define a contextualized similarity search criteria. In particular, we learn feature weights over all the feature dimensions of each image such that the distance between the query image and the positive image is small and their distances to the negative image are large after reweighting their features. The learned feature weights encode the contextualized visual similarity specified by the user and can be used for attribute specific image search. We also show the usefulness of our contextualized similarity weighting scheme for different tasks, such as answering visual analogy questions and unsupervised attribute discovery." }
{ "title": "VISALOGY: Answering Visual Analogy Questions", "abstract": "In this paper, we study the problem of answering visual analogy questions. These questions take the form of image A is to image B as image C is to what. Answering these questions entails discovering the mapping from image A to image B and then extending the mapping to image C and searching for the image D such that the relation from A to B holds for C to D. We pose this problem as learning an embedding that encourages pairs of analogous images with similar transformations to be close together using convolutional neural networks with a quadruple Siamese architecture. We introduce a dataset of visual analogy questions in natural images, and show first results of its kind on solving analogy questions on natural images." }
1703.06907
1412.7122
C. Bridging the reality gap
In #REFR , the authors find that by pretraining a network on ImageNet and fine-tuning on synthetic data created from 3D models, better detection performance on the PASCAL dataset can be achieved than by training with only a few labeled examples from the real dataset.
[ "In #OTHEREFR , the authors suggested that transferability can be achieved by randomly varying the items in the implementation set -model parameters that are not essential to the controller achieving near-optimal performance.", "Our work can be interpreted in this framework by considering the rendering aspects of the simulator (lighting, texture, etc) as part of the implementation set.", "Researchers in computer vision have used 3D models as a tool to improve performance on real images since the earliest days of the field (e.g., #OTHEREFR ).", "More recently, 3D models have been used to augment training data to aid transferring deep neural networks between datasets and prevent over-fitting on small datasets for tasks like viewpoint estimation #OTHEREFR and object detection #OTHEREFR , #OTHEREFR .", "Recent work has explored using only synthetic data for training 2D object detectors (i.e., predicting a bounding box for objects in the scene)." ]
[ "In contrast to our work, most object detection results in computer vision use realistic textures, but do not create coherent 3D scenes.", "Instead, objects are rendered against a solid background or a randomly chosen photograph.", "As a result, our approach allows our models to understand the 3D spatial information necessary for rich interactions with the physical world.", "Sadeghi and Levine's work #OTHEREFR is the most similar to our own.", "The authors demonstrate that a policy mapping images to controls learned in a simulator with varied 3D scenes and textures can be applied successfully to real-world quadrotor flight." ]
[ "ImageNet" ]
background
{ "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "abstract": "Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1701.05524
1412.7122
Related Work
To mitigate this drawback, #REFR proposes to directly add auxiliary texture and background to the rendered results, with the help of commercial software (e.g. Autodesk 3ds Max).
[ "CAD Simulation CAD simulation has been extensively used by researchers since the early days of computer vision #OTHEREFR .", "3D CAD models have been utilized to generate stationary synthetic images with variable object poses, textures, and backgrounds #OTHEREFR .", "Recent usage of CAD simulation has been extended to multiple vision tasks, e.g.", "object detection #OTHEREFR , pose estimation #OTHEREFR , robotic simulation #OTHEREFR , semantic segmentation #OTHEREFR .", "However, for many tasks, CAD-synthetic images are too low-quality due to the absence of realistic backgrounds and texture." ]
[ "However, this method introduces new problems, such as unnatural positioning of objects (e.g.", "floating car above the road), high contrast between object boundaries and background, etc.", "Our approach tackles these problems by synthesizing novel imagery with DGCAN and can generate images with natural feature statistics.", "DCNN Image Synthesis Deep convolutional neural networks learn distributed, invariant and nonlinear feature representations from large-scale image repositories #OTHEREFR .", "Generative Adversarial Networks (GANs) #OTHEREFR and their variations #OTHEREFR aim to synthesize images that are indistinguishable from the distribution of images in their training set." ]
[ "auxiliary texture", "rendered results" ]
method
{ "title": "Synthetic to Real Adaptation with Generative Correlation Alignment Networks", "abstract": "Synthetic images rendered from 3D CAD models are useful for augmenting training data for object recognition algorithms. However, the generated images are nonphotorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has not utilized generative models to address syntheticto-real domain adaptation. In this work, we propose a Deep Generative Correlation Alignment Network (DGCAN) to synthesize images using a novel domain adaption algorithm. DGCAN leverages a shape preserving loss and a low level statistic matching loss to minimize the domain discrepancy between synthetic and real images in deep feature space. Experimentally, we show training off-the-shelf classifiers on the newly generated data can significantly boost performance when testing on the real image domains (PAS-CAL VOC 2007 benchmark and Office dataset), improving upon several existing methods." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1701.05524
1412.7122
Experiments
First, we apply DGCAN to the CAD-synthetic dataset provided by #REFR to synthesize adapted DGCAN-synthetic images.
[ "Our experiments include two parts." ]
[ "Second, we train off-the-shelf classifiers on the DGCAN-synthetic images and test on the PASCAL 2007 #OTHEREFR and Office #OTHEREFR datasets.", "We implement our model with the Caffe #OTHEREFR (2) The right plot illustrates the reconstruction results generated by using the tools provided by #OTHEREFR .", "The reconstructions reveal that our DGCAN-synthetic images share more similarities with real images from the DCNN's perspective. The (uniform gray-scale) CAD-synthetic images only provides edge information.", "Thus, the pixels in the reconstructed images are dominated by the rich color and texture information encoded in the DCNN's parameters. (Best viewed in color!) framework.", "Datasets (both CAD-synthetic and DGCANsynthetic), code and experimental configurations will be made available publicly." ]
[ "CAD-synthetic dataset" ]
method
{ "title": "Synthetic to Real Adaptation with Generative Correlation Alignment Networks", "abstract": "Synthetic images rendered from 3D CAD models are useful for augmenting training data for object recognition algorithms. However, the generated images are nonphotorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has not utilized generative models to address syntheticto-real domain adaptation. In this work, we propose a Deep Generative Correlation Alignment Network (DGCAN) to synthesize images using a novel domain adaption algorithm. DGCAN leverages a shape preserving loss and a low level statistic matching loss to minimize the domain discrepancy between synthetic and real images in deep feature space. Experimentally, we show training off-the-shelf classifiers on the newly generated data can significantly boost performance when testing on the real image domains (PAS-CAL VOC 2007 benchmark and Office dataset), improving upon several existing methods." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1701.05524
1412.7122
Introduction
Previous work #REFR utilized computer graphics (CG) techniques to render 2D CAD-synthetic images and subsequently train deep CNN-based classifiers on them.
[ "Rendering images from freely available CAD models can potentially produce an infinite number of training examples from many viewpoints and Figure 1 .", "Overview of our approach CAD-synthetic images rendered by CAD simulation can produce cheap training data for deep models, but the resulting performance on real test data is low due to the domain mismatch.", "We propose the Deep Generative Correlation Alignment Network (DGCAN) to bridge the domain gap between CAD-synthetic and real images.", "DGCAN can simultaneously generate the object shape from CAD models (by applying 2 loss to the hidden neuron activations) and structured natural texture matching real background images (by applying the CORAL loss to the neuron covariance matrices.) We then train a deep classifier on the DGCAN-rendered images and test them on the real domain, demonstrating a significant improvement over existing methods.", "for almost any object category." ]
[ "However, their CAD-synthetic images ( Figure 1 ) are highly non-realistic due to the absence of natural object texture and background.", "More specifically, they exhibit the following problems: 1) large mismatch between foreground and background (e.g.", "cars floating above the road), 2) higher contrast between the object edges and the background (e.g. poorly-blended aeroplane in the sky), 3) non-photorealistic scenery.", "These problems inevitably lead to a very significant domain shift between CAD-synthetic and real images, and models trained on the synthetic domain have poor performance on real test images #OTHEREFR .", "Generative neural networks have recently been proposed to create novel imagery that shares common properties with real images, such as content and style #OTHEREFR , similarity in feature space #OTHEREFR , etc." ]
[ "deep CNN-based classifiers" ]
method
{ "title": "Synthetic to Real Adaptation with Deep Generative Correlation Alignment Networks", "abstract": "Synthetic images rendered from 3D CAD models have been used in the past to augment training data for object recognition algorithms. However, the generated images are non-photorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has rarely addressed synthetic-to-real domain adaptation. Inspired by these ideas, we propose the Deep Generative Correlation Alignment Network (DGCAN) to synthesize training images using a novel domain adaption algorithm. DGCAN leverages the 2 and the correlation alignment (CORAL) losses to minimize the domain discrepancy between generated and real images in deep feature space. The rendered results demonstrate that DGCAN can synthesize the object shape from 3D CAD models together with structured texture from a small amount of real background images. Experimentally, we show that training classifiers on the generated data can significantly boost performance when testing on the real image domain (PASCAL VOC 2007 and Office benchmark), improving upon several existing methods." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1701.05524
1412.7122
Experiments on Office Dataset
Train/Test Set Acquisition: We apply our model to 775 CAD-synthetic images provided by #REFR to generate the training dataset.
[ "Previous literature #OTHEREFR demonstrates that the image recognition system will encounter severe performance degradation when the test images come from a different visual domain.", "We evaluate the domain generalization ability of our approach by adapting synthetic domain to Amazon domain (downloaded from amazon.com) from Office Dataset #OTHEREFR ." ]
[ "The test set comes from Office Dataset #OTHEREFR , which has same 31 categories (e.g. backpack, cups, etc.) in three domains, i.e.", "Amazon, Webcam (collected by webcam) and DSLR (collected by DSLR camera).", "Specifically, we use the Amazon subset (2817 images) as the test set in our experiments for adapting other domains to Amazon domain is the most challenging setting.", "Baselines We compare our approach to two track of baselines, with one track trained on real image domain (Webcam domain, 795 images) and the other trained on CADsynthetic domain (775 images).", "In both tracks, we compare our model to basic \"AlexNet\" #OTHEREFR model or domain adaptation algorithms #OTHEREFR ." ]
[ "775 CAD-synthetic images" ]
method
{ "title": "Synthetic to Real Adaptation with Deep Generative Correlation Alignment Networks", "abstract": "Synthetic images rendered from 3D CAD models have been used in the past to augment training data for object recognition algorithms. However, the generated images are non-photorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has rarely addressed synthetic-to-real domain adaptation. Inspired by these ideas, we propose the Deep Generative Correlation Alignment Network (DGCAN) to synthesize training images using a novel domain adaption algorithm. DGCAN leverages the 2 and the correlation alignment (CORAL) losses to minimize the domain discrepancy between generated and real images in deep feature space. The rendered results demonstrate that DGCAN can synthesize the object shape from 3D CAD models together with structured texture from a small amount of real background images. Experimentally, we show that training classifiers on the generated data can significantly boost performance when testing on the real image domain (PASCAL VOC 2007 and Office benchmark), improving upon several existing methods." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1612.02559
1412.7122
Related work
This is done by leveraging 3D motion capture (MoCap) data. In #REFR , Peng et al.
[ "In #OTHEREFR , Chatfield and Zisserman demonstrate that the augmentation techniques of #OTHEREFR are not only beneficial for training deep architectures, but shallow learning approaches equally benefit from such simple and generic schemes.", "In the second category of guided-augmentation techniques, many approaches have recently been proposed.", "In #OTHEREFR , e.g., Charalambous and Bharath employ guidedaugmentation in the context of gait recognition.", "The authors suggest to simulate synthetic gait video data (obtained from avatars) with respect to various confounding factors (such as clothing, hair, etc.) to extend the training corpus.", "Similar in spirit, Rogez and Schmid #OTHEREFR propose an image-based synthesis engine for augmenting existing 2D human pose data by photorealistic images with greater pose variability." ]
[ "also use 3D data, in the form of CAD models, to render synthetic images of objects (with varying pose, texture, background) that are then used to train CNNs for object detection.", "It is shown that synthetic data is beneficial, especially in situations where few (or no) training instances are available, but 3D CAD models are. Su et al.", "#OTHEREFR follow a similar pipeline of rendering images from 3D models for viewpoint estimation, however, with substantially more synthetic data obtained, e.g., by deforming existing 3D models before rendering.", "Another (data-driven) guided augmentation technique is introduced by Hauberg et al. #OTHEREFR .", "The authors propose to learn class-specific transformations from external training data, instead of manually specifying transformations as in #OTHEREFR ." ]
[ "3D motion capture" ]
method
{ "title": "AGA: Attribute-Guided Augmentation", "abstract": "We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows synthesis of data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to an external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transferlearning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of highlevel CNN features considerably improves one-shot recognition performance on both problems." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1612.02559
1412.7122
One-shot object recognition
The latter is important, since #REFR assumes the existence of 3D CAD models for objects in T_i from which synthetic images can be rendered.
[ "As a Baseline, we \"train\" a linear C-SVM (on 1-norm normalized features) using only the single instances of each object class in T i (SVM cost fixed to 10).", "Exactly the same parameter settings of the SVM are then used to train on the single instances + features synthesized by AGA.", "We repeat the selection of one-shot instances 500 times and report the average recognition accuracy.", "For comparison, we additionally list 5-shot recognition results in the same setup. Remark.", "The design of this experiment is similar to #OTHEREFR Section 4.3.] , with the exceptions that we (1) do not detect objects, #OTHEREFR augmentation is performed in feature space and (3) no object-specific information is available." ]
[ "In our case, augmentation does not require any a-priori information about the objects classes. Results.", "Table 3 lists the classification accuracy for the different sets of one-shot training data.", "First, using original one-shot instances augmented by Depth-guided features (+D); second, using original features + Pose-guided features (+P) and third, a combination of both (+D, P); In general, we observe that adding AGA-synthesized features improves recognition accuracy over the Baseline in all cases.", "#OTHEREFR at 5% significance (indicated by ' ' in Table 3 ).", "Adding both Depth-and Pose-augmented features to the original one-shot features achieves the greatest improvement in recognition accuracy, ranging from 4-6 percentage points." ]
[ "synthetic images", "3D CAD models" ]
background
{ "title": "AGA: Attribute-Guided Augmentation", "abstract": "We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows synthesis of data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to an external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transferlearning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of highlevel CNN features considerably improves one-shot recognition performance on both problems." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1607.02046
1412.7122
Introduction
Although synthesis seems like an appealing solution, there often exists a large domain shift from synthetic to real data #REFR .
[ "Recent work #OTHEREFR has introduced the use of data synthesis as a solution to train CNNs when only limited data is available.", "Synthesis can potentially provide infinite training data by rendering 3D CAD models from any camera viewpoint #OTHEREFR .", "Fisher et al #OTHEREFR generate a synthetic \"Flying Chairs\"dataset to learn optical flow with CNN and show that networks trained on this unrealistic data still generalize very well to existing datasets.", "In the context of scene text recognition, Jaderberg et al #OTHEREFR trained solely on data produced by a synthetic text generation engine.", "In this case, the synthetic data is highly realistic and sufficient to replace real data." ]
[ "Integrating a human 3D model in a given background in a realistic way is not trivial.", "Rendering a collection of photo-realistic images (in terms of color, texture, context, shadow) that would cover the variations in pose, body shape, clothing and scenes is a challenging task.", "Instead of rendering a human 3D model, we propose an image-based synthesis approach that makes use of Motion Capture (MoCap) data to augment an existing dataset of real images with 2D pose annotations.", "Our system synthesizes a very large number of new in-the-wild images showing more pose configurations and, importantly, it provides the corresponding 3D pose annotations (see Fig. 1 ).", "For each candidate 3D pose in the MoCap library, our system combines several annotated images to generate a synthetic image of a human in this particular pose." ]
[ "synthesis", "large domain shift" ]
background
{ "title": "MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild", "abstract": "In this paper, we address the problem of 3D human pose understanding in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D pose. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images and 2D human pose annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such approach is viable only with large training sets such as ours. Our method outperforms state-of-the-art in terms of 3D pose estimation in controlled environments (Human3.6M), showing promising results for in-the-wild images (LSP)." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1807.07428
1412.7122
Related Work
However, a major difficulty for models trained on synthetic images is to guarantee that they will generalize well to real data, since the synthesis process introduces significant changes in image statistics #REFR .
[ "Data augmentation is a major tool to train deep neural networks.", "If varies from trivial geometrical transformations such as horizontal flipping, cropping with color perturbations, and adding noise to an image #OTHEREFR , to synthesizing new training images #OTHEREFR .", "Some recent object detectors #OTHEREFR benefit from standard data augmentation techniques more than others #OTHEREFR .", "The performance of Fast-and Faster-RCNN could be for instance increased by simply corrupting random parts of an image in order to mimic occlusions #OTHEREFR .", "Regarding image synthesis, recent works such as #OTHEREFR build and train their models on purely synthetic rendered 2d and 3d scenes." ]
[ "To address this issue, the authors of #OTHEREFR adopt a different direction by pasting real segmented object into natural images, which reduces the presence of rendering artefacts.", "For object instance detection, the work #OTHEREFR estimates scene geometry and spatial layout, before synthetically placing objects in the image to create realistic training examples.", "In #OTHEREFR , the authors propose an even simpler solution to the same problem by pasting images in random positions but modeling well occluded and truncated objects, and making the training step robust to boundary artifacts at pasted locations." ]
[ "synthetic images" ]
background
{ "title": "Modeling Visual Context is Key to Augmenting Object Detection Datasets", "abstract": "Abstract. Performing data augmentation for learning deep neural networks is well known to be important for training visual recognition systems. By artificially increasing the number of training examples, it helps reducing overfitting and improves generalization. For object detection, classical approaches for data augmentation consist of generating images obtained by basic geometrical transformations and color changes of original training images. In this work, we go one step further and leverage segmentation annotations to increase the number of object instances present on training data. For this approach to be successful, we show that modeling appropriately the visual context surrounding objects is crucial to place them in the right environment. Otherwise, we show that the previous strategy actually hurts. With our context model, we achieve significant mean average precision improvements when few labeled examples are available on the VOC'12 benchmark." }
{ "title": "Learning Deep Object Detectors from 3D Models", "abstract": "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark." }
1203.1780
0804.1028
Proposition 2.15
These polynomials are classical in combinatorics, and known as the Narayana polynomials, see for example #REFR . We will call ca_{n,t} a t-Narayana fraction.
[ "The statement on outputs follows easily by inspection of the bijection.", "The bijection is illustrated in figure 5 .", "Let ca n,t be the generating series E Lnrn,t and let ca n be the polynomial E Lnrn .", "The first few values of ca n,t are", "From the bijection above, it follows that ca n counts Dyck paths according to the number of peaks." ]
[ "Let us introduce ordinary generating series", "and let E c be the similar series for closed connected flows on linear trees.", "The analogous series for small flows are just x(1 + E) and x(1 + E t ), because a small flow on Lnr n+1 can be described by a flow on Lnr n .", "From the combinatorial decomposition used in the bijection with Dyck paths, one deduces that", "By decomposing a flow according to whether the root is an output or not, one obtains the equation" ]
[ "Narayana polynomials" ]
background
{ "title": "Flows on rooted trees and the Narayana idempotents", "abstract": "Several generating series for flows on rooted trees are introduced, as elements in the group of series associated with the Pre-Lie operad. By combinatorial arguments, one proves identities that characterise these series. One then gives a complete description of the image of these series in the group of series associated with the Dendriform operad. This allows to recover the Lie idempotents in the descent algebras recently introduced by Menous, Novelli and Thibon. Moreover, one defines new Lie idempotents and conjecture the existence of some others." }
{ "title": "NARAYANA NUMBERS AND SCHUR-SZEGŐ COMPOSITION", "abstract": "Abstract. In the present paper we find a new interpretation of Narayana polynomials Nn(x) which are the generating polynomials for the Narayana numbers N n,k =" }
1504.02321
0804.1028
Proof:
Remark 21. It is shown in paper #REFR (see part (1) of Corollary 7 there) that the roots of the Narayana polynomials N_n and N_{n−1} interlace, i.e. satisfy the first interlacing property.
[ "(ζ i ) = 0.", "Recall that ζ 2 i − a 2 < 0, ζ i < 0.", "This means that the signs of G (k+2) n (ζ i ) and G (k+1) n (ζ i ) are opposite from where the theorem follows easily.", "✷ For Narayana polynomials the following recurrence relation holds (see #OTHEREFR ):", "(n + 1)N n (x) = (2n − 1)(1 + x)N n−1 (x) − (n − 2)(x − 1) 2 N n−2 (x)" ]
[]
[ "Narayana polynomials" ]
background
{ "title": "Interlacing properties and the Schur-Szeg\\H{o} composition", "abstract": "Each degree n polynomial in one variable of the form (x+1)(x n−1 +c 1 x n−2 +· · ·+c n−1 ) is representable in a unique way as a Schur-Szegő composition of n − 1 polynomials of the form (x + 1) n−1 (x + a i ), see [5] , [2] and [7] . Set σ j := 1≤i1<···<ij ≤n−1 a i1 · · · a ij . The eigenvalues of the affine mapping (c 1 , . . . , c n−1 ) → (σ 1 , . . . , σ n−1 ) are positive rational numbers and its eigenvectors are defined by hyperbolic polynomials (i.e. with real roots only). In the present paper we prove interlacing properties of the roots of these polynomials." }
{ "title": "NARAYANA NUMBERS AND SCHUR-SZEGŐ COMPOSITION", "abstract": "Abstract. In the present paper we find a new interpretation of Narayana polynomials Nn(x) which are the generating polynomials for the Narayana numbers N n,k =" }
1608.06336
1308.0345
J(Θ, T ; X(Θ, 0)) = E[L(Θ, T ; X(Θ, 0))]
In the rest of this paper, we will consider two families of trajectories motivated by a similar approach used in the multiagent persistent monitoring problem in #REFR : elliptical trajectories and a more general Fourier series trajectory representation better suited for nonuniform target topologies.
[ "where L(Θ, T ; X(Θ, 0)) is a sample function defined over [0, T ] and X(Θ, 0) is the initial value of the state vector.", "For convenience, in the following, we will use L i , i = 1, . . .", ", 4, and L f to denote sample functions of J i , i = 1, . . . , 4, and J f , respectively.", "Note that, in (37), we suppress the dependence of the four objective function components on the controls u(t) and θ(t) and stress instead their dependence on the parameter vector Θ." ]
[ "The hybrid dynamics of the data harvesting system allow us to apply the theory of IPA #OTHEREFR to obtain online the gradient of the sample function L(Θ, T ; X(Θ, 0)) with respect to Θ.", "The value of the IPA approach is twofold: 1) The sample gradient ∇L(Θ, T ) can be obtained online based on observable sample path data only; and 2) ∇L(Θ, T ) is an unbiased estimate of ∇J(Θ, T ) under mild technical conditions, as shown in #OTHEREFR .", "Therefore, we can use ∇L(Θ, T ) in a standard gradient-based stochastic optimization algorithm", "to converge (at least locally) to an optimal parameter vector Θ * with a proper selection of a step-size sequence {ν l } #OTHEREFR .", "We emphasize that this process is carried out online, that is, the gradient is evaluated by observing a trajectory with given Θ over [0, T ] and is iteratively adjusted until convergence is attained." ]
[ "multiagent persistent monitoring" ]
method
{ "title": "Event-Driven Trajectory Optimization for Data Harvesting in Multiagent Systems", "abstract": "Abstract-We propose a new event-driven method for online trajectory optimization to solve the data harvesting problem: in a 2-D mission space, N mobile agents are tasked with the collection of data generated at M stationary sources and delivery to a base with the goal of minimizing expected collection and delivery delays. We define a new performance measure that addresses the event excitation problem in event-driven controllers and formulate an optimal control problem. The solution of this problem provides some insight on its structure, but it is computationally intractable, especially in the case where the data generating processes are stochastic. We propose an agent trajectory parameterization in terms of general function families, which can be subsequently optimized online through the use of infinitesimal perturbation analysis. Properties of the solutions are identified, including robustness with respect to the stochastic data generation process and scalability in the size of the event set characterizing the underlying hybrid dynamical system. Explicit results are provided for the case of elliptical and Fourier series trajectories, and comparisons with a state-of-the-art graph-based algorithm are given." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1911.04297
1308.0345
D. Optimal Control Problem
Note that the problem setting in #REFR , where σ_i = 1, for i = 1, ..., M, is a special case of our setting here.
[ "Our goal is to minimize the uncertainty accumulated across all target points.", "We define J 1 (t) to be the weighted sum of target uncertainties:", "The weight coefficients σ i are set to capture the relative importance of different targets." ]
[ "Moreover, in contrast to #OTHEREFR where each agent is represented as a point mass and collisions among agents are ignored, we will consider the sizes of agents in this work.", "Note that to avoid collisions in persistent monitoring settings, any two agents cannot share the same location at the same time instant.", "Considering the size of each agent, for agent n we define a safety radius ρ n > 0, and the corresponding safety disk Q n = {x ∈ Ω | x − s n (t) ≤ ρ n }.", "We consider that a collision occurs between agents p and q at some location only if Q p ∩ Q q = ∅.", "Obviously, to avoid agent collisions, we must ensure that the Euclidean distance d pq (t) = s p (t) − s q (t) ≥ ρ p + ρ q at all times. To capture the collisions among agents, we define" ]
[ "problem" ]
background
{ "title": "Collision-Free Trajectory Design for 2D Persistent Monitoring Using Second-Order Agents", "abstract": "This paper considers a two-dimensional persistent monitoring problem by controlling movements of second-order agents to minimize some uncertainty metric associated with targets in a dynamic environment. In contrast to common sensing models depending only on the distance from a target, we introduce an active sensing model which considers the distance between an agent and a target, as well as the agent's velocity. We propose an objective function which can achieve a collision-free agent trajectory by penalizing all possible collisions. Applying structural properties of the optimal control derived from the Hamiltonian analysis, we limit agent trajectories to a simpler parametric form under a family of 2D curves depending on the problem setting, e.g. ellipses and Fourier trajectories. Our collision-free trajectories are optimized through an event-driven Infinitesimal Perturbation Analysis (IPA) and gradient descent method. Although the solution is generally locally optimal, this method is computationally efficient and offers an alternative to other traditional time-driven methods. Finally, simulation examples are provided to demonstrate our proposed results." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1911.04297
1308.0345
IV. AGENT TRAJECTORY PARAMETERIZATION AND OPTIMIZATION
The result of #REFR indicates that under some assumptions an elliptical trajectory outperforms a linear one when using the average uncertainty metric as a comparison criterion.
[]
[ "In fact, elliptical trajectories degenerate to linear ones when the minor axis of the ellipse becomes zero.", "Based on the result that elliptical trajectories are smooth and periodic, and are more suitable for 2D monitoring #OTHEREFR , we also select them for agents to execute the monitoring task.", "Under the optimal control derived in Section III, the agent first accelerates along the elliptical trajectory with the maximal acceleration u max n .", "Ever since it reaches the maximal velocity, it maintains the maximal velocity along the monitoring task." ]
[ "elliptical trajectory" ]
method
{ "title": "Collision-Free Trajectory Design for 2D Persistent Monitoring Using Second-Order Agents", "abstract": "This paper considers a two-dimensional persistent monitoring problem by controlling movements of second-order agents to minimize some uncertainty metric associated with targets in a dynamic environment. In contrast to common sensing models depending only on the distance from a target, we introduce an active sensing model which considers the distance between an agent and a target, as well as the agent's velocity. We propose an objective function which can achieve a collision-free agent trajectory by penalizing all possible collisions. Applying structural properties of the optimal control derived from the Hamiltonian analysis, we limit agent trajectories to a simpler parametric form under a family of 2D curves depending on the problem setting, e.g. ellipses and Fourier trajectories. Our collision-free trajectories are optimized through an event-driven Infinitesimal Perturbation Analysis (IPA) and gradient descent method. Although the solution is generally locally optimal, this method is computationally efficient and offers an alternative to other traditional time-driven methods. Finally, simulation examples are provided to demonstrate our proposed results." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1911.02658
1308.0345
II. PROBLEM FORMULATION
However, in our formulation above, we have shown that it can be done via using a simple discrete event system model #REFR .
[ "to update the TCP Θ iteratively.", "In #OTHEREFR , the projection operator [·] + = max{0, ·} is used.", "The step size β (l) is selected such that it diminishes according to the standard conditions ∑ ∞ l=1 β (l) = ∞ and lim l→∞ β (l) = 0 #OTHEREFR .", "Note that each iteration l of (8) uses the data collected from a single trajectory (i.e., ∀t ∈ [0, T ]) to evaluate ∇J T (Θ (l) ).", "The work in #OTHEREFR uses a hybrid system model to construct realizations of this persistent monitoring system." ]
[ "The use of a DES model results in faster and efficient simulations and provides more intuition about the underlying decision making process.", "However, this modeling discrepancy will not affect any of our comparisons/conclusions made with respect to #OTHEREFR .", "Initialization: Θ (0) : In #OTHEREFR a randomly generated set of values is used to initialize thresholds Θ (0) for #OTHEREFR .", "Due to the non-convexity of the objective function in #OTHEREFR , we expect that the resulting value of Θ when (8) converges depends on Θ (0) .", "Therefore, identifying well-performing initial thresholds will generally provide significant improvements over the local minimum resulting from randomly selected ones." ]
[ "system model" ]
method
{ "title": "Asymptotic Analysis for Greedy Initialization of Threshold-Based Distributed Optimization of Persistent Monitoring on Graphs", "abstract": "We consider the optimal multi-agent persistent monitoring problem defined for a team of agents on a set of nodes (targets) interconnected according to a fixed graph topology. The objective is to minimize a measure of mean overall node state uncertainty evaluated over a finite time interval. In prior work, a class of distributed threshold-based parametric controllers has been proposed where agent dwell times at nodes and transitions from one node to the next are controlled by enforcing thresholds on the respective node uncertainties. Under such a threshold policy, on-line gradient-based techniques (such as the Infinitesimal Perturbation Analysis (IPA)) are then used to determine optimal threshold values. However, due to the non-convexity of the problem, this approach leads to often poor local optima highly dependent on the initial thresholds used. To overcome this initialization challenge, in this paper, the asymptotic steadystate behavior of the agent-target system is extensively analyzed. Based on the obtained theoretical results, a computationally efficient off-line greedy technique is developed to systematically generate initial thresholds. Extensive numerical results show that the initial thresholds provided by this greedy technique are almost immediately (locally) optimal or quickly lead to optimal values. In all cases, they perform significantly better than the locally optimal solutions known to date." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1803.02798
1308.0345
Definition 1. The neighborhood of node i is the set
All agent behaviors are therefore entirely governed by Θ through #REFR , which also implicitly determines the dwell time of the agent at node i.
[ ", D i , i.e., the neighbors are ordered based on their relative proximity to node i.", "We now define the threshold-based control to specify u a (t; Θ) in (6) as follows:", "Under (8), the agent first decreases R i (t) below the threshold θ a ii before moving to another node in the neighbor set N a i with the minimum index k whose associated state uncertainty value exceeds the threshold θ a ij k", ".", "If no such neighbor exists, the agent remains at the current node maintaining its uncertainty state under the given threshold level." ]
[ "(8) is designed to be distributed by considering only the states of neighboring nodes and not those of other nodes or of other agents.", "As such, it is limited to a one-step look-ahead policy.", "However, it can be extended to a richer family of more general multi-step lookahead policies based on node uncertainty state thresholds.", "While this causes the dimensionality of Θ to increase, the optimization framework presented in Sec. III is not affected." ]
[ "agent behaviors" ]
background
{ "title": "Optimal Threshold-Based Control Policies for Persistent Monitoring on Graphs", "abstract": "Abstract-We consider the optimal multi-agent persistent monitoring problem defined by a team of cooperating agents visiting a set of nodes (targets) on a graph with the objective of minimizing a measure of overall node state uncertainty. The solution to this problem involves agent trajectories defined both by the sequence of nodes to be visited by each agent and the amount of time spent at each node. Since such optimal trajectories are generally intractable, we propose a class of distributed threshold-based parametric controllers through which agent transitions from one node to the next are controlled by threshold parameters on the node uncertainty states. The resulting behavior of the agent-target system can be described by a hybrid dynamic system. This enables the use of Infinitesimal Perturbation Analysis (IPA) to determine on line (locally) optimal threshold parameters through gradient descent methods and thus obtain optimal controllers within this family of threshold-based policies. We further show that in a single-agent case the IPA gradient is monotonic, which implies a simple structure whereby an agent visiting a node should reduce the uncertainty state to zero before moving to the next node. Simulation examples are included to illustrate our results and compare them to optimal solutions derived through dynamic programming when this is possible." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1803.02798
1308.0345
IV. ONE-AGENT CASE ANALYSIS
Recalling our control policy in #REFR , the diagonal entries in the parameter matrix control the dwell times at nodes, whereas the off-diagonal entries control the feasible node visiting sequence.
[]
[ "In what follows, we will show that in a single-agent case the optimal values of diagonal entries in (28) are always zero.", "This structural property indicates that the agent visiting a node should always reduce the uncertainty state to zero before moving to the next node.", "Ignoring the superscript agent index, the single-agent threshold matrix is written as", "Assumption 2. The current node visiting sequence is optimal.", "The first assumption is a technical one and it ensures that the optimization problem is defined over a sufficiently long time horizon T to allow the gradient to converge." ]
[ "feasible node visiting", "control policy" ]
background
{ "title": "Optimal Threshold-Based Control Policies for Persistent Monitoring on Graphs", "abstract": "Abstract-We consider the optimal multi-agent persistent monitoring problem defined by a team of cooperating agents visiting a set of nodes (targets) on a graph with the objective of minimizing a measure of overall node state uncertainty. The solution to this problem involves agent trajectories defined both by the sequence of nodes to be visited by each agent and the amount of time spent at each node. Since such optimal trajectories are generally intractable, we propose a class of distributed threshold-based parametric controllers through which agent transitions from one node to the next are controlled by threshold parameters on the node uncertainty states. The resulting behavior of the agent-target system can be described by a hybrid dynamic system. This enables the use of Infinitesimal Perturbation Analysis (IPA) to determine on line (locally) optimal threshold parameters through gradient descent methods and thus obtain optimal controllers within this family of threshold-based policies. We further show that in a single-agent case the IPA gradient is monotonic, which implies a simple structure whereby an agent visiting a node should reduce the uncertainty state to zero before moving to the next node. Simulation examples are included to illustrate our results and compare them to optimal solutions derived through dynamic programming when this is possible." }
{ "title": "An optimal control approach to the multi-agent persistent monitoring problem in two-dimensional spaces", "abstract": "Ahstract-We address the persistent monitoring problem in two-dimensional (20) mission spaces where the objective is to control the movement of multiple cooperating agents to mini mize an uncertainty metric. In a one-dimensional (10) mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a 20 mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the 10 analysis. We prove, however, that elliptical trajectories outperform linear ones. Therefore, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete solution. Numerical examples are included to illustrate the main result and to compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions." }
1711.09082
1702.06506
Multi-task feature learning
The only exception is the work of #REFR , which learns using surface normals corresponding to real-world images.
[ "More specifically, we formulate the task as a binary semantic edge/non-edge prediction task, and use the classbalanced sigmoid cross entropy loss proposed in #OTHEREFR :", "where E is our predicted edge map, E is the ground-truth edge map, β = |E − |/|E − + E + |, and |E − | and |E + | denote the number of ground-truth edges and non-edges, respectively, i indexes the ground-truth edge pixels, j indexes the ground-truth background pixels, θ denotes the network parameters, and P (y i = 1|θ) and P (y j = 0|θ) are the predicted probabilities for a pixel corresponding to an edge and background, respectively.", "Depth prediction.", "Existing feature learning methods mainly focus on designing 'pre-text' tasks such as predicting the relative position of spatial patches #OTHEREFR or image in-painting #OTHEREFR .", "The underlying physical properties of a scene like its depth or surface normal have been largely unexplored for learning representations." ]
[ "#OTHEREFR Predicting the depth for each pixel in an image requires understanding high-level semantics about objects and their relative placements in a scene; it requires the model to figure out the objects that are closer/farther from the camera, and their shape and pose.", "While real-world depth imagery computed using a depth camera (e.g., the Kinect) can often be noisy, the depth map extracted from a synthetic scene is clean and accurate.", "To train the network to predict depth, we follow the approach of #OTHEREFR , which compares the predicted and ground-truth log depth maps of an image Q = log Y and Q = log Y , where Y and Y are the predicted and ground-truth depth maps, respectively. Their scale-invariant depth prediction loss is:", "where i indexes the pixels in an image, n is the total number of pixels, and d = Q−Q is the element-wise difference between the predicted and ground-truth log depth maps.", "The first term is the L2 difference and the second term tries to enforce errors to be consistent with one another in their sign." ]
[ "real-world images" ]
background
{ "title": "Cross-Domain Self-Supervised Multi-task Feature Learning Using Synthetic Imagery", "abstract": "In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multitask learning requires annotations for multiple properties of the same training instance, we look to synthetic images to train our network. To overcome the domain difference between real and synthetic data, we employ an unsupervised feature space domain adaptation method based on adversarial learning. Given an input synthetic RGB image, our network simultaneously predicts its surface normal, depth, and instance contour, while also minimizing the feature space domain differences between real and synthetic data. Through extensive experiments, we demonstrate that our network learns more transferable representations compared to single-task baselines. Our learned representation produces state-of-the-art transfer learning results on PAS-CAL VOC 2007 classification and 2012 detection." }
{ "title": "PixelNet: Representation of the pixels, by the pixels, and for the pixels", "abstract": ". Our framework applied to three different pixel prediction problems with minor modification of the architecture (last layer) and training process (epochs). Note how our approach recovers the fine details for segmentation (left), surface normal (middle), and semantic boundaries for edge detection (right). We explore design principles for general pixel-level prediction problems, from low-level edge detection to midlevel surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fullyconvolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to (1) add diversity during batch updates, speeding up learning; (2) explore complex nonlinear predictors, improving accuracy; and (3) efficiently train state-of-the-art models tabula rasa (i.e., \"from scratch\") for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset, and edge detection on BSDS." }
1707.07548
1607.08659
II. RELATED WORK
Finally, we demonstrate that our method can be applied on monocular videos, unlike the method in #REFR .
[ "As a general temporal smoothness model, DCT can be applied in any video sequence, without the need of learning from a training dataset.", "Third, in contrast to the sum-of-Gaussian model #OTHEREFR , we use the SMPL #OTHEREFR body model, which naturally encodes the statistical shape and pose dependency between different body parts in a holistic way.", "This enables our method to, not only estimate accurate 3D joint locations, but also a realistic body mesh. This facilitates future modification and animation.", "In comparison, a volumetric skinning approach is utilized in #OTHEREFR to estimate the actor body surface from the Gaussian representation.", "Their surface is coarser and does not allow for detailed deformations." ]
[ "Figure 2 : Automatically estimated 2D joint locations using DeepCut #OTHEREFR and the silhouette estimated via #OTHEREFR ; here shown on the HumanEva dataset #OTHEREFR ." ]
[ "monocular videos" ]
method
{ "title": "Towards Accurate Marker-Less Human Shape and Pose Estimation over Time", "abstract": "Existing markerless motion capture methods often assume known backgrounds, static cameras, and sequence specific motion priors, limiting their application scenarios. Here we present a fully automatic method that, given multi-view videos, estimates 3D human pose and body shape. We take the recently proposed SMPLify method [12] as the base method and extend it in several ways. First we fit a 3D human body model to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours, further improving accuracy. Third we utilize a generic and robust DCT temporal prior to handle the left and right side swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging monocular sequences of dancing from YouTube." }
{ "title": "General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues", "abstract": "Abstract. Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation -skeleton, volumetric shape, appearance, and optionally a body surface -and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume ray casting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, and variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way." }
1707.07548
1607.08659
B. Quantitative comparison
A qualitative comparison between our results and those of #REFR is shown in Figure 1. For more results, please refer to our supplementary material.
[ "In the General case, with only 2 views, our method is more accurate than all the other methods using all 3 views.", "With 3 views we obtain a significant improvement relative to the second best method (55.52 vs 63.25).", "Our method also achieves the lowest error in the Specific case.", "Another advantage of our method over the state-of-the-art is that we return a highly realistic body mesh together with skeleton joints. Though the method proposed by Rhodin et al.", "#OTHEREFR also yields a blob-based 3D mesh, we argue that the underlying SMPL model we use is more realistic." ]
[ "Human3.6M: To further validate the generality and usefulness of MuVS, we also evaluate it on Human3.6M #OTHEREFR .", "Human3.6M is the largest public dataset for pose estimation, composed of a wide range of motion types, some of them being very challenging.", "We use the same parameters trained on HumanEva, then apply MuVS on all the 4 views of subjects S9 and S11.", "We compare it with SMPLify #OTHEREFR and other state-of-the-art multi-view pose estimation methods #OTHEREFR .", "The result is shown in our 3D joint estimation accuracy is quite close to that of #OTHEREFR , which is concurrent with our work." ]
[ "Figure" ]
result
{ "title": "Towards Accurate Marker-Less Human Shape and Pose Estimation over Time", "abstract": "Existing markerless motion capture methods often assume known backgrounds, static cameras, and sequence specific motion priors, limiting their application scenarios. Here we present a fully automatic method that, given multi-view videos, estimates 3D human pose and body shape. We take the recently proposed SMPLify method [12] as the base method and extend it in several ways. First we fit a 3D human body model to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours, further improving accuracy. Third we utilize a generic and robust DCT temporal prior to handle the left and right side swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging monocular sequences of dancing from YouTube." }
{ "title": "General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues", "abstract": "Abstract. Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation -skeleton, volumetric shape, appearance, and optionally a body surface -and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume ray casting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, and variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way." }
1910.02291
1809.05074
I. INTRODUCTION
Finally, we employ the recently proposed derivative-free features #REFR as an alternative to the standard derivative-based variables (velocity and acceleration) as inputs to the inverse dynamics model.
[ "This cascaded approach utilizes many fewer input dimensions, reducing computation, but more importantly, it re-uses the results and requires learning simpler functions at each joint.", "Following the forward and backwards recursion of NE, we attempt both inwards and outwards cascaded versions of our GP learner.", "Rather than discarding the classical inverse dynamics solution, we follow many previous approaches and combine parametric models with non-parametric models, resulting in a semi-parametric modeling framework #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "Specifically, we utilize the classical inverse dynamics solution as our learner's mean function.", "This approach tends to achieve better generalization, data-efficiency, and faster learning due to its utilization of prior knowledge." ]
[ "We evaluate our proposed approach with extensive experiments carried out on a Kinova Jaco 2 six-DOF arm and using the public SARCOS dataset #OTHEREFR .", "We continue with Section II, which provides a background on model-based torque controllers and inverse dynamics learning.", "Section III describes the cascaded Gaussian process formulation while Section IV evaluates this method by presenting the details behind the experiments and the obtained results. Finally, section V summarizes our work." ]
[ "inverse dynamics model" ]
method
{ "title": "Cascaded Gaussian Processes for Data-efficient Robot Dynamics Learning", "abstract": "Motivated by the recursive Newton-Euler formulation, we propose a novel cascaded Gaussian process learning framework for the inverse dynamics of robot manipulators. This approach leads to a significant dimensionality reduction which in turn results in better learning and data efficiency. We explore two formulations for the cascading: the inward and outward, both along the manipulator chain topology. The learned modeling is tested in conjunction with the classical inverse dynamics model (semi-parametric) and on its own (non-parametric) in the context of feed-forward control of the arm. Experimental results are obtained with Jaco 2 six-DOF and SARCOS seven-DOF manipulators for randomly defined sinusoidal motions of the joints in order to evaluate the performance of cascading against the standard GP learning. In addition, experiments are conducted using Jaco 2 on a task emulating a pouring maneuver. Results indicate a consistent improvement in learning speed with the inward cascaded GP model and an overall improvement in data efficiency and generalization." }
{ "title": "Derivative-free online learning of inverse dynamics models", "abstract": "Abstract-This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes including rigid body dynamics (RBD) models, data-driven models and semiparametric models (which are a combination of the previous two classes) are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which need to be approximated resorting to numerical differentiation schemes, in this paper a new \"derivative-free\" framework is proposed that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures, showing that the proposed \"derivative-free\" methods outperform existing methodologies." }
1910.02291
1809.05074
B. Derivative-Free Features
In a study reported in #REFR , the authors concluded that derivative-free features with reduced rank provide well-rounded performance.
[ "Utilizing derivative-free features in the context of inverse dynamics has been proposed in #OTHEREFR as a means to address the noisy nature of numerical differentiation, an inevitable step to obtain joint velocities and accelerations.", "Assuming we have access to the previous M joint positions, there are numerous ways to incorporate the state history as a feature." ]
[ "In this method, a smaller number of features k is chosen to compress the information within the position history.", "Physics suggests that 3 elements (position, velocity and acceleration) suffice to define a state; in addition, according to the empirical results in #OTHEREFR , k = 3 is the optimal choice for inverse dynamics learning.", "Therefore, we have set k = 3 in this paper for the implementation of derivative-free features:", "where ξ i ∈ R k is the reduced rank derivative-free feature of joint i, q i (t − ) ∈ R (M +1) is the history of joint i positions, and R ∈ R k×(M +1) is a fully parameterised matrix.", "Guidelines for designing R can be found in #OTHEREFR , but in the present implementation, we set R = I and M = 2." ]
[ "derivative-free features" ]
background
{ "title": "Cascaded Gaussian Processes for Data-efficient Robot Dynamics Learning", "abstract": "Motivated by the recursive Newton-Euler formulation, we propose a novel cascaded Gaussian process learning framework for the inverse dynamics of robot manipulators. This approach leads to a significant dimensionality reduction which in turn results in better learning and data efficiency. We explore two formulations for the cascading: the inward and outward, both along the manipulator chain topology. The learned modeling is tested in conjunction with the classical inverse dynamics model (semi-parametric) and on its own (non-parametric) in the context of feed-forward control of the arm. Experimental results are obtained with Jaco 2 six-DOF and SARCOS seven-DOF manipulators for randomly defined sinusoidal motions of the joints in order to evaluate the performance of cascading against the standard GP learning. In addition, experiments are conducted using Jaco 2 on a task emulating a pouring maneuver. Results indicate a consistent improvement in learning speed with the inward cascaded GP model and an overall improvement in data efficiency and generalization." }
{ "title": "Derivative-free online learning of inverse dynamics models", "abstract": "Abstract-This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes including rigid body dynamics (RBD) models, data-driven models and semiparametric models (which are a combination of the previous two classes) are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which need to be approximated resorting to numerical differentiation schemes, in this paper a new \"derivative-free\" framework is proposed that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures, showing that the proposed \"derivative-free\" methods outperform existing methodologies." }
1910.02291
1809.05074
B. Derivative-Free Features
Physics suggests that 3 elements (position, velocity and acceleration) suffice to define a state; in addition, according to the empirical results in #REFR , k = 3 is the optimal choice for inverse dynamics learning.
[ "Utilizing derivative-free features in the context of inverse dynamics has been proposed in #OTHEREFR as a means to address the noisy nature of numerical differentiation, an inevitable step to obtain joint velocities and accelerations.", "Assuming we have access to the previous M joint positions, there are numerous ways to incorporate the state history as a feature.", "In a study reported in #OTHEREFR , the authors concluded that derivative-free features with reduced rank provide wellrounded performance.", "In this method, a smaller number of features k is chosen to compress the information within the position history." ]
[ "Therefore, we have set k = 3 in this paper for the implementation of derivative-free features:", "where ξ i ∈ R k is the reduced rank derivative-free feature of joint i, q i (t − ) ∈ R (M +1) is the history of joint i positions, and R ∈ R k×(M +1) is a fully parameterised matrix.", "Guidelines for designing R can be found in #OTHEREFR , but in the present implementation, we set R = I and M = 2." ]
[ "inverse dynamics learning" ]
background
{ "title": "Cascaded Gaussian Processes for Data-efficient Robot Dynamics Learning", "abstract": "Motivated by the recursive Newton-Euler formulation, we propose a novel cascaded Gaussian process learning framework for the inverse dynamics of robot manipulators. This approach leads to a significant dimensionality reduction which in turn results in better learning and data efficiency. We explore two formulations for the cascading: the inward and outward, both along the manipulator chain topology. The learned modeling is tested in conjunction with the classical inverse dynamics model (semi-parametric) and on its own (non-parametric) in the context of feed-forward control of the arm. Experimental results are obtained with Jaco 2 six-DOF and SARCOS seven-DOF manipulators for randomly defined sinusoidal motions of the joints in order to evaluate the performance of cascading against the standard GP learning. In addition, experiments are conducted using Jaco 2 on a task emulating a pouring maneuver. Results indicate a consistent improvement in learning speed with the inward cascaded GP model and an overall improvement in data efficiency and generalization." }
{ "title": "Derivative-free online learning of inverse dynamics models", "abstract": "Abstract-This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes including rigid body dynamics (RBD) models, data-driven models and semiparametric models (which are a combination of the previous two classes) are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which need to be approximated resorting to numerical differentiation schemes, in this paper a new \"derivative-free\" framework is proposed that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures, showing that the proposed \"derivative-free\" methods outperform existing methodologies." }
2002.10621
1809.05074
I. INTRODUCTION
Derivative-free GPR models have also already been introduced in #REFR , where the authors proposed derivative-free nonparametric kernels.
[ "Instead of representing the system state as a collection of positions, velocities, and accelerations, we propose to define the state as a finite past history of the position measurements.", "We call this representation derivative-free, to express the idea that the derivatives of position are not included in it.", "The use of the past history of the state has been considered in the GP-NARX literature #OTHEREFR - #OTHEREFR , as well as in Eigensystem realization algorithm (ERA) and Dynamic Mode Decomposition (DMD) #OTHEREFR , #OTHEREFR .", "However, these techniques do not use a derivative-free approach when dealing with physical systems, e.g., they consider the history of position and velocity having double state dimension w.r.t.", "our approach (which might be a problem for MBRL) and do not incorporate prior physical model to design the covariance function." ]
[ "The proposed approach has some connections with discrete dynamics models, see for instance #OTHEREFR , #OTHEREFR .", "In these works, the authors derived a discrete-time model of the dynamics of a manipulator discretizing the Lagrangian equations.", "However, different from our approach, these techniques assume a complete knowledge of the dynamics parameters, typically identified in continuous time.", "Finally, such models might not be sufficiently flexible to capture unmodeled behaviors like delays, backlash, and elasticity.", "Contribution: The main contribution of the present work is the formulation of derivative-free GPR models capable of encoding physical prior knowledge of mechanical systems that naturally depend on velocity and acceleration." ]
[ "derivative-free nonparametric kernels", "Derivative-free GPR models" ]
background
{ "title": "Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements", "abstract": "In this letter, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR). In many mechanical systems, only positions can be measured by the sensing instruments. Then, instead of representing the system state as suggested by the physics with a collection of positions, velocities, and accelerations, we define the state as the set of past position measurements. However, the equation of motions derived by physical first principles cannot be directly applied in this framework, being functions of velocities and accelerations. For this reason, we introduce a novel derivative-free physically-inspired kernel, which can be easily combined with nonparametric derivative-free Gaussian Process models. Tests performed on two real platforms show that the considered state definition combined with the proposed model improves estimation performance and data-efficiency w.r.t. traditional models based on GPR. Finally, we validate the proposed framework by solving two RL control problems for two real robotic systems. Index Terms-Model learning for control, dynamics, reinforcement learning (RL)." }
{ "title": "Derivative-free online learning of inverse dynamics models", "abstract": "Abstract-This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes including rigid body dynamics (RBD) models, data-driven models and semiparametric models (which are a combination of the previous two classes) are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which need to be approximated resorting to numerical differentiation schemes, in this paper a new \"derivative-free\" framework is proposed that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures, showing that the proposed \"derivative-free\" methods outperform existing methodologies." }
2002.10621
1809.05074
B. State Transition Learning With PIDF Kernel
Derivative-free GPRs have already been introduced in #REFR , where the authors derived a data-driven derivative-free GPR.
[]
[ "As pointed out in the introduction, the generalization performance of data-driven models might not be sufficient to guarantee robust learning performance, and exploiting eventual prior information coming from the physical model is crucial.", "To address this problem, we propose a novel Physically Inspired Derivative-Free (PIDF) kernel.", "The PIDF exploits the property that the product and sum of kernels is still a kernel, see #OTHEREFR .", "Define q i k − = [q i k , . . .", ", q i k−k p ] and assume that a physical model of the type y k = φ(q k , q k ,q k , u k )w, is known." ]
[ "data-driven derivative-free GPR" ]
background
{ "title": "Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements", "abstract": "In this letter, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR). In many mechanical systems, only positions can be measured by the sensing instruments. Then, instead of representing the system state as suggested by the physics with a collection of positions, velocities, and accelerations, we define the state as the set of past position measurements. However, the equation of motions derived by physical first principles cannot be directly applied in this framework, being functions of velocities and accelerations. For this reason, we introduce a novel derivative-free physically-inspired kernel, which can be easily combined with nonparametric derivative-free Gaussian Process models. Tests performed on two real platforms show that the considered state definition combined with the proposed model improves estimation performance and data-efficiency w.r.t. traditional models based on GPR. Finally, we validate the proposed framework by solving two RL control problems for two real robotic systems. Index Terms-Model learning for control, dynamics, reinforcement learning (RL)." }
{ "title": "Derivative-free online learning of inverse dynamics models", "abstract": "Abstract-This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes including rigid body dynamics (RBD) models, data-driven models and semiparametric models (which are a combination of the previous two classes) are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which need to be approximated resorting to numerical differentiation schemes, in this paper a new \"derivative-free\" framework is proposed that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures, showing that the proposed \"derivative-free\" methods outperform existing methodologies." }
1607.04420
1305.0124
C. LOS blockage analysis
We start our analysis by acknowledging that V2V links can have their LOS blocked by two distinct object types, static and mobile, each with a distinct impact on the link #REFR .
[]
[ "Furthermore, static objects such as buildings, trees, etc., typically block the LOS for V2V links between vehicles that are on different roads (e.g., perpendicular roads joined by intersections).", "On the other hand, mobile objects (predominantly other vehicles) block the LOS over the surface of the road.", "We use the LOS blockage classification provided by GEMV 2 , a freely available, geometry-based V2X propagation modeling tool #OTHEREFR .", "GEMV 2 uses the outlines of vehicles, buildings, and foliage to distinguish between LOS, NLOSv, and NLOSb links.", "In order to do so, GEMV 2 performs geometry-based deterministic LOS blockage analysis using the outlines of buildings and foliage from OpenStreetMap #OTHEREFR and vehicular mobility traces from SUMO #OTHEREFR ." ]
[ "V2V links" ]
background
{ "title": "Modeling the Evolution of Line-of-Sight Blockage for V2V Channels", "abstract": "We investigate the evolution of line of sight (LOS) blockage over both time and space for vehicle-to-vehicle (V2V) channels. Using realistic vehicular mobility and building and foliage locations from maps, we first perform LOS blockage analysis to extract LOS probabilities in real cities and on highways for varying vehicular densities. Next, to model the time evolution of LOS blockage for V2V links, we employ a three-state discrete-time Markov chain comprised of the following states: i) LOS; ii) non-LOS due to static objects (e.g., buildings, trees, etc.); and iii) non-LOS due to mobile objects (vehicles). We obtain state transition probabilities based on the evolution of LOS blockage. Finally, we perform curve fitting and obtain a set of distance-dependent equations for both LOS and transition probabilities. These equations can be used to generate time-evolved V2V channel realizations for representative urban and highway environments. Our results can be used to perform highly efficient and accurate simulations without the need to employ complex geometry-based models for link evolution. 978-1-5090-1701-0/16/$31.00 ©2016 IEEE" }
{ "title": "Geometry-Based Vehicle-to-Vehicle Channel Modeling for Large-Scale Simulation", "abstract": "Abstract-Due to the dynamic nature of vehicular traffic and the road surroundings, vehicle-to-vehicle (V2V) propagation characteristics vary greatly on both small-and large-scale. Recent measurements have shown that both large static objects (e.g., buildings and foliage) as well as mobile objects (surrounding vehicles) have a profound impact on V2V communication. At the same time, system-level Vehicular Ad Hoc Network (VANET) simulators by and large employ simple statistical propagation models, which do not account for surrounding objects explicitly. We designed GEMV 2 (Geometry-based Efficient propagation Model for V2V communication), which uses outlines of vehicles, buildings, and foliage to distinguish the following three types of links: line of sight (LOS), non-LOS due to vehicles, and non-LOS due to static objects. For each link, GEMV 2 calculates the large-scale signal variations deterministically, whereas the smallscale signal variations are calculated stochastically based on the number and size of surrounding objects. We implement GEMV 2 in MATLAB and show that it scales well by using it to simulate radio propagation for city-wide networks with tens of thousands of vehicles on commodity hardware. We make the source code of GEMV 2 freely available. Finally, we validate GEMV 2 against extensive measurements performed in urban, suburban, highway, and open space environment." }
1903.04788
1305.0124
A V2V channel model for large scale simulations (including urban scenarios) is presented in #REFR .
[ "This enables a fast simulation of non-stationary MIMO channels, and also supports simulation of arbitrary antenna patterns and array configurations.", "However, the highway model in #OTHEREFR does not include propagation effects that are vital for urban scenarios, such as obstruction and diffraction and higher order interactions.", "A few GSCMs for urban V2V scenarios have been presented in the literature #OTHEREFR .", "In #OTHEREFR the parameter estimates for highway scenarios in #OTHEREFR are applied, and the effects of building obstructions are added.", "The theoretical model in #OTHEREFR is based on multi-path clusters placed along walls and building corners." ]
[ "It is a geometry-based model which includes reflection, diffraction and paths that are obstructed by buildings or foliage.", "Large scale signals are calculated deterministically, whereas the small scale fading of the received power is determined stochastically.", "The models in #OTHEREFR are only validated against data of large scale parameters such as received power.", "To the author's best knowledge, we present the first V2V channel model for urban scenarios based on measured and highly resolved multi-path components.", "The aim of this paper is to be able to accurately model multi-path behavior in challenging V2V scenarios, in order to enable improved V2V MIMO techniques and V2V positioning and localization techniques." ]
[ "V2V channel model" ]
background
{ "title": "The COST IRACON Geometry-Based Stochastic Channel Model for Vehicle-to-Vehicle Communication in Intersections", "abstract": "Vehicle-to-vehicle (V2V) wireless communications can improve traffic safety at road intersections and enable congestion avoidance. However, detailed knowledge about the wireless propagation channel is needed for the development and realistic assessment of V2V communication systems. We present a novel geometry-based stochastic MIMO channel model with support for frequencies in the band of 5.2-6.2 GHz. The model is based on extensive high-resolution measurements at different road intersections in the city of Berlin, Germany. We extend existing models, by including the effects of various obstructions, higher order interactions, and by introducing an angular gain function for the scatterers. Scatterer locations have been identified and mapped to measured multi-path trajectories using a measurementbased ray tracing method and a subsequent RANSAC algorithm. The developed model is parameterized, and using the measured propagation paths that have been mapped to scatterer locations, model parameters are estimated. The time variant power fading of individual multi-path components is found to be best modeled by a Gamma process with an exponential autocorrelation. The path coherence distance is estimated to be in the range of 0-2 m. The model is also validated against measurement data, showing that the developed model accurately captures the behavior of the measured channel gain, Doppler spread, and delay spread. This is also the case for intersections that have not been used when estimating model parameters." }
{ "title": "Geometry-Based Vehicle-to-Vehicle Channel Modeling for Large-Scale Simulation", "abstract": "Abstract-Due to the dynamic nature of vehicular traffic and the road surroundings, vehicle-to-vehicle (V2V) propagation characteristics vary greatly on both small-and large-scale. Recent measurements have shown that both large static objects (e.g., buildings and foliage) as well as mobile objects (surrounding vehicles) have a profound impact on V2V communication. At the same time, system-level Vehicular Ad Hoc Network (VANET) simulators by and large employ simple statistical propagation models, which do not account for surrounding objects explicitly. We designed GEMV 2 (Geometry-based Efficient propagation Model for V2V communication), which uses outlines of vehicles, buildings, and foliage to distinguish the following three types of links: line of sight (LOS), non-LOS due to vehicles, and non-LOS due to static objects. For each link, GEMV 2 calculates the large-scale signal variations deterministically, whereas the smallscale signal variations are calculated stochastically based on the number and size of surrounding objects. We implement GEMV 2 in MATLAB and show that it scales well by using it to simulate radio propagation for city-wide networks with tens of thousands of vehicles on commodity hardware. We make the source code of GEMV 2 freely available. Finally, we validate GEMV 2 against extensive measurements performed in urban, suburban, highway, and open space environment." }
1203.3370
1305.0124
Apart from a recently published geometry-based channel model #REFR , no path loss model is available today that deals with all three cases in a comprehensive way.
[ "In order to characterize the channel parameters separately for LOS and non-LOS conditions V2V communication links in this paper are categorized into following three groups:", "• Line-of-sight (LOS) is the situation when there is an optical line-of-sight between the TX and the RX.", "• Obstructed-LOS (OLOS) is the situation when the LOS between the TX and RX is obstructed completely or partially by another vehicle.", "• Non-LOS (NLOS) is the situation when a building between the TX and RX completely block the LOS as well as many other significant MPCs.", "The channel properties for LOS, OLOS and NLOS are distinct, and their individual analysis is required." ]
[ "The main contribution of this paper is a shadow fading channel model (LOS/OLOS model) based on real measurements in highway and urban scenarios distinguishing between LOS and OLOS.", "The model targets vehicular ad hoc network (VANET) system simulations.", "We also provide a solution on how to incorporate the LOS/OLOS model in a VANET simulator.", "We model the temporal correlation of shadow fading as an auto-regressive process.", "Finally, simulation results are presented where the results obtained from the LOS/OLOS model are compared against the Cheng's model #OTHEREFR , which is also based on an outdoor channel sounding measurement campaign performed at 5.9 GHz." ]
[ "geometry based channel" ]
background
{ "title": "A Measurement Based Shadow Fading Model for Vehicle-to-Vehicle Network Simulations", "abstract": "The vehicle-to-vehicle (V2V) propagation channel has significant implications on the design and performance of novel communication protocols for vehicular ad hoc networks (VANETs). Extensive research efforts have been made to develop V2V channel models to be implemented in advanced VANET system simulators for performance evaluation. The impact of shadowing caused by other vehicles has, however, largely been neglected in most of the models, as well as in the system simulations. In this paper we present a shadow fading model targeting system simulations based on real measurements performed in urban and highway scenarios. The measurement data is separated into three categories, line-of-sight (LOS), obstructed line-of-sight (OLOS) by vehicles, and non line-of-sight due to buildings, with the help of video information recorded during the measurements. It is observed that vehicles obstructing the LOS induce an additional attenuation of about 10 dB in the received signal power. An approach to incorporate the LOS/OLOS model into existing VANET simulators is also provided. Finally, system level VANET simulation results are presented, showing the difference between the LOS/OLOS model and a channel model based on Nakagami-m fading." }
{ "title": "Geometry-Based Vehicle-to-Vehicle Channel Modeling for Large-Scale Simulation", "abstract": "Abstract-Due to the dynamic nature of vehicular traffic and the road surroundings, vehicle-to-vehicle (V2V) propagation characteristics vary greatly on both small-and large-scale. Recent measurements have shown that both large static objects (e.g., buildings and foliage) as well as mobile objects (surrounding vehicles) have a profound impact on V2V communication. At the same time, system-level Vehicular Ad Hoc Network (VANET) simulators by and large employ simple statistical propagation models, which do not account for surrounding objects explicitly. We designed GEMV 2 (Geometry-based Efficient propagation Model for V2V communication), which uses outlines of vehicles, buildings, and foliage to distinguish the following three types of links: line of sight (LOS), non-LOS due to vehicles, and non-LOS due to static objects. For each link, GEMV 2 calculates the large-scale signal variations deterministically, whereas the smallscale signal variations are calculated stochastically based on the number and size of surrounding objects. We implement GEMV 2 in MATLAB and show that it scales well by using it to simulate radio propagation for city-wide networks with tens of thousands of vehicles on commodity hardware. We make the source code of GEMV 2 freely available. Finally, we validate GEMV 2 against extensive measurements performed in urban, suburban, highway, and open space environment." }
1903.09627
1712.04804
E. Energy-free communicating tags thanks to 5G backscattering
According to works summarized in #REFR , one can expect larger tag-to-reader distances (several meters) with more advanced detection algorithms than with a basic energy detector.
[ "The SW code simply compares the received power with a moving average power threshold to determine the changes in the received power level.", "Then, the synchronization and the FM0 demodulation is performed, and the original image is retrieved.", "Figure 10 illustrates an actual measurement and the corresponding successful demodulation, performed in Orange Gardens, Chatillon.", "As illustrated in Figure 11 , the nearest TV source was 2 km away from our location and the tag-to-reader distance was of around 40 cm (i.e. almost a wavelength).", "The experiment was performed indoor at the ground floor, near a window." ]
[ "As a future operator of 5G networks, we believe that this technology could potentially help the massive development of IoT in a green manner.", "Indeed, as illustrated in Figure 12 , if applied to a 5G, the ambient backscatter concept can benefit from a large and dense population of sources and readers.", "Indeed, numerous 5G network base stations and 5G devices could play the role of sources.", "Also in addition to deploying RF readers, one could upgrade 5G devices and 5G networks with the reader capability.", "Figure 13 illustrates potential use cases of the ambient backscatter system." ]
[ "basic energy detector" ]
background
{ "title": "An operator’s point of view", "abstract": "Abstract-The exponential growth in networks' traffic accompanied by the multiplication of new services like those promised by the 5G led to a huge increase in the infrastructures' energy consumption. All over the world, many telecom operators are facing the problem of energy consumption and Green networking since many years and they all convey today that it turned from sustainable development initiative to an OPEX issue. Therefore, the challenge to make the ICT sector more energy-efficient and environment-friendly has become a fundamental objective not only to green networks but also in the domain of green services that enable the ICT sectors to help other industrial sector to clean their own energy consumption. The present paper is a point of view of a European telecom operator regarding green networking. We address some technological advancements that would enable to accelerate this ICT green evolution after more than 15 years of field experience and international collaborative research projects. Basically, the paper is a global survey of the evolution of the ICT industry in green networks including optical and wireless networks and from hardware improvement to the software era as well as the green orchestration." }
{ "title": "Ambient Backscatter Communications: A Contemporary Survey", "abstract": "Recently, ambient backscatter communication has been introduced as a cutting-edge technology which enables smart devices to communicate by utilizing ambient radio frequency (RF) signals without requiring active RF transmission. This technology is especially effective in addressing communication and energy efficiency problems for low-power communications systems such as sensor networks, and thus it is expected to realize numerous Internet-of-Things applications. Therefore, this paper aims to provide a contemporary and comprehensive literature review on fundamentals, applications, challenges, and research efforts/progress of ambient backscatter communications. In particular, we first present fundamentals of backscatter communications and briefly review bistatic backscatter communications systems. Then, the general architecture, advantages, and solutions to address existing issues and limitations of ambient backscatter communications systems are discussed. Additionally, emerging applications of ambient backscatter communications are highlighted. Finally, we outline some open issues and future research directions. Index Terms-Ambient backscatter, IoT networks, bistatic backscatter, RFID, wireless energy harvesting, backscatter communications, and low-power communications." }
1912.11170
1712.04804
More importantly, radio jamming attacks can be easily launched using commercial off-the-shelf products #REFR , and thus they can have serious consequences for human life, especially in mission-critical sectors such as healthcare, military, and transportation.
[ "is also estimated that around 500 billion devices will be connected to the Internet by 2030 2 .", "Obviously, with outstanding advantages and benefits, IoT has been becoming an indispensable part of human life in the near future.", "Despite the explosive growth, IoT is extremely vulnerable to security threats, especially jamming attacks, due to hardware constraints and the broadcast nature of wireless communications.", "In particular, by transmitting high-power jamming signals to a target channel, a jammer can degrade Signal-to-Interferenceplus-Noise Ratio (SINR) at the IoT receiver, e.g., an IoT gateway.", "Consequently, the IoT receiver is unable to decode information from the IoT transmitter." ]
[ "For example, an attacker used a cheap jamming device to perform a car lock jamming attack, with the intent of breaking into vehicles, caused chaos in a parking lot where nobody could unlock/lock their remote car locks and ended up triggering the number of alarms in the process #OTHEREFR .", "As a result, solutions to deal with jamming attacks are of urgent needs for future development of IoT networks.", "In this paper, we first give an overview about communication methods and potential vulnerabilities to jamming attacks in IoT networks.", "We then review emerging wireless jamming techniques and current effective countermeasures to defeat jamming attacks.", "After that, we develop a novel anti-jamming strategy which allows resource-constrained IoT devices to effectively to defeat powerful reactive jammers." ]
[ "radio jamming attacks" ]
background
{ "title": "\"Borrowing Arrows with Thatched Boats\": The Art of Defeating Reactive Jammers in IoT Networks", "abstract": "In this article, we introduce a novel deception strategy which is inspired by the \"Borrowing Arrows with Thatched Boats\", one of the most famous military tactics in the history, in order to defeat reactive jamming attacks for low-power IoT networks. Our proposed strategy allows resource-constrained IoT devices to be able to defeat powerful reactive jammers by leveraging their own jamming signals. More specifically, by stimulating the jammer to attack the channel through transmitting fake transmissions, the IoT system can not only undermine the jammer's power, but also harvest energy or utilize jamming signals as a communication means to transmit data through using RF energy harvesting and ambient backscatter techniques, respectively. Furthermore, we develop a low-cost deep reinforcement learning framework that enables the hardware-constrained IoT device to quickly obtain an optimal defense policy without requiring any information about the jammer in advance. Simulation results reveal that our proposed framework can not only be very effective in defeating reactive jamming attacks, but also leverage jammer's power to enhance system performance for the IoT network." }
{ "title": "Ambient Backscatter Communications: A Contemporary Survey", "abstract": "Recently, ambient backscatter communication has been introduced as a cutting-edge technology which enables smart devices to communicate by utilizing ambient radio frequency (RF) signals without requiring active RF transmission. This technology is especially effective in addressing communication and energy efficiency problems for low-power communications systems such as sensor networks, and thus it is expected to realize numerous Internet-of-Things applications. Therefore, this paper aims to provide a contemporary and comprehensive literature review on fundamentals, applications, challenges, and research efforts/progress of ambient backscatter communications. In particular, we first present fundamentals of backscatter communications and briefly review bistatic backscatter communications systems. Then, the general architecture, advantages, and solutions to address existing issues and limitations of ambient backscatter communications systems are discussed. Additionally, emerging applications of ambient backscatter communications are highlighted. Finally, we outline some open issues and future research directions. Index Terms-Ambient backscatter, IoT networks, bistatic backscatter, RFID, wireless energy harvesting, backscatter communications, and low-power communications." }
1912.11170
1712.04804
IV. DEEPQFAKE: A DRL-BASED DECEPTION STRATEGY TO DEFEAT REACTIVE JAMMING ATTACKS
These results have been verified by information theoretic approaches as well as many experiments, as shown in the recent survey #REFR .
[ "The harvested energy will be used to transmit data to the gateway and support operations at the IoT device.", "Similarly, ambient backscatter function is also very useful in supporting free-cost data transmission for the IoT device by reflecting RF signals from surrounding environment or jamming signals.", "In #OTHEREFR , the authors show that ambient backscatter can be used for the communications between two bateryless IoT devices.", "More interestingly, these two functions are even more effective under strong jamming attacks.", "Intuitively, the more power the jammer uses to attack the channel, the larger amount of energy the IoT device can harvest and the more bits the IoT system can successfully backscatter." ]
[ "Although the aforementioned deception strategies clearly benefit the IoT system in dealing with reactive jamming attacks, they can only perform best when they have some information about the jammer in advance.", "For example, given the jamming signal on the channel, the IoT device should harvest energy or perform an ambient backscatter transmission from the jamming signal? In addition, how to optimize energy harvesting, active transmission, and backscatter transmission processes without knowing jammer's capacity, e.g., frequency and power of attacks, in advance? The jammer is a malicious device which is used to prevent IoT communications, and thus its information is nearly impossible to obtain in advance.", "Therefore, to deal with the challenges in finding the optimal deception strategy for the IoT device without requiring the knowledge about the jammer in advance, in the next section, we introduce DeepQFake, a DRL-based deception framework.", "This framework allows the IoT device to learn the jammer's strategy through real-time interactions." ]
[ "information theoretic approaches" ]
result
{ "title": "\"Borrowing Arrows with Thatched Boats\": The Art of Defeating Reactive Jammers in IoT Networks", "abstract": "In this article, we introduce a novel deception strategy which is inspired by the \"Borrowing Arrows with Thatched Boats\", one of the most famous military tactics in the history, in order to defeat reactive jamming attacks for low-power IoT networks. Our proposed strategy allows resource-constrained IoT devices to be able to defeat powerful reactive jammers by leveraging their own jamming signals. More specifically, by stimulating the jammer to attack the channel through transmitting fake transmissions, the IoT system can not only undermine the jammer's power, but also harvest energy or utilize jamming signals as a communication means to transmit data through using RF energy harvesting and ambient backscatter techniques, respectively. Furthermore, we develop a low-cost deep reinforcement learning framework that enables the hardware-constrained IoT device to quickly obtain an optimal defense policy without requiring any information about the jammer in advance. Simulation results reveal that our proposed framework can not only be very effective in defeating reactive jamming attacks, but also leverage jammer's power to enhance system performance for the IoT network." }
{ "title": "Ambient Backscatter Communications: A Contemporary Survey", "abstract": "Recently, ambient backscatter communication has been introduced as a cutting-edge technology which enables smart devices to communicate by utilizing ambient radio frequency (RF) signals without requiring active RF transmission. This technology is especially effective in addressing communication and energy efficiency problems for low-power communications systems such as sensor networks, and thus it is expected to realize numerous Internet-of-Things applications. Therefore, this paper aims to provide a contemporary and comprehensive literature review on fundamentals, applications, challenges, and research efforts/progress of ambient backscatter communications. In particular, we first present fundamentals of backscatter communications and briefly review bistatic backscatter communications systems. Then, the general architecture, advantages, and solutions to address existing issues and limitations of ambient backscatter communications systems are discussed. Additionally, emerging applications of ambient backscatter communications are highlighted. Finally, we outline some open issues and future research directions. Index Terms-Ambient backscatter, IoT networks, bistatic backscatter, RFID, wireless energy harvesting, backscatter communications, and low-power communications." }
1906.09209
1712.04804
FundAmentAls oF bAckscAtter communIcAtIons
Moreover, the ambient backscatter configuration does not result in noticeable interference unless the devices are placed very close #REFR .
[ "Among the aforementioned configurations of backscatter communications, the ambient backscatter communication is perhaps the most energy-and spectrum-efficient.", "First of all, there is no requirement for a dedicated carrier emitter in ambient backscatter configuration.", "Ambient backscatter devices operate using the carrier waves of ambient RF sources like TV/ FM stations and WiFi.", "This saves the energy consumed by a dedicated carrier emitter #OTHEREFR .", "Second, by exploiting the existing RF signals, the ambient backscatter configuration becomes spectrally efficient as it does not require extra spectrum to operate." ]
[ "Finally, the low cost and small form-factor of ambient backscatter devices favor its large-scale deployment in myriad scenarios.", "From healthcare to home automation, the ambient devices can be successfully used to complete a number of tasks at a very low cost.", "Prior to shedding light on the wireless-powered ambient backscatter communications, it is worthwhile to report the basics of wireless power transmission.", "Wireless-powered communication has recently emerged as an active paradigm to increase the lifetime of devices.", "The rapid developments in hardware sensitivity, minimization of circuit power consumption, and improved RF-to-DC conversion solutions have paved the way for ambient RF energy harvesting." ]
[ "noticeable interference" ]
background
{ "title": "Applications of Backscatter Communications for Healthcare Networks", "abstract": "Backscatter communication is expected to help in revitalizing the domain of healthcare through its myriad applications. From on-body sensors to in-body implants and miniature embeddable devices, there are many potential use cases that can leverage the miniature and low-powered nature of backscatter devices. However, the existing literature lacks a comprehensive study that provides a distilled review of the latest studies on backscatter communications from the healthcare perspective. Thus, with the objective to promote the utility of backscatter communication in healthcare, this article aims to identify specific applications of backscatter systems. A detailed taxonomy of recent studies and gap analysis for future research directions are provided in this work. Finally, we conduct measurements at 590 MHz in different propagation environments with the in-house designed backscatter device. The link budget results show the promise of backscatter devices to communicate over large distances for indoor environments, which demonstrates its potential in the healthcare system." }
{ "title": "Ambient Backscatter Communications: A Contemporary Survey", "abstract": "Recently, ambient backscatter communication has been introduced as a cutting-edge technology which enables smart devices to communicate by utilizing ambient radio frequency (RF) signals without requiring active RF transmission. This technology is especially effective in addressing communication and energy efficiency problems for low-power communications systems such as sensor networks, and thus it is expected to realize numerous Internet-of-Things applications. Therefore, this paper aims to provide a contemporary and comprehensive literature review on fundamentals, applications, challenges, and research efforts/progress of ambient backscatter communications. In particular, we first present fundamentals of backscatter communications and briefly review bistatic backscatter communications systems. Then, the general architecture, advantages, and solutions to address existing issues and limitations of ambient backscatter communications systems are discussed. Additionally, emerging applications of ambient backscatter communications are highlighted. Finally, we outline some open issues and future research directions. Index Terms-Ambient backscatter, IoT networks, bistatic backscatter, RFID, wireless energy harvesting, backscatter communications, and low-power communications." }
2001.10180
1712.04804
A. Two-Hop Hybrid Relaying Scheme
By setting a proper load impedance and thus changing the antenna's reflection coefficient #REFR , the passive relay can backscatter a part of the incident RF signals, while the other part is harvested as power to sustain its operation.
[ "Conventionally, the beamforming information can be received by both the relays and the receiver directly, as shown in Fig. 1(a) .", "Hence, the HAP's beamforming design has to balance the transmission performance to the relays and to the receiver.", "A higher rate on the direct link potentially degrades the signal quality at the relays and reduces the data rate of relays' transmission. Different from the conventional relay communications in Fig.", "1(a) , where all relays are operating in the AF protocol, in this paper we assume that each relay has a dual-mode radio structure that can switch between the passive and active modes, similar to that in #OTHEREFR and #OTHEREFR . This arises the novel hybrid relay communications model. As illustrated in Fig.", "1(b) , when the HAP beamforms the information signal to the relays, the relay-n can turn into the passive mode and backscatter the RF signals from the HAP directly to the receiver." ]
[ "Moreover, the backscattered signals from the passive relays can be coherently added with the active relays' signals to enhance the signal strength at the receiver #OTHEREFR .", "The HAP's beamforming in the first hop is also used for wireless power transfer to the relays.", "We consider a PS protocol for the energy harvesting relays, i.e., a part of the RF signal at the relays is harvested as power while the other part is received as information signal.", "Specifically, we allow each active relay to set the different PS ratio to match the HAP's beamforming strategy and its energy demand.", "In the second hop, the active relays amplify and forward the received signals to the receiver." ]
[ "passive relay", "incident RF signals" ]
background
{ "title": "Capitalizing Backscatter-Aided Hybrid Relay Communications with Wireless Energy Harvesting", "abstract": "In this work, we employ multiple energy harvesting relays to assist information transmission from a multi-antenna hybrid access point (HAP) to a receiver. All the relays are wirelessly powered by the HAP in the power-splitting (PS) protocol. We introduce the novel concept of hybrid relay communications, which allows each relay to switch between two radio modes, i.e., the active RF communications and the passive backscatter communications, according to its channel and energy conditions. We envision that the complement transmissions in two radio modes can be exploited to improve the overall relay performance. As such, we aim to jointly optimize the HAP's beamforming, individual relays' radio mode, the PS ratio, and the relays' collaborative beamforming to enhance the throughput performance at the receiver. The resulting formulation becomes a combinatorial and non-convex problem. Thus, we firstly propose a convex approximation to the original problem, which serves as a lower bound of the relay performance. Then, we design an iterative algorithm that decomposes the binary relay mode optimization from the other operating parameters. In the inner loop of the algorithm, we exploit the structural properties to optimize the relay performance with the fixed relay mode in the alternating optimization framework. In the outer loop, different performance metrics are derived to guide the search for a set of passive relays to further improve the relay performance. Simulation results verify that the hybrid relaying communications can achieve 20% performance improvement compared to the conventional relay communications with all active relays. Index Terms-Wireless powered communications, beamforming, hybrid relay communications, wireless backscatter S. Gong is with" }
{ "title": "Ambient Backscatter Communications: A Contemporary Survey", "abstract": "Recently, ambient backscatter communication has been introduced as a cutting-edge technology which enables smart devices to communicate by utilizing ambient radio frequency (RF) signals without requiring active RF transmission. This technology is especially effective in addressing communication and energy efficiency problems for low-power communications systems such as sensor networks, and thus it is expected to realize numerous Internet-of-Things applications. Therefore, this paper aims to provide a contemporary and comprehensive literature review on fundamentals, applications, challenges, and research efforts/progress of ambient backscatter communications. In particular, we first present fundamentals of backscatter communications and briefly review bistatic backscatter communications systems. Then, the general architecture, advantages, and solutions to address existing issues and limitations of ambient backscatter communications systems are discussed. Additionally, emerging applications of ambient backscatter communications are highlighted. Finally, we outline some open issues and future research directions. Index Terms-Ambient backscatter, IoT networks, bistatic backscatter, RFID, wireless energy harvesting, backscatter communications, and low-power communications." }
1807.09685
1705.01359
FOIL.
Also following #REFR , for the third task we replace the foiled word with words from a set of target words and choose the target word that maximizes the score of the classifier.
[ "#OTHEREFR proposes three tasks: (1) classifying whether a sentence is image relevant or not, (2) determining which word in a sentence is not image relevant and (3) correcting the sentence error.", "To use our phrase-critic for (1), we employ a standard binary classification loss.", "For (2), we follow #OTHEREFR and determine which words are not image relevant by holding out one word at a time from the sentence.", "When we remove an irrelevant word, the score from the classifier should increase.", "Thus, we can determine the least relevant word in a sentence by observing which word (upon removal) leads to the largest score from our classifier." ]
[ "To train our phrase critic, we use the positive and negative samples as defined by #OTHEREFR .", "As is done across all experiments, we extract phrases with our noun phrase chunker and use this as input to the phrase-critic." ]
[ "target words" ]
method
{ "title": "Grounding Visual Explanations", "abstract": "Abstract. Existing visual explanation generating agents learn to fluently justify a class prediction. However, they may mention visual attributes which reflect a strong class prior, although the evidence may not actually be in the image. This is particularly concerning as ultimately such agents fail in building trust with human users. To overcome this limitation, we propose a phrase-critic model to refine generated candidate explanations augmented with flipped phrases which we use as negative examples while training. At inference time, our phrase-critic model takes an image and a candidate explanation as input and outputs a score indicating how well the candidate explanation is grounded in the image. Our explainable AI agent is capable of providing counter arguments for an alternative prediction, i.e. counterfactuals, along with explanations that justify the correct classification decisions. Our model improves the textual explanation quality of fine-grained classification decisions on the CUB dataset by mentioning phrases that are grounded in the image. Moreover, on the FOIL tasks, our agent detects when there is a mistake in the sentence, grounds the incorrect phrase and corrects it significantly better than other models." }
{ "title": "FOIL it! Find One mismatch between Image and Language caption", "abstract": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MS-COCO dataset, FOIL-COCO, which associates images with both correct and 'foil' captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ('foil word'). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image." }
1904.06038
1705.01359
Introduction
To do so, we evaluate an encoder trained on different multimodal tasks on an existing diagnostic task, FOIL #REFR , designed to assess multimodal semantic understanding, and carry out an in-depth analysis to study how the encoder merges and exploits the two modalities.
[ "The benchmarks developed so far have put forward complex and distinct neural architectures, but in general they all share a common backbone consisting of an encoder which learns to merge the two types of representation to perform a certain task.", "This resembles the bottom-up processing in the 'Hub and Spoke' model proposed in Cognitive Science to represent how the brain processes and combines multi-sensory inputs #OTHEREFR .", "In this model, a 'hub' module merges the input processed by the sensor-specific 'spokes' into a joint representation.", "We focus our attention on the encoder implementing the 'hub' in artificial multimodal systems, with the goal of assessing its ability to compute multimodal representations that are useful beyond specific tasks.", "While current visually grounded models perform remarkably well on the task they have been trained for, it is unclear whether they are able to learn representations that truly merge the two modalities and whether the skill they have acquired is stable enough to be transferred to other tasks. In this paper, we investigate these questions in detail." ]
[ "We also exploit two techniques to investigate the structure of the learned semantic spaces: Representation Similarity Analysis (RSA) #OTHEREFR and Nearest Neighbour overlap (NN) .", "We use RSA to compare the outcome of the various encoders given the same vision-and-language input and NN to compare the multimodal space produced by an encoder with the ones built with the input visual and language embeddings, respectively, which allows us to measure the relative weight an encoder gives to the two modalities.", "In particular, we consider three visually grounded tasks: visual question answering (VQA) #OTHEREFR , where the encoder is trained to answer a question about an image; visual resolution of referring expressions (ReferIt) #OTHEREFR , where the model has to pick up the referent object of a description in an image; and GuessWhat #OTHEREFR , where the model has to identify the object in an image that is the target of a goal-oriented question-answer dialogue.", "We make sure the datasets used in the pre-training phase are as similar as possible in terms of size and image complexity, and use the same model architecture for the three pre-training tasks.", "This guarantees fair comparisons and the reliability of the results we obtain." ]
[ "multimodal semantic understanding" ]
method
{ "title": "Evaluating the Representational Hub of Language and Vision Models", "abstract": "The multimodal models used in the emerging field at the intersection of computational linguistics and computer vision implement the bottom-up processing of the \"Hub and Spoke\" architecture proposed in cognitive science to represent how the brain processes and combines multi-sensory inputs. In particular, the Hub is implemented as a neural network encoder. We investigate the effect on this encoder of various vision-and-language tasks proposed in the literature: visual question answering, visual reference resolution, and visually grounded dialogue. To measure the quality of the representations learned by the encoder, we use two kinds of analyses. First, we evaluate the encoder pre-trained on the different vision-and-language tasks on an existing diagnostic task designed to assess multimodal semantic understanding. Second, we carry out a battery of analyses aimed at studying how the encoder merges and exploits the two modalities." }
{ "title": "FOIL it! Find One mismatch between Image and Language caption", "abstract": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MS-COCO dataset, FOIL-COCO, which associates images with both correct and 'foil' captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ('foil word'). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image." }
1805.06549
1705.01359
Performance on nouns:
We hypothesize the following reasons for this: (a) human responses were crowd-sourced, which could have resulted in some noisy annotations; (b) our gold object-based features closely resemble the information used for data generation as described in #REFR for the foil noun dataset.
[ "The results of our experiments with foiled nouns are summarized in Table 2.", "First, we note that the models that use Gold 1 https://foilunitn.github.io/ 2 The authors have kindly provided us the datasets. Table 2 : Accuracy on Nouns dataset. † are taken directly from #OTHEREFR .", "HieCoAtt is the state of the art reported in the paper.", "bag of objects information are the best performing models across classifiers.", "We also note that the performance is better than human performance." ]
[ "The models using Predicted bag of objects from a detector are very close to the performance of Gold.", "The performance of models using simple bag of words (BOW) sentence representations and an MLP is better than that of models that use LSTMs.", "Also, the accuracy of the bag of objects model with Frequency counts is higher than with the binary Mention vector, which only encodes the presence of objects.", "The Multimodal LSTM (MM-LSTM) has a slightly better performance than LSTM classifiers.", "In all cases, we observe that the performance is on par with human-level accuracy." ]
[ "foil noun dataset" ]
method
{ "title": "Defoiling Foiled Image Captions", "abstract": "We address the task of detecting foiled image captions, i.e. identifying whether a caption contains a word that has been deliberately replaced by a semantically similar word, thus rendering it inaccurate with respect to the image being described. Solving this problem should in principle require a fine-grained understanding of images to detect linguistically valid perturbations in captions. In such contexts, encoding sufficiently descriptive image information becomes a key challenge. In this paper, we demonstrate that it is possible to solve this task using simple, interpretable yet powerful representations based on explicit object information. Our models achieve stateof-the-art performance on a standard dataset, with scores exceeding those achieved by humans on the task. We also measure the upperbound performance of our models using gold standard annotations. Our analysis reveals that the simpler model performs well even without image information, suggesting that the dataset contains strong linguistic bias." }
{ "title": "FOIL it! Find One mismatch between Image and Language caption", "abstract": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MS-COCO dataset, FOIL-COCO, which associates images with both correct and 'foil' captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ('foil word'). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image." }
1805.06549
1705.01359
Ablation Analysis
The accuracy of our models is substantially higher than that reported in #REFR , even for equivalent models.
[ "On the other hand, text-only models achieve a very high accuracy.", "This is a central finding, suggesting that foiled captions are easy to detect even without image information.", "We also observe that the performance of BOW improves by adding object Frequency image information, but not CNN image embeddings.", "We posit that this is because there is a tighter correspondence between the bag of objects and bag of word models.", "In the case of LSTMs, adding either image information helps slightly." ]
[ "We note, however, that while the trends of image information is similar for other parts of speech datasets, the performance of BOW based models are lower than the performance of LSTM based models.", "The anomaly of improved performance of BOW based models seems heavily pronounced in the nouns dataset.", "Thus, we further analyze our model in the next section to shed light on whether the high performance is due to the models or the dataset itself. Table 4 : Ablation study on FOIL (Nouns)." ]
[ "accuracy", "equivalent models" ]
result
{ "title": "Defoiling Foiled Image Captions", "abstract": "We address the task of detecting foiled image captions, i.e. identifying whether a caption contains a word that has been deliberately replaced by a semantically similar word, thus rendering it inaccurate with respect to the image being described. Solving this problem should in principle require a fine-grained understanding of images to detect linguistically valid perturbations in captions. In such contexts, encoding sufficiently descriptive image information becomes a key challenge. In this paper, we demonstrate that it is possible to solve this task using simple, interpretable yet powerful representations based on explicit object information. Our models achieve stateof-the-art performance on a standard dataset, with scores exceeding those achieved by humans on the task. We also measure the upperbound performance of our models using gold standard annotations. Our analysis reveals that the simpler model performs well even without image information, suggesting that the dataset contains strong linguistic bias." }
{ "title": "FOIL it! Find One mismatch between Image and Language caption", "abstract": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MS-COCO dataset, FOIL-COCO, which associates images with both correct and 'foil' captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ('foil word'). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image." }
1801.10576
1503.03305
Linear Gaussian model and mean regression
This is no surprise, since it is based on a structural assumption that lifts the curse of dimensionality #REFR .
[ "There are two main observations resulting from this simulation study.", "First, the √ n rate in the root-mean-square error (RMSE) clearly appears for both parametric methods.", "As expected however, the OLS is more efficient, with the relative efficiency of the copula-based estimator being between 60% and 70%.", "Second, the curse of dimensionality affects the convergence rates of both nonparam estimators.", "For the estimator based on vine copulas however, the convergence rate of a nonparametric regression with a single covariate is retained." ]
[]
[ "structural assumption", "dimensionality" ]
background
{ "title": "Solving estimating equations with copulas", "abstract": "Thanks to their ability to capture complex dependence structures, copulas are frequently used to glue random variables into a joint model with arbitrary one-dimensional margins. More recently, they have been applied to solve statistical learning problems such as regression or classification. Framing such approaches as solutions of estimating equations, we generalize them in a unified framework. We derive consistency, asymptotic normality, and validity of the bootstrap for copula-based Z-estimators. We further illustrate the versatility of such estimators through theoretical and simulated examples." }
{ "title": "Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas", "abstract": "Practical applications of multivariate kernel density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We propose an estimator that avoids the curse of dimensionality by assuming a simplified vine copula model. We prove the estimator's consistency and show that the speed of convergence is independent of dimension. Simulation experiments illustrate the large gain in accuracy compared with the classical multivariate kernel density estimator -even when the true density does not belong to the class of simplified vines. Lastly, we give an application of the estimator to a classification problem from astrophysics." }
1510.04161
1503.03305
Results for Scenario t5
Further, as discussed in detail in #REFR , by modeling only bivariate copulas nonparametrically, the dreaded curse of dimensionality is evaded.
[ "In dashed lines the corresponding contour plot of the fitted parametric copula is shown (Joe copula with τ ≈ 0.25).", "The unsatisfying model fit is obvious and explains the rather high estimated MISE values.", "With this in mind, it is also understandable that a larger training sample size does not help to improve the model fit and prediction accuracy of D-vine quantile regression for this example.", "However, we observe that the nonparametrically estimated copula manages to model the non-monotonic dependence of the data quite well (for visualization purposes we added the data points transformed to have standard normal margins as well).", "Hence, by using a nonparametric copula to model the dependence of the pair (Y, X 1 ), a model misspecification as described in #OTHEREFR would be avoided." ]
[]
[ "bivariate copulas" ]
background
{ "title": "D-vine copula based quantile regression", "abstract": "Quantile regression, that is the prediction of conditional quantiles, has steadily gained importance in statistical modeling and financial applications. The authors introduce a new semiparametric quantile regression method based on sequentially fitting a likelihood optimal Dvine copula to given data resulting in highly flexible models with easily extractable conditional quantiles. As a subclass of regular vine copulas, D-vines enable the modeling of multivariate copulas in terms of bivariate building blocks, a so-called pair-copula construction (PCC). The proposed algorithm works fast and accurate even in high dimensions and incorporates an automatic variable selection by maximizing the conditional log-likelihood. Further, typical issues of quantile regression such as quantile crossing or transformations, interactions and collinearity of variables are automatically taken care of. In a simulation study the improved accuracy and saved computational time of the approach in comparison with established quantile regression methods is highlighted. An extensive financial application to international credit default swap (CDS) data including stress testing and Value-at-Risk (VaR) prediction demonstrates the usefulness of the proposed method." }
{ "title": "Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas", "abstract": "Practical applications of multivariate kernel density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We propose an estimator that avoids the curse of dimensionality by assuming a simplified vine copula model. We prove the estimator's consistency and show that the speed of convergence is independent of dimension. Simulation experiments illustrate the large gain in accuracy compared with the classical multivariate kernel density estimator -even when the true density does not belong to the class of simplified vines. Lastly, we give an application of the estimator to a classification problem from astrophysics." }
1603.04229
1503.03305
Summary and extensions
It implements a kernel estimator of general multivariate densities based on vine copulas #REFR , which use marginal densities and bivariate copulas as building blocks.
[ "Additionally, estimating a copula from discrete data necessarily involves modeling of the marginal distributions, which is deliberately avoided in kdecopula.", "2. It does not allow for more than two variables.", "One major issue is that kdecopula uses interpolation to evaluate and renormalize the estimators.", "In more than two dimensions the number of grid points explodes rapidly and renders the interpolation approach infeasible.", "A kdecopula-based solution for both points is the kdevine package #OTHEREFR ." ]
[ "Continuous convolution #OTHEREFR ) is used to handle discrete variables, which induces copulas similar to the multilinear copula (see, #OTHEREFR Genest, Nešlehová, Rémil-lard et al. 2014) ." ]
[ "bivariate copulas" ]
method
{ "title": "kdecopula: An R Package for the Kernel Estimation of Bivariate Copula Densities", "abstract": "We describe the R package kdecopula (current version 0.9.0), which provides fast implementations of various kernel estimators for the copula density. Due to a variety of available plotting options it is particularly useful for the exploratory analysis of dependence structures. It can be further used for accurate nonparametric estimation of copula densities and resampling. The implementation features spline interpolation of the estimates to allow for fast evaluation of density estimates and integrals thereof. We utilize this for a fast renormalization scheme that ensures that estimates are bona fide copula densities and additionally improves the estimators' accuracy. The performance of the methods is illustrated by simulations." }
{ "title": "Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas", "abstract": "Practical applications of multivariate kernel density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We propose an estimator that avoids the curse of dimensionality by assuming a simplified vine copula model. We prove the estimator's consistency and show that the speed of convergence is independent of dimension. Simulation experiments illustrate the large gain in accuracy compared with the classical multivariate kernel density estimator -even when the true density does not belong to the class of simplified vines. Lastly, we give an application of the estimator to a classification problem from astrophysics." }
1712.05527
1503.03305
Nonparametric copula density estimation and cVaR
Combining transformation and local likelihood estimation, the procedure actually takes advantage of the known uniform margins of C_1, which results in remarkably accurate estimation (Geenens et alii, 2017 #REFR ; De Backer et alii, 2017).
[ "For U ∼ U [0,1] , Φ −1 (U ) ∼ N (0, 1), hence g 1 has unconstrained support with standard normal marginals.", "Local likelihood methods, in particular #OTHEREFR 's local log-quadratic estimator, are particularly good at estimating normal densities, hence the appropriateness of using this type of methodology for estimating g 1 in this context.", "Geenens et alii (2017)'s estimator, called 'LLTKDE2', is actually (4.5) withĝ 1 being the local log-quadratic estimator of the bivariate density g 1 based on pseudo-observations {Φ −1 F X (X t ) }. Its theoretical properties were obtained.", "In particular, under mild assumptions, it was shown to be uniformly consistent on any compact proper subset of I, and asymptotically normal with known expressions of (asymptotic) bias and variance.", "In addition, a practical criterion for selecting the always crucial smoothing parameters was studied and tested." ]
[ "Besides, the LLTKDE2 estimates typically enjoy a visually pleasant appearance usually peculiar to parametric fits.", "What is suggested in this paper is to use that LLTKDE2 estimatorĉ 1 in (4.4) and proceed with the extraction of cVaR. The nonparametric copula-based cVaR estimator is thus defined as", "whereF −1 X T |X T −1 is the generalised inverse of (4.4).", "Given thatF X is uniformly consistent for F X on R and ĉ 1 is uniformly consistent for the integrable c 1 on any proper compact subset of I, (4.4) is also uniformly consistent over any compact subset of R 2 by the ergodic theorem. It classically follows that, provided c", "but details are left aside owing to the rather unwieldy expressions in Geenens et alii (2017) ." ]
[ "local likelihood estimation" ]
method
{ "title": "A nonparametric copula approach to conditional Value-at-Risk", "abstract": "Value-at-Risk and its conditional allegory, which takes into account the available information about the economic environment, form the centrepiece of the Basel framework for the evaluation of market risk in the banking sector. In this paper, a new nonparametric framework for estimating this conditional Value-at-Risk is presented. A nonparametric approach is particularly pertinent as the traditionally used parametric distributions have been shown to be insufficiently robust and flexible in most of the equityreturn data sets observed in practice. The method extracts the quantile of the conditional distribution of interest, whose estimation is based on a novel estimator of the density of the copula describing the dynamic dependence observed in the series of returns. Real-world back-testing analyses demonstrate the potential of the approach, whose performance may be superior to its industry counterparts." }
{ "title": "Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas", "abstract": "Practical applications of multivariate kernel density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We propose an estimator that avoids the curse of dimensionality by assuming a simplified vine copula model. We prove the estimator's consistency and show that the speed of convergence is independent of dimension. Simulation experiments illustrate the large gain in accuracy compared with the classical multivariate kernel density estimator -even when the true density does not belong to the class of simplified vines. Lastly, we give an application of the estimator to a classification problem from astrophysics." }
1804.03724
1503.03305
Analytical tests
As expected #REFR , increasing the dimensionality of the problem both increases the amount of error in the system and decreases the rate of convergence.
[ "The analytical tests were performed using samples drawn from 1D, 2D, 3D unbiased unimodal Gaussian PDF distributions, which were reconstructed on a uniform grid of N d = 60 d gridpoints (d = 1, 2, 3 dimensions) using both the GMM method (Eq. 30) and the KDE methods (Eqs. 31-33, Gaussian-KDE and Epanechnikov-KDE).", "The results are reported in Figures 2 and 3 showing the convergence rate (Fig. 2 ) and the computational time (Fig.", "3) as a function of the particle number N mc .", "The most evident trend in the convergence rate plot (Fig.", "2) is the \"curse of dimensionality\" of a probability distribution of dimension d." ]
[ "As can be seen from the same figure, increasing the sample size from 10 1 to 10 5 in the one-dimensional Gaussian-KDE case decreases the error from 20% to 0.008%; in three dimensions, the same density estimate only decreases the M ISE from 30% to 0.3%.", "The GMM model has a less drastic decrease in convergence rate.", "However, while noticeably better with large sample sizes, the GMM performs worse in all three dimensions at a low sample size than the KDE methods; it only performs better than the KDE when the sample sizes are greater than ∼15, 30, and 100 in one, two, and three dimensions, respectively. The computational time for the density estimates (Fig.", "3) increases with both the number of particles N mc and the dimensionality of the problem, with approximately an order of magnitude increase per dimension.", "Notably, the cost of the GMM model only marginally increases with dimensionality." ]
[ "convergence", "dimensionality" ]
background
{ "title": "Density Estimation Techniques for Multiscale Coupling of Kinetic Models of the Plasma Material Interface", "abstract": "In this work we analyze two classes of Density-Estimation techniques which can be used to consistently couple different kinetic models of the plasma-material interface, intended as the region of plasma immediately interacting with the first surface layers of a material wall. In particular, we handle the general problem of interfacing a continuum multi-species Vlasov-Poisson-BGK plasma model to discrete surface erosion models. The continuum model solves for the energy-angle distributions of the particles striking the surface, which are then driving the surface response. A modification to the classical BinaryCollision Approximation (BCA) method is here utilized as a prototype discrete model of the surface, to provide boundary conditions and impurity distributions representative of the material behavior during plasma irradiation. The numerical tests revealed the superior convergence properties of Kernel Density Estimation methods over Gaussian Mixture Models, with Epanechnikov-KDEs being up to two orders of magnitude faster than Gaussian-KDEs. The methodology here presented allows a self-consistent treatment of the plasma-material interface in magnetic fusion devices, including both the near-surface plasma (plasma sheath and presheath) in magnetized conditions, and surface effects such as sputtering, backscattering, and ion implantation. The same coupling techniques can also be utilized for other discrete material models such as Molecular Dynamics." }
{ "title": "Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas", "abstract": "Practical applications of multivariate kernel density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We propose an estimator that avoids the curse of dimensionality by assuming a simplified vine copula model. We prove the estimator's consistency and show that the speed of convergence is independent of dimension. Simulation experiments illustrate the large gain in accuracy compared with the classical multivariate kernel density estimator -even when the true density does not belong to the class of simplified vines. Lastly, we give an application of the estimator to a classification problem from astrophysics." }